arXiv: 2309.14676
Authors: S. Eswara Rao, Priyanshu Chakraborty
Published: 2023-09-26
Link: http://arxiv.org/abs/2309.14676v1
# Skew symmetric extended affine Lie algebras

###### Abstract.
For any skew symmetric matrix over the complex numbers we introduce an EALA, which we call a Skew Symmetric Extended Affine Lie Algebra (SSEALA). In this way we obtain a large class of EALAs, and most often they are non-isomorphic. In this paper we study irreducible integrable modules with finite dimensional weight spaces for an SSEALA. We classify all such modules in the level zero case when the skew symmetric matrix is non-degenerate.

Key words and phrases: Toroidal Lie algebras, Extended affine Lie algebras.
2010 Mathematics Subject Classification: 17B67, 17B66.

#### Notations:

* Throughout this paper we work over the base field \(\mathbb{C}\).
* \(\mathbb{C},\mathbb{R},\mathbb{Z}\) denote the sets of complex numbers, real numbers and integers respectively.
* \(\mathbb{Z}_{+}\) and \(\mathbb{N}\) denote the sets of non-negative integers and positive integers respectively. For \(n\in\mathbb{N}\), \(\mathbb{C}^{n}=\{(x_{1},\ldots,x_{n}):x_{i}\in\mathbb{C},1\leq i\leq n\}\), and \(\mathbb{R}^{n},\mathbb{Z}^{n},\mathbb{N}^{n}\) and \(\mathbb{Z}_{+}^{n}\) are defined similarly.
* Elements of \(\mathbb{C}^{n},\mathbb{R}^{n}\) and \(\mathbb{Z}^{n}\) are written in boldface.
* For any Lie algebra \(\mathfrak{g}\), \(U(\mathfrak{g})\) denotes the universal enveloping algebra of \(\mathfrak{g}\).
* \((\cdot|\cdot)\) denotes the standard inner product on \(\mathbb{C}^{n}\).
* For any matrix \(B\), \(B^{T}\) denotes the transpose of \(B\).

## 1. Introduction

In this paper we introduce a class of Extended Affine Lie Algebras (EALAs) which we call Skew Symmetric Extended Affine Lie Algebras (SSEALAs for short). For any skew symmetric matrix \(B\) (\(B^{T}=-B\)) over the complex numbers we define an EALA \(\tau_{B}\), which we call a Skew Symmetric Extended Affine Lie Algebra (SSEALA).
The first author introduced in [R1] the Hamiltonian Extended Affine Lie Algebra (HEALA) and the Contact Extended Affine Lie Algebra (KEALA). Both are SSEALAs; the underlying skew symmetric matrix is non-degenerate for the HEALA (see Example 3.1) and degenerate for the KEALA (see Example 3.2). EALAs have been extensively studied in the last two decades, see [AABGP, AG, N1, N2, ABFP] and the references therein. These authors mainly study the structure of an EALA and are able to give a definite shape to its core (see 2.1 for the definition of the core of an EALA). Later several authors studied representations of EALAs, see [R1, RSB, TB, CLT1, CLT2, SP1, SP2]. In fact the first author of the present paper classified irreducible integrable modules for the HEALA (at level zero as well as at non-zero level) in [R1]. In this paper, after introducing SSEALAs, we study integrable modules for these EALAs under the assumption that the skew symmetric matrix is non-degenerate (in the degenerate case the problem is more challenging and our methods do not work). As in the earlier cases of the Toroidal Extended Affine Lie Algebra (TEALA) in [RSB] and the Hamiltonian Extended Affine Lie Algebra (HEALA) in [R1], the problem of classifying irreducible integrable modules for \(\tau_{B}\) reduces to the classification of jet modules (see Definition 4.1) for the derivation algebra \(H_{B}\) (see Remark 3.1). Our arguments further reduce this to the classification of jet modules for the derivation algebra \(H_{N}\) of the HEALA, a problem solved by J. Talboom [T1]. Our classification works only when the core of the EALA (see 2.1) acts non-trivially on the modules. Otherwise the problem reduces to that of ordinary modules for the derivation algebras (see Remark 3.1), which is open; even in the known cases of type S (TEALA) and type H (HEALA) these problems are open.
We now describe each section in detail. In Section 2 we recall the definitions of the toroidal Lie algebra, the full toroidal Lie algebra, the Toroidal Extended Affine Lie Algebra and the Hamiltonian Extended Affine Lie Algebra. We note that the root systems of all these algebras are the same, but with different multiplicities. We also recall the definition of an EALA and provide some examples. In Section 3 we introduce our main object, the Skew Symmetric Extended Affine Lie Algebra (SSEALA), and prove some standard properties (Proposition 3.1). In Example 3.1 and Example 3.2 we explain how the HEALA and the KEALA are SSEALAs by exhibiting the corresponding skew symmetric matrices. Let \(J\) be the matrix of the HEALA (see Example 3.1) and let \(B\) be any non-degenerate skew symmetric matrix. It is a classical fact that there exists a matrix \(A\in GL(N,\mathbb{C})\) such that \(A^{T}BA=J\). If \(A\) is an integral matrix in \(GL(N,\mathbb{Z})\), then one can see that \(\tau_{B}\simeq\tau_{J}\) (Proposition 3.2), where \(\tau_{J}\) is actually the HEALA. Unfortunately, for an arbitrary \(B\) (for example when \(B\) has irrational or complex entries) we cannot find such an integral matrix \(A\). Thus the matrices \(B\) and \(J\) are related, but the corresponding EALAs need not be isomorphic. As noted above, the classification of integrable modules for \(\tau_{B}\) reduces to the classification of jet modules (see Definition 4.1) for the derivation algebra \(H_{B}\) (see Remark 3.1), which are actually modules for \(\widetilde{H}_{B}\ltimes A\) with finite dimensional weight spaces. Section 4 is our main technical section, where we reduce the classification of irreducible modules for \(\widetilde{H}_{B}\ltimes A\) to the classification of finite dimensional irreducible modules for a certain infinite dimensional Lie algebra \(T\). The algebra \(T\) contains a decreasing sequence of cofinite ideals (see Lemma 4.4).
We prove in Lemma 4.7 that on any irreducible finite dimensional module for \(T\) some cofinite ideal acts trivially, and hence the module is actually a module for a finite dimensional Lie algebra. In Lemmas 4.9, 4.10 and 4.11 we prove that this finite dimensional Lie algebra can be taken to be \(\mathfrak{sp}_{2m}\oplus R\), where \(R\) is central. Thus \(R\) acts by scalars on the finite dimensional module. We believe that these scalars are trivial, but we could not prove it. Thus any finite dimensional irreducible module for \(T\) is actually an irreducible module for \(\mathfrak{sp}_{2m}\) with some central elements acting as scalars. We summarize these results in Theorem 4.1. In Section 5 we classify irreducible integrable modules for \(\tau_{B}\) on which the central elements \(K_{1},\dots,K_{N}\) act trivially (the level zero case). The non-zero level case is open. In Section 6 we classify irreducible integrable modules for the KEALA in the non-zero level case; here the level zero case is open. We have included this because we believe the right EALA of \(K\)-type is the KEALA, and the corresponding derivation algebra (see Remark 3.1) is of \(K\)-type (see [RA]). Let us recall an important class of infinite dimensional Lie algebras, the Lie algebras of Cartan type, which arise in the study of vector fields on manifolds. These Lie algebras fall into four series, namely the General, Special, Hamiltonian and Contact series, denoted by \(W_{N},S_{N},H_{N}\) and \(K_{N}\) respectively. We have EALAs for type S (the TEALA) and type H (the HEALA). For type W we have the full toroidal Lie algebra, which falls short of being an EALA. We propose the KEALA for type \(K_{N}\) (see [RA] for more motivation).

## 2. Toroidal Lie algebras and Full toroidal Lie algebras

Fix a positive integer \(N\). Let \(A=A_{N}=\mathbb{C}[t_{1}^{\pm 1},t_{2}^{\pm 1},\ldots,t_{N}^{\pm 1}]\) denote the Laurent polynomial ring in \(N\) commuting variables.
For \(\mathbf{r}=(r_{1},r_{2},\ldots,r_{N})\in\mathbb{Z}^{N}\) write \(t^{\mathbf{r}}=t_{1}^{r_{1}}t_{2}^{r_{2}}\cdots t_{N}^{r_{N}}\in A.\) Let \(\mathfrak{g}\) be a finite dimensional simple Lie algebra and \(\mathfrak{h}\) a Cartan subalgebra of \(\mathfrak{g}\). It is well known that \(\mathfrak{g}\) has a root space decomposition with respect to \(\mathfrak{h}\) given by \(\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Delta}\mathfrak{g}_{\alpha}\), where \(\Delta\) is the corresponding finite root system. Let \(<\cdot,\cdot>\) be a symmetric, non-degenerate and invariant bilinear form on \(\mathfrak{g}\). Consider the multi-loop algebra \(\mathfrak{g}\otimes A\) with the usual Lie bracket. Let
\[\Omega_{A}=\operatorname{span}\{t^{\mathbf{r}}K_{i}:1\leq i\leq N,\mathbf{r}\in\mathbb{Z}^{N}\},\ \text{where}\ K_{i}=t_{i}^{-1}dt_{i},\ \text{for}\ 1\leq i\leq N.\]
It is clear that \(\Omega_{A}\) is a \(\mathbb{Z}^{N}\)-graded vector space with each graded component \(N\) dimensional. Let \(d_{A}\) be the subspace of \(\Omega_{A}\) defined by \(d_{A}=\operatorname{span}\{\sum_{i=1}^{N}r_{i}t^{\mathbf{r}}K_{i}:\mathbf{r}\in\mathbb{Z}^{N}\}\) and set \(Z=\Omega_{A}/d_{A}\). Then \(Z\) is \(\mathbb{Z}^{N}\)-graded; more precisely, each non-zero graded component of \(Z\) is \(N-1\) dimensional and the zeroth graded component is \(N\) dimensional. Let \(K(\mathbf{u},\mathbf{r})=\sum_{i=1}^{N}u_{i}t^{\mathbf{r}}K_{i}\) for \(\mathbf{u}\in\mathbb{C}^{N},\mathbf{r}\in\mathbb{Z}^{N}\). We now define the toroidal Lie algebra. As a vector space the toroidal Lie algebra is given by
\[\tau=\mathfrak{g}\otimes A\oplus Z\oplus D,\]
where \(D\) is the space of degree derivations, spanned by \(\{d_{1},d_{2},\ldots,d_{N}\}\). Before defining the brackets of the toroidal Lie algebra we fix some notation for this paper.
For convenience we set \(X(\mathbf{r})=X\otimes t^{\mathbf{r}}\) and \(\mathfrak{g}(\mathbf{r})=\mathfrak{g}\otimes\mathbb{C}t^{\mathbf{r}}\) for \(\mathbf{r}\in\mathbb{Z}^{N}\) and \(X\in\mathfrak{g}.\) The Lie brackets of the toroidal Lie algebra are given by

(A1) \([X(\mathbf{r}),Y(\mathbf{s})]=[X,Y](\mathbf{r}+\mathbf{s})+<X,Y>K(\mathbf{r},\mathbf{r}+\mathbf{s}).\)

(A2) \(Z\) is central in \(\mathfrak{g}\otimes A\oplus Z\).

(A3) \([d_{i},X(\mathbf{r})]=r_{i}X(\mathbf{r}),\ [d_{i},K(\mathbf{u},\mathbf{r})]=r_{i}K(\mathbf{u},\mathbf{r}),\ [d_{i},d_{j}]=0,\)

for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\mathbf{u}\in\mathbb{C}^{N},X,Y\in\mathfrak{g},1\leq i,j\leq N.\) Let \(\widetilde{\mathfrak{h}}=\mathfrak{h}\oplus Z_{0}\oplus D\), where \(Z_{0}=\operatorname{span}\{K_{1},K_{2},\ldots,K_{N}\}\). This \(\widetilde{\mathfrak{h}}\) plays the role of a Cartan subalgebra for the toroidal Lie algebra \(\tau.\) One should observe that the toroidal Lie algebra is the \(N\)-variable generalization of an affine Kac-Moody Lie algebra. Let \(DerA\) be the derivation algebra of \(A.\) It is well known that \(DerA\) has a basis \(\{t^{\mathbf{r}}d_{i}:\mathbf{r}\in\mathbb{Z}^{N},1\leq i\leq N\},\) where \(d_{i}=t_{i}\frac{\partial}{\partial t_{i}}\).
Let \(D(\mathbf{u},\mathbf{r})=\sum_{i=1}^{N}u_{i}t^{\mathbf{r}}d_{i},\) where \(\mathbf{u}=(u_{1},u_{2},\ldots,u_{N})\in\mathbb{C}^{N}\) and \(\mathbf{r}\in\mathbb{Z}^{N}.\) Then \(DerA\) forms a Lie algebra with respect to the bracket operation
\[[D(\mathbf{u},\mathbf{r}),D(\mathbf{v},\mathbf{s})]=D(\mathbf{w},\mathbf{r}+\mathbf{s}), \tag{2.1}\]
where \(\mathbf{w}=(\mathbf{u}|\mathbf{s})\mathbf{v}-(\mathbf{v}|\mathbf{r})\mathbf{u}\) for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\ \mathbf{u},\mathbf{v}\in\mathbb{C}^{N}.\) It is known that \(DerA\) admits an abelian extension by \(Z\) with the following brackets:
\[[D(\mathbf{u},\mathbf{r}),D(\mathbf{v},\mathbf{s})]=D(\mathbf{w},\mathbf{r}+\mathbf{s})-(\mathbf{u}|\mathbf{s})(\mathbf{v}|\mathbf{r})K(\mathbf{r},\mathbf{r}+\mathbf{s}), \tag{2.2}\]
\[[D(\mathbf{u},\mathbf{r}),K(\mathbf{v},\mathbf{s})]=(\mathbf{u}|\mathbf{s})K(\mathbf{v},\mathbf{r}+\mathbf{s})+(\mathbf{u}|\mathbf{v})K(\mathbf{r},\mathbf{r}+\mathbf{s}); \tag{2.3}\]
for more details see [RM1]. Moreover, it is known from [RSS] that \(DerA\) has no non-trivial central extension for \(N\geq 2.\) For \(N=1\) the abelian extension defined above becomes a central extension. We are now prepared to define the full toroidal Lie algebra. As a vector space the full toroidal Lie algebra is given by
\[\tilde{\tau}=\mathfrak{g}\otimes A\oplus Z\oplus DerA.\]
The bracket operations on \(\tilde{\tau}\) are given by (A1), (A2), (2.2), (2.3) and the following:
\[[D(\mathbf{u},\mathbf{r}),X(\mathbf{s})]=(\mathbf{u}|\mathbf{s})X(\mathbf{r}+\mathbf{s}) \tag{2.4}\]
for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\ \mathbf{u}\in\mathbb{C}^{N}.\) We now define an automorphism of \(\tilde{\tau}.\) Let \(GL(N,\mathbb{Z})\) be the group of integral matrices with determinant \(\pm 1\).
Then there is a natural action of \(GL(N,\mathbb{Z})\) on \(\mathbb{Z}^{N};\) we denote this action by \(F\mathbf{r}\) for \(F\in GL(N,\mathbb{Z})\) and \(\mathbf{r}\in\mathbb{Z}^{N}.\) Each \(F\in GL(N,\mathbb{Z})\) defines an automorphism of \(\tilde{\tau}\) by
\[F.X(\mathbf{r})=X(F\mathbf{r}),\quad F.K(\mathbf{u},\mathbf{r})=K(F\mathbf{u},F\mathbf{r}),\quad F.D(\mathbf{u},\mathbf{r})=D((F^{T})^{-1}\mathbf{u},F\mathbf{r}). \tag{2.5}\]
It is easy to verify that this map defines an automorphism of \(\tilde{\tau}\) (see [RJ] for details). We denote this automorphism of \(\tilde{\tau}\) by \(\Phi_{F}.\) We now define roots and coroots of \(\tilde{\tau}.\) Note that the root system of \(\tilde{\tau}\) is the same as that of the toroidal Lie algebra, so we just recall the root system of \(\tau\) from [R2]. For \(1\leq i\leq N\) define \(\delta_{i},\omega_{i}\in\widetilde{\mathfrak{h}}^{*}\) (where \(\widetilde{\mathfrak{h}}^{*}\) denotes the dual of \(\widetilde{\mathfrak{h}}\)) by
\[\delta_{i}(\mathfrak{h})=0,\quad\delta_{i}(K_{j})=0,\quad\delta_{i}(d_{j})=\delta_{ij},\quad 1\leq j\leq N;\]
\[\omega_{i}(\mathfrak{h})=0,\quad\omega_{i}(K_{j})=\delta_{ij},\quad\omega_{i}(d_{j})=0,\quad 1\leq j\leq N.\]
For \(\mathbf{m}=(m_{1},\ldots,m_{N})\in\mathbb{Z}^{N},\) set \(\delta_{\mathbf{m}}=\sum_{i=1}^{N}m_{i}\delta_{i}\). Let \(\pi=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{d}\}\) be a set of simple roots of \(\Delta\) and \(\pi^{\vee}=\{\alpha_{1}^{\vee},\alpha_{2}^{\vee},\ldots,\alpha_{d}^{\vee}\}\) the corresponding set of coroots, where \(d=\dim\mathfrak{h}\). Then the vectors \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{d},\delta_{1},\delta_{2},\ldots,\delta_{N},\omega_{1},\omega_{2},\ldots,\omega_{N}\}\) form a basis of \(\widetilde{\mathfrak{h}}^{*}\).
Now we extend the bilinear form on \(\mathfrak{h}^{*}\) to a symmetric non-degenerate bilinear form on \(\widetilde{\mathfrak{h}}^{*}\) by defining:
\[<\alpha,\delta_{k}>=<\alpha,\omega_{k}>=0,\ \ \text{for all}\ \alpha\in\Delta,1\leq k\leq N,\]
\[<\delta_{k},\delta_{p}>=<\omega_{k},\omega_{p}>=0,\ <\omega_{k},\delta_{p}>=\delta_{kp},\ \ 1\leq k,p\leq N.\]
Again, \(\widetilde{\mathfrak{h}}\) has a basis \(\{\alpha_{1}^{\vee},\alpha_{2}^{\vee},\ldots,\alpha_{d}^{\vee},K_{1},K_{2},\ldots,K_{N},d_{1},d_{2},\ldots,d_{N}\}\) with bilinear form defined by:
\[<h,K_{i}>=<h,d_{i}>=0,\ \text{for all}\ h\in\mathfrak{h}\ \text{and}\ 1\leq i\leq N,\]
\[<d_{i},d_{j}>=<K_{i},K_{j}>=0\ \text{and}\ <d_{i},K_{j}>=\delta_{ij}\ \text{for}\ 1\leq i,j\leq N.\]
It is clear that \(\dim\widetilde{\mathfrak{h}}=\dim\widetilde{\mathfrak{h}}^{*}=2N+d\). Let \(\Delta^{re}=\{\alpha+\delta_{\mathbf{r}}:\alpha\in\Delta,\mathbf{r}\in\mathbb{Z}^{N}\}\) and \(\Delta^{im}=\{\delta_{\mathbf{r}}:\mathbf{r}\in\mathbb{Z}^{N}\}.\) Then \(\widetilde{\Delta}=\Delta^{re}\cup\Delta^{im}\) is the root system of \(\tilde{\tau}\) with respect to the Cartan subalgebra \(\widetilde{\mathfrak{h}}.\) Let \(\bar{\lambda}\) denote the restriction of \(\lambda\in\widetilde{\mathfrak{h}}^{*}\) to \(\mathfrak{h}.\) Conversely, any \(\mu\in\mathfrak{h}^{*}\) can be extended to \(\widetilde{\mathfrak{h}}^{*}\) by setting \(\mu(d_{i})=\mu(K_{i})=0\) for \(1\leq i\leq N.\) Therefore any \(\lambda\in\widetilde{\mathfrak{h}}^{*}\) can be expressed as \(\lambda=\bar{\lambda}+\sum_{i=1}^{N}g_{i}\delta_{i}+\sum_{i=1}^{N}s_{i}\omega_{i}\); see [R2] for reference.
We also define \(\alpha_{d+j}=-\beta+\delta_{j}\) for \(1\leq j\leq N\), where \(\beta\) is the highest root of \(\mathfrak{g}.\) It is easy to see that the vectors \(\{\alpha_{1},\ldots,\alpha_{d},\alpha_{d+1},\ldots,\alpha_{d+N},\omega_{1},\ldots,\omega_{N}\}\) form another basis of \(\widetilde{\mathfrak{h}}^{*}.\) For \(\gamma=\alpha+\delta_{\mathbf{m}}\in\Delta^{re},\) define a coroot by
\[\gamma^{\vee}=\alpha^{\vee}+\frac{2}{<\alpha,\alpha>}\sum_{i=1}^{N}m_{i}K_{i}.\]
For a real root \(\gamma\in\Delta^{re},\) define a reflection \(r_{\gamma}\) on \(\widetilde{\mathfrak{h}}^{*}\) by
\[r_{\gamma}(\lambda)=\lambda-\lambda(\gamma^{\vee})\gamma,\ \ \lambda\in\widetilde{\mathfrak{h}}^{*}.\]
Let \(\Omega\) be the Weyl group generated by the reflections corresponding to the real roots of \(\widetilde{\Delta}\).

**Definition 2.1**.: A module \(V\) for \(\tilde{\tau}\) (or for \(\tau\)) is said to be integrable if \(V\) satisfies the following properties:

1. \(V\) decomposes as \(V=\bigoplus_{\lambda\in\widetilde{\mathfrak{h}}^{*}}V_{\lambda}\), where \(V_{\lambda}=\{v\in V:hv=\lambda(h)v,\forall h\in\widetilde{\mathfrak{h}}\}\).
2. \(X_{\alpha}(\mathbf{m})\) acts locally nilpotently on \(V\) for all \(\alpha\in\Delta,X_{\alpha}\in\mathfrak{g}_{\alpha},\mathbf{m}\in\mathbb{Z}^{N}\).

Let \(V\) be an integrable module for \(\tilde{\tau}\) (or for \(\tau\)) with finite dimensional weight spaces, i.e. \(\dim V_{\lambda}<\infty\) for all \(\lambda\in\widetilde{\mathfrak{h}}^{*}.\) Let \(P(V)=\{\lambda\in\widetilde{\mathfrak{h}}^{*}:V_{\lambda}\neq 0\}\) denote the set of weights of \(V\). Then the following are standard (see [R2]).

**Lemma 2.1**.: _(1) \(P(V)\) is \(\Omega\)-invariant.
(2) \(\dim V_{\lambda}=\dim V_{w\lambda}\) for all \(w\in\Omega,\lambda\in P(V).\)
(3) If \(\alpha\in\Delta^{re},\lambda\in P(V)\) and \(\lambda(\alpha^{\vee})>0,\) then \(\lambda-\alpha\in P(V).\)_

**Definition 2.2**.: Extended affine Lie algebra (EALA): Let \(L\) be a Lie algebra such that:

(EA1) \(L\) is endowed with a non-degenerate symmetric bilinear form \((.,.)\) which is invariant (i.e. \(([x,y],z)=(x,[y,z])\) for all \(x,y,z\in L\)).

(EA2) \(L\) possesses a non-trivial finite dimensional self-centralizing ad-diagonalizable abelian subalgebra \(H\).

To complete the definition of an EALA we need three more axioms. We first observe some consequences of (EA2) which are needed to state them. By (EA2) we have
\[L=\bigoplus_{\alpha\in H^{*}}L_{\alpha},\ \text{where}\ L_{\alpha}=\{x\in L:[h,x]=\alpha(h)x,\forall h\in H\}.\]
Let \(R=\{\alpha\in H^{*}:L_{\alpha}\neq 0\}\); this set is called the root system of \(L\) with respect to \(H\). Note that \(0\in R\), and for \(\alpha,\beta\in R\) with \(\alpha+\beta\neq 0\) we have \((L_{\alpha},L_{\beta})=0\); since the form is non-degenerate, it follows that \(R=-R.\) The form restricted to \(H\) is non-degenerate, so it induces a form on \(H^{*}\). Let
\[R^{\times}=\{\alpha\in R:(\alpha,\alpha)\neq 0\},\ \ \ \ \ R^{0}=\{\alpha\in R:(\alpha,\alpha)=0\}.\]
The elements of \(R^{\times}\) (respectively \(R^{0}\)) are called non-isotropic (respectively isotropic) roots. It is clear that \(R=R^{\times}\cup R^{0}\).

(EA3) If \(x_{\alpha}\in L_{\alpha}\) for \(\alpha\in R^{\times}\), then \(\operatorname{ad}x_{\alpha}\) acts locally nilpotently on \(L\).

(EA4) \(R\) is a discrete subset of \(H^{*}\).

(EA5) \(R\) is an irreducible root system. This means that (i) if \(R^{\times}=R_{1}\cup R_{2}\) and \((R_{1},R_{2})=0\), then either \(R_{1}=0\) or \(R_{2}=0\); (ii) if \(\sigma\in R^{0}\), then there exists \(\alpha\in R^{\times}\) such that \(\sigma+\alpha\in R\).

A Lie algebra satisfying (EA1) to (EA5) is called an EALA.
### Core of EALA

The core of the EALA \(L\) is defined as the subalgebra generated by \(\bigcup_{\alpha\in R^{\times}}L_{\alpha}\). It should be mentioned that the core of \(L\) forms an ideal of \(L\). Extensive research has been done on EALAs and their representation theory; for instance see [AABGP, AG, N1, N2, ABFP] and the references therein. One can observe that the full toroidal Lie algebra falls short of being an EALA, as it does not satisfy (EA1). We now study some known examples of EALAs.

_Example 2.1_.: Consider the Lie algebra \(\mathfrak{g}\otimes A\oplus\bigoplus_{i=1}^{N}\mathbb{C}K_{i}\oplus\bigoplus_{i=1}^{N}\mathbb{C}d_{i}\) with the Lie brackets given by
\[[X(\mathbf{r}),Y(\mathbf{s})]=[X,Y](\mathbf{r}+\mathbf{s})+\delta_{\mathbf{r}+\mathbf{s},\mathbf{0}}<X,Y>\sum_{i=1}^{N}r_{i}K_{i},\]
\[[K_{i},K_{j}]=[d_{i},d_{j}]=[K_{i},d_{j}]=0,\]
\[[d_{i},X(\mathbf{r})]=r_{i}X(\mathbf{r}),\]
where the \(K_{i}\) are central, for all \(X,Y\in\mathfrak{g},\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},1\leq i,j\leq N\). It is the minimal EALA coming from the extension of a multiloop algebra. Irreducible modules for this Lie algebra have been studied in [SP]; see also [R3].

_Example 2.2_.: A certain subalgebra of \(\tilde{\tau}\) forms an EALA. Define \(S_{N}=\operatorname{span}\{D(\mathbf{u},\mathbf{r}):(\mathbf{u}|\mathbf{r})=0,\mathbf{u}\in\mathbb{C}^{N},\mathbf{r}\in\mathbb{Z}^{N}\}\); this is a subalgebra of \(DerA\). It is known to be a simple Lie algebra of \(S\) type.
Let
\[\tau(S_{N})=\mathfrak{g}\otimes A\oplus Z\oplus S_{N}.\]
Define a bilinear form on \(\tau(S_{N})\) by
\[(X(\mathbf{r}),Y(\mathbf{s}))=<X,Y>\delta_{\mathbf{r}+\mathbf{s},\mathbf{0}},\ \text{for all}\ X,Y\in\mathfrak{g},\ \mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\]
\[(D(\mathbf{u},\mathbf{r}),K(\mathbf{v},\mathbf{s}))=\delta_{\mathbf{r}+\mathbf{s},\mathbf{0}}(\mathbf{u}|\mathbf{v})\ \text{for all}\ \mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\ \mathbf{u},\mathbf{v}\in\mathbb{C}^{N},\]
with all other values zero. It is a standard fact that \(\tau(S_{N})\) is an EALA with the above bilinear form (see [N2] for a general construction of EALAs). It is the largest EALA coming from the extension of a multiloop algebra. There is an analogous notion of a twisted version of \(\tau(S_{N})\), and irreducible integrable modules for these Lie algebras have been studied in [RSB, TB, CLT1, CLT2].

_Example 2.3_.: We define another class of subalgebras of \(\tilde{\tau}\) which form EALAs. We closely follow [T1] to define the Hamiltonian Lie algebra. Let \(N=2m\) and \(\mathbf{r}=(r_{1},\ldots,r_{2m}).\) Let \(\overline{\mathbf{r}}=(r_{m+1},\ldots,r_{2m},-r_{1},\ldots,-r_{m}),\) \(H_{N}=\operatorname{span}\{D(\overline{\mathbf{r}},\mathbf{r}):0\neq\mathbf{r}\in\mathbb{Z}^{N}\}\) and \(\widetilde{H_{N}}=H_{N}\oplus D\). It is easy to verify that \(\widetilde{H_{N}}\) is a Lie algebra with respect to the bracket operations
\[[D(\overline{\mathbf{r}},\mathbf{r}),D(\overline{\mathbf{s}},\mathbf{s})]=(\overline{\mathbf{r}}|\mathbf{s})D(\overline{\mathbf{r}+\mathbf{s}},\mathbf{r}+\mathbf{s})\ \text{and}\]
\[[D(\mathbf{u},0),D(\overline{\mathbf{r}},\mathbf{r})]=(\mathbf{u}|\mathbf{r})D(\overline{\mathbf{r}},\mathbf{r})\ \text{for all}\ \mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\mathbf{u}\in\mathbb{C}^{N}.\]
The Lie algebra \(\widetilde{H_{N}}\) is known as the Hamiltonian Lie algebra, also called the \(H\)-type Lie algebra. Let \(K=\operatorname{span}\{K(\mathbf{u},\mathbf{r}):(\mathbf{u}|\overline{\mathbf{r}})=0\}\).
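A quick numerical sanity check (ours, not part of the paper): the map \(\mathbf{r}\mapsto\overline{\mathbf{r}}\) is exactly multiplication by the standard skew symmetric block matrix \(J=\begin{pmatrix}0&I\\-I&0\end{pmatrix}\), which is how the Hamiltonian algebra reappears as a special case of the construction of Section 3.

```python
import numpy as np

m = 3
N = 2 * m
# Standard skew symmetric matrix J = [[0, I], [-I, 0]] in block form.
J = np.block([[np.zeros((m, m), dtype=int), np.eye(m, dtype=int)],
              [-np.eye(m, dtype=int), np.zeros((m, m), dtype=int)]])

def bar(r):
    """r-bar = (r_{m+1}, ..., r_{2m}, -r_1, ..., -r_m), as in Example 2.3."""
    return np.concatenate([r[m:], -r[:m]])

rng = np.random.default_rng(0)
r = rng.integers(-5, 6, N)
# The bar map coincides with multiplication by J.
assert (bar(r) == J @ r).all()
assert (J.T == -J).all()
```

So \(D(\overline{\mathbf{r}},\mathbf{r})=D(J\mathbf{r},\mathbf{r})\), matching the uniform notation \(D(B\mathbf{r},\mathbf{r})\) used below.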
It is trivial to check using (2.3) that \([H_{N},K]\subseteq K.\) Now consider the Lie algebra \(\mathfrak{g}\otimes A\oplus Z\oplus\widetilde{H_{N}}.\) Note that \(K\) is an ideal in this Lie algebra. Define the HEALA by \(\tau(H_{N})=\mathfrak{g}\otimes A\oplus Z/K\oplus H_{N}\oplus D.\) It is proved in [R1] that \(\tau(H_{N})\) is an EALA, there called the HEALA. Irreducible integrable modules for the HEALA have been studied in [R1].

## 3. Skew symmetric Extended Affine Lie algebras

In this section we introduce a class of extended affine Lie algebras which we call Skew symmetric extended affine Lie algebras (SSEALAs for short); see [N2] for a general construction of EALAs. We provide some concrete examples of SSEALAs, which include a few known EALAs arising from extensions of multiloop algebras. Let \(B\) be a non-zero \(N\times N\) skew symmetric matrix over \(\mathbb{C}\), i.e. \(B^{T}=-B\). To \(B\) we associate an EALA, which we call a skew symmetric EALA (SSEALA for short). Let \(h_{\mathbf{r}}=D(B\mathbf{r},\mathbf{r})\) for \(\mathbf{r}\in\mathbb{Z}^{N}\), \(G_{B}=\{\mathbf{r}\in\mathbb{Z}^{N}:B\mathbf{r}=0\}\), \(H_{B}=\operatorname{span}\{D(B\mathbf{r},\mathbf{r}):\mathbf{r}\in\mathbb{Z}^{N}\}\), and \(\widetilde{H_{B}}=H_{B}\oplus D.\) It is easy to see that \(H_{B}\) forms a subalgebra of \(DerA\) with bracket operation \([D(B\mathbf{r},\mathbf{r}),D(B\mathbf{s},\mathbf{s})]=(B\mathbf{r}|\mathbf{s})D(B(\mathbf{r}+\mathbf{s}),\mathbf{r}+\mathbf{s}).\) To a skew symmetric matrix \(B\) we associate a skew symmetric bilinear form on \(\mathbb{C}^{N}\) by \(B(\mathbf{x},\mathbf{y})=(B\mathbf{x}|\mathbf{y})=-(\mathbf{x}|B\mathbf{y})\). Note that the following properties hold for this bilinear form:

* \((B\mathbf{r}|\mathbf{r})=0\) for all \(\mathbf{r}\in\mathbb{Z}^{N}\).
* \((B\mathbf{r}|\mathbf{s})=0\) for all \(\mathbf{r}\in\mathbb{Z}^{N}\), \(\mathbf{s}\in G_{B}\).
* \((B\mathbf{r}|\mathbf{s})=0\) whenever \(\mathbf{r}+\mathbf{s}\in G_{B}\).

Define \(K_{B}=\operatorname{span}\{K(\mathbf{u},\mathbf{r}):(B\mathbf{u}|\mathbf{r})=0\}\).
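The three properties of the form are easy to verify numerically. The following sketch (ours, using a hypothetical degenerate \(B\) so that \(G_{B}\) is non-trivial) checks each one:

```python
import numpy as np

# A hypothetical degenerate skew symmetric matrix: one symplectic 2x2 block
# plus a zero row and column, so G_B is non-trivial.
B = np.array([[0, 1, 0],
              [-1, 0, 0],
              [0, 0, 0]])
assert (B.T == -B).all()

rng = np.random.default_rng(1)
r = rng.integers(-5, 6, 3)
g = np.array([0, 0, 7])            # B g = 0, so g lies in G_B
assert (B @ g == 0).all()

# (B r | r) = 0 for every r, by skew symmetry.
assert np.dot(B @ r, r) == 0
# (B r | s) = 0 whenever s is in G_B.
assert np.dot(B @ r, g) == 0
# (B r | s) = 0 whenever r + s is in G_B: take s = g - r.
assert np.dot(B @ r, g - r) == 0
```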
It is trivial to check that \([H_{B},K_{B}]\subseteq K_{B}\) (see (2.3)). Now consider the Lie algebra \(\mathfrak{g}\otimes A\oplus Z\oplus\widetilde{H_{B}}.\) Note that \(K_{B}\) is an ideal in this Lie algebra. Let \(\widetilde{Z}=Z/K_{B}\oplus\bigoplus_{i=1}^{N}\mathbb{C}K_{i}\). We define the SSEALA by \(\tau_{B}=\mathfrak{g}\otimes A\oplus\widetilde{Z}\oplus\widetilde{H_{B}}\).

**Proposition 3.1**.: _(1) \(\dim(\widetilde{Z})_{\mathbf{r}}=0\) if \(0\neq\mathbf{r}\in G_{B}\).
(2) \(\dim(\widetilde{Z})_{\mathbf{r}}=1\) if \(\mathbf{r}\notin G_{B}\).
(3) \(\dim(\widetilde{Z})_{0}=N\).
(4) \(\dim(\widetilde{H}_{B})_{\mathbf{r}}=0\) if \(0\neq\mathbf{r}\in G_{B}\).
(5) \(\dim(\widetilde{H}_{B})_{\mathbf{r}}=1\) if \(\mathbf{r}\notin G_{B}\).
(6) \(\dim(\widetilde{H}_{B})_{0}=N\).
(7) \(\tau_{B}\) is an EALA.
(8) The core of \(\tau_{B}\) is \(\mathfrak{g}\otimes A\oplus\widetilde{Z}\)._

Proof.: Note that (1) and (3)-(6) follow from the definitions. To prove (2), let \(\mathbf{r}\notin G_{B}\) and let \(\mathbf{u},\mathbf{v}\in\mathbb{C}^{N}\) be such that \((B\mathbf{u}|\mathbf{r})\neq 0\) and \((B\mathbf{v}|\mathbf{r})\neq 0\). Now consider
\[\Big(B\Big(\frac{\mathbf{u}}{(B\mathbf{u}|\mathbf{r})}-\frac{\mathbf{v}}{(B\mathbf{v}|\mathbf{r})}\Big)\Big|\mathbf{r}\Big)=0.\]
This implies that \(K(\mathbf{u},\mathbf{r})=\lambda K(\mathbf{v},\mathbf{r})\) in \(\widetilde{Z}\) for some non-zero \(\lambda\in\mathbb{C}\). Therefore \(K(B\mathbf{r},\mathbf{r})\) can be taken as a basis element of \((\widetilde{Z})_{\mathbf{r}}\) for \(\mathbf{r}\notin G_{B}\). To prove (7), it is easy to verify all the axioms of an EALA except the existence of a suitable bilinear form.
Define a bilinear form by
\[(X(\mathbf{r}),Y(\mathbf{s}))=<X,Y>\delta_{\mathbf{r}+\mathbf{s},\mathbf{0}},\ \text{for all}\ X,Y\in\mathfrak{g},\ \mathbf{r},\mathbf{s}\in\mathbb{Z}^{N},\]
\[(h_{\mathbf{r}},K(B\mathbf{s},\mathbf{s}))=\delta_{\mathbf{r}+\mathbf{s},\mathbf{0}}(B\mathbf{r}|B\mathbf{s})\ \text{for all}\ \mathbf{r},\mathbf{s}\notin G_{B},\]
\[(D(\mathbf{u},0),K(\mathbf{v},0))=(\mathbf{u}|\mathbf{v})\ \text{for all}\ \mathbf{u},\mathbf{v}\in\mathbb{C}^{N},\]
with all other values zero. Note that this form descends from the form on \(\tau(S_{N})\), after noting that \((D(B\mathbf{r},\mathbf{r})|K_{B})=0\). It is easy to verify that the form is invariant. To check that the form is non-degenerate, consider \(\mathbf{r}+\mathbf{s}=\mathbf{0}\) and note that \((h_{\mathbf{r}},K(B\mathbf{s},\mathbf{s}))=-(B\mathbf{r}|B\mathbf{r})\neq 0\) for all \(\mathbf{r}\notin G_{B}\), and \((X(\mathbf{r}),Y(\mathbf{s}))=<X,Y>\neq 0\) for \(X\in\mathfrak{g}_{\alpha}\) and \(Y\in\mathfrak{g}_{-\alpha}\). Part (8) is standard to prove from the definition of the core of an EALA. This completes the proof.

_Remark 3.1_.: Each SSEALA comes with a certain subalgebra of \(DerA\) (see Section 2), denoted \(\widetilde{H}_{B}\). This is called the derivation algebra of the SSEALA.

_Example 3.1_.: Consider the skew symmetric matrix
\[J=\begin{pmatrix}0&I_{m\times m}\\ -I_{m\times m}&0\end{pmatrix}.\]
The Lie algebra \(\tau_{J}\) is an SSEALA, which is the HEALA considered in [R1]. Note that in this example the skew symmetric bilinear form corresponding to \(J\) is non-degenerate and the rank of \(G_{J}\) is \(0\).

_Example 3.2_.: Now consider the matrix
\[J_{1}=\begin{pmatrix}J&\mathbf{e}\\ -\mathbf{e}^{T}&0\end{pmatrix},\ \text{where}\ \mathbf{e}=(1,1,\ldots,1)^{T}\in\mathbb{C}^{2m}.\]
The Lie algebra \(\tau_{J_{1}}\) is another example of an SSEALA. Note that the skew symmetric bilinear form corresponding to \(J_{1}\) is degenerate and the rank of \(G_{J_{1}}\) is \(1\). This SSEALA was constructed in [R1] and named the KEALA.
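The dichotomy between the two examples reflects a general fact: a skew symmetric matrix always has even rank, so the form \((B\mathbf{x}|\mathbf{y})\) can be non-degenerate only when \(N\) is even (this is the content of Lemma 3.1 below). A small numerical illustration (ours, not from the paper):

```python
import numpy as np

# A skew symmetric matrix has even rank, so a non-degenerate skew symmetric
# form exists only in even dimension, matching Examples 3.1 and 3.2.
rng = np.random.default_rng(2)
ranks = []
for n in (3, 4, 5, 6):
    M = rng.standard_normal((n, n))
    B = M - M.T                      # a generic skew symmetric matrix
    ranks.append(np.linalg.matrix_rank(B))

assert all(rank % 2 == 0 for rank in ranks)
```

In particular the \((2m+1)\times(2m+1)\) matrix \(J_{1}\) of Example 3.2 is necessarily degenerate.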
We now recall a result from Goodman and Wallach which we will use later.

**Lemma 3.1**.: _Let \(V\) be a finite dimensional vector space of dimension \(n\) and let \(B\) be a non-degenerate skew symmetric bilinear form on \(V\). Then \(n=2m\) and there exists a basis \(v_{1},v_{2},\ldots,v_{2m}\) of \(V\) such that \((B(v_{i},v_{j}))_{1\leq i,j\leq n}=J\)._

_Remark 3.2_.: Let \(V=\mathbb{C}^{N}\) be the vector space with the non-degenerate bilinear form \((e_{i},e_{j})=\delta_{ij}\), where \(\{e_{1},e_{2},\ldots,e_{N}\}\) is a basis of \(V\). Let \(B\) be a skew symmetric matrix and define \(B(\mathbf{x},\mathbf{y})=(B\mathbf{x}|\mathbf{y})\). Let \(V_{B}=\{\mathbf{v}\in V:B\mathbf{v}=0\}.\) Then \(B(\mathbf{x},\mathbf{y})=0\) if \(\mathbf{x}\) or \(\mathbf{y}\in V_{B}\). Now consider the induced bilinear form \(\overline{B}\) on \(V/V_{B}\); this \(\overline{B}\) is non-degenerate. Hence by the above lemma there exists a basis \(\mathbf{v_{1}},\mathbf{v_{2}},\ldots,\mathbf{v_{2m}}\) such that \((\overline{B}(\mathbf{v_{i}},\mathbf{v_{j}}))_{1\leq i,j\leq 2m}=J\). Let \(k=\dim V_{B}\), so that \(N=2m+k\). Let \(\mathbf{v_{2m+1}},\mathbf{v_{2m+2}},\ldots,\mathbf{v_{N}}\) be a basis of \(V_{B}\). Then with respect to the basis \(\mathbf{v_{1}},\mathbf{v_{2}},\ldots,\mathbf{v_{N}}\) of \(V\) we have
\[B^{\prime}=(B(\mathbf{v_{i}},\mathbf{v_{j}}))_{1\leq i,j\leq N}=\begin{pmatrix}J_{2m\times 2m}&0_{2m\times k}\\ 0_{k\times 2m}&0_{k\times k}\end{pmatrix}.\]
Now it is a general fact that there exists \(A\in GL_{N}(\mathbb{C})\) such that \(B=A^{T}B^{\prime}A\). Unfortunately we cannot find any relation between \(\tau_{B}\) and \(\tau_{B^{\prime}}\) unless \(A\) is an integral matrix.

**Proposition 3.2**.: _Let \(B\) be a skew symmetric matrix in \(\mathfrak{gl}_{N}(\mathbb{C})\) and \(A\in GL_{N}(\mathbb{Z})\) such that \(B=A^{T}B^{\prime}A\) for some \(B^{\prime}\).
Then \(\tau_{B}\simeq\tau_{B^{\prime}}\)._

Proof.: Consider the automorphism \(\Phi_{A}\) of the full toroidal Lie algebra \(\tilde{\tau}\) (see (2.5)) corresponding to the matrix \(A\). Note that \(\Phi_{A}(D(B\mathbf{r},\mathbf{r}))=D((A^{T})^{-1}B\mathbf{r},A\mathbf{r})=D(B^{\prime}A\mathbf{r},A\mathbf{r})\). Since \(A\) is an invertible linear operator, it follows that \(\Phi_{A}(H_{B})=H_{B^{\prime}}.\) Also observe that \(\Phi_{A}(K(\mathbf{u},\mathbf{r}))=K(A\mathbf{u},A\mathbf{r})\) and \((B^{\prime}A\mathbf{u}|A\mathbf{r})=((A^{T})^{-1}B\mathbf{u}|A\mathbf{r})=(A^{T}(A^{T})^{-1}B\mathbf{u}|\mathbf{r})=(B\mathbf{u}|\mathbf{r}).\) Hence \(\Phi_{A}\) maps \(K_{B}\) into \(K_{B^{\prime}}\). Therefore \(\Phi_{A}\) induces an isomorphism between \(\tau_{B}\) and \(\tau_{B^{\prime}}\).

_Remark 3.3_.: For a given skew symmetric matrix \(B\in\mathfrak{gl}_{N}(\mathbb{C})\) it is not easy to find a matrix \(A\in GL_{N}(\mathbb{Z})\) such that \(B=A^{T}B^{\prime}A\) holds. Note that \(A^{T}B^{\prime}A\) is obtained from \(B^{\prime}\) by simultaneous row and column operations. In the case when \(B=\begin{pmatrix}J&0_{2m,1}\\ 0_{1,2m}&0\end{pmatrix}\) and \(B^{\prime}=J_{1}\), the simultaneous row and column operations involve only integer matrices with entries in \(\{1,0,-1\}\). Instead of listing the elementary row and column operations we simply exhibit the matrix \(A\):
\[A=\begin{pmatrix}1&0&\cdots&0&0&\cdots&0&0\\ 0&I_{m-1}&&\vdots&-I_{m-1}&&0\\ \vdots&&&&&&\vdots\\ 0&\cdots&0&0&0&\cdots&0&1\\ -1&0_{m-1}&&0&I_{m-1}&&0\\ \vdots&&&\vdots&&&\vdots\\ -1&-1&\cdots&1&\cdots&1&-1\end{pmatrix}.\]
It is easy to check that \(AJ_{1}A^{T}=B\) and \(A\in GL_{2m+1}(\mathbb{Z})\).

## 4. Classification of irreducible modules for \(\widetilde{H_{B}}\ltimes A\)

For the rest of the paper we take \(N=2m\) (except in Section 6, where we assume \(N=2m+1\)) and \(B\) a non-degenerate skew symmetric matrix.
Consider the action of \(\widetilde{H_{B}}\) on \(A=A_{N}\) given by \[h_{\mathbf{r}}.t^{\mathbf{s}}=(B\mathbf{r}|\mathbf{s})t^{\mathbf{r}+\mathbf{s}}, \tag{4.1}\] \[D(\mathbf{u},0).t^{\mathbf{s}}=(\mathbf{u}|\mathbf{s})t^{\mathbf{s}}, \tag{4.2}\] for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\) and \(\mathbf{u}\in\mathbb{C}^{N}\). This action defines a Lie algebra structure on \(\widetilde{H_{B}}\ltimes A\). In this section we classify irreducible modules for \(\widetilde{H_{B}}\ltimes A\) with finite dimensional weight spaces and use this classification to classify modules for the SSEALA \(\tau_{B}\). **Definition 4.1**.: A module \(V\) for \(\widetilde{H}_{B}\) is said to be a jet module if \(V\) can be extended to a module for \(\widetilde{H}_{B}\ltimes A\) with an associative action of \(A\), in the sense that \(t^{\mathbf{r}}t^{\mathbf{s}}=t^{\mathbf{r}+\mathbf{s}}\) for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\) and \(t^{\mathbf{0}}\) acts as the identity operator on \(V\). Let \(V\) be an irreducible jet module for \(\widetilde{H}_{B}\ltimes A\) with finite dimensional weight spaces with respect to \(H=D\oplus\mathbb{C}\). Choose a weight \(\lambda\in H^{*}\) such that \(V_{\lambda}\neq 0\). Then due to irreducibility of \(V\) we have \(V=\bigoplus_{\mathbf{m}\in\mathbb{Z}^{N}}V_{\mathbf{m}}\), where \(V_{\mathbf{m}}=\{v\in V:D(\mathbf{u},0)v=(\mathbf{u}|\mathbf{m}+\alpha)v\text{ for all }\mathbf{u}\in\mathbb{C}^{N}\}\) and \(\alpha=(\lambda(d_{1}),\lambda(d_{2}),\ldots,\lambda(d_{N}))\in\mathbb{C}^{N}.\) Let \(U\) denote the universal enveloping algebra of \(\widetilde{H}_{B}\ltimes A.\) Let \(L\) be the two sided ideal of \(U\) generated by \(t^{\mathbf{r}}t^{\mathbf{s}}-t^{\mathbf{r}+\mathbf{s}}\) and \(t^{0}-1\). Consider the associative algebra \(U^{\prime}=U/L\). Note that \(U^{\prime}\) is a \(\mathbb{Z}^{N}\) graded algebra; in fact \(U^{\prime}=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}}U^{\prime}_{\mathbf{r}}\), where \(U^{\prime}_{\mathbf{r}}=\{X\in U^{\prime}:[D(\mathbf{u},0),X]=(\mathbf{u}|\mathbf{r})X,\forall\mathbf{u}\in\mathbb{C}^{N}\}\). Obviously \(U^{\prime}_{0}\) is an associative algebra.
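From (4.1) and (4.2) alone one can compute the commutator of the two operators on monomials: \([D(\mathbf{u},0),h_{\mathbf{r}}]\) acts as \((\mathbf{u}|\mathbf{r})h_{\mathbf{r}}\), which is the compatibility one expects in the semidirect product. The following is a minimal numerical sketch of this check; the choice \(N=2\) and the particular skew symmetric \(B\) are assumptions made only for the example.

```python
import numpy as np

# Model the monomial t**s by its exponent vector s; formulas (4.1)-(4.2) send a
# pair (coefficient, exponent) to another such pair.
B = np.array([[0., 1.], [-1., 0.]])   # illustrative skew-symmetric B (N = 2)

def act_h(r, coeff, s):
    """h_r . t**s = (Br|s) t**(r+s), formula (4.1)."""
    return (coeff * (B @ r) @ s, r + s)

def act_D(u, coeff, s):
    """D(u,0) . t**s = (u|s) t**s, formula (4.2)."""
    return (coeff * (u @ s), s)

rng = np.random.default_rng(1)
for _ in range(50):
    r = rng.integers(-4, 5, size=2).astype(float)
    s = rng.integers(-4, 5, size=2).astype(float)
    u = rng.standard_normal(2)
    c_hD, e1 = act_h(r, *act_D(u, 1.0, s))   # h_r D(u,0) . t**s
    c_Dh, e2 = act_D(u, *act_h(r, 1.0, s))   # D(u,0) h_r . t**s
    lhs = c_Dh - c_hD                        # [D(u,0), h_r] . t**s  (coefficient)
    rhs = (u @ r) * ((B @ r) @ s)            # (u|r) h_r . t**s     (coefficient)
    assert np.isclose(lhs, rhs) and np.array_equal(e1, e2)
print("commutator [D(u,0), h_r] acts as (u|r) h_r on all samples")
```

The same computation done symbolically gives \((B\mathbf{r}|\mathbf{s})\{(\mathbf{u}|\mathbf{r}+\mathbf{s})-(\mathbf{u}|\mathbf{s})\}=(\mathbf{u}|\mathbf{r})(B\mathbf{r}|\mathbf{s})\) on each monomial.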
Let \(T(\mathbf{r})=k(-\mathbf{r})h_{\mathbf{r}}-D(B\mathbf{r},0)\), where \(k(\mathbf{r})=t^{\mathbf{r}}\) for all \(\mathbf{r}\in\mathbb{Z}^{N}\). Let \(T=span\{T(\mathbf{r}):\mathbf{r}\in\mathbb{Z}^{N}\}\). Then it is easy to check that \([T(\mathbf{r}),T(\mathbf{s})]=(B\mathbf{r}|\mathbf{s})[T(\mathbf{r}+\mathbf{s})-T(\mathbf{r})-T(\mathbf{s})].\) In particular, \(T\) forms a Lie algebra under the above bracket. **Lemma 4.1**.: \(U(T)=U^{\prime}_{0}\)_._ Proof.: Note that \(U(T)\subseteq U^{\prime}_{0}\) is obvious. Let \(X\in U^{\prime}_{0}\). Then by the PBW theorem, \(X\) is a linear combination of monomials \(k(-\mathbf{r_{1}})\dots k(-\mathbf{r_{l}})h_{\mathbf{s_{1}}}\dots h_{\mathbf{s_{k}}}\), where \(\mathbf{r_{i}},\mathbf{s_{i}}\in\mathbb{Z}^{N}\), \(l,k\in\mathbb{N}\) and \(\sum_{i}\mathbf{r_{i}}=\sum_{i}\mathbf{s_{i}}\). Now using the fact that \([h_{\mathbf{r}},k(\mathbf{s})]=(B\mathbf{r}|\mathbf{s})k(\mathbf{r}+\mathbf{s})\) and \(k(\mathbf{r})k(\mathbf{s})=k(\mathbf{r}+\mathbf{s})\), we see that each monomial can be written as a sum of monomials of the form \(k(-\mathbf{r_{1}})\dots k(-\mathbf{r_{l}})h_{\mathbf{r_{1}}}\dots h_{\mathbf{r_{l}}}\). Hence \(X\in U(T)\). **Lemma 4.2**.: _(1) Each \(V_{\mathbf{r}}\) is an irreducible \(T\)-module. (2) \(V_{\mathbf{r}}\simeq V_{\mathbf{s}}\) as \(T\)-modules. (3) \(k(\mathbf{s}).V_{\mathbf{r}}=V_{\mathbf{r}+\mathbf{s}}\)._ Proof.: Let \(u,v\in V_{\mathbf{r}}\). Then by irreducibility of \(V\) and weight arguments we find \(X\in U^{\prime}_{0}\) such that \(X.u=v\). Hence (1) follows by Lemma 4.1. Note that \(k(\mathbf{s})k(-\mathbf{s})=1\) implies \(k(\mathbf{s})\) is invertible; this proves (3). To prove (2) define a map \(f:V_{\mathbf{r}}\to V_{\mathbf{s}}\) given by \(f(v)=k(\mathbf{s}-\mathbf{r})v\). Now \[f(T(\mathbf{k})v)=k(\mathbf{s}-\mathbf{r})T(\mathbf{k})v=T(\mathbf{k})k(\mathbf{s}-\mathbf{r})v=T(\mathbf{k})f(v),\] since \(k(\mathbf{r})\) commutes with \(T(\mathbf{k})\). Hence \(f\) is a \(T\)-module map. It is easy to see that \(f\) is both injective and surjective.
Now it is clear that \(V\simeq V_{0}\otimes A\), where \(V_{0}\) is a finite dimensional irreducible \(T\)-module. Conversely, given any finite dimensional irreducible \(T\)-module \(V_{0}\) we can define a \(\widetilde{H}_{B}\ltimes A\)-module structure on \(V_{0}\otimes A\). Let \(\boldsymbol{\beta}\in\mathbb{C}^{N}\) and define \[D(B\mathbf{r},\mathbf{r}).v\otimes t^{\mathbf{k}}=(B\mathbf{r}|\mathbf{k}+\boldsymbol{\beta})v\otimes t^{\mathbf{k}+\mathbf{r}}+(T(\mathbf{r})v)\otimes t^{\mathbf{k}+\mathbf{r}}, \tag{4.3}\] \[t^{\mathbf{r}}.v\otimes t^{\mathbf{k}}=v\otimes t^{\mathbf{k}+\mathbf{r}}. \tag{4.4}\] Now we check that the above defined action is a \(\widetilde{H}_{B}\ltimes A\)-module action on \(V_{0}\otimes A\). It is easy to check that \([D(B\mathbf{r},\mathbf{r}),t^{\mathbf{s}}](v\otimes t^{\mathbf{k}})=(B\mathbf{r}|\mathbf{s})t^{\mathbf{r}+\mathbf{s}}(v\otimes t^{\mathbf{k}}).\) We check that \([D(B\mathbf{r},\mathbf{r}),D(B\mathbf{s},\mathbf{s})](v\otimes t^{\mathbf{k}})=(B\mathbf{r}|\mathbf{s})D(B(\mathbf{r}+\mathbf{s}),\mathbf{r}+\mathbf{s})(v\otimes t^{\mathbf{k}}).\) Consider \([D(B\mathbf{r},\mathbf{r})D(B\mathbf{s},\mathbf{s})-D(B\mathbf{s},\mathbf{s})D(B\mathbf{r},\mathbf{r})].v\otimes t^{\mathbf{k}}\) \[=(B\mathbf{s}|\mathbf{k}+\boldsymbol{\beta})\{(B\mathbf{r}|\mathbf{k}+\mathbf{s}+\boldsymbol{\beta})v+T(\mathbf{r})v\}\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}+\{(B\mathbf{r}|\mathbf{k}+\mathbf{s}+\boldsymbol{\beta})T(\mathbf{s})v+T(\mathbf{r})T(\mathbf{s})v\}\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}\] \[-(B\mathbf{r}|\mathbf{k}+\boldsymbol{\beta})\{(B\mathbf{s}|\mathbf{k}+\mathbf{r}+\boldsymbol{\beta})v+T(\mathbf{s})v\}\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}-\{(B\mathbf{s}|\mathbf{k}+\mathbf{r}+\boldsymbol{\beta})T(\mathbf{r})v+T(\mathbf{s})T(\mathbf{r})v\}\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}\] \[=\{(B\mathbf{s}|\mathbf{k}+\boldsymbol{\beta})(B\mathbf{r}|\mathbf{s})-(B\mathbf{r}|\mathbf{k}+\boldsymbol{\beta})(B\mathbf{s}|\mathbf{r})\}v\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}+(B\mathbf{r}|\mathbf{s})\{T(\mathbf{s})+T(\mathbf{r})\}v\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}+[T(\mathbf{r}),T(\mathbf{s})]v\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}\] \[=(B\mathbf{r}|\mathbf{s})(B(\mathbf{r}+\mathbf{s})|\mathbf{k}+\boldsymbol{\beta})v\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}+(B\mathbf{r}|\mathbf{s})T(\mathbf{r}+\mathbf{s})v\otimes t^{\mathbf{k}+\mathbf{r}+\mathbf{s}}\] \[=(B\mathbf{r}|\mathbf{s})D(B(\mathbf{r}+\mathbf{s}),\mathbf{r}+\mathbf{s})(v\otimes t^{\mathbf{k}}).\] Therefore to classify irreducible modules for \(\widetilde{H}_{B}\ltimes A\) with finite dimensional weight spaces, it is sufficient to classify finite dimensional irreducible \(T\)-modules. For this we proceed as in [R4]. **Lemma 4.3**.: _(Proposition 19.1, [JH]) Let \(\phi:\mathfrak{L}\to\mathfrak{gl}(V)\) be a finite dimensional irreducible representation of a Lie algebra \(\mathfrak{L}\). Then \(\phi(\mathfrak{L})\) is a reductive Lie algebra with at most one dimensional center._ For \(q\in\mathbb{N}\), define \[T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=T(\mathbf{s})-\sum_{i}T(\mathbf{s}+\mathbf{r_{i}})+\sum_{i<j}T(\mathbf{s}+\mathbf{r_{i}}+\mathbf{r_{j}})-\cdots+(-1)^{q}T(\mathbf{s}+\mathbf{r_{1}}+\cdots+\mathbf{r_{q}}).\] Let \(I_{q}=span\{T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}}):\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}}\in\mathbb{Z}^{N}\}\). **Lemma 4.4**.: _(1) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=T_{q}(\mathbf{s},\mathbf{r_{\sigma(1)}},\ldots,\mathbf{r_{\sigma(q)}}),\) for any permutation \(\sigma\) on \(q\) letters.
(2) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=T_{q-1}(\mathbf{s},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{j}}},\ldots,\mathbf{r_{q}})-T_{q-1}(\mathbf{s}+\mathbf{r_{j}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{j}}},\ldots,\mathbf{r_{q}})\), where \(T_{q-1}(\mathbf{s},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{j}}},\ldots,\mathbf{r_{q}})\) indicates that \(\mathbf{r_{j}}\) does not appear in this expression. (3) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=0\) if \(\mathbf{r_{j}}=\mathbf{0}\) for some \(1\leq j\leq q\). (4) \(I_{q+1}\subseteq I_{q}\), for all \(q\in\mathbb{N}\). (5) \(I_{1}=T\). (6) \([T(\mathbf{r}),T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})]=-(B\mathbf{r}|\mathbf{s})T_{q+1}(\mathbf{s},\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})+\)_ \[\sum_{i=1}^{q}(B\mathbf{r}|\mathbf{r_{i}})T_{q}(\mathbf{s}+\mathbf{r_{i}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q}},\mathbf{r})\] _for all \(q\geq 2\). (7) \(I_{q}\) is an ideal of \(T\) for all \(q\geq 1\). (8) \([I_{p},I_{q}]\subseteq I_{p+q-2}\) for all \(p+q\geq 3\) and \([I_{1},I_{1}]\subseteq I_{1}\)._ Proof.: Note that (1) and (5) follow from the definition. To prove (2), collect all the terms in which \(\mathbf{r_{j}}\) does not appear in the expression of \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\). It is easy to see that these terms sum to \(T_{q-1}(\mathbf{s},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{j}}},\ldots,\mathbf{r_{q}})\). On the other hand, the sum of the remaining terms of \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\) is \(-T_{q-1}(\mathbf{s}+\mathbf{r_{j}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{j}}},\ldots,\mathbf{r_{q}})\). This proves (2), and (3), (4) follow immediately from (2). We prove (6) by the induction principle.
Note that \[[T(\mathbf{r}),T_{2}(\mathbf{s},\mathbf{r_{1}},\mathbf{r_{2}})]=[T(\mathbf{r}),T(\mathbf{s})-T(\mathbf{s}+\mathbf{r_{1}})-T(\mathbf{s}+\mathbf{r_{2}})+T(\mathbf{s}+\mathbf{r_{1}}+\mathbf{r_{2}})]\] \[=(B\mathbf{r}|\mathbf{s})\{T(\mathbf{r}+\mathbf{s})-T(\mathbf{r})-T(\mathbf{s})\}-\sum_{i=1}^{2}(B\mathbf{r}|\mathbf{r_{i}}+\mathbf{s})\{T(\mathbf{r}+\mathbf{s}+\mathbf{r_{i}})-T(\mathbf{r})-T(\mathbf{s}+\mathbf{r_{i}})\}+\] \[(B\mathbf{r}|\mathbf{s}+\mathbf{r_{1}}+\mathbf{r_{2}})\{T(\mathbf{r}+\mathbf{s}+\mathbf{r_{1}}+\mathbf{r_{2}})-T(\mathbf{r})-T(\mathbf{s}+\mathbf{r_{1}}+\mathbf{r_{2}})\}\] \[=-(B\mathbf{r}|\mathbf{s})T_{3}(\mathbf{s},\mathbf{r},\mathbf{r_{1}},\mathbf{r_{2}})+\sum_{i=1}^{2}(B\mathbf{r}|\mathbf{r_{i}})T_{2}(\mathbf{s}+\mathbf{r_{i}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{i}}},\ldots,\mathbf{r_{2}},\mathbf{r}).\] Hence the claim is true for \(q=2\). Now assume that the claim is true for \(q\). Then \([T(\mathbf{r}),T_{q+1}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q+1}})]\) \(=[T(\mathbf{r}),T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})-T_{q}(\mathbf{s}+\mathbf{r_{q+1}},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})]\) \(=-(B\mathbf{r}|\mathbf{s})T_{q+1}(\mathbf{s},\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})+\sum_{i=1}^{q}(B\mathbf{r}|\mathbf{r_{i}})T_{q}(\mathbf{s}+\mathbf{r_{i}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q}},\mathbf{r})\) \(+(B\mathbf{r}|\mathbf{s}+\mathbf{r_{q+1}})T_{q+1}(\mathbf{s}+\mathbf{r_{q+1}},\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})-\sum_{i=1}^{q}(B\mathbf{r}|\mathbf{r_{i}})T_{q}(\mathbf{s}+\mathbf{r_{q+1}}+\mathbf{r_{i}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q}},\mathbf{r})\) \(=-(B\mathbf{r}|\mathbf{s})T_{q+2}(\mathbf{s},\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q+1}})+\sum_{i=1}^{q+1}(B\mathbf{r}|\mathbf{r_{i}})T_{q+1}(\mathbf{s}+\mathbf{r_{i}},\mathbf{r_{1}},\ldots,\widehat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q+1}},\mathbf{r})\). Therefore by the induction principle the claim is true for all \(q\geq 2\). Now using (2), (5), (6) we have that \(I_{q}\) is an ideal of \(T\). We prove (8) in the appendix.
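The identities (2) and (3) of Lemma 4.4 depend only on the inclusion-exclusion shape of \(T_{q}\), so they can be sanity-checked on formal symbols, identifying each \(T(\mathbf{s})\) with its base point \(\mathbf{s}\). The representation of elements as coefficient dictionaries below is an assumption made for this sketch.

```python
from itertools import combinations
from collections import defaultdict

def T_q(s, rs):
    """Inclusion-exclusion element: sum over subsets S of {r_1,...,r_q} of
    (-1)^|S| T(s + sum(S)), stored as {base point: coefficient}."""
    out = defaultdict(int)
    for k in range(len(rs) + 1):
        for subset in combinations(rs, k):
            pt = tuple(a + sum(r[i] for r in subset) for i, a in enumerate(s))
            out[pt] += (-1) ** k
    return {p: c for p, c in out.items() if c != 0}

def minus(x, y):
    """Formal difference of two coefficient dictionaries."""
    out = defaultdict(int, x)
    for p, c in y.items():
        out[p] -= c
    return {p: c for p, c in out.items() if c != 0}

s = (1, -2)
rs = [(2, 0), (0, 3), (-1, 1)]
shifted = tuple(a + b for a, b in zip(s, rs[0]))
# Part (2): removing r_1 gives T_q = T_{q-1}(s, ...) - T_{q-1}(s + r_1, ...).
assert T_q(s, rs) == minus(T_q(s, rs[1:]), T_q(shifted, rs[1:]))
# Part (3): T_q vanishes when some r_j = 0.
assert T_q(s, [(0, 0), (2, 1)]) == {}
print("parts (2) and (3) verified on formal symbols")
```

Part (6), by contrast, genuinely uses the bracket of \(T\) and is what the induction above establishes.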
\(\Box\) **Lemma 4.5**.: _(1) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\notin I_{q+1}\)._ _(2) \(\bigcap_{q\geq 1}I_{q}=\{0\}\)._ _(3) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\mathbf{r_{2}},\ldots,\mathbf{r_{q}})+T_{q}(\mathbf{s},\mathbf{n_{1}},\mathbf{r_{2}},\ldots,\mathbf{r_{q}})=T_{q}(\mathbf{s},\mathbf{r_{1}}+\mathbf{n_{1}},\mathbf{r_{2}},\ldots,\mathbf{r_{q}})\) mod \(I_{q+1}\)._ _(4) \(T_{q}(\mathbf{s},-\mathbf{r_{1}},\mathbf{r_{2}},\ldots,\mathbf{r_{q}})=-T_{q}(\mathbf{s}-\mathbf{r_{1}},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\)._ _Proof._ Define a map \(\eta:T\to A\) by \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\mapsto t^{\mathbf{s}}\prod_{i=1}^{q}(1-t^{\mathbf{r_{i}}})=P_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\). Note that \(\eta(T_{q+1}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q+1}}))=\eta(T_{q}(\mathbf{s},\mathbf{r_{2}},\ldots,\mathbf{r_{q+1}})-T_{q}(\mathbf{s}+\mathbf{r_{1}},\mathbf{r_{2}},\ldots,\mathbf{r_{q+1}}))=t^{\mathbf{s}}\prod_{i=2}^{q+1}(1-t^{\mathbf{r_{i}}})-t^{\mathbf{s}+\mathbf{r_{1}}}\prod_{i=2}^{q+1}(1-t^{\mathbf{r_{i}}})=t^{\mathbf{s}}\prod_{i=1}^{q+1}(1-t^{\mathbf{r_{i}}})\). This proves that the map is well defined. Now using the method of Claims 1 and 2 of Lemma 3.5 ([R4]) we see that \(P_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\) cannot be written as a linear combination of elements \(P_{q+1}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q+1}})\); this proves (1). To prove (2), consider \(J_{q}=span\{t^{\mathbf{r}}(1-t^{\mathbf{r_{1}}})\ldots(1-t^{\mathbf{r_{q}}}):\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q}}\in\mathbb{Z}^{N}\}.\) Clearly \(J_{q}\) is an ideal in \(A\) and \(J_{q}=J_{1}^{q}\). Since \(\eta(I_{q})=J_{q}\), to prove (2) it is sufficient to prove that \(\cap_{q\geq 1}J_{q}=0\). Let \(f\in\cap_{q\geq 1}J_{q}\). Then \(f=\sum_{i=1}^{a}c_{\mathbf{m_{i}}}t^{\mathbf{m_{i}}}\), where \(\mathbf{m_{i}}=(m_{i}^{1},\ldots,m_{i}^{N})\). We may multiply \(f\) by a monomial so that the components of \(\mathbf{m_{i}}\), for \(1\leq i\leq a\), are non-negative. Choose \(l_{1}=max\{m_{i}^{1}:1\leq i\leq a\}\). Let us assume that \(m_{1}^{1}=\cdots=m_{s}^{1}=l_{1}\).
Then \((\frac{d}{dt_{1}})^{l_{1}}f=l_{1}!\sum_{i=1}^{s}c_{\mathbf{m_{i}}}t_{2}^{m_{i}^{2}}\ldots t_{N}^{m_{i}^{N}}\). Continuing this process we can find \(l_{2},\ldots,l_{N}\) with the above properties. Now applying the operator \(d=(\frac{d}{dt_{1}})^{l_{1}}\ldots(\frac{d}{dt_{N}})^{l_{N}}\) to \(f\), we get \(df\neq 0\). Let \(\sum_{i=1}^{N}l_{i}=L.\) Now \(f\in J_{L+1}\), hence every non-zero component of \(df\) has a factor of the form \((1-t^{\mathbf{m}})\). Therefore \(df(1,1,\ldots,1)=0\), a contradiction. (3) and (4) follow on similar lines as (2), (3) of Lemma 3.5 ([R4]). \(\square\) **Lemma 4.6**.: _(1) \(T_{q}(\mathbf{s},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=T_{q}(\mathbf{r},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\) mod \(I_{q+1}\). (2) \(T_{q}(0,\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=-T_{q}(0,-\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\) mod \(I_{q+1}\). (3) dim \(I_{q}/I_{q+1}\leq N^{q}\). (4) \(I_{q}\) is a co-finite ideal in \(T\)._ _Proof._ The proof is similar to Lemma 3.6 of [R4]. \(\square\) **Lemma 4.7**.: _Let \(J\) be a co-finite ideal in \(T\). Then \(J\) contains \(I_{k}\) for some large \(k.\)_ _Proof._ Consider the map \(\pi:T\to T/J\). Note that the image of \(I_{k}\) under \(\pi\) is \(I_{k}/J\cap I_{k}\). Consider the family \(\{I_{k}/J\cap I_{k}\}\), a decreasing sequence of finite dimensional algebras. Thus there exists a \(k\in\mathbb{N}\) such that dim \(I_{k}/J\cap I_{k}=\) dim \(I_{k+i}/J\cap I_{k+i}\) for all \(i\geq 1\). To prove the lemma it is sufficient to prove that \(I_{k}/J\cap I_{k}=0\). **Claim:**\(I_{k}/J\cap I_{k}=0\). Let \(0\neq x_{k}\in I_{k}/J\cap I_{k}\).
Since dim \(I_{k}/J\cap I_{k}=\) dim \(I_{k+i}/J\cap I_{k+i}\) and \(\{I_{k}/J\cap I_{k}\}\) is a decreasing family, there exists \(x_{k+i}\in I_{k+i}/J\cap I_{k+i}\) such that \(x_{k}=x_{k+i}\) mod \(I_{k}\cap J\), i.e. \(x_{k}-x_{k+i}=0\) in \(I_{k}/I_{k}\cap J.\) This means that \(x_{k}-x_{k+i}=0\) in \(I_{k+1}/I_{k+1}\cap J.\) Hence there exists \(y_{k+i}\in I_{k+i}\cap J\) such that \(x_{k}-x_{k+i}=y_{k+i}\), i.e. \(x_{k}\in I_{k+i}\) for all \(i\geq 1\). This proves that \(x_{k}\in\cap_{q\geq 1}I_{q}\), a contradiction to Lemma 4.5(2). \(\square\) **Proposition 4.1**.: _Let \(V_{0}\) be a finite dimensional irreducible module for \(T\). Then \(I_{3}\) acts trivially on \(V_{0}\)._ _Proof._ Let \(\phi:T\rightarrow\mathfrak{gl}(V_{0})\) be the representation. Then by Lemma 4.7, \(I_{k+1}\subseteq ker\phi\) for some large \(k\). Now since by Lemma 4.4(8) we have \([I_{k},I_{k}]\subseteq I_{2k-2}\) and \(I_{2k-2}\subseteq ker\phi\) (as \(2k-2\geq k+1\) for large \(k\)), it follows that \(\phi(I_{k})\) is an abelian ideal. Note that Lemma 4.3 implies that \(\phi(I_{k})\) is central and hence \(\phi([T,I_{k}])\) acts trivially on \(V_{0}\). Lemma 4.4(6) gives \([T(\mathbf{m}),T_{k}(\mathbf{r},\mathbf{s},\ldots,\mathbf{s})]=-(B\mathbf{m}|\mathbf{r})T_{k+1}(\mathbf{r},\mathbf{m},\mathbf{s},\ldots,\mathbf{s})+k(B\mathbf{m}|\mathbf{s})T_{k}(\mathbf{r}+\mathbf{s},\mathbf{s},\ldots,\mathbf{s},\mathbf{m}).\) Since \(I_{k+1}\) and \([T,I_{k}]\) are contained in \(ker\phi\), from the above formula we have \(T_{k}(\mathbf{r}+\mathbf{s},\mathbf{s},\ldots,\mathbf{s},\mathbf{m})\in ker\phi\) when \((B\mathbf{m}|\mathbf{s})\neq 0\). Hence we have \(T_{k}(\mathbf{a},\mathbf{m},\mathbf{s},\ldots,\mathbf{s})\in ker\phi\) when \((B\mathbf{m}|\mathbf{s})\neq 0\), for arbitrary \(\mathbf{a}\in\mathbb{Z}^{N}\). If \((B\mathbf{m}|\mathbf{s})=0\), we consider \(\mathbf{b}\in\mathbb{Z}^{N}\) such that \((B\mathbf{b}|\mathbf{s})\neq 0\); this is possible as \(B\) is non-degenerate.
Therefore we have \(T_{k}(\mathbf{a},\mathbf{b},\mathbf{s},\ldots,\mathbf{s})\in ker\phi\) and \(T_{k}(\mathbf{a},\mathbf{m}-\mathbf{b},\mathbf{s},\ldots,\mathbf{s})\in ker\phi\), as \((B(\mathbf{m}-\mathbf{b})|\mathbf{s})\neq 0\). Hence by Lemma 4.5(3) we have \(T_{k}(\mathbf{a},\mathbf{m},\mathbf{s},\ldots,\mathbf{s})\in ker\phi\) for all \(\mathbf{a},\mathbf{m}\in\mathbb{Z}^{N}\). Continuing this process we have \(I_{k}\subseteq ker\phi\). Now instead of \(I_{k+1}\) we consider \(I_{k}\subseteq ker\phi\) and repeat the above process to deduce \(I_{3}\subseteq ker\phi\) (see Lemma 4.4(8) for why it fails for \(I_{2}\)). **Lemma 4.8**.: _dim \(T/I_{3}=4m^{2}+2m\)._ Proof.: We know that \(I_{3}\subseteq I_{2}\subseteq I_{1}=T\). It is clear that dim \(T/I_{3}\) = dim \(T/I_{2}\) + dim \(I_{2}/I_{3}\). Note that by Lemma 4.5(3) and Lemma 4.6(2), (3) we have that \(T_{2}(0,\mathbf{r},\mathbf{s})\) is linear in both \(\mathbf{r},\mathbf{s}\) mod \(I_{3}\). Also \(I_{2}\) is spanned by \(T_{2}(0,\mathbf{r},\mathbf{s})\) mod \(I_{3}\), hence \(\{T_{2}(0,\mathbf{e_{i}},\mathbf{e_{j}}):1\leq i,j\leq 2m\}\) forms a basis for \(I_{2}/I_{3}\). By a similar argument, \(I_{1}\) is spanned by \(T_{1}(0,\mathbf{r})\) mod \(I_{2}\) and \(T_{1}(0,\mathbf{r})\) is linear in \(\mathbf{r}\) mod \(I_{2}\). Hence \(\{T_{1}(0,\mathbf{e_{i}}):1\leq i\leq 2m\}\) forms a basis for \(I_{1}/I_{2}\). This proves the lemma. For a skew symmetric matrix \(B\) define \(\mathfrak{G}_{B}=\{X\in\mathfrak{gl}_{N}(\mathbb{C}):BX=-X^{T}B\}\). It is easy to check that \(\mathfrak{G}_{B}\) is a Lie algebra and \(\mathbf{r}(B\mathbf{r})^{T}\in\mathfrak{G}_{B}\). Note that if \(B\) can be written as \(B=A^{T}B^{\prime}A\) for some \(A\in GL_{N}(\mathbb{C})\), then \(B^{\prime}\) is also a skew symmetric matrix and \(A\mathfrak{G}_{B}A^{-1}=\mathfrak{G}_{B^{\prime}}\). Let \[J=\begin{pmatrix}0&I_{m\times m}\\ -I_{m\times m}&0\end{pmatrix}.\] Then \(J\) is a skew symmetric matrix with \(N=2m\).
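Both the membership \(\mathbf{r}(B\mathbf{r})^{T}\in\mathfrak{G}_{B}\) and the size of the span of these elements can be checked numerically for \(B=J\). The sketch below (with \(m=3\), an arbitrary choice made for the example) recovers the dimension \(2m^{2}+m\) of \(\mathfrak{sp}_{2m}\) that appears in the next lemmas.

```python
import numpy as np

m = 3
N = 2 * m
J = np.block([[np.zeros((m, m)), np.eye(m)],
              [-np.eye(m), np.zeros((m, m))]])

rng = np.random.default_rng(2)
samples = []
for _ in range(100):
    r = rng.integers(-3, 4, size=N).astype(float)
    X = np.outer(r, J @ r)              # X = r (Jr)^T
    # Membership in G_J: J X = -X^T J.
    assert np.allclose(J @ X, -X.T @ J)
    samples.append(X.ravel())
# The span of the elements r (Jr)^T has dimension 2m^2 + m (= dim sp_{2m}).
rank = np.linalg.matrix_rank(np.array(samples))
assert rank == 2 * m * m + m
print("all samples lie in G_J; span has dimension", rank)
```

The rank computation is a numerical stand-in for the polarization argument: \((\mathbf{r}+\mathbf{s})(B(\mathbf{r}+\mathbf{s}))^{T}-\mathbf{r}(B\mathbf{r})^{T}-\mathbf{s}(B\mathbf{s})^{T}=\mathbf{r}(B\mathbf{s})^{T}+\mathbf{s}(B\mathbf{r})^{T}\).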
It is well known that \(\mathfrak{G}_{J}\simeq\mathfrak{sp}_{2m}\) and the dimension of \(\mathfrak{G}_{J}\) is \(2m^{2}+m\). Consider a Lie algebra homomorphism \(\psi_{B}:T\rightarrow\mathfrak{G}_{B}\) given by \[\psi_{B}(T(\mathbf{r}))=\mathbf{r}(B\mathbf{r})^{T},\] where we treat \(\mathbf{r}\) as a column vector, so that \((B\mathbf{r})^{T}\) is a row vector, for all \(\mathbf{r}\in\mathbb{Z}^{N}\). **Lemma 4.9**.: _Let \(B=J\), \(T_{J}=span\{B_{\mathbf{r}}:\mathbf{r}\in\mathbb{Z}^{2m}\}\), where \(B_{\mathbf{r}}=k(-\mathbf{r})D(J\mathbf{r},\mathbf{r})-D(J\mathbf{r},0)\). Then \(\psi_{J}:T_{J}\rightarrow\mathfrak{sp}_{2m}\) is a surjective Lie algebra homomorphism._ Proof.: Let \(\overline{\mathbf{r}}=J\mathbf{r}\) and note that \(\mathbf{r}\overline{\mathbf{r}}^{T}\in\mathfrak{sp}_{2m}.\) For \(1\leq i\leq m\), take \(\mathbf{r}=\mathbf{e_{i}}\); then \(\mathbf{r}\overline{\mathbf{r}}^{T}=-\mathbf{e_{i}}\mathbf{e_{m+i}}^{T}=-E_{i,m+i}\). Again for \(\mathbf{r}=\mathbf{e_{m+i}}\), \(\mathbf{r}\overline{\mathbf{r}}^{T}=E_{m+i,i}\). It is easy to see that \(\mathbf{r}\overline{\mathbf{s}}^{T}+\mathbf{s}\overline{\mathbf{r}}^{T}\in\mathfrak{sp}_{2m}\). Now we construct a table from which it follows that \(\psi_{J}\) is surjective (see [JH], Section 1.2 for a basis of \(\mathfrak{sp}_{2m}\)). \begin{tabular}{|c|c|c|} \hline **r** & **s** & \(\mathbf{r\bar{s}^{T}+s\bar{r}^{T}}\) \\ \hline \(\mathbf{e_{i}}\) & \(\mathbf{e_{j}}\) & \(-(E_{i,m+j}+E_{j,m+i})\) \\ \(\mathbf{e_{i}}\) & \(\mathbf{e_{m+j}}\) & \(E_{i,j}-E_{m+j,m+i}\) \\ \(\mathbf{e_{m+i}}\) & \(\mathbf{e_{j}}\) & \(-E_{m+i,m+j}+E_{j,i}\) \\ \(\mathbf{e_{m+i}}\) & \(\mathbf{e_{m+j}}\) & \(E_{m+i,j}+E_{m+j,i}\) \\ \hline \end{tabular} In the above table \(i,j\) run from \(1\) to \(m\). **Lemma 4.10**.: \(\psi_{B}\) _is a surjective map._ Proof.: From Lemma 3.1 we have \(B=A^{T}JA\) for some \(A\in GL_{N}(\mathbb{C})\).
Also we know that \(A\mathfrak{G}_{B}A^{-1}=\mathfrak{G}_{J}.\) Note that \(\mathfrak{G}_{J}=span\{\mathbf{r}(J\mathbf{r})^{T}:\mathbf{r}\in\mathbb{Z}^{N}\}=span\{\mathbf{r}(J\mathbf{r})^{T}:\mathbf{r}\in\mathbb{C}^{N}\}=span\{A\mathbf{r}(JA\mathbf{r})^{T}:\mathbf{r}\in\mathbb{C}^{N}\},\) for some fixed \(A\in GL_{N}(\mathbb{C})\). Now consider \(\mathbf{r}(B\mathbf{r})^{T}=\mathbf{r}(A^{T}JA\mathbf{r})^{T}=\mathbf{r}\mathbf{r}^{T}A^{T}J^{T}A=\mathbf{r}(JA\mathbf{r})^{T}A=A^{-1}(A\mathbf{r})(JA\mathbf{r})^{T}A.\) Therefore we have \(span\{\mathbf{r}(B\mathbf{r})^{T}:\mathbf{r}\in\mathbb{Z}^{N}\}=A^{-1}span\{A\mathbf{r}(JA\mathbf{r})^{T}:\mathbf{r}\in\mathbb{Z}^{N}\}A=A^{-1}\mathfrak{G}_{J}A=\mathfrak{G}_{B}.\) This completes the proof. **Lemma 4.11**.: \(ker\psi_{B}\) _is a central ideal modulo \(I_{3}\)._ Proof.: It is easy to see that \(ker\psi_{B}\) is an ideal and \(I_{3}\subseteq ker\psi_{B}\). Let \(X=\sum a_{\mathbf{r}}T(\mathbf{r})\in ker\psi_{B}\). Then \(\sum a_{\mathbf{r}}\mathbf{r}(B\mathbf{r})^{T}=0.\) Hence for any \(\mathbf{k}=(k_{1},\ldots,k_{N})\) we have \(\sum a_{\mathbf{r}}\mathbf{r}(B\mathbf{r})^{T}\mathbf{k}=\sum a_{\mathbf{r}}(B\mathbf{r}|\mathbf{k})\mathbf{r}=0\). This implies that \(\sum a_{\mathbf{r}}(B\mathbf{r}|\mathbf{k})r_{i}=0\), for \(1\leq i\leq N\). Now consider \[[T(\mathbf{k}),X]=\sum a_{\mathbf{r}}[T(\mathbf{k}),T(\mathbf{r})]=\sum a_{\mathbf{r}}(B\mathbf{k}|\mathbf{r})[T(\mathbf{r}+\mathbf{k})-T(\mathbf{k})-T(\mathbf{r})]\] \[=\sum a_{\mathbf{r}}(B\mathbf{k}|\mathbf{r})T_{2}(0,\mathbf{k},\mathbf{r})\,=\,\sum a_{\mathbf{r}}(B\mathbf{k}|\mathbf{r})\sum_{i=1}^{N}r_{i}T_{2}(0,\mathbf{k},\mathbf{e_{i}})=0,\text{ as }\sum a_{\mathbf{r}}(B\mathbf{r}|\mathbf{k})r_{i}=0,\] for \(1\leq i\leq N\). Note that the 4th equality follows from Lemma 4.5(3) and the fact that \(I_{3}\subseteq ker\psi_{B}\).
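That \(\psi_{B}\) is indeed a Lie algebra homomorphism amounts to the matrix identity \([\mathbf{r}(B\mathbf{r})^{T},\mathbf{s}(B\mathbf{s})^{T}]=(B\mathbf{r}|\mathbf{s})[(\mathbf{r}+\mathbf{s})(B(\mathbf{r}+\mathbf{s}))^{T}-\mathbf{r}(B\mathbf{r})^{T}-\mathbf{s}(B\mathbf{s})^{T}]\), mirroring the bracket on \(T\). A numerical sketch, in which the random skew symmetric \(B\) with \(N=4\) is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.integers(-2, 3, size=(4, 4)).astype(float)
B = M - M.T                          # an arbitrary skew-symmetric B, N = 4

def psi(r):
    """psi_B(T(r)) = r (Br)^T."""
    return np.outer(r, B @ r)

for _ in range(30):
    r = rng.integers(-4, 5, size=4).astype(float)
    s = rng.integers(-4, 5, size=4).astype(float)
    lhs = psi(r) @ psi(s) - psi(s) @ psi(r)          # [psi(T(r)), psi(T(s))]
    rhs = ((B @ r) @ s) * (psi(r + s) - psi(r) - psi(s))
    assert np.allclose(lhs, rhs)     # matches (Br|s)[T(r+s) - T(r) - T(s)]
print("psi_B respects the bracket on all samples")
```

The check succeeds because \((B\mathbf{r})^{T}\mathbf{s}=(B\mathbf{r}|\mathbf{s})\) collapses each matrix product to a scalar multiple of a rank-one matrix.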
Note that the dimension of \(ker\psi_{B}/I_{3}\) is \(2m^{2}+m\), which follows from Lemma 4.8, Lemma 4.10 and the fact that the dimension of \(\mathfrak{sp}_{2m}\) is \(2m^{2}+m\). Now combining the results of Proposition 4.1 and Lemmas 4.9, 4.10, 4.11 we have the following theorem. **Theorem 4.1**.: _(1) Let \(V\) be an irreducible jet module for \(\widetilde{H_{B}}\) with finite dimensional weight spaces. Then \(V\simeq V_{0}\otimes A\), where \(V_{0}\) is an irreducible module for \(T\). (2) Let \(V_{0}\) be a finite dimensional irreducible module for \(T\). Then \(V_{0}\) is an irreducible module for \(\mathfrak{sp}_{2m}\). Conversely, any finite dimensional irreducible module for \(\mathfrak{sp}_{2m}\) can be extended to an irreducible module for \(T\), where \(ker\psi_{B}\) acts by scalars._ ## 5. Level zero modules for SSEALA In this section we classify level zero irreducible integrable modules for \(\tau_{B}\) with finite dimensional weight spaces. Consider the triangular decomposition of \(\tau_{B}\) given by \(\tau_{B}^{+}=span\{X_{\alpha}(\mathbf{r}):\alpha\in\Delta^{+},\mathbf{r}\in\mathbb{Z}^{N}\}\), \(\tau_{B}^{-}=span\{X_{\alpha}(\mathbf{r}):\alpha\in\Delta^{-},\mathbf{r}\in\mathbb{Z}^{N}\}\), and \(\tau_{B}^{0}=span\{h(\mathbf{r}),K(B\mathbf{r},\mathbf{r}),K(\mathbf{u},\mathbf{0}),D(\mathbf{u},\mathbf{0}),D(B\mathbf{r},\mathbf{r}):h\in\mathfrak{h},\mathbf{r}\in\mathbb{Z}^{N},\mathbf{u}\in\mathbb{C}^{N}\}\). Throughout this section we fix an irreducible integrable module \(V\) for \(\tau_{B}\) with finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\). **Theorem 5.1**.: _Let \(V\) be an irreducible integrable module for \(\tau_{B}\) with finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\). Suppose all \(K_{i}\) \((1\leq i\leq N)\) act trivially on \(V\).
Then there exists a weight vector \(v_{0}\) such that \(\tau_{B}{}^{+}.v_{0}=0\)._ Proof.: First we apply an argument similar to that in [R4] to find a weight \(\mu\) of \(V\) such that \(V_{\mu+\alpha}=0\) for all \(\alpha\in\Delta^{+}\). We claim that \(V_{\mu+\alpha+\delta_{\mathbf{m}}}=0\) for all \(\alpha\in\Delta^{+},\mathbf{m}\in\mathbb{Z}^{N}\). If not, suppose \(V_{\mu+\alpha+\delta_{\mathbf{r}}}\neq 0\) for some \(\alpha\in\Delta^{+},\mathbf{r}\in\mathbb{Z}^{N}\). **Claim :**\(V_{\lambda+\beta+\delta_{\mathbf{s}}}=0\) for all \(\beta\in\Delta^{+},\mathbf{s}\in\mathbb{Z}^{N}\), where \(\lambda=\mu+\alpha+\delta_{\mathbf{r}}\). Suppose this is also not true; then \(V_{\lambda+\beta+\delta_{\mathbf{s}}}\neq 0\) for some \(\beta\in\Delta^{+},\mathbf{s}\in\mathbb{Z}^{N}\). Since \(\alpha,\beta\in\Delta^{+}\), either \(<\alpha+\beta,\alpha>>0\) or \(<\alpha+\beta,\beta>>0\). Let \(<\alpha+\beta,\alpha>>0\). Then we have \(<\lambda+\beta+\delta_{\mathbf{s}},\alpha+\delta_{\mathbf{s}}+\delta_{\mathbf{r}}>=<\mu+\alpha+\beta,\alpha>>0\). Hence by Lemma 2.1, \(V_{\mu+\beta}\neq 0\), a contradiction. This completes the proof. Let \(V^{+}=\{v\in V:\tau_{B}^{+}.v=0\}\). Then by Theorem 5.1 we have \(V^{+}\neq 0\). It follows by a weight argument and the PBW theorem that \(V^{+}\) is an irreducible \(\tau_{B}^{0}\)-module. Note that \(\mathfrak{h}\) is central in \(\tau_{B}^{0}\) and hence acts by a single linear functional (say \(\bar{\lambda}\)) on \(V^{+}\). By Lemma 2.1 and Theorem 5.1, \(\bar{\lambda}\) is a dominant integral weight. Now fix a weight vector \(v\in V^{+}\) of weight \(\lambda\). Then all the weights of \(V^{+}\) lie in the set \(\{\bar{\lambda}+\delta_{\mathbf{r}}+\sum_{i=1}^{N}g_{i}\delta_{i}:\mathbf{r}\in\mathbb{Z}^{N}\}\) for some \(\mathbf{g}=(g_{1},\ldots,g_{N})\in\mathbb{C}^{N}\). _Remark 5.1_.: We assume that \(\mathfrak{g}\otimes A\) acts non-trivially on \(V\). It is easy to see that this condition is equivalent to saying that \(\bar{\lambda}\neq 0\).
If \(\mathfrak{g}\otimes A\) acts trivially on \(V\), then \(V\) becomes an irreducible module for \(\widetilde{H}_{B}\), and we have no classification result for irreducible modules of \(\widetilde{H}_{B}\). **Proposition 5.1**.: _(1) Weight spaces of \(V^{+}\) are uniformly bounded. (2) \(\widetilde{Z}\) acts trivially on \(V^{+}\)._ Proof.: Let \(\lambda\) be a weight of \(V^{+}\). Then we know that all the weights are given by \(\lambda_{\mathbf{r}}=\bar{\lambda}+\delta_{\mathbf{r}}+\sum_{i=1}^{N}g_{i}\delta_{i}\), for all \(\mathbf{r}\in\mathbb{Z}^{N}\). Define \(\lambda^{0}=min\{\lambda(h):\lambda(h)>0\}\). Note that by Remark 5.1 we get \(\lambda^{0}\in\mathbb{N}.\) Now using an argument similar to [[CP1], Lemma 3.1], together with the fact that the \(K_{i}\) act trivially, we have that for any \(\lambda_{\mathbf{r}}\) there exists an \(\omega_{\mathbf{r}}\in\Omega\) such that \(\omega_{\mathbf{r}}(\lambda_{\mathbf{r}})=\lambda_{\mathbf{s}}\) and \(0\leq s_{i}<\lambda^{0}\) for \(1\leq i\leq N\). Therefore Lemma 2.1 shows that the dimensions of the weight spaces of \(V^{+}\) are bounded by \(\max\{dimV^{+}_{\lambda_{\mathbf{s}}}:0\leq s_{i}<\lambda^{0}\}\), which proves (1). Let \(M=\bigoplus_{0\leq s_{i}<\lambda^{0}}V^{+}_{\lambda_{\mathbf{s}}}\) and \(\tau_{N}=\mathfrak{g}\otimes A\oplus\widetilde{Z}\bigoplus_{i=1}^{N}\mathbb{C}d_{i}\). Note that \(\tau_{N}\) is a quotient of the \(N\)-variable toroidal Lie algebra. It is also clear from the irreducibility of \(V\) that any \(\tau_{N}\) submodule of \(V\) intersecting \(V^{+}\) non-trivially is generated by elements of \(V^{+}\). Hence by Lemma 2.1 any such submodule is generated by a subset of \(M\). Let \(W_{1}\supset W_{2}\supset\cdots\) be a strictly decreasing sequence of \(\tau_{N}\)-submodules of \(V\) intersecting \(V^{+}\) non-trivially. As the dimension of \(M\) is finite, this sequence must terminate. Therefore there exists a minimal submodule, say \(V_{min}\), such that \(V^{+}\cap V_{min}\neq 0\).
This \(V_{min}\) may not be irreducible, but we can find a unique irreducible quotient of \(V_{min}\), say \(\widetilde{V}\). Then by [R2] we have that \(\widetilde{Z}\) acts trivially on \(\widetilde{V}\). This implies that \(\widetilde{Z}.(V^{+}\cap V_{min})=0\). Now it is easy to see that \(W=\{v\in V^{+}:\widetilde{Z}.v=0\}\) is a non-zero \(\tau_{B}^{0}\) submodule of \(V^{+}\), hence \(W=V^{+}\). Let \(\alpha\in\Delta^{+}\) be such that \(\lambda(h_{\alpha})\neq 0\). Then by results of [CP2] it follows that \(h_{\alpha}\otimes t^{\mathbf{r}}\) acts trivially on \(V^{+}\) for all \(\mathbf{r}\neq 0\) if and only if \(\lambda(h_{\alpha})=0\). Therefore \(\lambda(h_{\alpha})\neq 0\) implies that \(h_{\alpha}\otimes t^{\mathbf{r}}\) acts non-trivially on \(V^{+}\) for some \(\mathbf{r}\neq 0\). Now it follows from the appendix that \(h_{\alpha}\otimes t^{\mathbf{r}}\) acts non-trivially on \(V^{+}\) for all \(\mathbf{r}\neq 0\). Now we have that \(V^{+}\) is an irreducible module for \(\widetilde{H}_{B}\ltimes(\mathfrak{h}\otimes A)\). Here we treat \(\mathfrak{h}\otimes A\) as finitely many copies of \(A\). In the appendix we have taken only one copy of \(A\), but the arguments go through for finitely many copies of \(A\) (we need to use the fact that \(\mathfrak{h}\otimes A\) is commutative). Thus from the appendix we have \(\lambda(h_{\alpha})\neq 0\) on \(V^{+}\).
Hence by Theorem 7.1 we find non-zero scalars \(\lambda_{\alpha},\mu_{\alpha}\) such that on \(V^{+}\) the following conditions are satisfied: \[h_{\alpha}\otimes t^{\mathbf{r}}.h_{\alpha}\otimes t^{\mathbf{s}}=\lambda_{\alpha}h_{\alpha}\otimes t^{\mathbf{r}+\mathbf{s}},\quad\mathbf{r},\mathbf{s},\mathbf{r}+\mathbf{s}\neq 0, \tag{5.1}\] \[h_{\alpha}\otimes t^{\mathbf{r}}.h_{\alpha}\otimes t^{-\mathbf{r}}=\lambda(h_{\alpha})\mu_{\alpha},\quad\forall\mathbf{r}\neq 0, \tag{5.2}\] and \(\lambda_{\alpha}^{2}=\mu_{\alpha}\lambda(h_{\alpha}).\) Then from Lemma 7.5 of [R1] it follows that \(\lambda_{\alpha}=\mu_{\alpha}=c=\lambda(h_{\alpha})\) for all \(\alpha\in\Delta^{+}\) such that \(\lambda(h_{\alpha})\neq 0\) [In Lemma 7.5 of [R1] we get the equation \(\sum_{i}\lambda_{i}(h_{\alpha})a_{i}^{s}=\lambda_{\alpha}\), where the LHS is independent of the basis of \(V^{+}\) and the RHS is independent of \(s\). But by the above equation both are independent of the basis and of \(s\). Thus arguing as in Lemma 7.5 of [R1] we see that \(\mu_{\alpha}=\lambda_{\alpha}=\lambda(h_{\alpha})=c\)]. We know from Proposition 5.1 that \(\widetilde{Z}.V^{+}=0\). Now consider \(W=\{v\in V:\widetilde{Z}.v=0\}\) and observe that \(W\) is a non-zero \(\tau_{B}^{0}\) submodule of \(V\). Therefore by irreducibility of \(V\), \(\widetilde{Z}\) acts trivially on \(V\). Hence we have a nice description of \(V^{+}\) from Theorem 4.1, given by \(V^{+}\simeq V_{N}\otimes A\), where \(V_{N}\) is a finite dimensional irreducible \(\mathfrak{sp}_{N}\) module. **Theorem 5.2**.: _Let \(V\) be an irreducible integrable module for \(\tau_{B}\) having finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\) such that \(\mathfrak{g}\otimes A\) acts non-trivially on \(V\). Then there exists a finite dimensional irreducible module \(V(\mu)\) for \(\mathfrak{g}\) and a finite dimensional irreducible module \(V_{N}\) for \(\mathfrak{sp}_{N}\) such that \(V\simeq V(\mu)\otimes V_{N}\otimes A\).
The action of \(\tau_{B}\) on \(V(\mu)\otimes V_{N}\otimes A\) is given as follows: \(X(\mathbf{r}).(v\otimes w\otimes t^{\mathbf{s}})=X.v\otimes w\otimes t^{\mathbf{r}+\mathbf{s}}\), for all \(X\in\mathfrak{g}\), \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\), \(v\in V(\mu),w\in V_{N}\); \(\widetilde{Z}\) acts trivially on \(V\); and the action of \(\widetilde{H}_{B}\) is given by Theorem 4.1._ Proof.: Note that the highest weight is dominant integral for \(\mathfrak{g}\), hence we can find \(V(\mu)\). Then it is easy to see that \(V(\mu)\otimes V_{N}\otimes A\) is an irreducible module for \(\tau_{B}\). Observe that \(\mathbb{C}v_{\mu}\otimes V_{N}\otimes A\) is the highest weight space for the \(\tau_{B}\)-module \(V(\mu)\otimes V_{N}\otimes A\) with respect to the considered triangular decomposition, where \(v_{\mu}\) is the highest weight vector of \(V(\mu)\). Since the highest weight spaces of \(V\) and \(V(\mu)\otimes V_{N}\otimes A\) are isomorphic as \(\tau_{B}^{0}\)-modules, we conclude that \(V\simeq V(\mu)\otimes V_{N}\otimes A\). _Remark 5.2_.: Let us consider the matrix \(J^{\prime}=\begin{pmatrix}J&0_{2m,1}\\ 0_{1,2m}&0\end{pmatrix}\). From Remark 3.3 we see that \(\tau_{J^{\prime}}\simeq\tau_{J_{1}}\). Though \(J^{\prime}\) looks much simpler than \(J_{1}\), it does not simplify the above arguments any further for \(J_{1}\). ## 6. Non-zero level modules for KEALA In this section we classify non-zero level irreducible integrable modules for the KEALA constructed in [R1] and obtained in this paper when \(B=J_{1}\), i.e. \(\tau_{J_{1}}\). We start this section with some notation for convenience. Let \(N=2m+1\) for this section. Let \(\underline{\mathbf{r}}=J_{1}\mathbf{r}=(r_{m+1}+r_{N},\ldots,r_{2m}+r_{N},-r_{1}+r_{N},\ldots,-r_{m}+r_{N},-\sum_{i=1}^{2m}r_{i})\) and \(G_{J_{1}}=\{\mathbf{r}\in\mathbb{Z}^{N}:J_{1}\mathbf{r}=0\}\).
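The matrix \(J_{1}\) can be read off from the displayed formula for \(\underline{\mathbf{r}}\). The sketch below (with \(m=2\), and under the assumption that the formula determines \(J_{1}\) row by row) confirms that it is skew symmetric of rank \(2m\), so that \(G_{J_{1}}\) is spanned by a single integer vector.

```python
import numpy as np

m = 2
N = 2 * m + 1
# J1 read off, row by row, from the displayed formula for r_underline = J1 r.
J1 = np.zeros((N, N))
for i in range(m):
    J1[i, m + i] = 1.0                 # i-th entry: r_{m+i}
    J1[i, N - 1] = 1.0                 # ... + r_N
    J1[m + i, i] = -1.0                # (m+i)-th entry: -r_i
    J1[m + i, N - 1] = 1.0             # ... + r_N
J1[N - 1, :2 * m] = -1.0               # last entry: -(r_1 + ... + r_{2m})

assert np.allclose(J1, -J1.T)          # J1 is skew-symmetric
# An odd-size skew-symmetric matrix is singular; here rank J1 = 2m.
assert np.linalg.matrix_rank(J1) == 2 * m
v = np.array([1.0] * m + [-1.0] * m + [1.0])
assert np.allclose(J1 @ v, 0)          # (1,...,1,-1,...,-1,1) spans the kernel
print("J1 is skew-symmetric with one-dimensional radical")
```

In particular, the solvability of \(J_{1}\mathbf{r}=0\) over \(\mathbb{Z}\) is what makes \(G_{J_{1}}\) a non-trivial rank-one subgroup of \(\mathbb{Z}^{N}\).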
Now consider the triangular decomposition of \(\tau_{J_{1}}\) given by \[\tau_{J_{1}}^{+}=span\{\mathfrak{g}^{+}\otimes t^{\mathbf{m}}, \mathfrak{g}\otimes t^{\mathbf{r}},K(\underline{\mathbf{s}},\mathbf{s}),D( \underline{\mathbf{s}},\mathbf{s}):m_{N}=0,r_{N},s_{N}>0,\mathbf{m},\mathbf{r}, \mathbf{s}\in\mathbb{Z}^{N},\mathbf{s}\notin G_{J_{1}}\}, \tag{6.1}\] \[\tau_{J_{1}}^{0}=span\{\mathfrak{h}\otimes t^{\mathbf{m}},K( \underline{\mathbf{s}},\mathbf{s}),D(\underline{\mathbf{s}},\mathbf{s}):m_{N} =0,s_{N}=0,\mathbf{m},\mathbf{s}\in\mathbb{Z}^{N},\mathbf{s}\notin G_{J_{1}}\} \oplus D, \tag{6.2}\] \[\tau_{J_{1}}^{-}=span\{\mathfrak{g}^{-}\otimes t^{\mathbf{m}}, \mathfrak{g}\otimes t^{\mathbf{r}},K(\underline{\mathbf{s}},\mathbf{s}),D( \underline{\mathbf{s}},\mathbf{s}):m_{N}=0,r_{N},s_{N}<0,\mathbf{m},\mathbf{r },\mathbf{s}\in\mathbb{Z}^{N},\mathbf{s}\notin G_{J_{1}}\}, \tag{6.3}\] where \(\mathfrak{g}=\mathfrak{g}^{-}\oplus\mathfrak{h}\oplus\mathfrak{g}^{+}\) is the usual triangular decomposition of \(\mathfrak{g}\). Let \(V\) be an irreducible integrable non-zero level (i.e., not all \(K_{i}\) act trivially on \(V\)) module for \(\tau_{J_{1}}\) with finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\). Then by a standard argument we may assume that, after twisting the module by an automorphism (2.5), only \(K_{N}\) acts non-trivially and all other \(K_{i}\) act trivially on \(V\). Now, similar to the proof of Proposition 2.4 of [R2] (see also Theorem 2.1 of [RJ]), we have the following theorem. **Theorem 6.1**.: _Let \(V\) be an irreducible integrable non-zero level module for \(\tau_{J_{1}}\) with finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\). Then there exists a weight \(\mu\) of \(V\) such that \(\tau_{J_{1}}^{+}.V_{\mu}=0\)._ Let \(W^{+}=\{v\in V:\tau_{J_{1}}^{+}.v=0\}\). Then by Theorem 6.1, \(W^{+}\neq 0\). 
By a standard argument it follows that \(W^{+}\) is an irreducible module for \(\tau_{J_{1}}^{0}\) with finite dimensional weight spaces. Let \(\lambda\) be a weight of \(W^{+}\). Then by the irreducibility of \(W^{+}\), the weights of \(W^{+}\) lie in the set \(\{\bar{\lambda}+\delta_{\mathbf{r}}+\sum_{i=1}^{N}g_{i}\delta_{i}:\mathbf{r} \in\mathbb{Z}^{2m}\}\) for some \(\mathbf{g}=(g_{1},\ldots,g_{N})\in\mathbb{C}^{N}\), where \(\bar{\lambda}=\lambda|_{\mathfrak{h}}\). _Remark 6.1_.: We assume that \(\bar{\lambda}\neq 0\). Otherwise, using \(\mathfrak{sl}_{2}\) theory and the integrability of \(V\), one can prove that \(\mathfrak{h}\otimes A_{2m}\) acts trivially on \(W^{+}\). Moreover, for any \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\) with \(s_{N}=r_{N}=0\), the element \([h(\mathbf{s}),h(\mathbf{r})]=\langle h,h\rangle K(\mathbf{r},\mathbf{r}+\mathbf{s})\) acts trivially on \(W^{+}\). Now consider \([D(\underline{\mathbf{r}},\mathbf{r}),K(\mathbf{e_{N}},\mathbf{s})]=( \underline{\mathbf{r}}|\mathbf{s})K(\mathbf{e_{N}},\mathbf{r}+\mathbf{s})+( \underline{\mathbf{r}}|\mathbf{e_{N}})K(\mathbf{r},\mathbf{r}+\mathbf{s})\) for \(r_{N}=s_{N}=0\). This implies that \(K(\mathbf{e_{N}},\mathbf{r}+\mathbf{s})\) acts trivially on \(W^{+}\) for all \(r_{N}=s_{N}=0\) with \((\underline{\mathbf{r}}|\mathbf{s})\neq 0\). Now for any \(\mathbf{k}\in\mathbb{Z}^{N}\) with \(k_{N}=0\), it is easy to find \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\) such that \(r_{N}=s_{N}=0\), \((\underline{\mathbf{r}}|\mathbf{s})\neq 0\) and \(\mathbf{r}+\mathbf{s}=\mathbf{k}\). This proves that \(W^{+}\) is an irreducible module for \(span\{D(\underline{\mathbf{s}},\mathbf{s}):s_{N}=0,\mathbf{s}\in\mathbb{Z}^{N}, \mathbf{s}\notin G_{J_{1}}\}\oplus D\). We prove in Lemma 6.1 below that \(span\{D(\underline{\mathbf{s}},\mathbf{s}):s_{N}=0,\mathbf{s}\in\mathbb{Z}^{N}, \mathbf{s}\notin G_{J_{1}}\}\) is isomorphic to \(H_{N-1}\) (see Example 2.3 for the construction of \(H_{N-1}\)) as a Lie algebra. 
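The identity \((\underline{\mathbf{r}}|\mathbf{s})=(\overline{\mathbf{r}}|\mathbf{s})\) that underlies the isomorphism with \(H_{N-1}\) (used in Lemma 6.1) can be checked numerically. In the sketch below, \(J\) is taken to be the standard symplectic form on the first \(2m\) coordinates; this choice of \(J\), like the block form of \(J_{1}\), is an assumption inferred from the formulas in this section:

```python
import numpy as np

def standard_J(m):
    # assumed standard symplectic form on the first 2m coordinates, padded by
    # a zero last row and column (the precise J is fixed earlier in the paper)
    N = 2 * m + 1
    J = np.zeros((N, N), dtype=int)
    J[:m, m:2 * m] = np.eye(m, dtype=int)
    J[m:2 * m, :m] = -np.eye(m, dtype=int)
    return J

def J1(m):
    # block form inferred from the displayed formula for underline(r)
    N = 2 * m + 1
    M = standard_J(m)
    M[:2 * m, N - 1] = 1
    M[N - 1, :2 * m] = -1
    return M

m = 4
rng = np.random.default_rng(1)
for _ in range(200):
    r = rng.integers(-4, 5, size=2 * m + 1); r[-1] = 0
    s = rng.integers(-4, 5, size=2 * m + 1); s[-1] = 0
    # (J1 r | s) agrees with (J r | s) whenever r_N = s_N = 0
    assert (J1(m) @ r) @ s == (standard_J(m) @ r) @ s
```

The extra entries of \(J_{1}\) in the last row and column are multiplied by \(r_{N}\) or \(s_{N}\), so they drop out exactly on the span appearing in Lemma 6.1.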
Hence \(W^{+}\) is an irreducible module for the Lie algebra \(H_{N-1}\oplus\bigoplus_{i=1}^{N-1}\mathbb{C}d_{i}\). **Proposition 6.1**.: _(1) The weight spaces of \(W^{+}\) are uniformly bounded. (2) \(span\{t^{\mathbf{s}}K_{i}:\mathbf{s}\in\mathbb{Z}^{N-1},1\leq i\leq N-1\}\cap \widetilde{Z}\) acts trivially on \(W^{+}\). 
(3) \(t^{\mathbf{m}}K_{N}\) acts trivially on \(W^{+}\) for \(\mathbf{m}\in\mathbb{Z}^{N}\setminus\{0\}\) with \(m_{N}=0\)._ Proof.: By Remark 6.1, there exists \(h\in\mathfrak{h}\) such that \(\bar{\lambda}(h)\neq 0\). Since \(\bar{\lambda}\) is dominant integral, we have \(\lambda^{0}=\min\{\lambda(h):\lambda(h)>0\}>0\). Now we can proceed as in Proposition 5.1(1) to prove (1). To prove (2) we consider \(M=\bigoplus_{\begin{subarray}{c}\mathbf{s}\in\mathbb{Z}^{N-1}\\ 0\leq s_{i}<\lambda^{0}\end{subarray}}W^{+}_{\lambda_{\mathbf{s}}}\), where \(\lambda_{\mathbf{s}}\) are the weights of \(W^{+}\). Also let \(\tau_{N-1}=\mathfrak{g}\otimes A_{N-1}\oplus span\{t^{\mathbf{s}}K_{i}: \mathbf{s}\in\mathbb{Z}^{N-1},1\leq i\leq N-1\}\cap\widetilde{Z}\oplus \bigoplus_{i=1}^{N-1}\!\!\mathbb{C}d_{i}\) be the quotient of the \(N-1\) variable toroidal Lie algebra. Now the proof follows as in Proposition 5.1(2). To prove (3), first note that when \(r_{N}=0\) and \(\sum_{i=1}^{N-1}r_{i}=0\), the element \(K(\mathbf{e_{N}},\mathbf{r})\) is zero in \(\tau_{J_{1}}\). Now consider the following bracket for \(r_{N}=s_{N}=0\): \[[D(\underline{\mathbf{s}},\mathbf{s}),K(\mathbf{e_{N}},\mathbf{r})]=( \underline{\mathbf{s}}|\mathbf{r})K(\mathbf{e_{N}},\mathbf{r}+\mathbf{s})+( \underline{\mathbf{s}}|\mathbf{e_{N}})K(\mathbf{s},\mathbf{r}+\mathbf{s}).\] Since \(K(\mathbf{s},\mathbf{r}+\mathbf{s})\) acts trivially on \(W^{+}\) by (2), it follows that \(K(\mathbf{e_{N}},\mathbf{r}+\mathbf{s})\) acts trivially on \(W^{+}\) whenever \((\underline{\mathbf{s}}|\mathbf{r})\neq 0\). Now for a given \(\mathbf{m}\in\mathbb{Z}^{N}\) with \(m_{N}=0\) and \(\sum_{i=1}^{N-1}m_{i}\neq 0\), it is easy to find \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N-1}\) such that \((\underline{\mathbf{s}}|\mathbf{r})\neq 0\), \(\sum_{i=1}^{N-1}r_{i}=0\) and \(\mathbf{r}+\mathbf{s}=\mathbf{m}\) (specifically, set \(\mathbf{r}\) as \(r_{i}=a\) for \(1\leq i\leq m\) and \(r_{i}=-a\) for \(m+1\leq i\leq 2m\), where \(a\in\mathbb{Z}\setminus\{0\}\). 
Set \(\mathbf{s}\) as \(s_{i}=m_{i}-a\) for \(1\leq i\leq m\) and \(s_{i}=m_{i}+a\) for \(m+1\leq i\leq 2m\)). This completes the proof. **Lemma 6.1**.: _Let \(S=span\{D(\underline{\mathbf{s}},\mathbf{s}):s_{N}=0,\mathbf{s}\notin G_{J_{1 }}\}\). Then \(S\simeq H_{N-1}\)._ Proof.: Define a map \(\eta:S\to H_{N-1}\) by \(\eta(D(\underline{\mathbf{s}},\mathbf{s}))=D(\overline{\mathbf{s}},\mathbf{ s})\), where \(\overline{\mathbf{s}}=J\mathbf{s}\). We check that it is a Lie algebra homomorphism. Note that \[\eta([D(\underline{\mathbf{r}},\mathbf{r}),D(\underline{\mathbf{s}},\mathbf{ s})])=(\underline{\mathbf{r}}|\mathbf{s})\eta(D(\underline{\mathbf{r}+ \mathbf{s}},\mathbf{r}+\mathbf{s}))=(\underline{\mathbf{r}}|\mathbf{s})(D( \overline{\mathbf{r}+\mathbf{s}},\mathbf{r}+\mathbf{s}))\] \[=(\overline{\mathbf{r}}|\mathbf{s})(D(\overline{\mathbf{r}+\mathbf{s}}, \mathbf{r}+\mathbf{s}))=[\eta(D(\underline{\mathbf{r}},\mathbf{r})),\eta(D( \underline{\mathbf{s}},\mathbf{s}))],\] where \((\underline{\mathbf{r}}|\mathbf{s})=(\overline{\mathbf{r}}|\mathbf{s})\) holds due to the fact that \(s_{N}=0\). It is easy to see that \(\eta\) is bijective. Therefore from Proposition 6.1 and Lemma 6.1 we conclude that \(W^{+}\) is an irreducible module for \(\mathfrak{h}\otimes A_{N-1}\oplus H_{N-1}\oplus\bigoplus_{i=1}^{N-1}\mathbb{C}d_{i}\) with uniformly bounded weight spaces, as \(K_{N},d_{N}\) are central in \(\tau^{0}_{J_{1}}\). Now we proceed as in the SSEALA case in Section 5 and conclude that \(W^{+}\) is an irreducible module for \(A_{N-1}\rtimes(H_{N-1}\oplus\bigoplus_{i=1}^{N-1}\mathbb{C}d_{i})\). Since \(H_{N}=H_{J}\), by Theorem 4.1 (see also [T1, Theorem 5.2]) we have \(W^{+}\simeq V_{N-1}\otimes A_{N-1}\), where \(V_{N-1}\) is a finite dimensional irreducible \(\mathfrak{sp}_{N-1}\)-module. Let \(W\) be an irreducible module for \(\tau^{0}_{J_{1}}\). 
Now consider the Verma module \[M(W)=U(\tau_{J_{1}})\bigotimes_{\tau^{+}_{J_{1}}\oplus\tau^{0}_{J_{1}}}W,\] where \(\tau^{+}_{J_{1}}\) acts trivially on \(W\). Let \(L(W)\) be the unique irreducible quotient of \(M(W)\). Moreover, it is easy to see that if \(W_{1}\simeq W_{2}\) as \(\tau^{0}_{J_{1}}\)-modules, then \(L(W_{1})\simeq L(W_{2})\) as \(\tau_{J_{1}}\)-modules. Now since \(W^{+}\simeq V_{N-1}\otimes A_{N-1}\) as \(\tau^{0}_{J_{1}}\)-modules, we have the following theorem. **Theorem 6.2**.: _Let \(V\) be an irreducible integrable non-zero level module for \(\tau_{J_{1}}\) with finite dimensional weight spaces with respect to \(\widetilde{\mathfrak{h}}\), on which \(\mathfrak{h}\) acts non-trivially. Then, up to a twist by an automorphism, \(V\simeq L(V_{N-1}\otimes A_{N-1})\), where \(V_{N-1}\) is a finite dimensional irreducible \(\mathfrak{sp}_{N-1}\)-module._ ## 7. Appendix In this section we prove two technical results which we have used in the last three sections. We follow the approach of [GL, R1, GLZ] to prove Theorem 7.1. For convenience we recall some notation from Section 4. Recall the action of \(H_{B}\) on \(A\) given by \[h_{\mathbf{r}}.t^{\mathbf{s}}=(B\mathbf{r}|\mathbf{s})t^{\mathbf{r}+\mathbf{s}}, \tag{7.1}\] for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\). Let \(G=H_{B}\ltimes A\) be the semidirect product of \(H_{B}\) and \(A\). Clearly \(G=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}}G_{\mathbf{r}}\) is a \(\mathbb{Z}^{N}\)-graded Lie algebra. Let \(G^{\prime}=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}\setminus\{\mathbf{0}\}}G_{ \mathbf{r}}\); then \(G^{\prime}\) is an ideal of \(G\). Also let \( A^{\prime}=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}\setminus\{ \mathbf{0}\}}\mathbb{C}t^{\mathbf{r}}\). Let \(W\) be an irreducible module for \(G\) which satisfies the following properties. 1. \(W=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}}W_{\mathbf{r}}\) and \(G_{\mathbf{r}}.W_{\mathbf{s}}\subseteq W_{\mathbf{r}+\mathbf{s}}\). 
2. \(W\) is a uniformly bounded \(G\)-module, i.e., there exists a natural number \(M\) such that \(\dim\,W_{\mathbf{r}}\leq M\) for all \(\mathbf{r}\in\mathbb{Z}^{N}\). **Theorem 7.1**.: _Let \(A^{\prime}\) act non-trivially on \(W\) and let \(t^{\mathbf{0}}\) act as a non-zero scalar \(c\) on \(W\). Then there exist non-zero scalars \(\lambda_{\mathbf{r},\mathbf{s}}\) such that \(t^{\mathbf{r}}t^{\mathbf{s}}=\lambda_{\mathbf{r},\mathbf{s}}t^{\mathbf{r}+ \mathbf{s}}\) on \(W\). Moreover_ 1. \(\lambda_{\mathbf{r},\mathbf{s}}=\lambda\) _for all_ \(\mathbf{r},\mathbf{s}\neq 0\) _with_ \(\mathbf{r}+\mathbf{s}\neq 0\)_._ 2. \(\lambda_{\mathbf{r},-\mathbf{r}}=\mu\) _for all_ \(\mathbf{r}\neq 0\)_._ 3. \(\lambda_{0,\mathbf{r}}=c\) _and_ \(\lambda^{2}=\mu c\)_._ We prove this theorem using several lemmas. The next two lemmas follow from Lemma 3.1 and Lemma 3.2 of [GLZ]. **Lemma 7.1**.: _Let \(W=\bigoplus_{\mathbf{r}\in\mathbb{Z}^{N}}W_{\mathbf{r}}\) be an irreducible uniformly bounded module for \(A\). Then \(\dim W_{\mathbf{r}}\leq 1\) for all \(\mathbf{r}\in\mathbb{Z}^{N}\)._ **Lemma 7.2**.: _Let \(g\in U(A)\) be such that \(g.v=0\) for some \(v\in W\). Then \(g\) is locally nilpotent on \(W\)._ **Lemma 7.3**.: _(1) Either each \(t^{\mathbf{r}}\), \(\mathbf{r}\neq 0\), acts injectively on \(W\), or every \(t^{\mathbf{r}}\) acts locally nilpotently on \(W\). (2) If \(t^{\mathbf{r}}\) acts locally nilpotently for all \(\mathbf{r}\neq\mathbf{0}\), then \(A^{\prime}.W=0\)._ Proof.: Suppose \(t^{\mathbf{r}}\) acts injectively on \(W\) for some \(\mathbf{r}\neq 0\). Let \(\mathbf{s}\in\mathbb{Z}^{N}\) be such that \((B\mathbf{r}|\mathbf{s})\neq 0\) and \(t^{\mathbf{s}}\) acts locally nilpotently on \(W\). Then, using arguments similar to Claim 1 of [GL, Proposition 3.4], we see that \(t^{\mathbf{s}}\) acts nilpotently on \(W\). Now, following the arguments of Claim 3 of [GL, Proposition 3.4] and using the fact that \((B\mathbf{r}|\mathbf{s})\neq 0\), we get a contradiction. 
Hence \(t^{\mathbf{s}}\) acts injectively on \(W\) for all \(\mathbf{s}\) satisfying \((B\mathbf{r}|\mathbf{s})\neq 0\). If \((B\mathbf{r}|\mathbf{s})=0\), then consider \(\mathbf{s_{1}}\in\mathbb{Z}^{N}\) such that \((B\mathbf{r}|\mathbf{s_{1}})\neq 0\) and \((B\mathbf{s_{1}}|\mathbf{s})\neq 0\). Hence, by the above, \(t^{\mathbf{s}}\) acts injectively on \(W\). This completes the proof of (1). The proof of (2) follows by changing \(\bar{\mathbf{r}}\) to \(B\mathbf{r}\) in the proof of Proposition 9.5 of [R1]. **Proof of Theorem 7.1:** The proof runs parallel to that of Theorem 9.1 of [R1]. From Lemma 7.3 it follows that \(t^{\mathbf{r}}\) acts injectively on \(W\) for all \(\mathbf{r}\neq\mathbf{0}\). This implies that \(\dim W_{\mathbf{r}}=\dim W_{\mathbf{s}}\) for all \(\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\). Now, by defining \(T_{\mathbf{r},\mathbf{s}}=t^{\mathbf{r}}t^{\mathbf{s}}-\lambda_{\mathbf{r}, \mathbf{s}}t^{\mathbf{r}+\mathbf{s}}\) and following the proof of Theorem 9.1 of [R1], we have \[(B\mathbf{l}|\mathbf{s})\lambda_{\mathbf{r},\mathbf{s}+\mathbf{l}}+(B\mathbf{ l}|\mathbf{r})\lambda_{\mathbf{s},\mathbf{r}+\mathbf{l}}-(B\mathbf{l}|\mathbf{r}+ \mathbf{s})\lambda_{\mathbf{r},\mathbf{s}}=0, \tag{7.2}\] for all \(\mathbf{l},\mathbf{r},\mathbf{s}\in\mathbb{Z}^{N}\setminus\{0\}.\) Now we prove some lemmas to complete the proof of Theorem 7.1. **Lemma 7.4**.: _Let \((B{\bf l}|{\bf s})\neq 0\). Then we have_ 1. \(\lambda_{{\bf s}+{\bf l},{\bf s}}=\lambda_{{\bf s},{\bf s}}\)_._ 2. \(\lambda_{{\bf s},{\bf l}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf l},{\bf l}}\)_._ 3. \(\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf s}+{\bf l},{\bf s}+{\bf l}}=\lambda_{{ \bf l},{\bf l}}\)_._ 4. \(\lambda_{{\bf s}+{\bf l},j{\bf l}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{ \bf l},{\bf l}}\) _for all_ \(j\in\mathbb{Z}\setminus\{0\}\)_._ 5. 
\(\lambda_{{\bf s},j{\bf s}+{\bf l}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{ \bf l},{\bf l}}\) _for all_ \(j\in\mathbb{Z}\)_._ Proof.: The proof of this lemma runs parallel to Lemma 9.6 of [R1]; just note that \((B{\bf s}|{\bf s})=0\). **Lemma 7.5**.: _Let \({\bf r},{\bf s}\in\mathbb{Z}^{N}\setminus\{0\}\) be such that \((B{\bf r}|{\bf s})=0\) and \({\bf r}+{\bf s}\neq 0\). Then we have the following._ 1. \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf r}+{\bf s},{ \bf r}+{\bf s}}\)_._ 2. \(\lambda_{{\bf s}+{\bf r},{\bf s}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf r},{ \bf r}}=\lambda_{{\bf r},{\bf s}+{\bf r}}\)_._ 3. \(\lambda_{{\bf r},{\bf s}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf r},{\bf r}}\)_._ 4. \(\lambda_{{\bf s}+{\bf r},j{\bf r}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf r}, {\bf r}}\)_, for all_ \(j\in\mathbb{Z}\setminus\{0\}\)_._ 5. \(\lambda_{{\bf s},j{\bf s}+{\bf r}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{ \bf r},{\bf r}}\)_, when_ \(j{\bf s}+{\bf r}\neq 0\)_,_ \(j\in\mathbb{Z}\)_._ Proof.: First note that (4) and (5) follow immediately from (2) and (3). Therefore we prove (1), (2) and (3). Let \(U_{\bf r}\) denote the orthogonal complement of \(B{\bf r}\) for all \({\bf r}\in\mathbb{Z}^{N}\). Since \(B\) is non-degenerate, \((U_{\bf r}\cup U_{\bf s}\cup U_{{\bf r}+{\bf s}})\cap\mathbb{Z}^{N}\) is a proper subset of \(\mathbb{Z}^{N}\). Hence there exists \({\bf k}\in\mathbb{Z}^{N}\) such that \((B{\bf r}|{\bf k}),(B{\bf s}|{\bf k}),(B({\bf r}+{\bf s})|{\bf k})\in\mathbb{C }\setminus\{0\}\). Now from Lemma 7.4 we have \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf r}+{\bf s}, {\bf r}+{\bf s}}=\lambda_{{\bf k},{\bf k}}\). Notice that \((B{\bf r}|{\bf s}+{\bf k})\neq 0\neq(B{\bf s}|{\bf r}+{\bf k})\). 
Hence from Lemma 7.4 and the equality \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf s},{\bf s}}\) we have \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf r},{\bf s}+{\bf k}}=\lambda_{{\bf s}, {\bf s}}=\lambda_{{\bf s},{\bf r}+{\bf k}}\). Now, substituting \({\bf l}={\bf k}\) in equation (7.2), we have \(\lambda_{{\bf r},{\bf s}+{\bf k}}=\lambda_{{\bf r},{\bf s}}\). Thus we have \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf r},{\bf s}}=\lambda_{{\bf s},{\bf s}}\). Now replacing \({\bf s}\) by \({\bf r}+{\bf s}\) in the last equality, we have \(\lambda_{{\bf r},{\bf r}}=\lambda_{{\bf r},{\bf r}+{\bf s}}=\lambda_{{\bf r}+{ \bf s},{\bf r}+{\bf s}}\). Similarly, \(\lambda_{{\bf s},{\bf s}}=\lambda_{{\bf s},{\bf r}+{\bf s}}=\lambda_{{\bf r}+{ \bf s},{\bf r}+{\bf s}}\). This completes the proof. The next lemma completes the proof of Theorem 7.1; it follows from the verbatim proofs of Lemma 9.9 and Lemma 9.10 of [R1], using Lemma 7.4 and Lemma 7.5. **Lemma 7.6**.: _(1) \(\lambda_{j{\bf s},j{\bf s}}=\lambda_{{\bf s},{\bf s}}\) for all \(j\in\mathbb{Z}\setminus\{0\}\)._ _(2) \(\lambda_{j{\bf s},p{\bf s}}=\lambda_{{\bf s},{\bf s}}\) for all \(j+p\in\mathbb{Z}\setminus\{0\}\)._ _(3) \(\lambda_{{\bf r},{\bf s}}=\lambda\) for all \({\bf r}+{\bf s}\neq 0\) and \({\bf r},{\bf s}\neq 0\)._ _(4) \(\lambda_{0,{\bf s}}=c\), for all \({\bf s}\in\mathbb{Z}^{N}\)._ _(5) \(\lambda_{{\bf r},-{\bf r}}=\mu\) for all \({\bf r}\neq 0\)._ _(6) \(c\mu=\lambda^{2}\)._ **Proof of Lemma 4.4(8)**, i.e., \([I_{p},I_{q}]\subseteq I_{p+q-2}\) for all \(p+q\geq 3\) and \([I_{1},I_{1}]\subseteq I_{1}\). Let \(I\subseteq\{1,2,\ldots,p\}\) and \(J\subseteq\{1,2,\ldots,q\}\). Let \(\mathbf{s_{I}}=\sum_{i\in I}\mathbf{s_{i}}\) and \(\mathbf{r_{J}}=\sum_{j\in J}\mathbf{r_{j}}\). Also set \(\mathbf{s_{\emptyset}}=0\) and \(\mathbf{r_{\emptyset}}=0\). 
Then \[T_{p}(\mathbf{k},\mathbf{s_{1}},\ldots,\mathbf{s_{p}})=\sum_{0 \leq a\leq p}\sum_{|I|=a}(-1)^{a}T(\mathbf{k}+\mathbf{s_{I}})\text{ and }\] \[T_{q}(\mathbf{l},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})=\sum_{0 \leq b\leq q}\sum_{|J|=b}(-1)^{b}T(\mathbf{l}+\mathbf{r_{J}}).\] Therefore, \([T_{p}(\mathbf{k},\mathbf{s_{1}},\ldots,\mathbf{s_{p}}),T_{q}(\mathbf{l}, \mathbf{r_{1}},\ldots,\mathbf{r_{q}})]\) \[=\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}[T(\mathbf{k}+\mathbf{s_{I}}),T(\mathbf{l}+\mathbf{r_{J}})]\] \[=-\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B(\mathbf{k}+\mathbf{s_{I}})|\mathbf{l}+\mathbf{r_{J}})(T(\mathbf{k}+\mathbf{s_{I}})+T(\mathbf{l}+ \mathbf{r_{J}})-T(\mathbf{k}+\mathbf{l}+\mathbf{s_{I}}+\mathbf{r_{J}})).\] **Claim 1:**\(\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B(\mathbf{k}+\mathbf{s_{I}})|\mathbf{l}+\mathbf{r_{J}})T(\mathbf{l}+\mathbf{r_{J}})=0.\) Fix some \(J\) with \(|J|=b\). To prove the claim it is sufficient to prove that \[\sum_{0\leq a\leq p}\sum_{|I|=a}(-1)^{a}(B(\mathbf{k}+\mathbf{s_{I}})| \mathbf{l}+\mathbf{r_{J}})=0,\] for which it is sufficient to prove that \[\sum_{0\leq a\leq p}\sum_{|I|=a}(-1)^{a}(\mathbf{k}+\mathbf{s_{I}})=0.\text{ But this sum equals}\] \[\sum_{0\leq a\leq p}(-1)^{a}\binom{p}{a}\mathbf{k}+\sum_{0\leq a\leq p}\sum_{ |I|=a}(-1)^{a}\mathbf{s_{I}},\text{ which is easily checked to be }0.\] **Claim 2:**\(\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B(\mathbf{k}+\mathbf{s_{I}})|\mathbf{l}+\mathbf{r_{J}})T( \mathbf{k}+\mathbf{s_{I}})=0.\) This follows similarly to Claim 1. 
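The cancellation behind Claim 1 reduces to the alternating-sum identity \(\sum_{0\leq a\leq p}\sum_{|I|=a}(-1)^{a}(\mathbf{k}+\mathbf{s_{I}})=0\). A quick enumeration over subsets (a sketch only, run here for \(p=3\)) confirms it:

```python
from itertools import combinations
import numpy as np

# check: sum over subsets I of {1,...,p} of (-1)^{|I|} (k + s_I) vanishes;
# the binomial coefficients cancel k, and each s_i appears with coefficient 0
rng = np.random.default_rng(2)
p, dim = 3, 5
k = rng.integers(-5, 6, size=dim)
s = [rng.integers(-5, 6, size=dim) for _ in range(p)]

total = np.zeros(dim, dtype=np.int64)
for a in range(p + 1):
    for I in combinations(range(p), a):
        sI = sum((s[i] for i in I), np.zeros(dim, dtype=np.int64))
        total += (-1) ** a * (k + sI)

assert (total == 0).all()
```

The same enumeration with the vectors replaced by the pairings \((B(\mathbf{k}+\mathbf{s_{I}})|\mathbf{l}+\mathbf{r_{J}})\) then gives Claim 1 by bilinearity.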
Now we evaluate \(\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B(\mathbf{k}+\mathbf{s_{I}})|\mathbf{l}+\mathbf{r_{J}})T(\mathbf{k}+\mathbf{l}+\mathbf{s_{I}}+\mathbf{r_{J}})\). For that we write the above expression as the sum of four terms given by \[\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B\mathbf{k}|\mathbf{l})T(\mathbf{k}+\mathbf{l}+ \mathbf{s_{I}}+\mathbf{r_{J}}), \tag{7.3}\] \[\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B\mathbf{s_{I}}|\mathbf{l})T(\mathbf{k}+\mathbf{l }+\mathbf{s_{I}}+\mathbf{r_{J}}), \tag{7.4}\] \[\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B\mathbf{k}|\mathbf{r_{J}})T(\mathbf{k}+\mathbf{l }+\mathbf{s_{I}}+\mathbf{r_{J}}), \tag{7.5}\] \[\sum_{\begin{subarray}{c}0\leq a\leq p\\ 0\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\end{subarray}}(-1)^{a+b}(B\mathbf{s_{I}}|\mathbf{r_{J}})T(\mathbf{k}+ \mathbf{l}+\mathbf{s_{I}}+\mathbf{r_{J}}). \tag{7.6}\] Now consider (7.3), which is clearly equal to \((B\mathbf{k}|\mathbf{l})T_{p+q}(\mathbf{k}+\mathbf{l},\mathbf{s_{1}},\ldots, \mathbf{s_{p}},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\). Now consider the expression (7.5); to simplify it we fix \(i\in\{1,2,\ldots,q\}\) and consider the expression \[\sum_{\begin{subarray}{c}0\leq a\leq p\\ 1\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\\ i\in J\end{subarray}}(-1)^{a+b}(B\mathbf{k}|\mathbf{r_{i}})T(\mathbf{k}+\mathbf{ l}+\mathbf{s_{I}}+\mathbf{r_{J}})\] (note that \(b=0\) does not occur here). 
This is equal to \[-(B\mathbf{k}|\mathbf{r_{i}})T_{p+q-1}(\mathbf{k}+\mathbf{l}+\mathbf{r_{i}}, \mathbf{s_{1}},\ldots,\mathbf{s_{p}},\mathbf{r_{1}},\ldots,\hat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q}}).\] Hence (7.5) is equal to \[-\sum_{i=1}^{q}(B\mathbf{k}|\mathbf{r_{i}})T_{p+q-1}(\mathbf{k}+ \mathbf{l}+\mathbf{r_{i}},\mathbf{s_{1}},\ldots,\mathbf{s_{p}},\mathbf{r_{1}},\ldots,\hat{\mathbf{r_{i}}},\ldots,\mathbf{r_{q}}).\] In a similar manner, (7.4) is equal to \(-\sum_{i=1}^{p}(B\mathbf{s_{i}}|\mathbf{l})T_{p+q-1}(\mathbf{k}+\mathbf{l}+ \mathbf{s_{i}},\mathbf{s_{1}},\ldots,\mathbf{\hat{s_{i}}},\ldots,\mathbf{s_{ p}},\mathbf{r_{1}},\ldots,\mathbf{r_{q}})\). Now fix \(i\in\{1,2,\ldots,p\}\), \(j\in\{1,2,\ldots,q\}\) and consider the expression \[\sum_{\begin{subarray}{c}1\leq a\leq p\\ 1\leq b\leq q\end{subarray}}\sum_{\begin{subarray}{c}|I|=a\\ |J|=b\\ i\in I,j\in J\end{subarray}}(-1)^{a+b}(B\mathbf{s_{i}}|\mathbf{r_{j}})T( \mathbf{k}+\mathbf{l}+\mathbf{s_{I}}+\mathbf{r_{J}})\] \[=(B\mathbf{s_{i}}|\mathbf{r_{j}})T_{p+q-2}(\mathbf{k}+\mathbf{l} +\mathbf{s_{i}}+\mathbf{r_{j}},\mathbf{s_{1}},\ldots,\mathbf{\hat{s_{i}}}, \ldots,\mathbf{s_{p}},\mathbf{r_{1}},\ldots,\mathbf{\hat{r_{j}}},\ldots, \mathbf{r_{q}}).\] Therefore (7.6) is equal to \(\sum_{\begin{subarray}{c}1\leq i\leq p\\ 1\leq j\leq q\end{subarray}}(B\mathbf{s_{i}}|\mathbf{r_{j}})T_{p+q-2}(\mathbf{ k}+\mathbf{l}+\mathbf{s_{i}}+\mathbf{r_{j}},\mathbf{s_{1}},\ldots,\mathbf{\hat{s_{i}}}, \ldots,\mathbf{s_{p}},\mathbf{r_{1}},\ldots,\mathbf{\hat{r_{j}}},\ldots, \mathbf{r_{q}})\). This argument fails when \(p=q=1\). 
In fact, in that case (7.6) is equal to \((B\mathbf{s_{1}}|\mathbf{r_{1}})T(\mathbf{k}+\mathbf{l}+\mathbf{s_{1}}+ \mathbf{r_{1}})\). Therefore \([T_{p}(\mathbf{k},\mathbf{s_{1}},\ldots,\mathbf{s_{p}}),T_{q}(\mathbf{l}, \mathbf{r_{1}},\ldots,\mathbf{r_{q}})]\) \[=-(B{\bf k}|{\bf l})T_{p+q}({\bf k}+{\bf l},{\bf s_{1}},\ldots,{\bf s _{p}},{\bf r_{1}},\ldots,{\bf r_{q}})\] \[+ \sum_{i=1}^{q}(B{\bf k}|{\bf r_{i}})T_{p+q-1}({\bf k}+{\bf l}+{\bf r _{i}},{\bf s_{1}},\ldots,{\bf s_{p}},{\bf r_{1}},\ldots,{\bf\hat{r_{i}}},\ldots,{\bf r_{q}})\] \[+ \sum_{i=1}^{p}(B{\bf s_{i}}|{\bf l})T_{p+q-1}({\bf k}+{\bf l}+{\bf s _{i}},{\bf s_{1}},\ldots,{\bf\hat{s_{i}}},\ldots,{\bf s_{p}},{\bf r_{1}},\ldots,{\bf r_{q}})\] \[- \sum_{\begin{subarray}{c}1\leq i\leq p\\ 1\leq j\leq q\end{subarray}}(B{\bf s_{i}}|{\bf r_{j}})T_{p+q-2}({\bf k}+{\bf l }+{\bf s_{i}}+{\bf r_{j}},{\bf s_{1}},\ldots,{\bf\hat{s_{i}}},\ldots,{\bf s_{ p}},{\bf r_{1}},\ldots,{\bf\hat{r_{j}}},\ldots,{\bf r_{q}}),\] when \((p,q)\neq(1,1)\), and \[=-(B{\bf k}|{\bf l})T_{2}({\bf k}+{\bf l},{\bf s_{1}},{\bf r_{1}})+(B{\bf k}|{ \bf r_{1}})T_{1}({\bf k}+{\bf l}+{\bf r_{1}},{\bf s_{1}})+(B{\bf s_{1}}|{\bf l })T_{1}({\bf k}+{\bf l}+{\bf s_{1}},{\bf r_{1}})\] \[-(B{\bf s_{1}}|{\bf r_{1}})T_{1}({\bf k}+{\bf l}+{\bf s_{1}}+{\bf r_{1}}),\] when \((p,q)=(1,1)\). This completes the proof.
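As a consistency check, the final formula can be verified numerically for \(p=q=2\) by computing with formal linear combinations of the symbols \(T(\mathbf{m})\). The sketch below assumes the relation \([T(\mathbf{a}),T(\mathbf{b})]=(B\mathbf{a}|\mathbf{b})(T(\mathbf{a})+T(\mathbf{b})-T(\mathbf{a}+\mathbf{b}))\); the defining relation for the \(T(\mathbf{m})\) appears earlier in the paper, so this sign convention is an assumption, chosen here because it reproduces the displayed final formula:

```python
from collections import Counter
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
dim = 4
A = rng.integers(-3, 4, size=(dim, dim))
B = A - A.T                                   # random skew symmetric matrix

def form(a, b):
    # the pairing (B a | b)
    return int((B @ np.array(a)) @ np.array(b))

def bracket(x, y):
    # bilinear extension of the assumed relation
    # [T(a), T(b)] = (Ba|b)(T(a) + T(b) - T(a+b))
    out = Counter()
    for a, ca in x.items():
        for b, cb in y.items():
            c = form(a, b) * ca * cb
            out[a] += c
            out[b] += c
            out[tuple(np.add(a, b))] -= c
    return out

def Tp(k, vs):
    # T_p(k, v_1, ..., v_p) = sum over subsets I of (-1)^{|I|} T(k + v_I)
    out = Counter()
    for a in range(len(vs) + 1):
        for I in combinations(range(len(vs)), a):
            key = np.array(k)
            for i in I:
                key = key + np.array(vs[i])
            out[tuple(key)] += (-1) ** a
    return out

def clean(c):
    return {key: v for key, v in c.items() if v != 0}

def add_scaled(acc, c, coef):
    for key, v in c.items():
        acc[key] += coef * v

# random data for p = q = 2
k, l = rng.integers(-4, 5, size=dim), rng.integers(-4, 5, size=dim)
ss = [rng.integers(-4, 5, size=dim) for _ in range(2)]
rr = [rng.integers(-4, 5, size=dim) for _ in range(2)]

lhs = bracket(Tp(k, ss), Tp(l, rr))

# right hand side: the four groups of terms of the final formula
rhs = Counter()
add_scaled(rhs, Tp(k + l, ss + rr), -form(k, l))
for i in range(2):
    add_scaled(rhs, Tp(k + l + rr[i], ss + rr[:i] + rr[i + 1:]), form(k, rr[i]))
    add_scaled(rhs, Tp(k + l + ss[i], ss[:i] + ss[i + 1:] + rr), form(ss[i], l))
for i in range(2):
    for j in range(2):
        add_scaled(rhs,
                   Tp(k + l + ss[i] + rr[j],
                      ss[:i] + ss[i + 1:] + rr[:j] + rr[j + 1:]),
                   -form(ss[i], rr[j]))

assert clean(lhs) == clean(rhs)
```

Claims 1 and 2 require \(p,q\geq 2\) in this enumeration, which is why the check is run at \(p=q=2\).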
2301.00021
Anyon statistics through conductance measurements of time-domain interferometry
We propose a method to extract the mutual exchange statistics of the anyonic excitations of a general Abelian fractional quantum Hall state, by comparing the tunneling characteristics of a quantum point contact in two different experimental conditions. In the first, the tunneling current between two edges at different chemical potentials is measured. In the second, one of these edges is strongly diluted by an earlier point contact. We describe the case of the dilute beam in terms of a time-domain interferometer between the anyons flowing along the edge and quasiparticle-quasihole excitations created at the tunneling quantum point contact. In both cases, temperature is kept large, such that the measured current is given to linear response. Remarkably, our proposal does not require the measurement of current correlations, and allows us to carefully separate effects of the fractional charge and statistics from effects of intra- and inter-edge interactions.
Noam Schiller, Yotam Shapira, Ady Stern, Yuval Oreg
2022-12-30T19:00:01Z
http://arxiv.org/abs/2301.00021v1
# Anyon statistics through conductance measurements of time-domain interferometry ###### Abstract We propose a method to extract the mutual exchange statistics of the anyonic excitations of a general Abelian fractional quantum Hall state, by comparing the tunneling characteristics of a quantum point contact in two different experimental conditions. In the first, the tunneling current between two edges at different chemical potentials is measured. In the second, one of these edges is strongly diluted by an earlier point contact. We describe the case of the dilute beam in terms of a time-domain interferometer between the anyons flowing along the edge and quasiparticle-quasihole excitations created at the tunneling quantum point contact. In both cases, temperature is kept large, such that the measured current is given to linear response. Remarkably, our proposal does not require the measurement of current correlations, and allows us to carefully separate effects of the fractional charge and statistics from effects of intra- and inter-edge interactions. _Introduction.--_ It has been almost four decades since the initial proposal that the elementary quasiparticles of fractional quantum Hall (FQH) systems obey anyonic statistics [1]. Despite the apparent maturity of the field, the pursuit to definitively observe the physical quantities and quantum numbers characterizing anyons [2; 3] is constantly being reinvigorated [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. In particular, early 2020 saw two major experimental steps forward: the observation of anyonic braiding in a Fabry-Perot interferometer [21], and demonstration of a so-called "anyon collider" [22; 23] using cross-correlation measurements. Here we show that anyonic statistics can be inferred directly from conductance measurements, without requiring current correlation measurements or explicitly building an interferometer. 
The configuration we propose to obtain this result consists of a quantum point contact (QPC) between two edges of a general Abelian FQH state which are driven out of equilibrium. The edges may be driven off-equilibrium by one of three methods: injecting a single quasiparticle into one of the edges; injecting a Poissonian, dilute beam of quasiparticles into one of the edges; and placing a finite bias voltage between the edges. Our proposed setup, shown in Fig. 1(a), allows a smooth transition between the dilute Poissonian beam and a full beam at finite bias voltage. This is obtained by tuning a second, injection QPC from fully open (a differential conductance, \(G_{\rm inj}\equiv dI_{\rm inj}/dV\), satisfying \(G_{\rm inj}/\sigma_{xy}\to 0\)) to fully closed (\(G_{\rm inj}/\sigma_{xy}\to 1\)). We henceforth refer to these as the dilute and full limits, respectively. We propose sweeping \(G_{\rm inj}\) through this range, and measuring the ratio \(I/I_{\rm inj}\), where \(I\) is the measured current after the tunneling QPC, and \(I_{\rm inj}\) is the injected incident current, as defined in Fig. 1(a). Comparing the values at the dilute and full limits cancels out non-universal constants, yielding the relation, \[\left[\frac{I(T)}{I_{\rm inj}(T)}\right]_{\rm dilute}=\frac{\nu e^{2}}{2\pi e_{1}^{*}e_{2}^{*}}\sin 2\theta_{12}\left[\frac{I(T)}{I_{\rm inj}(T)}\right]_{\rm full}+G_{\rm direct}, \tag{1}\] where \(\theta_{12}\) is the mutual statistics phase of the tunneling and injected quasiparticles, and \(G_{\rm direct}\) is a subleading contribution from direct tunneling without interference. We tune the system to a regime where \(\delta\) only affects observables through a non-universal prefactor, which then cancels out in the ratio of currents given in Eq. (1). We arrive at this regime by employing a careful ordering of the various energy scales in the system, such that \(\hbar I_{\mathrm{inj}}/e\ll k_{B}T\) throughout the entire crossover of \(G_{\mathrm{inj}}\). This ensures that the current \(I\) is given to linear response in \(I_{\mathrm{inj}}\). We present an analytic expression generalizing Eq. (1) outside of this regime in App. A, Eq. (10). While in the full limit the edge that enters the tunneling QPC is in equilibrium at chemical potential \(V\), at the dilute limit we need the injection QPC to reflect only a small fraction of the impinging electrons, such that the resulting injection current is Poissonian and rare. Said differently, the injected current in this limit must satisfy \(I_{\mathrm{inj}}\ll\sigma_{xy}V\). Furthermore, the beam must still be dilute when arriving at the tunneling QPC. As such, the distance between the two QPCs must be sufficiently small that no equilibration or dephasing occurs along the way. Finally, we assume that tuning the injection QPC does not affect the transparency of the tunneling QPC, to ensure that all non-universal constants are cancelled when examining the ratio of the two limits. [39] Easy extraction of \(\theta_{12}\) requires \(G_{\mathrm{direct}}\) to be subdominant (see Eq. (1)). Quantitatively, this is the case if both \(k_{B}T\ll eV\) and \(4\delta_{1}<2\) are satisfied. These constraints result from the direct tunneling process being dominated by short time scales. 
Naive theories describing quasiparticles may satisfy this condition even if the aforementioned non-universal effects change the scaling dimension quite significantly. For example, the canonical theory gives \(\delta=1/2m\) for Laughlin quasiparticles. _Edge theory._-- We now define the system's Hamiltonian and derive the current. As shown by Wen, the edge theory of a general Abelian FQH state can be described by \(n\) boson fields, \(\mathbf{\phi}(x,t)\equiv\left(\phi_{1},\phi_{2},\cdots\phi_{n}\right)^{T}\)[2]. These define the theory in conjunction with a charge vector, \(\mathbf{q}\), which determines the electric charge carried by each boson field, and the so-called \(K\)-matrix, which determines the commutation relations between the boson fields, \[[\phi_{i}(x),\partial_{x^{\prime}}\phi_{j}(x^{\prime})]=i2\pi(K^{-1})_{ij} \delta(x-x^{\prime}). \tag{2}\] The filling factor is then given by \(\nu=\mathbf{q}^{T}K^{-1}\mathbf{q}\), and the charge density is given by \(\rho=-\frac{1}{2\pi}\mathbf{q}\cdot\partial_{x}\mathbf{\phi}\). In terms of these fields, the Hamiltonian of a single FQH edge mode is given by \[\mathcal{H}_{\mathrm{edge}}=\frac{1}{4\pi}\sum_{i,j=1}^{n}\int dx\partial_{x} \phi_{i}V_{ij}\partial_{x}\phi_{j}, \tag{3}\] where \(\hat{V}\) is a positive definite matrix describing the velocities of the modes and intra-edge interactions. These edges support quasiparticles of the form \(\psi_{\mathbf{l}}\sim e^{i\mathbf{l}\cdot\mathbf{\phi}}\), where \(\mathbf{l}\) is a vector of integers. The charge of these quasiparticles is then given by \(e_{\mathbf{l}}^{*}=\mathbf{q}^{T}K^{-1}\mathbf{l}\). The configuration of Fig. 1(a) involves two edges, \(u\) and \(d\), tunnel-coupled by a QPC. This is described by two copies of the Hamiltonian \(\mathcal{H}_{\mathrm{edge}}\), time reversed with regard to one another, as well as a tunneling term, \(\mathcal{H}_{T}\), which we treat as a perturbation. 
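As a concrete check of these definitions, the filling factor \(\nu=\mathbf{q}^{T}K^{-1}\mathbf{q}\) and the quasiparticle charge \(e_{\mathbf{l}}^{*}=\mathbf{q}^{T}K^{-1}\mathbf{l}\) can be evaluated exactly for two standard states. The \(\nu=2/5\) data below (\(K=[[3,2],[2,3]]\), \(\mathbf{q}=(1,1)\)) are the usual hierarchy choices, assumed here for illustration rather than taken from the text:

```python
from fractions import Fraction

def kinv_quadform(K, u, v):
    """Exact u^T K^{-1} v for a 1x1 or 2x2 integer K-matrix."""
    n = len(K)
    if n == 1:
        inv = [[Fraction(1, K[0][0])]]
    else:
        (a, b), (c, d) = K
        det = a * d - b * c
        inv = [[Fraction(d, det), Fraction(-b, det)],
               [Fraction(-c, det), Fraction(a, det)]]
    return sum(Fraction(u[i]) * inv[i][j] * v[j]
               for i in range(n) for j in range(n))

# Laughlin 1/3 state: K = (3), q = (1); the elementary quasiparticle is l = (1).
nu_13 = kinv_quadform([[3]], [1], [1])   # filling factor q K^{-1} q = 1/3
e_13 = kinv_quadform([[3]], [1], [1])    # quasiparticle charge q K^{-1} l = 1/3

# Hierarchy nu = 2/5 state (assumed standard K-matrix and charge vector);
# l = (1, 0) is one elementary quasiparticle, with charge e/5.
K25, q25 = [[3, 2], [2, 3]], [1, 1]
nu_25 = kinv_quadform(K25, q25, q25)
e_25 = kinv_quadform(K25, q25, [1, 0])
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in these small checks.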
Figure 1: (a) Two counter-propagating edge modes (\(u/d\)) of a fractional quantum Hall droplet at filling factor \(\nu\) are connected by a quantum point contact, through which quasiparticles of charge \(e_{1}^{*}\) and scaling dimension \(\delta_{1}\) can tunnel. Current is measured at the lower edge's drain, denoted by \(I\). A current of \(I_{\mathrm{inj}}\) is injected into the upper edge via a second, injection QPC, e.g. from a third auxiliary edge mode (\(a\)). The injection QPC is placed at a bias voltage of \(V\), and allows tunneling of quasiparticles of charge \(e_{2}^{*}\) and scaling dimension \(\delta_{2}\). All other sources and drains are grounded. (b) The ratio between \(I/I_{\mathrm{inj}}\) in the dilute case and \(I/I_{\mathrm{inj}}\) in the full case, as a function of temperature, for \(\nu=e_{1}^{*}/e=e_{2}^{*}/e=1/3\), and for different scaling dimensions \(\delta_{1}\). For the dilute case, we use \(I_{\mathrm{inj}}=10\) pA, and assume \(k_{B}T\ll eV\) for all relevant temperatures, such that the contribution from \(G_{\mathrm{direct}}\) to Eq. (1) is negligible. In the full case, we use \(V=10\,\upmu\mathrm{V}\). Both cases use \(\xi=72\,\mathrm{mK},\tau_{c}=10^{-13}\,\mathrm{s}\). When the dilute case satisfies \(\hbar I_{\mathrm{inj}}/e\ll k_{B}T\ll eV\ll\hbar/\tau_{c}\), and the full case satisfies \(\hbar I_{\mathrm{inj}}/e=\nu eV/2\pi\ll k_{B}T\ll\hbar/\tau_{c}\), the ratio approaches an asymptote that does not depend on scaling dimension, allowing extraction of the mutual statistics \(\theta_{12}\). Inset: \(I/I_{\mathrm{inj}}\) for the dilute and full cases as a function of temperature for \(\delta_{1}=1/6\), the canonical value for a Laughlin \(1/3\) state.

Assuming only one type of quasiparticle, denoted by the vector \(\mathbf{l}_{1}\) and carrying charge \(e_{1}^{*}\), tunnels between the edges, this is given 
by \[\mathcal{H}_{T}=\xi\left[\hat{A}+\hat{A}^{\dagger}\right];\;\hat{A}(t)\equiv e^{i \left(\mathbf{l}_{1}\cdot\mathbf{\phi}^{(u)}(0,t)-\mathbf{l}_{1}\cdot\mathbf{\phi}^{(d)}(0,t) \right)}. \tag{4}\] Here, \(\xi\) is a small tunneling amplitude, which we assume to be real, and \(\mathbf{\phi}^{(u)}\left(\mathbf{\phi}^{(d)}\right)\) are the bosonic field operators on the upper (lower) edge. We project the auxiliary edge \(a\) out of the Hamiltonian, as it is only used to "initialize" the state of the edge \(u\). The current that tunnels from the upper edge to the lower edge is then given by the operator, \(\hat{I}_{T}(t)=i\xi e_{1}^{*}\left[\hat{A}^{\dagger}(t)-\hat{A}(t)\right]\). Since the lower edge is grounded, we henceforth identify \(I=\langle\hat{I}_{T}\rangle\). Expanding to leading order in \(\xi\), the current is given by \[I(t)=e_{1}^{*}\xi^{2}\int_{-\infty}^{t}dt^{\prime}\left\langle\left[\hat{A}^{ \dagger}(t),\hat{A}(t^{\prime})\right]+\left[\hat{A}^{\dagger}(t^{\prime}), \hat{A}(t)\right]\right\rangle. \tag{5}\] Here, \([\cdot,\cdot]\) denotes commutation, and expectation values are calculated with respect to the Hamiltonian in the absence of tunneling. _Deviation from Equilibrium._-- It is clear from Eq. (5) that one needs to derive correlation functions such as \(\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle\). In equilibrium, at temperature \(T\), the system is particle-hole symmetric, and the correlation functions are given by [2; 40] \[\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{0} =\langle\hat{A}(t)\hat{A}^{\dagger}(t^{\prime})\rangle_{0} \tag{6}\] \[=\left[\frac{\pi T\tau_{c}}{\sinh\left(\pi T\left|t-t^{\prime} \right|\right)}\right]^{4\delta_{1}}e^{-i2\pi\delta_{1}\text{sgn}\left(t-t^{ \prime}\right)},\] where \(\delta_{1}\) is the scaling dimension of the quasiparticle \(\mathbf{l}_{1}\), and \(\tau_{c}>0\) is a short time cutoff. Two main features are carried over from Eq. 
(6) to the correlation functions out of equilibrium - the exponential decay at time differences larger than \(\hbar/T\), and the phase \(e^{2\pi i\delta_{1}}\) associated with an interchange of the time arguments. We now consider two non-equilibrium cases. In the first we introduce a constant bias voltage \(V\equiv V_{u}-V_{d}\) between the edges. In the setup of Fig. 1(a), this corresponds to a fully closed injection QPC, i.e. \(I_{\text{inj}}=\sigma_{xy}V\). The introduction of the voltages can be formally absorbed into the boson fields by use of a simple gauge transformation, which maps \(\mathbf{\phi}^{(u/d)}(x,t)\mapsto\mathbf{\phi}^{(u/d)}(x,t)+K^{-1}\mathbf{q}V_{u/d}\left( t\mp x/v\right)/\hbar\). This accordingly modifies the correlation functions by a phase factor \[\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{\text{full}} =\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{0}e^{i \frac{e_{1}^{*}V}{\hbar}(t-t^{\prime})}, \tag{7}\] \[\langle\hat{A}(t)\hat{A}^{\dagger}(t^{\prime})\rangle_{\text{full }} =\langle\hat{A}(t)\hat{A}^{\dagger}(t^{\prime})\rangle_{0}e^{-i\frac{ e_{1}^{*}V}{\hbar}(t-t^{\prime})}.\] In the second non-equilibrium driving, we consider injecting a single quasiparticle, denoted by the vector \(\mathbf{l}_{2}\), into the upper edge at the location \(x_{\text{inj}}<0\) and at time \(t_{\text{inj}}\). This is shown schematically in Fig. 2(a). In view of the commutation relations (2), the application of the quasiparticle creation operator \(e^{-i\mathbf{l}_{2}\cdot\mathbf{\phi}^{(u)}(x_{\text{inj}},t_{\text{inj}})}\) on the edge creates a soliton in each of the boson fields, \[\mathbf{\phi}^{(u)}(x,t_{\text{inj}})\mapsto\mathbf{\phi}^{(u)}(x,t_{\text{inj}})-2 \pi K^{-1}\mathbf{l}_{2}\Theta\left(x-x_{\text{inj}}\right). \tag{8}\] We assume here the injection happens instantaneously. This assumption will be relaxed to find the subleading term of Eq. (1). 
The fields at general times can then be obtained using the equations of motion dictated by the Hamiltonian in Eq. (3). If all modes are chiral with the same velocity \(v\), this amounts to replacing \(x-x_{\text{inj}}\to x-x_{\text{inj}}-v\left(t-t_{\text{inj}}\right)\). The soliton thus arrives at the QPC, \(x=0\), at time \(t_{0}\equiv t_{\text{inj}}-x_{\text{inj}}/v\). The \(c\)-number shift in the bosonic field of Eq. (8) leads to a phase shift in the correlator Eq. (6). We see directly from the definition of the operator \(\hat{A}\) in Eq. (4) that \[\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{\text{qp}} =\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{0}e^{2\pi i \mathbf{l}_{1}K^{-1}\mathbf{l}_{2}\left[\Theta(t-t_{0})-\Theta\left(t^{\prime}-t_{0}\right) \right]}, \tag{9}\] \[\langle\hat{A}(t)\hat{A}^{\dagger}(t^{\prime})\rangle_{\text{qp}} =\langle\hat{A}(t)\hat{A}^{\dagger}(t^{\prime})\rangle_{0}e^{-2 \pi i\mathbf{l}_{1}K^{-1}\mathbf{l}_{2}\left[\Theta(t-t_{0})-\Theta\left(t^{\prime}-t_{0} \right)\right]}.\] The phase we obtain is the standard definition of mutual braiding statistics between two quasiparticles, \(\theta_{12}\equiv\pi\mathbf{l}_{1}K^{-1}\mathbf{l}_{2}\)[2]. The expression in Eq. (9) shows that the product gains a phase of \(e^{2i\theta_{12}\text{sgn}\left(t-t^{\prime}\right)}\) if the arrival time \(t_{0}\) is between the times \(t^{\prime}\) and \(t\), and a trivial phase of \(1\) otherwise. We emphasize how naturally this result came from the underlying theory: the only assumptions necessary to obtain this are the commutation relations, (2), and the existence of quasiparticles in the edge's excitation spectrum. This result holds for different boson modes with different velocities if all solitons arrive at the tunneling QPC more or less concurrently, avoiding dephasing. This is the case if \(|x_{\text{inj}}|/\Delta v\ll\hbar/T\), where \(\Delta v\) is the velocity difference between the fastest and the slowest modes. 
_Time-domain interferometry._-- The appearance of the phase, \(\theta_{12}\), can be understood as time-domain interferometry of the two distinct \(\pm e_{1}^{*}\) quasiparticle-quasihole excitations, before and after the injected \(e_{2}^{*}\) quasiparticle arrives at the QPC. A similar physical picture has been shown in Refs. [25; 27; 28]. To show this we consider the configuration of a single injected particle, as described in Fig. 2(a). In this case the non-equilibrium correlation function takes the form, \[\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{\text{qp}}=\langle\psi_{ \mathbf{l}_{2}}(t_{0})\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\psi_{\mathbf{l}_{2}}^{ \dagger}(t_{0})\rangle_{0}, \tag{10}\] i.e., the expectation value is calculated with respect to the state resulting from exciting the ground state \(\ket{0}\) with a single quasiparticle. Here we omit the position variable from the quasiparticle injection operator \(\psi_{\mathbf{l}_{2}}^{\dagger}(t_{0})\), and assume it arrives at the tunneling QPC \(x=0\) at time \(t_{0}\). The current in Eq. (5) is then given by integration over multiple terms of the form in Eq. (10). We define \(\ket{t,t_{0}}_{-}\equiv\hat{A}(t)\psi_{\mathbf{l}_{2}}^{\dagger}(t_{0})\ket{0}\) and \(\ket{t,t_{0}}_{+}\equiv\hat{A}^{\dagger}(t)\psi_{\mathbf{l}_{2}}^{\dagger}(t_{0}) \ket{0}\). Eq. (5) can now be re-written as \[I\propto-\int_{-\infty}^{t}dt^{\prime}\sum_{b=\pm}b\,\big\|\left|t,t_{0}\right>_{b }+\left|t^{\prime},t_{0}\right>_{b}\big\|^{2}. \tag{11}\] The expression above involves two interference terms. The term with \(b=-\) is an interference between creation of \(-e_{1}^{*}\) quasiholes on the upper edge at the QPC at times \(t\) and \(t^{\prime}\). The two interfering processes are shown schematically in Fig. 2(b). As shown in the first row of Eq. 
(9), these two processes are distinguished by a non-trivial phase of \(e^{i2\theta_{12}}\) if the arrival time \(t_{0}\) is in between the quasiholes' creation times, \(t^{\prime}<t_{0}<t\). Combined with the equilibrium correlation function Eq. (6), one finds that this interference gives a term proportional to \(\cos\left(2\theta_{12}-2\pi\delta\right)\). Using similar arguments, the term with \(b=+\) in Eq. (11), gives an interference term proportional to \(\cos\left(2\theta_{12}+2\pi\delta\right)\). The total contribution from the two terms in Eq. (11) is thus proportional to \(\sin\left(2\theta_{12}\right)\sin\left(2\pi\delta\right)\)[41]. This interference happens entirely in the time domain, and along only one edge. It is however crucial that this edge be part of a two-dimensional bulk. This is important both because the second edge is required to absorb the leftover quasiparticle or quasihole resulting from the pair creation at the QPC, and because the injected quasiparticle must be created within a bulk FQH droplet. Furthermore, the bulk is intimately related to the edge through bulk-edge correspondence. This dictates that the statistical phase contributing to time-domain interference along a single edge, which our setup measures, is the same as the phase obtained from spatial exchange. It is easy to generalize this to injection of multiple quasiparticles: as long as all injected quasiparticles are mutually independent, each injected quasiparticle contributes a phase of \(e^{2i\theta_{12}}\) if and only if the arrival time at the point contact was between \(t^{\prime}\) and \(t\). 
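The way the two interference contributions combine is elementary trigonometry; a quick numerical check (with illustrative Laughlin-\(1/3\) values and the non-universal prefactor dropped) confirms that the \(b=\pm\) terms add up to something proportional to \(\sin(2\theta_{12})\sin(2\pi\delta)\):

```python
import math

def interference_sum(theta12, delta):
    # b = -1 piece proportional to cos(2*theta12 - 2*pi*delta);
    # b = +1 piece proportional to cos(2*theta12 + 2*pi*delta);
    # the relative sign from the sum over b leaves their difference.
    return (math.cos(2 * theta12 - 2 * math.pi * delta)
            - math.cos(2 * theta12 + 2 * math.pi * delta))

theta12, delta = math.pi / 3, 1 / 6  # illustrative Laughlin-1/3 values
lhs = interference_sum(theta12, delta)
rhs = 2 * math.sin(2 * theta12) * math.sin(2 * math.pi * delta)
```

The identity holds for any \((\theta_{12},\delta)\), so the dependence on mutual statistics factorizes from the scaling-dimension dependence.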
If we assume this is a Poissonian process, with a quasiparticle injection rate of \(I_{\rm inj}/e_{2}^{*}\), we obtain for \(t>0\) \[\frac{\langle\hat{A}^{\dagger}(t)\hat{A}(0)\rangle_{\rm dilute}}{ \langle\hat{A}^{\dagger}(t)\hat{A}(0)\rangle_{0}} =\sum_{n=0}^{\infty}\frac{(tI_{\rm inj}/e_{2}^{*})^{n}e^{-tI_{\rm inj }/e_{2}^{*}}}{n!}e^{2in\theta_{12}} \tag{12}\] \[=e^{-tI_{\rm inj}/e_{2}^{*}\left(1-e^{2i\theta_{12}}\right)}.\] This is precisely the result given in Refs. [25; 23] for injection along a single edge. Adding injected quasiparticles to the lower edge and generalizing for \(t<0\) are straightforward using the same arguments. _Currents.--_ The effect of driving the system out of equilibrium is completely encapsulated in the correlation functions obtained above. These can then be used to derive any observable of interest, such as charge or heat currents in any of the system's drains, or their respective auto- and cross-correlations. For concreteness, we present the explicit results of such a calculation for the charge current at the lower drain, denoted as \(I\) in Fig. 1. We show that a simple set of current measurements is sufficient to obtain the mutual statistics \(\theta_{12}\), without requiring correlation measurements. We focus on the regime where the temperature is large compared to the injected current, \(\hbar I_{\rm inj}/e\ll k_{B}T\). For the full limit, this assumption guarantees linear response in the voltage and in the injected current, which in this limit is \(I_{\rm inj}=\sigma_{xy}V\). For the dilute limit, the exponential suppression of the equilibrium correlation function at times larger than \(\hbar/T\) guarantees that the exponent in Eq. (12) may be expanded to first order in \(I_{\rm inj}\). 
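The resummation in Eq. (12) is the characteristic function of a Poisson distribution. A short numerical check, with an illustrative dimensionless rate \(\lambda=tI_{\rm inj}/e_{2}^{*}\), confirms the closed form:

```python
import cmath
import math

def poisson_phase_sum(lam, theta12, nmax=60):
    # Truncation of sum_n [lam^n e^{-lam} / n!] * e^{2 i n theta12},
    # accumulating each Poisson-weighted term iteratively.
    total, term = 0j, complex(math.exp(-lam))
    for n in range(nmax):
        total += term
        term *= lam * cmath.exp(2j * theta12) / (n + 1)
    return total

def poisson_phase_closed(lam, theta12):
    # Closed form of Eq. (12): exp(-lam * (1 - e^{2 i theta12})).
    return cmath.exp(-lam * (1 - cmath.exp(2j * theta12)))

lam, theta12 = 0.7, math.pi / 3  # illustrative values
truncated = poisson_phase_sum(lam, theta12)
closed = poisson_phase_closed(lam, theta12)
```

For \(\theta_{12}=\pi\) (fermionic mutual statistics) the exponent vanishes, consistent with the absence of a time-domain interferometry signal discussed below.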
Consequently, \[\frac{\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{\rm full/dilute} }{\langle\hat{A}^{\dagger}(t)\hat{A}(t^{\prime})\rangle_{0}}\approx 1+i\omega_{ \rm f/d}\left(t-t^{\prime}\right), \tag{13}\] where the frequencies \(\omega_{\rm f/d}\) are given by \[\omega_{\rm f}=\frac{e_{1}^{*}V}{\hbar}=\frac{e_{1}^{*}}{\hbar}\frac{I_{\rm inj }}{\sigma_{xy}};\quad\omega_{\rm d}=i\frac{I_{\rm inj}}{e_{2}^{*}}\left(1-e^{2 i\theta_{12}}\right). \tag{14}\] The zeroth order term corresponds to the equilibrium state and does not contribute to the current. The ratio of the two first order contributions is Eq. (1). Explicit calculation of the resulting current in Eq. (5), given in App. A, finds that \[I_{\rm full/dilute}=2\pi e_{1}^{*}(\xi\tau_{c})^{2}(2\pi T\tau_{c})^{4\delta_ {1}-2}\mathcal{B}\left(2\delta_{1},2\delta_{1}\right)\mathrm{Re}\left[\omega_ {\rm f/d}\right], \tag{15}\] where \(\mathcal{B}(x,y)\) is the Euler Beta function. It is thus immediately apparent that by focusing on the ratio between the full and dilute beams, all dependence on \(\delta_{1},T\) and \(\xi\) drops out. Examining the ratio \(I/I_{\rm inj}\), and noting that \(\sigma_{xy}\hbar/e_{1}^{*}e_{2}^{*}=\nu e^{2}/2\pi e_{1}^{*}e_{2}^{*}\) we thus obtain Eq. (1).

Figure 2: Time-domain interferometry. (a) I: A quasiparticle is injected from the sourced left edge, through the injection QPC, and into the upper edge. II: The injected quasiparticle, by virtue of its chiral motion along the edge, arrives at the tunneling QPC. III: A quasiparticle-quasihole pair is created at the tunneling QPC. (b) The two processes by which charge carriers may ultimately arrive at the drain. The injected quasiparticle arrives at the tunneling QPC either before (upper panel) or after (lower panel) the creation of the quasiparticle-quasihole pair. These two processes interfere, with a relative phase dictated by the mutual statistics phase, \(e^{i2\theta_{12}}\). 
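As a numerical sanity check of this cancellation, the ratio \(\mathrm{Re}[\omega_{\rm d}]/\mathrm{Re}[\omega_{\rm f}]\) built from Eq. (14) reproduces the universal prefactor of Eq. (1). Units \(\hbar=e=1\) with \(\sigma_{xy}=\nu e^{2}/2\pi\hbar\), and the Laughlin-\(1/3\) values below, are illustrative choices:

```python
import cmath
import math

hbar = e = 1.0                      # work in units hbar = e = 1
nu = e1 = e2 = 1.0 / 3.0            # Laughlin 1/3: e1*, e2* in units of e
theta12 = math.pi / 3               # mutual statistics pi * l1 K^{-1} l2
sigma_xy = nu * e**2 / (2 * math.pi * hbar)
I_inj = 1.0                         # arbitrary magnitude; cancels in the ratio

omega_f = e1 * I_inj / (hbar * sigma_xy)                     # Eq. (14), full
omega_d = 1j * (I_inj / e2) * (1 - cmath.exp(2j * theta12))  # Eq. (14), dilute

ratio = omega_d.real / omega_f      # Re[omega_f] = omega_f (real)
prefactor = nu * e**2 / (2 * math.pi * e1 * e2) * math.sin(2 * theta12)
```

Since \(\mathrm{Re}[\omega_{\rm d}]=(I_{\rm inj}/e_{2}^{*})\sin 2\theta_{12}\), the common factor \(2\pi e_{1}^{*}(\xi\tau_{c})^{2}(2\pi T\tau_{c})^{4\delta_{1}-2}\mathcal{B}(2\delta_{1},2\delta_{1})\) of Eq. (15) drops out of the ratio, as claimed.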
For general temperatures, the current can no longer be treated as a linear response to the drive of the full or dilute beams. We hence obtain the typical power laws characterizing tunneling in Luttinger liquids [2; 34; 42; 43]. Comparing measurements of the full and dilute limits in the low-temperature regime \(T\ll e^{*}V,I_{\rm inj}\) can still give a quantity related to the mutual statistics \(\theta_{12}\), but it will explicitly depend on the value of \(\delta_{1}\). We present general expressions for the current in this case in App. A. For a fermionic \(\theta_{12}=\pi\), Eq. (15) gives no current at all for a dilute electron beam. However, Landauer-Buttiker-Imry scattering theory [44] tells us the current is given by the product of the transparencies of the two QPCs along the electron's path, regardless of whether they are close to full transmission or full reflection. This requires accounting for the direct tunneling term in Eq. (1), which now becomes the leading contribution. We do this by accounting for the finite width of the soliton. This leads to the required, Landauer-Buttiker-Imry consistent result of \(I_{\rm dilute}=4\pi^{2}\tau_{c}^{2}\xi^{2}I_{\rm inj}\). The physical intuition behind the requirement of a finite soliton width is that tunneling without time-domain interferometry, dubbed the direct tunneling process in [24; 25], is dominated by short times. Performing these calculations explicitly in App. B, we show that the ratio between the first term in Eq. (1) and \(G_{\rm direct}\) is \(\propto(T\tau_{s})^{4\delta_{1}-2}\), where \(\tau_{s}\) is the soliton width. It has been shown [24; 25] that \(\tau_{s}^{-1}\propto\max\{eV,k_{B}T\}\); as such, to ensure \(G_{\rm direct}\) is sub-dominant, the dilute limit must be measured when \(k_{B}T\ll eV\) and \(4\delta_{1}<2\). 
Several contemporary experimental setups use the equivalent of non-interacting fermionic formulae to reasonable success [45], corresponding to the limiting value of \(2\delta_{1}=1\). In this case, the second term of Eq. (1) is a numerical coefficient of order one, which may depend solely on \(e^{*},\delta_{1}\) and \(\theta_{12}\). For non-interacting fermions, this coefficient is easily found by comparing to known Landauer-Buttiker-Imry scattering theory [44], but it is straightforward to generalize. We discuss this coefficient further in App. B. _Discussion._-- We propose a simple method to extract anyonic exchange statistics. Our system consists only of a single quantum Hall droplet with two QPCs, which effectively create a time-domain interferometer, as can be identified from current measurements. We thus avoid both current correlation (or noise) measurements, and the need for a real space interferometer, making the identification of the exchange statistics much more accessible than existing experiments. All time-domain interferometry is between pairs of an injected quasiparticle and a tunneling quasiparticle, and occurs at the same edge, as previously proposed in Ref. [25]. Both the exchange statistics \(\theta_{11}\) of the tunneling quasiparticle, and \(\theta_{22}\) of the injection quasiparticle, do not appear in our derivation. Rather, it is the two particles' _mutual statistics_, \(\theta_{12}\) that affect the modified correlation functions, and hence, the physical observables. Likewise, the scaling dimension and electric charge which directly affect observables are only those of the tunneling quasiparticle, \(\delta_{1}\) and \(e_{1}^{*}\) (properties of the injected quasiparticles may implicitly enter through the injection rate). Only in the case where the injected and tunneling quasiparticles are identical, \(\mathbf{l}_{1}=\mathbf{l}_{2}\), do we obtain exchange statistics for a single quasiparticle type. 
We remark that this is indeed the case in the experiment of Ref. [22], where all quasiparticles are Laughlin \(e^{*}=e/3\) anyons, and subsequent reproductions of the experiment for the \(\nu=1/3\) and \(\nu=2/5\) cases [46; 47; 26]. Interestingly, a recent experiment employing a similar setup, where the injected quasiparticle was a \(e/3\) anyon and the tunneling quasiparticle was an electron, observed Andreev-like reflection [48]. This is consistent with a mutual statistics phase of \(\theta_{12}=\pi\), for which Eq. (1) gives no time-domain interferometry signal. _Acknowledgements._-- We thank Tomer Alkalay, Moty Heiblum, Changki Hong, June-Young Lee and H.-S. Sim for insightful discussions and comments on the manuscript. This work was partially supported by grants from the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreements LEGOTOP No. 788715 and HQMAT No. 817799), the DFG (CRC/Transregio 183, EI 519/7-1), the BSF and NSF (2018643), the ISF Quantum Science and Technology (2074/19). N.S. was supported by the Clore Scholars Programme.
2301.10232
Combating harmful Internet use with peer assessment and differential evolution
Harmful Internet use (HIU) is a term coined for the unintended use of the Internet. In this study, we propose a more accurate HIU measuring method based on the peer assessment and differential evolution approach. The sample data comprises a juvenile population in Poland; 267 subjects assessed 1,513 peers. In addition to classic statistical analysis, differential evolution has been employed. Results indicate that there may be a substantially higher rate of HIU than other studies have indicated. More accurate measurement of the adolescent population influx affected by HIU is needed for healthcare and welfare system planning. Presented in Prague, Czech Republic, 20-22 July 2022.
W. W. Koczkodaj, M. Mazurek, W. Pedrycz, E. Rogalska, R. Roth, D. Strzalka, A. Szymanska, A. Wolny-Dominiak, M. Woodbury-Smith, O. S. Xue, R. Zbyrowski
2022-12-31T21:39:47Z
http://arxiv.org/abs/2301.10232v1
# Combating harmful Internet use with peer assessment and differential evolution ###### Abstract Harmful Internet use (HIU) is a term coined for the unintended use of the Internet. In this study, we propose a more accurate HIU measuring method based on the peer assessment and differential evolution approach. The sample data comprises a juvenile population in Poland; 267 subjects assessed 1,513 peers. In addition to classic statistical analysis, differential evolution has been employed. Results indicate that there may be a substantially higher rate of HIU than other studies have indicated. More accurate measurement of the adolescent population influx affected by HIU is needed for healthcare and welfare system planning. Harmful Internet Use (HIU), peer assessment, rating scale, differential evolution, harm measurement. + Footnote †: publication: _Proc. of the International Conference on Electrical, Computer and Energy Technologies (ICECET 2022)_ 20-22 June 2022, Prague-Czech Republic ## I Introduction When used inappropriately, harmful Internet use (HIU) becomes a major problem, especially for the younger generation. According to [1], problematic Internet use has been linked to behavioral addiction, major depressive disorder, attention deficit/hyperactivity disorder (ADHD), sleeping disorders, cognitive deficits, and suicides. Terms like "Internet addict" have commonly been used to recognize the burgeoning destructive potential of excessive Internet use or attraction to illicit pastimes. In this study, a rating scale is utilized as a tool for quantitative measuring of a new social phenomenon: HIU. Considerable efforts are being made to improve its accuracy and reduce the uncertainty of measurements when enhanced by peer assessment and analyzed with the help of differential evolution. 
The Netflix September 2020 release of the docudrama "The Social Dilemma" (see [2]) has generated considerable publicity related to HIU, and is a prime illustration of the need for improved measurement of HIU. The Social Dilemma is a pivotal docudrama which delves into the dangers of social media in particular. By interviewing the designers of social media platforms, the film makes a compelling case that social media poses a viable threat to civilization itself. Social media companies constrain us into adopting modes of thinking and behaving in ways that are profitable for corporations, rather than thinking and behaving in ways that are based on our own goals, beliefs, or values. Designers of social media platforms force us to give our time away to corporations selling 'big data' to their clients. We argue that the approach presented here should be used to measure the harm suggested in The Social Dilemma. Approaches to improving group assessments are presented in [3, 4, 5] and [6]. Another type of harmful Internet use is presented by [7]. ## II Methods Our intensive search of the most recognized subscription-based scientific citation indexing services, Scopus (by Elsevier, Netherlands) and Web of Knowledge (by Clarivate Analytics, USA), has traced the first formal introduction of the scientific peer review method to the 17th century. Henry Oldenburg (also spelled Oldenbourg), FRS, was a German theologian known as the proponent of scientific peer review. The American Medical Association uses medical peer review in its assessment of quality and safety improvement processes in health care organizations, as well as in assessing clinical behavior and compliance with professional society membership standards (see [8]). The self-assessment study (see [9]) was a survey of more than 250,000 individuals 45 years of age or older residing in New South Wales (NSW), the most populous state of Australia. 
It has demonstrated that individuals are significantly (by 36.5%) more likely to under-report mental disorders. Evidently, self-assessment should not be regarded as a reliable measurement method. Compared with mobile device recordings, most parents underestimated (35.7%) or overestimated (34.8%) their child's use. The total inaccuracy (70.5%) supports the common knowledge that the vast majority of parents miscalculate their children's time spent using mobile devices. ### _Data acquisition and questionnaires_ In this study, 267 participants took part. Participants evaluated 1,518 of their peers. 185 of the 267 subjects were elementary school students. They evaluated 1,238 peers. In addition to this, 82 parents evaluated 280 children, their own children, or children they knew. The age of the assessed participants was between 10 and 22 years. Secondary school students, aged from 12 to 15, provided answers to questions in two questionnaires. The children's parents were encouraged to take part in this study. One questionnaire required the respondents to provide a list of close friends and family members close to their age. Parents, grandparents, girlfriends, boyfriends, and/or guardians were excluded from this list, as their close emotional connection with the subjects would preclude an unbiased assessment. Subjects were instructed to list everyone whom they could evaluate (not just those whom they might suspect of HIU). Friendship or any kind of professional (e.g., study) relationship was regarded as acceptable, but more intimate relationships were not, since they may impair the objectivity of assessment. The second questionnaire was used to measure the HIU of the peers listed in the first questionnaire. It is discussed in Subsection 3. A small-scale rating, investigated in [6], has been used. In the case of children, the scale 0 to 3 is easier to comprehend and use. 
A value of 0 usually signifies absence or lack of knowledge, and 3 stands for the maximum of some quality or of knowledge about the subject. Modeling was carried out separately for the answers obtained from the groups of children, university students and parents.

## III Differential evolution meta-heuristic

Differential evolution (DE) was introduced in [10]. It has been used in [11] for finding a solution by iterative improvement of a candidate solution (e.g., weights) for a given objective function \(f\), which was the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. DE is widely recognized as one of the most powerful meta-heuristics (sometimes called an algorithm) based on the classic developmental process used for evolutionary algorithms (EAs). In contrast to other traditional approaches, such as EAs, DE uses the scaled differences of vectors to produce new candidate solutions in the population. Hence, no separate probability distribution needs to be used to perturb the population members [10]. DE is also characterized by the advantage of having few parameters, leading to simplicity of implementation. DE can be specified by the following steps: 1. _initialization_ - an initial population is created by sampling uniformly at random within the search bounds. 2. _mutation_ - a mutant vector is formed for each individual from scaled differences of randomly chosen population members. 3. _crossover_ - the mutant and target vectors are mixed to create new trial solutions. 4. _selection_ - the solutions that will breed the next generation are determined. DE remains inside a loop until the stopping criterion is met. Each step is explained separately in the subsequent subsections. From the syntax point of view, DE looks like an algorithm, but it is a heuristic, since proving its convergence in the general case remains a challenge.

### _Initialization_

Like other optimization algorithms, DE starts with a randomly initialized population of size NP.
Parameter vectors are called individuals. Each individual represents a \(D\)-dimensional vector of decision variables (parameters). The _i-th_ individual of generation \(G\) is denoted as follows: \[{\bf x_{i}}^{G}=[x_{i,1}^{G},\ldots,x_{i,D}^{G}], \tag{1}\] where \(i=1,\ldots,{\rm NP}\) and the components are indexed by \(j=1,\ldots,D\). Before the population is initialized, the lower and upper bounds of the decision variables are set, for \(i=1,\ldots,{\rm NP}\): \[{\bf B_{L}}=[B_{L,1},\ldots,B_{L,D}]=[\min x_{i,1}^{G},\ldots,\min x_{i,D}^{G}],\] \[{\bf B_{U}}=[B_{U,1},\ldots,B_{U,D}]=[\max x_{i,1}^{G},\ldots,\max x_{i,D}^{G}].\] Once the initialization search ranges have been determined, DE assigns a value to each decision variable \(j\) of the \(i\)-th individual at generation \(G=0\) within the specified range as follows [10]: \[x_{i,j}^{0}=B_{L,j}+r(B_{U,j}-B_{L,j}), \tag{2}\] where \(r\) is a uniformly distributed random number in the range \(0\leq r<1\).

### _Mutation_

After initialization, the mutation operator produces new solutions by forming a mutant vector (trial vector) for each parent individual (target vector). For each target vector, its corresponding trial vector can be generated by different mutation strategies. Each strategy employs a different approach to balance the exploration and exploitation tendencies. For the _i-th_ target vector at generation \(G\), the five most well-known mutation strategies are presented as follows: \(r1,r2,r3,r4,r5\in\{1,\ldots,\mathrm{NP}\}\) are five distinct randomly generated integer indices (all different from \(i\)). Furthermore, \(F\in[0,2]\) is a scaling factor affecting the difference vector, and \(\mathrm{best}\in\{1,\ldots,\mathrm{NP}\}\) is the index of the best individual vector at generation \(G\). 1. DE/rand/1 \[\mathbf{v_{i}}^{G}=\mathbf{x_{r1}}^{G}+F\cdot(\mathbf{x_{r2}}^{G}-\mathbf{x_{r3}}^{G}),\] 2. DE/best/1 \[\mathbf{v_{i}}^{G}=\mathbf{x_{best}}^{G}+F\cdot(\mathbf{x_{r1}}^{G}-\mathbf{x_{r2}}^{G}),\] 3.
DE/rand-to-best/1 \[\mathbf{v_{i}}^{G}=\mathbf{x_{i}}^{G}+F\cdot(\mathbf{x_{best}}^{G}-\mathbf{x_{i}}^{G})+F\cdot(\mathbf{x_{r1}}^{G}-\mathbf{x_{r2}}^{G}),\] 4. DE/best/2 \[\mathbf{v_{i}}^{G}=\mathbf{x_{best}}^{G}+F\cdot(\mathbf{x_{r1}}^{G}-\mathbf{x_{r2}}^{G})+F\cdot(\mathbf{x_{r3}}^{G}-\mathbf{x_{r4}}^{G}),\] 5. DE/rand/2 \[\mathbf{v_{i}}^{G}=\mathbf{x_{r1}}^{G}+F\cdot(\mathbf{x_{r2}}^{G}-\mathbf{x_{r3}}^{G})+F\cdot(\mathbf{x_{r4}}^{G}-\mathbf{x_{r5}}^{G}).\]

### _Crossover_

In this step, DE applies a discrete crossover approach to each target vector and its mutant vector. The basic version of DE incorporates the binomial crossover, defined component-wise as follows [10]: \[u_{i,j}^{G}=\begin{cases}v_{i,j}^{G}&\text{if }(r\leq CR)\text{ or }(j=j_{\text{rand}})\\ x_{i,j}^{G}&\text{otherwise,}\end{cases}\] where \(CR\) is the user-specified crossover rate, which determines the probability of mixing between target vectors and mutant vectors, and \(j_{\text{rand}}\in\{1,\ldots,D\}\) is a randomly picked integer index that guarantees at least one component is taken from the mutant vector.

### _Selection_

In this step, DE adopts a selection mechanism to choose the best individuals according to their fitness for producing the next generation of the population. Toward this goal, it compares the performance of the trial and target vectors and copies the better one into the next generation, as follows: \[\mathbf{x_{i}}^{G+1}=\begin{cases}\mathbf{u_{i}}^{G}&\text{if }f(\mathbf{u_{i}}^{G})\leq f(\mathbf{x_{i}}^{G})\\ \mathbf{x_{i}}^{G}&\text{otherwise,}\end{cases}\] where \(f\) is the objective function that should be optimized.

## IV Measurement model

The population selection followed two common-sense rules: "as random as possible" and "as many subjects as is feasible". Respondents provided answers about each assessed individual's HIU patterns after the concept of HIU had been explained to them in class, in the presence of a teacher. 1. Q1. I know his/her HIU pattern (N/A?). 2. Q2. He/she prefers HIU to socializing. 3. Q3.
His/her acquaintances and/or parents are concerned about his/her HIU. 4. Q4. HIU impairs his/her health, hygiene and eating pattern. 5. Q5. He/she avoids other activities. 6. Q6. He/she tried to decrease HIU but failed. 7. Q7. HIU negatively impacts his/her school performance. 8. Q8. Rating of his/her HIU (0 = never plays, 3 = extremely harmful). 9. Q9. My sex is: F - female, M - male, U - undeclared.

## V Results

Poland is the 6th largest economy in the European Union and 21st in the world. With a Human Development Index of 0.865, Poland is in 33rd place on the list and is regarded as one of the "very high human development" countries. For these reasons, Poland is representative, from economic and social points of view, of developed countries. In the group of elementary school students, the respondents evaluated an average of 6 peers each; the smallest number of peers a respondent evaluated was N=1 and the largest was 24. In the parents' group, the average was 3 children evaluated per parent (the smallest number of evaluations per parent was N=1, the largest was 10). A detailed analysis of Q1 shows that 52% of respondents know the HIU patterns of the assessed subjects. One of the important aspects of human growth is the socialization process. Social development ensures safe and healthy relationships with other individuals. Creative use of the Internet can have the effect of reinforcing a sense of friendship and connection for teens who play online games with friends. The results obtained in the survey (question Q2) do not support this. Above 61% of children prefer socializing (in the analysis, answers with weights 1.5 - 3.0 were summed). Analysis of question Q2 shows that 39% of children see a problem related to avoiding social contacts. An analysis of children's behavior by their parents shows that almost 52% of parents know their children's HIU pattern but do not regard it as a potential problem, as illustrated by Fig. 1 "A".
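For reference, the four DE steps specified in Section III (initialization, mutation, crossover, selection) can be assembled into a short, self-contained sketch. This is an illustrative implementation of DE/rand/1 with binomial crossover; all names and parameter values here are our own choices, and a simple sphere test function stands in for the AUC objective used in the study.

```python
import random

def de_optimize(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimize f over the box `bounds` using DE/rand/1 with binomial crossover."""
    rng = random.Random(seed)
    D = len(bounds)
    # Initialization: sample uniformly at random within [B_L, B_U] (Eq. (2)).
    pop = [[lo + rng.random() * (hi - lo) for lo, hi in bounds]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation (DE/rand/1): v = x_r1 + F * (x_r2 - x_r3), r1, r2, r3 != i.
            r1, r2, r3 = rng.sample([k for k in range(pop_size) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            # Binomial crossover: take v_j with probability CR; j_rand is forced.
            j_rand = rng.randrange(D)
            u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            # Selection: the better of trial and target survives.
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(t * t for t in x)   # stand-in objective, minimum at 0
x_best, f_best = de_optimize(sphere, bounds=[(-5.0, 5.0)] * 3)
```

With these (hypothetical) settings, the loop drives the sphere objective close to its minimum at the origin within a few hundred generations.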
Parents admitted their knowledge of the violent content to which their children were exposed. Parents (53%) notice isolation of their children from peers and society. These results suggest that the problem of HIU is not well recognized, despite answers to other related questions indicating HIU problems. The negative opinion threshold value was set to 2, hence the totals were obtained as the sum of answers with 2 and 3. Q8 was subjected to a more detailed analysis. It turns out that, when analyzing HIU, parents assess the situation as much worse in other children (67%) than in their own (57%), as shown in Fig. 1 "b". Fig. 1 "c" shows the analysis of question Q7. Nearly 60% of children and 50% of parents do not see a problem related to HIU. Negative effects of HIU on school performance are seen as a problem by only 26% of parents and 22% of children. An analysis of question Q3 shows that above 65% of respondents and 38% of parents consider HIU a normal activity. They are not concerned about HIU via social networks. Only 32% of parents regarded HIU as a negative activity. According to the respondents, there is no problem with the deterioration of health or hygiene due to HIU. The analysis of question Q4 shows that almost 78% of children and 54% of parents believe that HIU does not impair their health, hygiene and eating pattern. Only 19% of parents regard HIU as a contributing factor to the deterioration of health or hygiene in their children. Analysis of question Q5 shows that, consistent with the previous question, nearly 67% of children and 46% of parents do not view the avoidance of other activities as negative to the child's development. Abuse of games and the Internet is seen as a problem by only 24% of parents and is viewed as a lower risk by children (only 15%). The results of the analysis of question Q6 show that nearly 73% of children and 59% of parents have not even attempted to reduce their HIU.

Fig. 1: a) Number of answers for question Q1: "I know his/her HIU pattern (N/A?)"; b) Q8 – Analysis of avoiding other activities by children and parents; c) Q7 – HIU negatively affects his/her school performance. Fig. 2: a) Preferences between HIU and socializing – the answers for question Q8; b) Answers for scales in groups – question Q8.

The majority of respondents (56% of children, 47% of parents) believe that HIU has no application to them or their children. Only 13.5% of parents stated that the school performance of their children had deteriorated, while 22% of children expressed concerns about their peers. Fig. 2 "a" shows the percentage of answers to question Q8: "I rate his/her HIU as (0 = never plays, 3 = extremely harmful)". The inner ring reflects the children's opinions. The external ring shows parents' opinions. The bubble chart in Fig. 2 "b" shows the analysis of HIU. The largest bubble shows the population percentage for scores of 1.5 (moderate) to 3 (extremely harmful). Parents evaluated children (their own and peers of their children); 61% of parents regard HIU as concerning. Elementary school children evaluated their peers' HIU at 51%. The next level of the bubble chart is for the accumulated total from 2 to 3. It is 52% for the parental view and 39% for elementary school children. For the accumulated total of 2.5 and 3, the percentages are 30% and 20%, respectively. For the extreme case of HIU (3), they are 26% and 17%, respectively. Fig. 3 "a" shows the dispersion between responses of parents and children. It indicates that responses of parents are more negative, especially for questions 3 and 5. Fig. 3 "b" shows the dispersion of ratings provided by parents for the female and male populations. These ratings are for question 1. They demonstrate that, in the opinion of parents, both sexes are above an assumed neutral point of 1.5.
However, question 8 was evaluated by parents in a way indicating that male children have more problems than female children. The difference in responses to question 5 is interesting: it indicates that the male population of children avoids other activities more often than the female population. There are three levels of difference between the sexes that are regarded as substantial.

### _Statistical model specification_

In the proposed statistical model, a dichotomous dependent variable \(y\) takes only two values: 0 and 1. It is a binary (or dichotomous) choice model. The relationship between \(y\) and the explanatory variables consists of modeling, for the \(i\)-th object, the probability: \[p_{i}=F(x_{i}^{{}^{\prime}}\beta),\] \[x_{i}^{{}^{\prime}}\beta=(1\ X_{1i}\ X_{2i}...\ X_{ki})(\beta_{0}\ \beta_{1}\ \beta_{2}...\ \beta_{k})^{{}^{\prime}},\] where \(X_{ki}\) is the \(k\)-th explanatory variable for observation \(i\), and \(\beta_{k}\) is the parameter for the \(k\)-th explanatory variable. The object of our modeling is a hidden variable \(y^{*}\), whose values are not observed. The hidden variable \(y^{*}\) represents a tendency of an observation unit to make a decision or to take the state corresponding to \(y=1\). It is assumed that if this tendency is positive, then \(y=1\). Thus, \(y=1\) if \(y^{*}>0\), and \(y=0\) if \(y^{*}\leq 0\). The inclination \(y^{*}\) is the following function of the model's explanatory variables: \[y_{i}^{*}=x_{i}^{{}^{\prime}}\beta+e_{i},\] where \(e_{i}\) is the error of the model (_white noise_). The ordered logit models correspond to the ranking scale of the dependent variable in questions Q6 and Q7. In such models, the explained variable takes discrete values, ordered in a natural way (e.g. 0, 1, 2, 3, 4, 5, 6). Formally, it is assumed that the ordered variable \(y\) is a limited record of a continuous variable \(y^{*}\). In this study, six models have been defined: * M1. Estimations for a group of children (Q6). * M2. Assessment of the group of children (Q7).
* M3. The high HIU rating implies a male (Q9). * M4. Estimations for a group of parents (Q6). * M5. Parents' opinion (Q7). * M6. HIU in the group of children (Q9). It is worth noting that the number of correctly predicted cases exceeds 50% for almost all estimated models. The exception is model M5, with only 42.6% correctly predicted cases. In addition, the likelihood ratio test for each model allows us to reject the null hypothesis that all coefficients equal 0. In model M1, all statistically significant exogenous variables have a positive impact on the assessment obtained in question Q6. This means that a high rating obtained in any of the exogenous questions in model M1 (i.e., Q3, Q4, Q5 or Q7) increases the chance that the person has unsuccessfully tried to quit an unwanted activity. In this model, the care rating of parents or acquaintances (Q3) has the strongest positive impact on the assessment obtained in question Q6.

Fig. 3: a) Semantic differential for HIU parents and children opinions (medians); b) HIU Male/Female: parental opinions (median).

In model M2, the positive relation of question Q7 with the majority of regressors is also evidenced. The only exception is gender (Q9): gender reduces the chance of school-related problems connected to HIU (Q7) for females. The relationship between gender and the HIU rating obtained in question Q8 is shown by the binary logit model. For parents (model M4), question Q6 shows the largest positive dependence between "the stop HIU question" (Q6) and the avoidance of other activities (Q5). Therefore, avoiding other activities (Q5) can be an important predictor of excessive gaming. The parents' increased concern (Q3) and the fact that the respondent knows that the person has a HIU problem (Q1) can also be important predictors of HIU. In the parental perception (model M5), females suffer much less from school problems as a result of HIU (Q9).
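As a purely illustrative companion to the binary-logit specification above, the sketch below fits \(p_{i}=F(x_{i}^{\prime}\beta)\) with the logistic CDF by maximum likelihood, using full-batch gradient ascent on synthetic data. The data, variable names, and step sizes are hypothetical choices of ours, not the study's estimation code.

```python
import math, random

def logistic(z):
    # Logistic CDF F(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, steps=2000):
    """MLE for p_i = logistic(x_i' beta); each row of X starts with a 1."""
    k = len(X[0])
    beta = [0.0] * k
    for _ in range(steps):
        # Gradient of the average log-likelihood: (1/n) sum_i (y_i - p_i) x_i.
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            p = logistic(sum(b * x for b, x in zip(beta, xi)))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]
        beta = [b + lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

# Synthetic data generated with a true beta of (-1, 2).
rng = random.Random(1)
X = [[1.0, rng.uniform(-2.0, 2.0)] for _ in range(500)]
y = [1 if rng.random() < logistic(-1.0 + 2.0 * x[1]) else 0 for x in X]
beta_hat = fit_logit(X, y)
```

With 500 observations, the estimated coefficients recover the signs and approximate magnitudes of the generating parameters, which is the property the likelihood ratio tests above exploit.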
However, a high HIU rating (Q8) strongly implies educational problems (Q7) for both genders. Avoiding other activities (Q5) also has a high impact on school problems. Reduced hygiene and eating patterns (Q4) and poorer interpersonal relationships (Q2) have a slightly smaller impact on the rating (Q7). Model M6 shows the relationship between the intensity of HIU (Q8) and the subject's gender (Q9). The analysis has been carried out for the answers obtained from parents. The conclusion is similar to that obtained for the group of children: a high score recorded in the area of HIU (Q8) significantly reduces the chance that the examined person is female.

## VI Conclusions

By using the proposed measurement enhancement, our study has indicated that HIU penetration is at a much higher level among children in Poland than previously realized. Our findings are consistent with common sense and with observations that measurements based on assessments by peers gain in accuracy when compared to self-assessments or parental assessments. Further improvement of accuracy is expected to be gained by adding the approaches of [12, 13] in a follow-up publication. The presented models show a strong correlation between HIU and avoiding other activities, such as sports and live socializing. Poorer levels of hygiene, health, and nutrition can also, in part, be attributed to HIU based on the results of our study. Additionally, gaming often raises the anxiety of acquaintances and/or parents. According to [14]: "How does the school limit children through infrastructure and is it really a limiting phenomenon, or maybe a place of silence will prove to be a good space element for development, for example, spiritual." Research in drug abuse and addiction also teaches us that parents are often the last to know about their children's addiction problems.
This discouraging situation is exacerbated by the unreliability of the current measurements of Harmful Internet Use. The proposed peer review approach is a more objective way of measurement and, as an innovative approach, seems to be worth additional research effort. Poland has the sixth-largest economy in the European Union by nominal GDP and the fifth-largest by purchasing power parity GDP. Poland has been classified as a high-income economy by the World Bank, ranking 22nd worldwide by GDP (nominal) and 19th by GDP (PPP). Poland's 2017 Economic Complexity ranking of 21st in the world reflects a diverse, strong economy. Poland is a conservative society, and it is reasonable to assume that our results (hence their importance) are representative of all developed countries.

## Acknowledgment

The authors would like to thank the Board of Education in Rzeszow (Poland) for allowing us to collect data and for collecting the consent forms signed by parents.
2306.00086
Characterizing the geometry of the Kirkwood-Dirac positive states
The Kirkwood-Dirac (KD) quasiprobability distribution can describe any quantum state with respect to the eigenbases of two observables $A$ and $B$. KD distributions behave similarly to classical joint probability distributions but can assume negative and nonreal values. In recent years, KD distributions have proven instrumental in mapping out nonclassical phenomena and quantum advantages. These quantum features have been connected to nonpositive entries of KD distributions. Consequently, it is important to understand the geometry of the KD-positive and -nonpositive states. Until now, there has been no thorough analysis of the KD positivity of mixed states. Here, we characterize how the full convex set of states with positive KD distributions depends on the eigenbases of $A$ and $B$. In particular, we identify three regimes where convex combinations of the eigenprojectors of $A$ and $B$ constitute the only KD-positive states: $(i)$ any system in dimension $2$; $(ii)$ an open and dense set of bases in dimension $3$; and $(iii)$ the discrete-Fourier-transform bases in prime dimension. Finally, we investigate if there can exist mixed KD-positive states that cannot be written as convex combinations of pure KD-positive states. We show that for some choices of observables $A$ and $B$ this phenomenon does indeed occur. We explicitly construct such states for a spin-$1$ system.
Christopher Langrenez, David R. M. Arvidsson-Shukur, Stephan De Bièvre
2023-05-31T18:05:02Z
http://arxiv.org/abs/2306.00086v1
# Characterizing the geometry of the Kirkwood-Dirac positive states ###### Abstract The Kirkwood-Dirac (KD) quasiprobability distribution can describe any quantum state with respect to the eigenbases of two observables \(A\) and \(B\). KD distributions behave similarly to classical joint probability distributions but can assume negative and nonreal values. In recent years, KD distributions have proven instrumental in mapping out nonclassical phenomena and quantum advantages. These quantum features have been connected to nonpositive entries of KD distributions. Consequently, it is important to understand the geometry of the KD-positive and -nonpositive states. Until now, there has been no thorough analysis of the KD positivity of mixed states. Here, we characterize how the full convex set of states with positive KD distributions depends on the eigenbases of \(A\) and \(B\). In particular, we identify three regimes where convex combinations of the eigenprojectors of \(A\) and \(B\) constitute the only KD-positive states: \((i)\) any system in dimension 2; \((ii)\) an open and dense set of bases in dimension 3; and \((iii)\) the discrete-Fourier-transform bases in prime dimension. Finally, we investigate if there can exist mixed KD-positive states that cannot be written as convex combinations of pure KD-positive states. We show that for some choices of observables \(A\) and \(B\) this phenomenon does indeed occur. We explicitly construct such states for a spin-1 system. ## 1 Introduction In classical mechanics, a joint probability distribution \(\mathcal{P}(\mathbf{x},\mathbf{p})\) can describe a system with respect to two observables, such as position \(\mathbf{x}\) and momentum \(\mathbf{p}\). In quantum mechanics, however, observables generally do not commute and probabilistic descriptions of states with respect to more than one observable are often not available [1, 2, 3, 4, 5, 6]. 
Nevertheless, one can describe a quantum state with respect to two joint observables via a _quasiprobability_ distribution. Quasiprobability distributions obey all but one of Kolmogorov's axioms for probability distributions [7]: their entries sum to unity; their marginals correspond to the probability distributions given by the Born Rule; but individual quasiprobabilities can take negative or nonreal values. The quasiprobability formalism provides a useful alternative to other descriptions of quantum states. The most famous quasiprobability distribution is the Wigner function. It deals with continuous-variable systems with clear analogues of position and momentum. Most notably, the Wigner function has played a pivotal role in the analyses of quantum states of light [8, 9, 10, 11]. The Wigner function, and other quasiprobability distributions [12, 13, 14, 15, 16, 17, 18], allow techniques from statistics and probability theory to be applied to quantum mechanics. Most modern quantum-information research is phrased in terms of finite-dimensional systems--often systems of qubits. Moreover, the observables of interest are, unlike position and momentum, not necessarily conjugate. The Wigner function is ill-suited for such systems and observables. Instead, recent years have seen a different quasiprobability distribution come to the foreground: the Kirkwood-Dirac (KD) distribution [19, 20, 21, 22, 23, 24]. The KD distribution has proven itself a tremendously versatile tool in studying and developing quantum-information processing. In its standard form, the KD distribution describes a quantum state \(\rho\) with respect to two orthonormal bases \((|a_{i}\rangle)_{i\in[\![1,d]\!]}\) and \((|b_{j}\rangle)_{j\in[\![1,d]\!]}\) in a complex Hilbert space \(\mathcal{H}\) of dimension \(d\). The KD distribution reads \[\forall(i,j)\in[\![1,d]\!]^{2},\ Q_{ij}(\rho)=\langle b_{j}|a_{i}\rangle \langle a_{i}|\rho|b_{j}\rangle. 
\tag{1.1}\] By associating the two bases with the eigenstates of observables of interest, the KD distribution can be tuned towards a specific problem. So far, the KD distribution has been used to study, describe or develop: direct state tomography [17, 25, 26, 27, 28]; quantum metrology [29, 30, 31]; quantum chaos [32, 21, 33, 34, 35]; weak measurements [36, 37, 38, 39, 40, 21, 41, 42, 43]; quantum thermodynamics [32, 44, 45, 46, 47, 48]; quantum scrambling [21, 33]; Leggett-Garg inequalities [49, 50, 51]; generalised contextuality [39, 45, 46]; consistent-histories interpretations of quantum mechanics [52]; measurement disturbance [53, 40, 54, 23, 24, 55, 56]; and coherence [57]. The list can be made longer, but the point is clear: the Kirkwood-Dirac distribution currently experiences great prosperity and growing interest. Below, a state will be said to be KD positive when its KD distribution only takes on positive or zero values. Such states have been called KD classical elsewhere [22, 23, 24]. We prefer to avoid this terminology since the terms "classical" and "nonclassical" lack unique definitions. The capacity of quasiprobability distributions to describe quantum phenomena hinges on their ability to assume negative or nonreal values. An always-positive (probability) distribution cannot describe all of quantum mechanics. As concerns the KD distribution, nonpositive quasiprobabilities have been linked to various forms of quantum advantages in, for example, weak measurements [39, 41], quantum metrology [29, 30] and quantum thermodynamics [21, 45, 46]. Therefore, it is important to understand: When does a KD distribution assume only positive or zero values? While this question has been addressed for pure states [22, 23, 24], a general study of the mixed KD-positive states is lacking. In this work, we provide such a study. 
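To make Eq. (1.1) concrete, the KD distribution can be evaluated numerically. The following self-contained sketch (an illustration of ours, not material from the paper) computes \(Q(\rho)\) for bases given as lists of vectors and tests KD positivity up to a numerical tolerance; as expected, a basis state is KD positive.

```python
import math

def inner(u, v):
    # Hermitian inner product <u|v> = sum_k conj(u_k) v_k.
    return sum(complex(a).conjugate() * b for a, b in zip(u, v))

def kd_distribution(rho, A, B):
    """Q_ij = <b_j|a_i><a_i|rho|b_j> for bases A = (a_i), B = (b_j)."""
    d = len(A)
    def mat_vec(M, v):
        return [sum(M[r][c] * v[c] for c in range(d)) for r in range(d)]
    return [[inner(B[j], A[i]) * inner(A[i], mat_vec(rho, B[j]))
             for j in range(d)] for i in range(d)]

def is_kd_positive(Q, tol=1e-12):
    return all(abs(complex(q).imag) < tol and complex(q).real > -tol
               for row in Q for q in row)

# d = 2 example: A is the canonical basis, B the Hadamard-rotated basis.
h = 1.0 / math.sqrt(2.0)
A = [[1, 0], [0, 1]]
B = [[h, h], [h, -h]]
rho = [[1, 0], [0, 0]]                 # the pure state |a_0><a_0|
Q = kd_distribution(rho, A, B)
total = sum(q for row in Q for q in row)
```

Here \(Q\) has entries \((1/2,1/2;0,0)\), so the basis state \(|a_{0}\rangle\langle a_{0}|\) is KD positive and the entries sum to \(\operatorname{Tr}\rho=1\).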
To analyse how the KD distribution underlies nonclassical phenomena, one must first understand the geometric structure of the convex set \(\mathcal{E}_{\mathrm{KD}+}\) of KD-positive states. We know, by the Krein-Milman theorem [58], that this set is the convex hull of the set \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\) of its extreme points: \[\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{E}_{\mathrm{KD}+}^{ \mathrm{ext}}\right).\] It is, therefore, desirable to have a full description and a convenient characterization of \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\). The set \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\) always contains all the basis states \((|a_{i}\rangle)_{i\in[\![1,d]\!]}\) and \((|b_{j}\rangle)_{j\in[\![1,d]\!]}\). Additionally, \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\) may contain other pure and also mixed states. Experience with similar analyses for the Wigner function, where the mixed-state characterization of Wigner positive states is not fully solved, indicates that it might be difficult to obtain a full characterization of \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\) for general KD distributions. Our results about the convex set \(\mathcal{E}_{\mathrm{KD}+}\) of KD-positive states can be summed up as follows. We first identify for what choices of the bases \((|a_{i}\rangle)_{i\in[\![1,d]\!]}\) and \((|b_{j}\rangle)_{j\in[\![1,d]\!]}\) the only KD-positive states are those that are convex mixtures of the basis states. The following theorem provides a precise statement of these results. We introduce \[\mathcal{A}=\{|a_{i}\rangle\langle a_{i}|\ |\ i\in[\![1,d]\!]\},\quad \mathcal{B}=\{|b_{j}\rangle\langle b_{j}|\ |\ j\in[\![1,d]\!]\},\] which are the families of rank-one projectors associated to the two bases. 
Also, we write \(U_{ij}=\langle a_{i}|b_{j}\rangle\) for the transition matrix between the two bases and introduce \[m_{\mathcal{A},\mathcal{B}}=\min_{i,j}|\langle a_{i}|b_{j}\rangle|.\] **Theorem 1.1**.: _The equality_ \[\mathcal{E}_{\mathrm{KD+}}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right), \tag{1.2}\] _holds under any single one of the following hypotheses:_ 1. _If_ \(d=2\) _(for qubits) and_ \(m_{\mathcal{A},\mathcal{B}}>0\)_;_ 2. _If_ \(d=3\)_, for all_ \(U\) _in an open dense set of probability_ \(1\)_;_ 3. _If_ \(d\) _is prime and_ \(U\) _is the discrete Fourier transform (DFT) matrix;_ 4. _If_ \(U\) _is sufficiently close to some other_ \(U^{\prime}\) _for which Eq. (_1.2_) holds._ Note that Eq. (1.2) is equivalent to \[\mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}=\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}, \tag{1.3}\] where \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\) denotes the set of pure KD-positive states. For general \(\mathcal{A}\) and \(\mathcal{B}\), one has \[\mathcal{A}\cup\mathcal{B}\subseteq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\subseteq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}. \tag{1.4}\] In other words, Eq. (1.2) corresponds to the simplest situation, where the set of extreme states is minimal. In that case, we have an explicit description of the set of all KD-positive states since the convex set \(\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) forms a polytope with a simple geometric structure, detailed in Appendix A. Note that part (iv) of the theorem guarantees that the property Eq. (1.2) is stable in the sense that it is verified in an open set of unitary matrices. We conjecture that part (ii) of the theorem in fact holds in all dimensions. In other words, we think that the simple structure obtained in Eq. (1.2) is "typically" realized, meaning that it holds in an open dense set of full measure.
We have numerically checked this conjecture by randomly choosing unitary matrices \(U\) for dimensions \(d\) up to \(10\) (See Section 4 for details). The following proposition, proven in Section 4, shows a partial result in this direction: **Proposition 1.2**.: _Let \(d\geq 2\). There exists an open dense set of unitaries of probability \(1\) for which \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\)._ We stress that we nevertheless do not know whether, for the unitaries referred to in the proposition, the stronger property \(\mathcal{E}_{\mathrm{KD+}}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) holds. In general, it is a formidable task to identify \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\) and \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}\), given two specific bases \(\mathcal{A}\) and \(\mathcal{B}\). Part (iii) of the theorem shows that, when \(U\) is the DFT matrix, and the dimension \(d\) is a prime number, one again satisfies Eq. (1.3). When \(d\) is prime and the columns of \(U\) form two mutually unbiased (MUB) bases with the canonical basis, it is still true that \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\) (See [59] and Appendix C). But in that case, we have no information about the possible existence of mixed extreme states. When the dimension is not prime, and \(U\) is the DFT matrix, one can identify all pure KD-positive states [23, 59, 24] and one observes that there exist pure KD-positive states that are not basis states, _i.e._\(\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\). It is again, to the best of our knowledge, not known in that case whether there also exist extreme KD-positive states that are mixed, meaning whether \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}\).
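Part (iii) of Theorem 1.1 can be probed numerically. The sketch below (our illustration, not the authors' code) builds the DFT transition matrix for the prime dimension \(d=5\) and checks that a basis state is KD positive while a two-term superposition of basis states is not, consistent with \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\) in this case.

```python
import cmath, math

d = 5                                   # a prime dimension
w = cmath.exp(2j * cmath.pi / d)
s = 1.0 / math.sqrt(d)
# Columns of the DFT matrix: <a_k|b_j> = w**(k*j) / sqrt(d).
B = [[s * w ** (k * j) for k in range(d)] for j in range(d)]

def kd_entry(psi, i, bj):
    # Q_ij(|psi><psi|) = <b_j|a_i><a_i|psi><psi|b_j>, with A the canonical basis.
    a_i_psi = psi[i]
    psi_bj = sum(complex(c).conjugate() * x for c, x in zip(psi, bj))
    return complex(bj[i]).conjugate() * a_i_psi * psi_bj

def kd_positive(psi, tol=1e-12):
    for i in range(d):
        for j in range(d):
            q = kd_entry(psi, i, B[j])
            if abs(q.imag) > tol or q.real < -tol:
                return False
    return True

basis_state = [1, 0, 0, 0, 0]                                   # |a_0>
superpos = [1 / math.sqrt(2), 1 / math.sqrt(2), 0, 0, 0]        # (|a_0>+|a_1>)/sqrt(2)
```

For the superposition, the entries \(Q_{0j}=(1+\omega^{j})/(2d)\) with \(\omega=e^{2\pi i/d}\) are nonreal for \(j\neq 0\), so the state is detected as KD nonpositive.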
By analyzing in detail the situation where the transition matrix is real-valued, we provide below (Section 5) examples for which \(\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}\) or \(\mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}.\) In these cases, there therefore exist mixed extreme states, some of which we explicitly identify. We will highlight such situations with examples where the bases \(\mathcal{A}\) and \(\mathcal{B}\) are the eigenbases of two spin-\(1\) components in some particular directions. While this situation is in a sense exceptional, it has a precise analogue in the analysis of the Wigner function. Indeed, the pure Wigner positive states are known to be the Gaussian states [2]. But it is also well known that the convex hull of the pure Gaussian states (which contains all Gaussian states) does not exhaust all Wigner positive states [60]. As it turns out, even though examples of Wigner positive states not in this convex hull have been constructed [60, 61, 62], a complete description of the extreme states of the set of all Wigner positive states is, to the best of our knowledge, not available. In fact, no mixed extreme states have been explicitly identified for the Wigner function [63]. The remainder of this paper is structured as follows. In Section 2, we describe the general framework of our investigation, recall some definitions and necessary background information, and introduce our notation. In Section 3, we prove several results on the general structure of the geometry of the set of KD-positive states. These results are essential ingredients for the proofs of our main results. In Section 4, we prove Theorem 1.1 and Proposition 1.2.
In Section 5, we focus on real unitary matrices to construct examples of mixed states that are KD positive but cannot be written as convex combinations of pure KD-positive states. Section 6 contains our conclusions and outlook.

## 2 The setting and background

In this section, we introduce some notation, define KD distributions and recall some of their properties. Throughout this manuscript, we consider a complex Hilbert space \(\mathcal{H}\) of dimension \(d\). We also consider two orthonormal bases \(\left(|a_{i}\rangle\right)_{i\in\llbracket 1,d\rrbracket}\) and \(\left(|b_{j}\rangle\right)_{j\in\llbracket 1,d\rrbracket}\) in \(\mathcal{H}\). We denote by \(U=\left(\langle a_{i}|b_{j}\rangle\right)_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\) the transition matrix between these two bases. If \(\rho\) is a density matrix, we define the KD distribution \(Q(\rho)\) to be the \(d\times d\) matrix [19, 20] \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},Q_{ij}(\rho)=\langle b_{j}|a_{i} \rangle\langle a_{i}|\rho|b_{j}\rangle. \tag{2.1}\] Note that, for a given \(\rho\), the matrix \(Q(\rho)\) depends on the two bases. Although this will be crucial for our developments below, we do not indicate this dependence, so as not to burden the notation. The KD distribution thus satisfies the following of Kolmogorov's axioms for joint probability distributions: \[\sum_{i,j}Q_{ij}(\rho)=\operatorname{Tr}\rho=1,\quad\sum_{j}Q_{ij}(\rho)= \langle a_{i}|\rho|a_{i}\rangle,\quad\sum_{i}Q_{ij}(\rho)=\langle b_{j}|\rho| b_{j}\rangle. \tag{2.2}\] However, unlike joint probabilities, \(Q(\rho)\) is in general a complex-valued matrix. We call a state KD positive whenever \(Q_{ij}(\rho)\geq 0\) for all \((i,j)\in\llbracket 1,d\rrbracket^{2}\). The transition matrix \(U\) is determined by the choice of bases and determines whether \(\rho\) is KD positive. For example, if \(U=\mathbb{I}_{d}\), the bases are identical. Then, clearly all states are KD positive.
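As a concrete illustration of Eqs. (2.1) and (2.2), the following sketch computes a KD distribution numerically, taking \(|a_i\rangle\) to be the canonical basis and \(|b_j\rangle\) the columns of \(U\) (the numerical setup and function names are ours, chosen for illustration only):

```python
import numpy as np

def kd_distribution(rho, U):
    # Eq. (2.1): Q_ij = <b_j|a_i><a_i|rho|b_j>, with |a_i> canonical, |b_j> = U[:, j]
    return np.conj(U) * (rho @ U)

theta = 0.3
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # a real 2x2 transition matrix
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)  # the pure state |psi><psi|

Q = kd_distribution(rho, U)
# the marginals of Eq. (2.2)
assert np.isclose(Q.sum(), 1.0)                 # sums to Tr(rho) = 1
assert np.allclose(Q.sum(axis=1), np.diag(rho)) # row sums give <a_i|rho|a_i>
assert np.allclose(Q.sum(axis=0), [U[:, j] @ rho @ U[:, j] for j in range(2)])
assert Q.min() < 0  # this superposition state is not KD positive for these bases
```

For this real \(U\) the distribution is real-valued but has a negative entry, so \(|\psi\rangle\langle\psi|\) is not KD positive here even though its marginals are valid probability distributions.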
We will say that two bases, \((|a_{i}\rangle)_{i}\) and \((|a^{\prime}_{i}\rangle)_{i}\) or \((|b_{j}\rangle)_{j}\) and \((|b^{\prime}_{j}\rangle)_{j}\), are equivalent if they can be obtained from each other by permutations of the basis vectors and/or phase rotations. In that case, the matrices \(U\) and \(U^{\prime}\) are obtained from one another by permutations of their columns and rows, and global phase rotations of the rows and columns. We shall say such matrices are equivalent. The point of these definitions is that, when the bases (and hence the transition matrices) are equivalent, then the corresponding sets of KD-positive states are identical. In particular, we note for later use that, if \(\phi=(\phi_{1},\ldots,\phi_{d})\in[0,2\pi)^{d}\), \(\psi=(\psi_{1},\ldots,\psi_{d})\in[0,2\pi)^{d}\), and if \[|a^{\prime}_{j}\rangle=\exp(-i\phi_{j})|a_{j}\rangle,\quad|b^{\prime}_{j} \rangle=\exp(-i\psi_{j})|b_{j}\rangle,\] then the transition matrix \(U^{\prime}_{ij}=\langle a^{\prime}_{i}|b^{\prime}_{j}\rangle\) is given by \[U^{\prime}=D(-\phi)UD(\psi), \tag{2.3}\] where, for any \(\phi=(\phi_{1},\ldots,\phi_{d})\in[0,2\pi)^{d}\), \[D(\phi)_{jk}=\exp(-i\phi_{j})\delta_{jk}.\] Let us point out that we will often identify a unit vector \(|\psi\rangle\in\mathcal{H}\) with its projector \(|\psi\rangle\langle\psi|\). The questions we address in this work are of interest only if the two bases are in a suitable sense incompatible. For most of this work we will therefore assume that the unitary matrix \(U\) has no zeros: \[m_{\mathcal{A},\mathcal{B}}=\min_{i,j}|\langle a_{i}|b_{j}\rangle|>0. \tag{2.4}\] This guarantees that \(Q(\rho)\) determines a unique \(\rho\) (see Eq. (3.3)). In addition, it implies that none of the \(|a_{i}\rangle\langle a_{i}|\) commutes with any of the \(|b_{j}\rangle\langle b_{j}|\). This is a weak form of incompatibility between the two bases [24]. 
Indeed, \(m_{\mathcal{A},\mathcal{B}}>0\) means that if a measurement in the \(\mathcal{A}\) basis yields an outcome \(i\), then a subsequent measurement in the \(\mathcal{B}\) basis may yield any outcome \(j\) with a nonvanishing probability. We recall that a special role is played by mutually unbiased (MUB) bases, for which \(m_{\mathcal{A},\mathcal{B}}\) takes the maximum possible value \(m_{\mathcal{A},\mathcal{B}}=\frac{1}{\sqrt{d}}\). All outcomes \(j\) for a \(\mathcal{B}\)-measurement after an initial measurement in the \(\mathcal{A}\)-basis are then equally probable, and vice versa.

## 3 General structural results

In this section, we prove general results regarding the geometry of KD-positive states. We work under the assumption that \(m_{\mathcal{A},\mathcal{B}}>0\).

### The KD symbol of an observable

It is well known that the Wigner function can be defined not only for states \(\rho\), but also for arbitrary observables \(F\), in which case it is referred to as the Weyl symbol of \(F\). One can proceed similarly with the KD distribution. Denoting by \(\mathcal{S}_{d}\) the set of self-adjoint operators, we define \[Q:\left\{\begin{array}{rcl}\mathcal{S}_{d}&\to&\mathcal{M}_{d}(\mathbb{C}) \\ F&\mapsto&(Q_{ij}(F))_{(i,j)\in[\![1,d]\!]^{2}}\end{array}\right., \tag{3.1}\] where \[\forall(i,j)\in[\![1,d]\!]^{2},\ Q_{ij}(F)=\langle a_{i}|F|b_{j}\rangle\langle b_{j}|a_{i}\rangle,\] and where \(\mathcal{M}_{d}(\mathbb{C})\) is the space of complex \(d\) by \(d\) matrices. We shall refer to \(Q(F)\) as the KD symbol of \(F\). We note that \[\sum_{j=1}^{d}Q_{ij}(F)=\langle a_{i}|F|a_{i}\rangle\in\mathbb{R},\quad\sum_{i=1}^{d}Q_{ij}(F)=\langle b_{j}|F|b_{j}\rangle\in\mathbb{R},\quad\sum_{i,j}Q_{ij}(F)=\mathrm{Tr}\,(F).
\tag{3.2}\] Also, for \(F,G\in\mathcal{S}_{d}\), we have \[\mathrm{Tr}(FG) = \sum_{(i,j)\in[\![1,d]\!]^{2}}\langle a_{i}|F|b_{j}\rangle \langle b_{j}|G|a_{i}\rangle\] \[= \sum_{(i,j)\in[\![1,d]\!]^{2}}\frac{1}{|\langle a_{i}|b_{j} \rangle|^{2}}\langle b_{j}|a_{i}\rangle\langle a_{i}|F|b_{j}\rangle\langle a_{ i}|b_{j}\rangle\langle b_{j}|G|a_{i}\rangle\] \[= \sum_{(i,j)\in[\![1,d]\!]^{2}}\frac{1}{|\langle a_{i}|b_{j} \rangle|^{2}}Q_{ij}(F)\overline{Q_{ij}(G)}.\] If \(\mathcal{A}\) and \(\mathcal{B}\) are MUB bases, then \[\mathrm{Tr}(FG)=d\sum_{(i,j)\in[\![1,d]\!]^{2}}Q_{ij}(F)\overline{Q_{ij}(G)}= d\mathrm{Tr}(Q(F)Q^{\dagger}(G)).\] One may note the analogy between these two identities and the well known "overlap identity" for the Wigner function/Weyl symbol which expresses \(\mathrm{Tr}\,(FG)\) as a phase space integral of the product of the Wigner function/Weyl symbol of \(F\) and \(G\)[10]. We point out that, when \(m_{\mathcal{A},\mathcal{B}}>0\), the KD symbol \(Q(F)\) determines the observable \(F\) uniquely. The reconstruction formula is [64] \[\forall(i,k)\in\llbracket 1,d\rrbracket^{2},\ \langle a_{i}|F|a_{k}\rangle=\sum_{j=1 }^{d}Q_{ij}(F)\frac{\langle b_{j}|a_{k}\rangle}{\langle b_{j}|a_{i}\rangle}. \tag{3.3}\] This property is sometimes referred to as informational completeness. In other words, the map \(Q\) is injective: \(\mathrm{Ker}Q=\{0\}\) if \(m_{\mathcal{A},\mathcal{B}}>0\), where \(\mathrm{Ker}Q\) denotes the kernel of \(Q\). Since the dimension of the real vector space \(\mathcal{S}_{d}\) is \(d^{2}\), it follows that \(\dim\mathrm{Ran}\left(Q\right)=d^{2}\), where \(\mathrm{Ran}\left(Q\right)\) denotes the image of \(Q\). Hence, \(\mathrm{Ran}\left(Q\right)\) is a \(d^{2}\)-dimensional real vector subspace of the \(2d^{2}\)-dimensional real vector space \(\mathcal{M}_{d}(\mathbb{C})\). 
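The reconstruction formula, Eq. (3.3), is easy to test numerically. In the sketch below (our own illustration, with \(0\)-based indices), \(|a_i\rangle\) is the canonical basis, \(|b_j\rangle\) are the columns of \(U\), and \(U\) is taken to be the DFT matrix only because all of its entries are nonzero, so that \(m_{\mathcal{A},\mathcal{B}}>0\):

```python
import numpy as np

def kd_symbol(F, U):
    # Q_ij(F) = <a_i|F|b_j><b_j|a_i>, with |a_i> canonical and |b_j> = U[:, j]
    return np.conj(U) * (F @ U)

def reconstruct(Q, U):
    # Eq. (3.3) in 0-based indices: F_ik = sum_j Q_ij <b_j|a_k>/<b_j|a_i>;
    # well defined only when U has no zero entries (m_{A,B} > 0)
    d = U.shape[0]
    F = np.empty((d, d), complex)
    for i in range(d):
        for k in range(d):
            F[i, k] = np.sum(Q[i, :] * np.conj(U[k, :]) / np.conj(U[i, :]))
    return F

d = 3
U = np.array([[np.exp(-2j * np.pi * k * l / d) for l in range(d)]
              for k in range(d)]) / np.sqrt(d)  # DFT matrix: no zero entries
rng = np.random.default_rng(0)
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
F = H + H.conj().T  # a random self-adjoint observable
assert np.allclose(reconstruct(kd_symbol(F, U), U), F)  # informational completeness
```

The final assertion is exactly the injectivity of the map \(Q\) discussed above: the KD symbol determines the observable.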
Note that a matrix \(M\in\mathcal{M}_{d}(\mathbb{C})\) belongs to \(\mathrm{Ran}\left(Q\right)\) if and only if it satisfies the \(d^{2}\) real linear constraints \[\sum_{j}M_{kj}\frac{\langle b_{j}|a_{i}\rangle}{\langle b_{j}|a_{k}\rangle}= \sum_{j}\overline{M}_{ij}\frac{\langle a_{k}|b_{j}\rangle}{\langle a_{i}|b_{j}\rangle}.\] We will further find it useful to consider the imaginary part of \(Q\): \[\mathrm{Im}Q:\left\{\begin{array}{rcl}\mathcal{S}_{d}&\to&\mathcal{M}_{d}(\mathbb{R})\\ F&\mapsto&(\mathrm{Im}Q_{ij}(F))_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\end{array}\right.,\] which is a real-linear map into the space of real matrices \(\mathcal{M}_{d}(\mathbb{R})\). To streamline the discussion, we introduce the following terminology. We will say that a self-adjoint operator \(F\) is a _KD-real operator_ whenever its KD distribution is real-valued. In other words, \(F\) is KD real if and only if \[F\in V_{\mathrm{KDr}}:=\mathrm{Ker}\,\mathrm{Im}Q=Q^{-1}(\mathcal{M}_{d}(\mathbb{R})).\] We will say it is _KD positive_ if its KD distribution takes on real nonnegative values only. In other words, \(F\) is KD positive if and only if \[F\in V_{\mathrm{KD}+}:=Q^{-1}(\mathcal{M}_{d}(\mathbb{R}^{+}))\subseteq V_{\mathrm{KDr}}.\] We point out for later use that, since \(V_{\mathrm{KDr}}=\mathrm{Ker}(\mathrm{Im}Q)\subset\mathcal{S}_{d}\) and \(\dim\mathcal{S}_{d}=d^{2}\), \[\dim V_{\mathrm{KDr}}\leq d^{2}. \tag{3.4}\] Clearly, if \(F_{1},F_{2}\in V_{\mathrm{KD}+}\), then \(\lambda_{1}F_{1}+\lambda_{2}F_{2}\in V_{\mathrm{KD}+}\) for all \(\lambda_{1},\lambda_{2}\geq 0\). In particular, \(V_{\mathrm{KD}+}\) is a closed convex cone. Note that it has no extreme points, except for the origin.

### The case \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\): a geometric condition

Recall that a density matrix representing a quantum state is a nonnegative operator \(\rho\) satisfying \(\mathrm{Tr}\rho=1\).
We will write \(\mathcal{S}_{d,+,1}\) for the set of density matrices and \(\mathcal{S}_{d,+}\) for the set of positive operators. Hence \[\mathcal{E}_{\mathrm{KD}+}=V_{\mathrm{KD}+}\cap\mathcal{S}_{d,+,1}. \tag{3.5}\] Note that \(\mathcal{E}_{\mathrm{KD}+}\) is compact so that, by the Krein-Milman theorem, \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\right).\) The question we are addressing in this section is under which conditions on \(U\) it is true that \[\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right), \tag{3.6}\] where \[\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right) = \mathrm{conv}\left(\{|a_{i}\rangle\langle a_{i}|,|b_{j}\rangle\langle b_{j}|\ |\ 1\leq i,j\leq d\}\right). \tag{3.7}\] In other words, the question is: Is it true or false that all KD-positive states are convex mixtures of the basis states? This is equivalent to checking if the inclusions in Eq. (1.4) are equalities, _i.e._, if \[\mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}=\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}. \tag{3.8}\] One can think of Eq. (3.8) as the situation where the set \(\mathcal{E}_{\mathrm{KD}+}\) of KD-positive states is the smallest possible. In a sense then, this corresponds to the choice of two bases \(\mathcal{A}\) and \(\mathcal{B}\) that are "most strongly quantum." Note that, when Eq. (3.8) holds true, \(\mathcal{E}_{\mathrm{KD}+}\) is a convex polytope with \(2d\) known vertices \(\{|a_{i}\rangle\langle a_{i}|,|b_{i}\rangle\langle b_{i}|\}_{i\in\llbracket 1,d\rrbracket}\). Its geometry is described in Appendix A. In Proposition 3.2 we will prove conditions of a geometric nature on the set of KD-real operators that are equivalent to Eq. (3.6). We introduce the vector space \[\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B}) = \mathrm{span}_{\mathbb{R}}\{|a_{i}\rangle\langle a_{i}|,|b_{j}\rangle\langle b_{j}|\mid 1\leq i,j\leq d\}.
\tag{3.9}\] and show the following result: **Lemma 3.1**.: _If \(m_{\mathcal{A},\mathcal{B}}>0\), then_ \[\dim\left(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\right)=2d-1 \quad\mathrm{and}\quad\mathcal{E}_{\mathrm{KD}+}\cap\mathrm{span}_{\mathbb{R} }(\mathcal{A}\cup\mathcal{B})=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B} \right).\] Proof.: To prove the first statement, we consider the linear map \[\Gamma:\left\{\begin{array}{rcl}\mathbb{R}^{2d}&\rightarrow&\mathrm{span}_{ \mathbb{R}}(\mathcal{A}\cup\mathcal{B})\\ \left((\lambda_{i})_{i\in\llbracket 1,d\rrbracket},(\mu_{j})_{j\in\llbracket 1,d \rrbracket}\right)&\mapsto&\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle \langle a_{i}|+\sum_{j=1}^{d}\mu_{j}|b_{j}\rangle\langle b_{j}|\end{array}\right.\] for which the rank theorem gives \(\dim\left(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\right)=2d- \dim\left(\mathrm{Ker}(\Gamma)\right)\). Now suppose that \(F=\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}\mu_{j}|b _{j}\rangle\langle b_{j}|=0\). Then \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},\langle a_{i}|F|b_{j}\rangle= \langle a_{i}|b_{j}\rangle\left(\lambda_{i}+\mu_{j}\right)=0.\] Now, for any \(i\in\llbracket 1,d\rrbracket\), \(\langle a_{i}|b_{1}\rangle\neq 0\) because \(m_{\mathcal{A},\mathcal{B}}>0\), hence \(\lambda_{i}+\mu_{1}=0\) and finally \(\lambda_{i}=-\mu_{1}\) for all \(i\in\llbracket 1,d\rrbracket\). Exchanging the roles \((\lambda_{i})_{i\in\llbracket 1,d\rrbracket}\) and \((\mu_{j})_{j\in\llbracket 1,d\rrbracket}\), we find that for all \(j\in\llbracket 1,d\rrbracket\), \(\mu_{j}=\mu_{1}\). So, the relation stands as \[\mu_{1}\left(-\sum_{i=1}^{d}|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}|b_{j} \rangle\langle b_{j}|\right)=0,\] which is true for all \(\mu_{1}\in\mathbb{R}\). This means that \(\dim\left(\mathrm{Ker}(\Gamma)\right)=1\) and so \[\dim\left(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\right)=2d-1.\] We now turn to the second statement. 
That \(\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\subseteq\mathcal{E}_{ \mathrm{KD}+}\cap\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) is immediate. Thus, we only need to prove the other inclusion. Let therefore \(\rho\in\mathcal{E}_{\mathrm{KD}+}\cap\mathrm{span}_{\mathbb{R}}(\mathcal{A} \cup\mathcal{B})\). Hence, there exist \(\lambda_{i},\mu_{j}\in\mathbb{R}\) so that \[\rho=\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}\mu_{j }|b_{j}\rangle\langle b_{j}|.\] Consequently \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},Q_{ij}(\rho)=\left|\langle a_{i}|b_{ j}\rangle\right|^{2}\left(\lambda_{i}+\mu_{j}\right).\] After a possible reordering of the basis, we can suppose that \(\frac{Q_{11}(\rho)}{|\langle a_{1}|b_{1}\rangle|^{2}}=\min_{(i,j)\in \llbracket 1,d\rrbracket^{2}}\frac{Q_{ij}(\rho)}{|\langle a_{i}|b_{j}\rangle|^{2}}\) so that \[\forall j\in\llbracket 2,d\rrbracket,\mu_{j}-\mu_{1}=\frac{Q_{1j}(\rho)}{| \langle a_{1}|b_{j}\rangle|^{2}}-\frac{Q_{11}(\rho)}{|\langle a_{1}|b_{1} \rangle|^{2}}\geqslant 0\text{ and }\lambda_{j}-\lambda_{1}=\frac{Q_{j1}(\rho)}{|\langle a_{j}|b_{1} \rangle|^{2}}-\frac{Q_{11}(\rho)}{|\langle a_{1}|b_{1}\rangle|^{2}}\geqslant 0.\] Moreover, since \(\rho\) is KD positive, \(Q_{11}(\rho)=|\langle a_{1}|b_{1}\rangle|^{2}\left(\lambda_{1}+\mu_{1}\right)\geqslant 0\). So, either \(\mu_{1}\) or \(\lambda_{1}\) must be nonnegative. 
Suppose \(\lambda_{1}\geq 0\); then, as \(|a_{1}\rangle\langle a_{1}|=\sum_{j=1}^{d}|b_{j}\rangle\langle b_{j}|-\sum_{i=2}^{d}|a_{i}\rangle\langle a_{i}|\), we can rewrite \(\rho\) as \[\rho=\sum_{i=2}^{d}\left(\lambda_{i}-\lambda_{1}\right)|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}\left(\mu_{j}+\lambda_{1}\right)|b_{j}\rangle\langle b_{j}|=\sum_{i=2}^{d}\left(\lambda_{i}-\lambda_{1}\right)|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}\frac{Q_{1j}(\rho)}{|\langle a_{1}|b_{j}\rangle|^{2}}|b_{j}\rangle\langle b_{j}|.\] Hence \(\rho\in\operatorname{span}_{\mathbb{R}^{+}}(\mathcal{A}\cup\mathcal{B})\). Together with the fact that \(\operatorname{Tr}\rho=1\), this shows that \(\rho\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) and completes our proof. Recall that \(V_{\operatorname{KDr}}=\operatorname{Ker}(\operatorname{Im}Q)\) and that \[\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\subset V_{\operatorname{KDr}}.\] So we conclude that \[2d-1\leq\dim V_{\operatorname{KDr}}=\dim(\operatorname{Ker}(\operatorname{Im}Q))\leq d^{2}. \tag{3.10}\] The following proposition shows that the condition \(\dim V_{\operatorname{KDr}}=2d-1\) is equivalent to the requirement that the basis states are the only extreme KD-positive states, which in turn is equivalent to Eq. (3.6). **Proposition 3.2**.: _Suppose \(m_{\mathcal{A},\mathcal{B}}>0\). Consider the following statements: (ia) \(V_{\operatorname{KDr}}=\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\); (ib) \(\dim V_{\operatorname{KDr}}=2d-1\); (iia) \(\mathcal{E}_{\operatorname{KD}+}^{\operatorname{ext}}=\mathcal{A}\cup\mathcal{B}\); (iib) \(\mathcal{E}_{\operatorname{KD}+}=\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). Then (ia) \(\Leftrightarrow\) (ib) \(\Leftrightarrow\) (iia) \(\Leftrightarrow\) (iib)._ Proof.: That (ia) \(\Leftrightarrow\) (ib) is immediate and so is the equivalence between (iia) and (iib). We first show that (ia) implies (iib).
Let \(\rho\in\mathcal{E}_{\operatorname{KD}+}\). Then it belongs to \(V_{\operatorname{KDr}}\) and hence, by (ia), \(\rho\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). Hence, by the second statement of Lemma 3.1, as \(\rho\in\mathcal{E}_{\operatorname{KD}+}\cap\operatorname{span}_{\mathbb{R}}( \mathcal{A}\cup\mathcal{B})\), it follows that \(\rho\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). Thus, \(\mathcal{E}_{\operatorname{KD}+}=\operatorname{conv}\left(\mathcal{A}\cup \mathcal{B}\right)\). It remains to show that (iib) implies (ia). We proceed by contraposition. Suppose that (ia) does not hold so that \(\dim V_{\operatorname{KDr}}>2d-1\). Lemma 3.1 then implies that \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) is a proper subspace of \(V_{\operatorname{KDr}}\). So \[V_{\operatorname{KDr}}=\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup \mathcal{B})\oplus W,\] with \(W\) equal to the orthogonal complement of \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) in \(V_{\operatorname{KDr}}\), which is nontrivial by assumption. Note that \(F\in W\) implies that \(\langle a_{i}|F|a_{i}\rangle=0=\langle b_{j}|F|b_{j}\rangle\) for all \((i,j)\in\llbracket 1,d\rrbracket^{2}\). This implies \(\operatorname{Tr}F=0\). In addition, \(Q(F)\) has only real entries by the definition of \(V_{\operatorname{KDr}}\). Choose \(F\in W\backslash\{0\}\), and consider, for all \(x\in\mathbb{R}\) \[\rho(x)=\rho_{*}+xF\text{ where }\rho_{*}=\frac{1}{d}\mathbb{I}_{d}\in \operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right).\] Note that \(\operatorname{Tr}\rho(x)=1\) for all \(x\in\mathbb{R}\) and that, for all \(x\in\mathbb{R}\), one has \[\langle\psi|\rho(x)|\psi\rangle\geq\frac{1}{d}-f_{\max}|x|,\] where \(|\psi\rangle\) is any norm-1 vector in \(\mathcal{H}\). Here, \(f_{\max}=\max\{|f_{i}||i\in\llbracket 1,d\rrbracket\}>0\), where the \(f_{i}\) are the eigenvalues of \(F\). 
In particular, if \(|x|\leq\frac{1}{df_{\max}}\), then \(\rho(x)\) is a positive operator of trace 1. We now show that there exists \(0<x_{+}\leq\frac{1}{df_{\max}}<+\infty\) so that \[\forall x\in[-x_{+},x_{+}],\quad\rho(x)\in\mathcal{E}_{\operatorname{KD}+}. \tag{3.11}\] Since \(F\in V_{\mathrm{KDr}}\), we know \(\rho(x)\in V_{\mathrm{KDr}}\). One has, for all \(x\in\mathbb{R}\), \[Q_{ij}(\rho(x))=\frac{1}{d}|\langle a_{i}|b_{j}\rangle|^{2}+xQ_{ij}(F)\geq\frac{m_{\mathcal{A},\mathcal{B}}^{2}}{d}-|x|\max_{i,j}|Q_{ij}(F)|.\] Taking \(x_{+}=\min\{\frac{1}{df_{\max}},\frac{m_{\mathcal{A},\mathcal{B}}^{2}}{d\max_{i,j}|Q_{ij}(F)|}\}>0\), we have Eq. (3.11). This implies that (iib) does not hold since for all \(x\neq 0\), \(\rho(x)\not\in\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\).

### Characterizing \(\rho\in\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\)

The following proposition is essential to the proof of Theorem 1.1 (iii). **Proposition 3.3**.: _Suppose \(m_{\mathcal{A},\mathcal{B}}>0\). Then,_ \[\rho\in\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\] _if and only if \(\rho\in\mathcal{E}_{\mathrm{KD}+}\) and_ \[\forall(i,j,k,l)\in\llbracket 1,d\rrbracket^{4},\frac{Q_{ij}(\rho)}{\left|\langle a_{i}|b_{j}\rangle\right|^{2}}+\frac{Q_{kl}(\rho)}{\left|\langle a_{k}|b_{l}\rangle\right|^{2}}=\frac{Q_{il}(\rho)}{\left|\langle a_{i}|b_{l}\rangle\right|^{2}}+\frac{Q_{kj}(\rho)}{\left|\langle a_{k}|b_{j}\rangle\right|^{2}}. \tag{3.12}\] Proof.: We first show the reverse implication. Let \(\rho\in\mathcal{E}_{\mathrm{KD}+}\) satisfy Eq. (3.12). We construct a state \(\rho_{2}\in\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\) such that \(Q(\rho_{2})=Q(\rho)\). Since \(m_{\mathcal{A},\mathcal{B}}>0\), we know from Eq. (3.3) that the KD distribution determines the state, so that \(\rho_{2}=\rho\).
Note that the basis states have the following KD distribution: \[\forall(i,j,k)\in\llbracket 1,d\rrbracket^{3},Q_{ij}(|a_{k}\rangle\langle a_{k} |)=\left|\langle a_{k}|b_{j}\rangle\right|^{2}\delta_{i,k}\text{ and }Q_{ij}(|b_{k}\rangle\langle b_{k}|)=\left|\langle a_{i}|b_{k}\rangle \right|^{2}\delta_{j,k}.\] By permuting the order of the vectors in \(\mathcal{B}\), we can suppose that \(\frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1}\rangle\right|^{2}}=\min_{j\in \llbracket 1,d\rrbracket}\frac{Q_{1j}(\rho)}{\left|\langle a_{1}|b_{j}\rangle \right|^{2}}\). We define \[\rho_{2}=\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d} \mu_{j}|b_{j}\rangle\langle b_{j}|\] where \(\lambda_{i}=\frac{Q_{i1}(\rho)}{\left|\langle a_{i}|b_{1}\rangle\right|^{2}}\) for all \(i\in\llbracket 1,d\rrbracket\) and \(\mu_{j}=\frac{Q_{1j}(\rho)}{\left|\langle a_{1}|b_{j}\rangle\right|^{2}}- \frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1}\rangle\right|^{2}}\) for \(j\in\llbracket 1,d\rrbracket\) so that \(\mu_{1}=0\). Since \(\rho\) is KD positive, \(\lambda_{i}\geqslant 0\) and \(\mu_{i}\geqslant 0\) for all \(i\in\llbracket 1,d\rrbracket\). Moreover, using Eq. (3.12), one has that \[\mathrm{Tr}\,\rho_{2}=\sum_{j=1}^{d}\mu_{j}+\sum_{i=1}^{d}\lambda _{i} = \sum_{j=1}^{d}\sum_{i=1}^{d}|\langle a_{i}|b_{j}\rangle|^{2}\, \mu_{j}+\sum_{i=1}^{d}\sum_{j=1}^{d}|\langle a_{i}|b_{j}\rangle|^{2}\, \lambda_{i}\] \[= \sum_{j=1}^{d}\sum_{i=1}^{d}|\langle a_{i}|b_{j}\rangle|^{2}\left( \frac{Q_{i1}(\rho)}{\left|\langle a_{i}|b_{1}\rangle\right|^{2}}+\frac{Q_{1j}( \rho)}{\left|\langle a_{1}|b_{j}\rangle\right|^{2}}-\frac{Q_{11}(\rho)}{\left| \langle a_{1}|b_{1}\rangle\right|^{2}}\right)\] \[= \sum_{j=1}^{d}\sum_{i=1}^{d}Q_{ij}(\rho)=1,\] so that \(\rho_{2}\in\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\). Using Eq. 
(3.12) again, we find \(\forall(i,j)\in\llbracket 1,d\rrbracket^{2}\), \[Q_{ij}(\rho_{2}) = |\langle a_{i}|b_{j}\rangle|^{2}\,(\lambda_{i}+\mu_{j})\] \[= |\langle a_{i}|b_{j}\rangle|^{2}\left(\frac{Q_{i1}(\rho)}{\left| \langle a_{i}|b_{1}\rangle\right|^{2}}+\frac{Q_{1j}(\rho)}{\left|\langle a_{1} |b_{j}\rangle\right|^{2}}-\frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1} \rangle\right|^{2}}\right)\] \[= Q_{ij}(\rho).\] This shows that \(\rho_{2}=\rho\) so that \(\rho\in\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\). For the proof of the direct implication, we note that if \(\rho\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\), then \(\rho=\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j=1}^{d}\mu_{ j}|b_{j}\rangle\langle b_{j}|\) with \(\sum_{i=1}^{d}\lambda_{i}+\mu_{i}=1\), \(\lambda_{i}\geqslant 0\) and \(\mu_{i}\geqslant 0\) for all \(i\in\llbracket 1,d\rrbracket\). The KD distribution of \(\rho\) is given by \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},Q_{ij}(\rho)=\left|\langle a_{i}|b_{j} \rangle\right|^{2}(\lambda_{i}+\mu_{j})\,.\] Hence, \(\rho\) is KD-positive and for all \((i,j)\in\llbracket 1,d\rrbracket^{2}\), \[\frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1}\rangle\right|^{2}}+\frac{Q_{ij} (\rho)}{\left|\langle a_{i}|b_{j}\rangle\right|^{2}}=(\lambda_{1}+\mu_{1})+( \lambda_{i}+\mu_{j})=\lambda_{1}+\mu_{1}+\lambda_{i}+\mu_{j},\] \[\frac{Q_{1j}(\rho)}{\left|\langle a_{1}|b_{j}\rangle\right|^{2}}+\frac{Q_{il} (\rho)}{\left|\langle a_{i}|b_{1}\rangle\right|^{2}}=(\lambda_{1}+\mu_{j})+( \lambda_{i}+\mu_{1})=\lambda_{1}+\mu_{1}+\lambda_{i}+\mu_{j}.\] This implies Eq. (3.12) with \(k=1=l\). 
For the general case, we write \[\frac{Q_{ij}(\rho)}{\left|\langle a_{i}|b_{j}\rangle\right|^{2}}+\frac{Q_{kl} (\rho)}{\left|\langle a_{k}|b_{l}\rangle\right|^{2}}=\frac{Q_{i1}(\rho)}{ \left|\langle a_{i}|b_{1}\rangle\right|^{2}}+\frac{Q_{1j}(\rho)}{\left|\langle a _{1}|b_{j}\rangle\right|^{2}}-\frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1} \rangle\right|^{2}}+\frac{Q_{k1}(\rho)}{\left|\langle a_{k}|b_{1}\rangle \right|^{2}}+\frac{Q_{1l}(\rho)}{\left|\langle a_{1}|b_{l}\rangle\right|^{2}}- \frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1}\rangle\right|^{2}}\] and \[\frac{Q_{il}(\rho)}{\left|\langle a_{i}|b_{l}\rangle\right|^{2}}+\frac{Q_{kj} (\rho)}{\left|\langle a_{k}|b_{j}\rangle\right|^{2}}=\frac{Q_{i1}(\rho)}{ \left|\langle a_{i}|b_{1}\rangle\right|^{2}}+\frac{Q_{1l}(\rho)}{\left|\langle a _{1}|b_{l}\rangle\right|^{2}}-\frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1} \rangle\right|^{2}}+\frac{Q_{k1}(\rho)}{\left|\langle a_{k}|b_{1}\rangle \right|^{2}}+\frac{Q_{1j}(\rho)}{\left|\langle a_{1}|b_{j}\rangle\right|^{2}}- \frac{Q_{11}(\rho)}{\left|\langle a_{1}|b_{1}\rangle\right|^{2}}.\] The right hand sides of these two equations are identical up to a reorganization of the terms, so \[\frac{Q_{ij}(\rho)}{\left|\langle a_{i}|b_{j}\rangle\right|^{2}}+\frac{Q_{kl} (\rho)}{\left|\langle a_{k}|b_{l}\rangle\right|^{2}}=\frac{Q_{il}(\rho)}{\left| \langle a_{i}|b_{l}\rangle\right|^{2}}+\frac{Q_{kj}(\rho)}{\left|\langle a_{k} |b_{j}\rangle\right|^{2}}.\] This ends our proof. The relations (3.12) are simpler for MUB bases and are given in the following corollary. **Corollary 3.4**.: _Let \(\mathcal{A}\) and \(\mathcal{B}\) be MUB bases. Then_ \[\rho\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\] _if and only if \(\rho\in\mathcal{E}_{\mathrm{KD}+}\) and_ \[\forall(i,j,k,l)\in\llbracket 1,d\rrbracket^{4},Q_{ij}(\rho)+Q_{kl}(\rho)=Q_{il} (\rho)+Q_{kj}(\rho). \tag{3.13}\] Proof.: This is a direct consequence of Proposition 3.3. 
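Proposition 3.3 and Corollary 3.4 translate directly into a numerical membership test for \(\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\). The sketch below is our own code, with \(0\)-based indices playing the role of the index \(1\) in the proof; it checks KD positivity together with the additivity condition of Eq. (3.12):

```python
import numpy as np

def kd_distribution(rho, U):
    # Q_ij = <b_j|a_i><a_i|rho|b_j>, with |a_i> canonical and |b_j> = U[:, j]
    return np.conj(U) * (rho @ U)

def in_conv_AB(rho, U, tol=1e-10):
    # Proposition 3.3: a density matrix rho lies in conv(A u B) iff Q(rho) is
    # entrywise nonnegative and R_ij = Q_ij/|U_ij|^2 satisfies Eq. (3.12),
    # which is equivalent to R_ij - R_i0 - R_0j + R_00 = 0 for all i, j.
    Q = kd_distribution(rho, U)
    if np.abs(Q.imag).max() > tol or Q.real.min() < -tol:
        return False  # rho is not even KD positive
    R = Q.real / np.abs(U) ** 2
    return np.abs(R - R[:, :1] - R[:1, :] + R[0, 0]).max() < tol

d = 3
U = np.array([[np.exp(-2j * np.pi * k * l / d) for l in range(d)]
              for k in range(d)]) / np.sqrt(d)  # DFT transition matrix
# a mixture of basis states: (1/2)|a_0><a_0| + (1/2)|b_1><b_1|
rho_mix = np.zeros((d, d), complex)
rho_mix[0, 0] = 0.5
rho_mix += 0.5 * np.outer(U[:, 1], U[:, 1].conj())
assert in_conv_AB(rho_mix, U)
# a pure superposition state that is not such a mixture
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
assert not in_conv_AB(np.outer(psi, psi), U)
```

The two-index form of the condition used in the code follows by setting \(k\) and \(l\) to the reference index in Eq. (3.12), exactly as in the proof above.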
## 4 Proofs of Theorem 1.1 and of Proposition 1.2

For convenience, we restate our theorem: **Theorem 1.1**.: _The equality_ \[\mathcal{E}_{\mathrm{KD}+}=\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right), \tag{1.2}\] _holds under any single one of the following hypotheses:_ 1. _If_ \(d=2\) _(for qubits) and_ \(m_{\mathcal{A},\mathcal{B}}>0\)_;_ 2. _If_ \(d=3\)_, for all_ \(U\) _in an open dense set of probability_ \(1\)_;_ 3. _If_ \(d\) _is prime and_ \(U\) _is the discrete Fourier transform (DFT) matrix;_ 4. _If_ \(U\) _is sufficiently close to some other_ \(U^{\prime}\) _for which Eq. (_1.2_) holds._ Proof of Theorem 1.1 (i).: We can, without loss of generality, suppose that the transition matrix \(U\) is a real matrix by performing appropriate phase changes on the basis vectors. If \(U\) has no zeros (\(m_{\mathcal{A},\mathcal{B}}>0\)), we can therefore write \(U=\begin{pmatrix}\cos(\theta)&\sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{pmatrix}\) for \(\theta\in\mathbb{R}\backslash\frac{\pi}{2}\mathbb{Z}\). To find the dimension of the space of KD-real operators, we consider \(F\in V_{\mathrm{KDr}}\) and write \[Q_{11}(F) = \langle b_{1}|a_{1}\rangle\langle a_{1}|F|b_{1}\rangle=\langle b_{1}|a_{1}\rangle\langle a_{1}|b_{1}\rangle F_{11}+\langle b_{1}|a_{1}\rangle\langle a_{2}|b_{1}\rangle F_{12}\] \[= \cos(\theta)^{2}F_{11}-\cos(\theta)\sin(\theta)F_{12}\in\mathbb{R},\] with \(F_{ij}=\langle a_{i}|F|a_{j}\rangle\). Since, by hypothesis, \(F\) is self-adjoint and \(\mathrm{Im}Q_{11}(F)=0\), one finds \(\mathrm{Im}F_{12}=0\) so \(F_{12}=F_{21}\). Hence \(F\in V_{\mathrm{KDr}}\) implies that \(F\) is real symmetric. Conversely, one can check that for any real symmetric \(F\), \(Q(F)\) is a real matrix. Consequently, \(\dim(V_{\mathrm{KDr}})=3\). The result then follows from Proposition 3.2. _Proof of Theorem 1.1 (ii)._ This result is restated more explicitly in the following proposition.
**Proposition 4.1**.: _In dimension \(d=3\), there exists a set \(\mathcal{W}\) of unitary matrices such that:_ * \(\forall U\in\mathcal{W}\)_,_ \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\)_;_ * \(\mathcal{W}\) _is an open and dense subset of the set of unitary matrices;_ * \(\mathcal{W}\) _is a set of probability one for the Haar measure on the unitary group._ Proof.: For any unitary matrix \(U\) with \(m_{\mathcal{A},\mathcal{B}}>0\), we write \(U=(A_{kj}e^{i\phi_{kj}})_{(k,j)\in[\![1,3]\!]^{2}}\), with \(A_{kj}>0\). We define \(\mathcal{W}\) to be the set of unitary matrices in dimension \(3\) for which \(m_{\mathcal{A},\mathcal{B}}>0\) and the following conditions are fulfilled: \[\left\{\begin{array}{lcll}\phi_{21}-\phi_{11}&\neq&0&\quad\quad\quad\left[ \frac{\pi}{2}\right]\\ \phi_{22}-\phi_{12}&\neq&0&\quad\quad\quad\left[\frac{\pi}{2}\right]\\ \phi_{31}-\phi_{11}&\neq&0&\quad\quad\quad\left[\frac{\pi}{2}\right]\\ \phi_{32}-\phi_{12}&\neq&0&\quad\quad\left[\frac{\pi}{2}\right]\\ \phi_{21}-\phi_{11}&\neq&\phi_{22}-\phi_{12}&\left[\pi\right]\\ \phi_{31}-\phi_{11}&\neq&\phi_{32}-\phi_{12}&\left[\pi\right].\end{array}\right. \tag{4.1}\] Let \(U\in\mathcal{W}\). We want to show that \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\). According to Proposition 3.2, it is sufficient to show that \(\dim(\mathrm{Ran}\,(\mathrm{Im}Q))=4\). (Here and below, \(\mathrm{Ran}\,(T)\) stands for the range of a linear map \(T\).) For that purpose, we shall consider the \(9\times 9\) matrix \(T\) of the linear map \(\mathrm{Im}Q:\mathcal{S}_{d}\rightarrow\mathcal{M}_{d}(\mathbb{R})\) with respect to the basis \[\{|a_{k}\rangle\langle a_{k}|,(|a_{k}\rangle\langle a_{j}|+|a_{j}\rangle \langle a_{k}|),i(|a_{k}\rangle\langle a_{j}|-|a_{j}\rangle\langle a_{k}|)\}_ {k\in[\![1,3]\!],k<j}\] of \(\mathcal{S}_{d}\) and the canonical basis of \(\mathcal{M}_{d}(\mathbb{R})\). 
The matrix \(T\) can be readily computed but we do not display it here. Note that, by Eq. (3.10), \(\dim(\mathrm{Ker}(\mathrm{Im}Q))\geqslant 5\), so that \(\dim\left(\mathrm{Ran}\,(\mathrm{Im}Q)\right)=9-\dim(\mathrm{Ker}(\mathrm{Im}Q ))\leqslant 4\). Equality is obtained, _i.e._\(\dim\left(\mathrm{Ran}\,(\mathrm{Im}Q)\right)=4\), if and only if there exists a \(4\) by \(4\) submatrix \(\Sigma\) of \(T\) that has rank \(4\). We will show that the submatrix \(\Sigma\), given by \[\Sigma=\begin{pmatrix}A_{11}A_{21}\sin\left(\phi_{21}-\phi_{11}\right)&A_{11}A _{21}\cos\left(\phi_{21}-\phi_{11}\right)&A_{11}A_{31}\sin\left(\phi_{31}- \phi_{11}\right)&A_{11}A_{31}\cos\left(\phi_{31}-\phi_{11}\right)\\ A_{12}A_{22}\sin\left(\phi_{22}-\phi_{12}\right)&A_{12}A_{22}\cos\left(\phi_{22}- \phi_{12}\right)&A_{12}A_{32}\sin\left(\phi_{32}-\phi_{12}\right)&A_{12}A_{32} \cos\left(\phi_{32}-\phi_{12}\right)\\ -A_{11}A_{21}\sin\left(\phi_{21}-\phi_{11}\right)&-A_{11}A_{21}\cos\left(\phi_{ 21}-\phi_{11}\right)&0&0\\ -A_{12}A_{22}\sin\left(\phi_{22}-\phi_{12}\right)&-A_{12}A_{22}\cos\left(\phi_{ 22}-\phi_{12}\right)&0&0\end{pmatrix},\] is indeed of rank \(4\). To prove this, suppose there exists \((a_{1},a_{2},a_{3},a_{4})\in\mathbb{R}^{4}\) such that \((a_{1},a_{2},a_{3},a_{4})\in\mathrm{Ker}\Sigma\). 
Then, \[\left\{\begin{array}{rcl}A_{21}(a_{1}\sin\left(\phi_{21}-\phi_{11}\right)+a_ {2}\cos\left(\phi_{21}-\phi_{11}\right))&=&-A_{31}(a_{3}\sin\left(\phi_{31}- \phi_{11}\right)+a_{4}\cos\left(\phi_{31}-\phi_{11}\right))\\ A_{22}(a_{1}\sin\left(\phi_{22}-\phi_{12}\right)+a_{2}\cos\left(\phi_{22}-\phi_{12 }\right))&=&-A_{32}(a_{3}\sin\left(\phi_{32}-\phi_{12}\right)+a_{4}\cos\left( \phi_{32}-\phi_{12}\right))\\ a_{1}\sin\left(\phi_{21}-\phi_{11}\right)+a_{2}\cos\left(\phi_{21}-\phi_{11} \right)&=&0\\ a_{1}\sin\left(\phi_{22}-\phi_{12}\right)+a_{2}\cos\left(\phi_{22}-\phi_{12} \right)&=&0.\end{array}\right.\] The last two rows simplify to \[\left\{\begin{array}{rcl}a_{1}\tan\left(\phi_{21}-\phi_{11}\right)&=&-a_{2},\\ a_{1}\tan\left(\phi_{22}-\phi_{12}\right)&=&-a_{2}.\end{array}\right.\] If \(a_{1}\neq 0\), then \(\tan\left(\phi_{21}-\phi_{11}\right)=\tan\left(\phi_{22}-\phi_{12}\right)\), which contradicts the condition \(\phi_{21}-\phi_{11}\neq\phi_{22}-\phi_{12}[\pi]\). So \(a_{1}=a_{2}=0\). Consequently, the first two conditions reduce to \[\left\{\begin{array}{rcl}a_{3}A_{11}A_{31}\sin\left(\phi_{31}-\phi_{11} \right)+a_{4}A_{31}A_{11}\cos\left(\phi_{31}-\phi_{11}\right)=0,\\ a_{3}A_{12}A_{32}\sin\left(\phi_{32}-\phi_{12}\right)+a_{4}A_{32}A_{12}\cos \left(\phi_{32}-\phi_{12}\right)=0.\end{array}\right.\] Following the same argument, we find that \(a_{3}=a_{4}=0\). Consequently, the matrix \(\Sigma\) has a vanishing kernel and is therefore of rank \(4\). In conclusion, for any unitary matrix in \(\mathcal{W}\) it is true that \(\dim\left(\operatorname{Ran}\left(\operatorname{Im}Q\right)\right)=4\), and hence \(\dim\operatorname{Ker}(\operatorname{Im}Q)=5\). This concludes the proof of the first part of the Proposition. The set \(\mathcal{W}\) is clearly open. We now show that it is dense also. For that purpose, consider an arbitrary unitary matrix \(U\). Suppose it does not belong to \(\mathcal{W}\) so that at least one of the six conditions in Eq. 
(4.1) is not satisfied for \(U\). We write \(C_{1},C_{2},C_{3}\) for the columns of \(U\) and remark that \(C_{3}=\varepsilon C_{1}\wedge C_{2}\) with \(\varepsilon\in\{-1,1\}\); here \(\wedge\) denotes the vector product. We then construct, for \(\theta\in\mathbb{R}\), the two columns \[C_{1}(\theta)=\begin{pmatrix}A_{11}e^{i(\phi_{11}+\theta)}\\ A_{21}e^{i\phi_{21}}\\ A_{31}e^{i\phi_{31}}\end{pmatrix},\quad C_{2}(\theta)=\begin{pmatrix}A_{12}e^ {i(\phi_{12}-\theta)}\\ A_{22}e^{i\phi_{22}}\\ A_{32}e^{i\phi_{32}}\end{pmatrix}.\] They are orthogonal to each other and normalized. Defining \(C_{3}(\theta)=\varepsilon C_{1}(\theta)\wedge C_{2}(\theta)\), we construct \(U(\theta)=(C_{1}(\theta),C_{2}(\theta),C_{3}(\theta))\). This is a family of unitary matrices for which \(U(\theta)\to U\) when \(\theta\to 0\). By construction, for all \(\theta\in\mathbb{R}\), the conditions of Eq. (4.1) read: \[\left\{\begin{array}{rcl}\phi_{21}-\phi_{11}&\neq&\theta&[\frac{\pi}{2}]\\ \phi_{22}-\phi_{12}&\neq&-\theta&[\frac{\pi}{2}]\\ \phi_{31}-\phi_{11}&\neq&\theta&[\frac{\pi}{2}]\\ \phi_{32}-\phi_{12}&\neq&-\theta&[\frac{\pi}{2}]\\ \phi_{21}-\phi_{11}-\theta&\neq&\phi_{22}-\phi_{12}+\theta&[\pi]\\ \phi_{31}-\phi_{11}-\theta&\neq&\phi_{32}-\phi_{12}+\theta&[\pi]\end{array}\right.\] These conditions are all fulfilled for \(\theta\neq 0\) small enough. This implies that the set \(\mathcal{W}\) is dense. To show that the set \(\mathcal{W}\) is of full Haar measure, we show that its complement, \(\mathcal{W}^{\mathrm{c}}\), is of zero Haar measure. The group \(U(3)\) is a \(9\)-dimensional real manifold. Its Haar measure is absolutely continuous with respect to the Lebesgue measure in any local coordinate patch [65]. Now, \(\mathcal{W}^{\mathrm{c}}\) is the union of the sets where one of the inequalities in Eq. (4.1) is an equality and of the sets where one of the matrix elements of \(U\) vanishes. Each of these sets is a lower-dimensional submanifold of \(U(3)\). 
Hence it is of zero Lebesgue measure, which concludes the proof. Proof of Theorem 1.1 (iii).: We write the entries of a DFT transition matrix \(U\) as \(U_{kl}=\frac{\omega^{(k-1)(l-1)}}{\sqrt{p}}\) for all \((k,l)\in\llbracket 1,p\rrbracket^{2}\), where \(\omega=e^{-\frac{2i\pi}{p}}\). In this proof, the indices on the matrix \(U\) and on all other matrices appearing should be thought of as being extended to all integers and as being periodic with period \(p\). As \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\subseteq\mathcal{E} _{\mathrm{KD}+}\), we only have to prove that \(\mathcal{E}_{\mathrm{KD}+}\subseteq\operatorname{conv}\left(\mathcal{A}\cup \mathcal{B}\right)\). To that end, we use Corollary 3.4. In other words, we need to show that Eq.(3.13) holds for all \(\rho\in\mathcal{E}_{\mathrm{KD}+}\); this is achieved in Eq.(4.8) below. We need the following lemma, which characterizes \(V_{\mathrm{KDr}}\) in the case where \(U\) is the DFT matrix in prime dimension. **Lemma 4.2**.: _Let \(U\) be the DFT matrix in prime dimension \(d=p\). Then, a self-adjoint operator \(F\in\mathcal{S}_{p}\) belongs to \(V_{\mathrm{KDr}}\) if and only if for all \((i,k)\in\llbracket 1,p\rrbracket^{2}\),_ \[F_{i(i+k)}=F_{(i-k)i} \tag{4.2}\] _Here, \(F_{ik}=\langle a_{i}|F|a_{k}\rangle\) for \((i,k)\in\llbracket 1,p\rrbracket^{2}\)._ We remark that Eq.(4.2) means that the matrix \(F\) is constant on its \(d-1\) off-diagonals. _Proof of Lemma 4.2._ For all \((i,j)\in\llbracket 1,p\rrbracket^{2}\) \[Q_{ij}(F)=\sum_{k=1}^{p}\langle b_{j}|a_{i}\rangle\langle a_{k}|b_{j}\rangle F _{ik}=\frac{1}{p}\sum_{k=1}^{p}\omega^{(j-1)(1-i)}\omega^{(j-1)(k-1)}F_{ik}= \frac{1}{p}\sum_{k=1}^{p}\omega^{(j-1)(k-i)}F_{ik}.\] In order to compute \(\mathrm{Im}(Q_{ij}(F))\), we rewrite \(Q_{ij}(F)\) as follows. Let \(i\in\llbracket 2,p\rrbracket\) and \(j\in\llbracket 2,p\rrbracket\). 
Then, \[Q_{ij}(F)=\frac{1}{2}\left(Q_{ij}(F)+Q_{ij}(F)\right)=\frac{1}{2p}\left(\sum_{ k=1}^{p}\omega^{(j-1)(k-i)}F_{ik}+\sum_{k=1}^{p}\omega^{(j-1)(k-i)}F_{ik}\right).\] We now rewrite the second sum. We note that \(\overline{\omega}^{(j-1)(k^{\prime}-i)}=\omega^{(j-1)(k-i)}\) if and only if \((j-1)(k-i)=(j-1)(i-k^{\prime})\)\([p]\); as \((j-1)\neq 0\)\([p]\), it follows that \(k+k^{\prime}=2i\)\([p]\) and thus that \(k^{\prime}=2i-k\)\([p]\). As the map \(k\in\llbracket 1,p\rrbracket\mapsto(2i-k)\)\([p]\in\llbracket 1,p\rrbracket\) is bijective, one finds that \[Q_{ij}(F)=\frac{1}{2p}\left(\sum_{k=1}^{p}\omega^{(j-1)(k-i)}F_{ik}+\sum_{k=1 }^{p}\omega^{(j-1)[(2i-k)-i]}F_{i(2i-k)}\right).\] Note that the indices on \(F_{ij}\) are considered modulo \(p\). Therefore, \[Q_{ij}(F)=\frac{1}{2p}\left(\sum_{k=1}^{p}\omega^{(j-1)(k-i)}F_{ik}+\overline {\omega}^{(j-1)(k-i)}F_{i(2i-k)}\right). \tag{4.3}\] By changing the summation index, we have \[Q_{ij}(F)=\frac{1}{2p}\left(\sum_{k^{\prime}=1-i}^{p-i}\omega^{(j-1)k^{\prime }}F_{i(i+k^{\prime})}+\overline{\omega}^{(j-1)k^{\prime}}F_{i(i-k^{\prime})} \right).\] As the indices are considered modulo \(p\), the summand is periodic with period \(p\), and we can shift the sum to obtain \[Q_{ij}(F)=\frac{1}{2p}\left(\sum_{k^{\prime}=0}^{p-1}\omega^{(j-1)k^{\prime}} F_{i(i+k^{\prime})}+\overline{\omega}^{(j-1)k^{\prime}}F_{i(i-k^{\prime})} \right).\] If \(k^{\prime}\in\llbracket 1,\frac{p-1}{2}\rrbracket\), then \((p-k^{\prime})\in\llbracket\frac{p+1}{2},p-1\rrbracket\) and \[\omega^{(j-1)(p-k^{\prime})}F_{i[i+(p-k^{\prime})]}+\overline{\omega}^{(j-1)( p-k^{\prime})}F_{i[i-(p-k^{\prime})]}=\omega^{(j-1)k^{\prime}}F_{i(i+k^{\prime})}+ \overline{\omega}^{(j-1)k^{\prime}}F_{i(i-k^{\prime})},\] so that we can group these terms together. 
This leads to \[Q_{ij}(F)=\frac{1}{p}\left(\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\omega^{(j-1)k^ {\prime}}F_{i(i+k^{\prime})}+\overline{\omega}^{(j-1)k^{\prime}}F_{i(i-k^{ \prime})}\right)+\frac{1}{p}F_{ii}. \tag{4.4}\] We can then finally compute \(\operatorname{Im}(Q_{ij}(F))\) for \((i,j)\in\llbracket 2,p\rrbracket^{2}\): \[\operatorname{Im}\left(Q_{ij}(F)\right) = \frac{1}{p}\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\operatorname{Im} \left(\overline{\omega}^{(j-1)k^{\prime}}F_{i(i-k^{\prime})}+\omega^{(j-1)k^{ \prime}}F_{i(i+k^{\prime})}\right)\] \[= \frac{1}{2p\sqrt{-1}}\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\overline {\omega}^{(j-1)k^{\prime}}F_{i(i-k^{\prime})}-\omega^{(j-1)k^{\prime}}\overline {F_{i(i-k^{\prime})}}+\omega^{(j-1)k^{\prime}}F_{i(i+k^{\prime})}-\overline{ \omega}^{(j-1)k^{\prime}}\overline{F_{i(i+k^{\prime})}}\] \[= \frac{1}{2p\sqrt{-1}}\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\omega^{ (j-1)k^{\prime}}\left(F_{i(i+k^{\prime})}-\overline{F_{i(i-k^{\prime})}} \right)+\overline{\omega}^{(j-1)k^{\prime}}\left(F_{i(i-k^{\prime})}- \overline{F_{i(i+k^{\prime})}}\right).\] Recall that \(F\in V_{\operatorname{KDr}}\) if and only if, for any \(i\in\llbracket 2,p\rrbracket\), the \(p-1\) equations \(\operatorname{Im}(Q_{ij}(F))=0\) for \(j\in\llbracket 2,p\rrbracket\) are satisfied. Indeed, as a consequence of Eq. (3.2), this is equivalent to \(\operatorname{Im}(Q_{ij}(F))=0\) for all \((i,j)\in\llbracket 2,p\rrbracket^{2}\). 
Hence \(F\in V_{\operatorname{KDr}}\) if and only if \[\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\omega^{(j-1)k^{\prime}}\left(F_{i(i+k^{ \prime})}-\overline{F_{i(i-k^{\prime})}}\right)+\overline{\omega}^{(j-1)k^{ \prime}}\left(F_{i(i-k^{\prime})}-\overline{F_{i(i+k^{\prime})}}\right)=0.\] This system can be rewritten with \(z_{k}=F_{i(i+k)}-\overline{F_{i(i-k)}}\) and \(z_{p-k}=F_{i(i-k)}-\overline{F_{i(i+k)}}\) for \(k\in\llbracket 1,\frac{p-1}{2}\rrbracket\): \[A_{\omega}\begin{pmatrix}z_{1}\\ z_{2}\\ \vdots\\ z_{p-1}\end{pmatrix}=0\] where \[A_{\omega}=\begin{pmatrix}\omega&\omega^{2}&\cdots&\omega^{\frac{p-1}{2}}&\overline{\omega}^{\frac{p-1}{2}}&\cdots&\overline{\omega}\\ \omega^{2}&\omega^{4}&\cdots&\omega^{p-1}&\overline{\omega}^{p-1}&\cdots& \overline{\omega}^{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \omega^{p-1}&\omega^{2(p-1)}&\cdots&\omega^{\frac{(p-1)^{2}}{2}}&\overline{ \omega}^{\frac{(p-1)^{2}}{2}}&\cdots&\overline{\omega}^{p-1}\end{pmatrix}= \begin{pmatrix}\omega&\omega^{2}&\cdots&\omega^{\frac{p-1}{2}}&\omega^{\frac{p +1}{2}}&\cdots&\omega^{p-1}\\ \omega^{2}&\omega^{4}&\cdots&\omega^{p-1}&\omega^{p+1}&\cdots&\omega^{2(p-1)} \\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ \omega^{p-1}&\omega^{2(p-1)}&\cdots&\omega^{\frac{(p-1)^{2}}{2}}&\omega^{ \frac{(p-1)(p+1)}{2}}&\cdots&\omega^{(p-1)^{2}}\end{pmatrix}.\] The matrix \(A_{\omega}\) is a Vandermonde matrix, written \(V(\omega,\omega^{2},\ldots,\omega^{p-1})\), whose parameters are pairwise distinct, so \(A_{\omega}\) is invertible. This means that for all \(k\in\llbracket 1,p-1\rrbracket\), \(z_{k}=0\). Hence, \(F\in V_{\operatorname{KDr}}\) if and only if \[\forall(i,k)\in\llbracket 2,p\rrbracket\times\llbracket 1,\frac{p-1}{2} \rrbracket,F_{i(i+k)}=\overline{F_{i(i-k)}}=F_{(i-k)i}. \tag{4.5}\] We further rewrite these conditions in a more symmetric form: see Eq. (4.7) below. Consider \((i,k)\in\llbracket 2,p\rrbracket\times\llbracket\frac{p+1}{2},p-1\rrbracket\). 
As all indices are taken modulo \(p\), \[F_{i(i+k)}=F_{i(i+k-p)}=F_{i[i-(p-k)]}.\] Since \(p-k\in\llbracket 1,\frac{p-1}{2}\rrbracket\), Eq.(4.5) implies that \(F_{i[i-(p-k)]}=\overline{F_{i[i+(p-k)]}}=\overline{F_{i(i-k)}}\). Therefore, we obtain the following extended relation: \[\forall(i,k)\in\llbracket 2,p\rrbracket\times\llbracket 1,p-1\rrbracket,F_{i(i+k) }=\overline{F_{i(i-k)}}=F_{(i-k)i}. \tag{4.6}\] Next, we want to show that the relation also holds for \(i=1\) and \(k\in\llbracket 1,p-1\rrbracket\). Let \(k\in\llbracket 1,p-1\rrbracket\) and \(n\in\llbracket 0,p-2\rrbracket\). Since \(F\) is self-adjoint, \[F_{(nk+1)[(n+1)k+1]}=\overline{F_{[(n+1)k+1](nk+1)}}=\overline{F_{[(n+1)k+1][(n +1)k+1-k]}}.\] As \(n+1\neq 0\)\([p]\) and \(k\neq 0\)\([p]\), it follows that \((n+1)k+1\neq 1\)\([p]\). We can therefore use Eq. (4.6) to obtain \[\forall n\in\llbracket 0,p-2\rrbracket,k\in\llbracket 1,p-1\rrbracket, \quad F_{(nk+1)[(n+1)k+1]}=F_{[(n+1)k+1][(n+1)k+1+k]}=F_{[(n+1)k+1][(n+2)k+1]}.\] It follows from this that for all \(n\in\llbracket 1,p-1\rrbracket\), \(F_{1(k+1)}=F_{(nk+1)[(n+1)k+1]}\). In particular, taking \(n=p-1\) gives \(F_{1(1+k)}=F_{(1-k)1}\). This shows that the relation of Eq. (4.6) also holds for \(i=1\). Summing up, \(F\in V_{\mathrm{KD}r}\) if and only if \[F_{i(i+k)} = F_{(i-k)i}\mbox{ for all }(i,k)\in\llbracket 1,p\rrbracket\times \llbracket 1,p\rrbracket. \tag{4.7}\] We can now use this result to show that \(\rho\in\mathcal{E}_{\mathrm{KD}+}\) implies that \(\rho\in\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\), by showing that Eq. (3.13) holds. Indeed, since \(\rho\in\mathcal{E}_{\mathrm{KD}+}\) implies that \(\rho\in V_{\mathrm{KD}r}\), it follows from Eq. 
(4.4) and Lemma 4.2 that for all \((i,j)\in\llbracket 1,p\rrbracket^{2}\), \[Q_{ij}(\rho)+Q_{11}(\rho) = \frac{2}{p}\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\mathrm{Re}\left(\omega^{(j- 1)k^{\prime}}\rho_{i(i+k^{\prime})}\right)+\frac{1}{p}\rho_{ii}+\frac{2}{p} \sum_{k^{\prime}=1}^{\frac{p-1}{2}}\mathrm{Re}\left(\rho_{1(1+k^{\prime})}\right)+ \frac{1}{p}\rho_{11} \tag{4.8}\] \[= \frac{2}{p}\sum_{k^{\prime}=1}^{\frac{p-1}{2}}\mathrm{Re}\left(\omega^{(j -1)k^{\prime}}\rho_{1(1+k^{\prime})}\right)+\frac{1}{p}\rho_{11}+\frac{2}{p} \sum_{k^{\prime}=1}^{\frac{p-1}{2}}\mathrm{Re}\left(\rho_{i(i+k^{\prime})}\right)+ \frac{1}{p}\rho_{ii}\] \[= Q_{1j}(\rho)+Q_{i1}(\rho).\] This establishes the relations (3.13) for \((i,j)\in\llbracket 1,p\rrbracket^{2}\) and \(k=l=1\). As in the proof of Proposition 3.3, this implies that they hold for all \((k,l)\in\llbracket 1,p\rrbracket^{2}\). Thus, we have proven that \(\mathcal{E}_{\mathrm{KD}+}\subseteq\mathrm{conv}\left(\mathcal{A}\cup \mathcal{B}\right)\). This ends the proof. **Remark:** An alternative proof of Theorem 1.1.(iii) can be obtained as follows. Lemma 4.2 implies that \(F\) is constant on its \((d-1)\) off-diagonals and, as \(F\) is self-adjoint, only \(\frac{d-1}{2}\) of these (complex) values are independent. Hence, the off-diagonals of \(F\) are determined by \((d-1)\) real parameters. The diagonal of \(F\) contains \(d\) real parameters. Lemma 4.2 implies that \(V_{\mathrm{KD}r}\) is a \((2d-1)\)-dimensional real vector space. Proposition 3.2 \((ib)\) then implies that \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). _Proof of Theorem 1.1 (iv)._ Note that this statement means that the set of \(U\) for which Eq. (1.2) holds is open. The result follows from the following Proposition. **Proposition 4.3**.: _Let \(U\) be such that \(m_{\mathcal{A},\mathcal{B}}>0\) and \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). 
Let \(U_{\varepsilon}\) be a family of unitary transition matrices between bases \(\mathcal{A}_{\varepsilon}\) and \(\mathcal{B}_{\varepsilon}\) satisfying \(\lim_{\varepsilon\to 0}U_{\varepsilon}=U\). Then, for all \(\varepsilon\) sufficiently small, one has \(\mathcal{E}_{\mathrm{KD}+}^{\varepsilon}=\mathrm{conv}\left(\mathcal{A}_{ \varepsilon},\mathcal{B}_{\varepsilon}\right)\)._ Proposition 4.3 states that the set of \(U\) for which \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) is an open set, so this proposition proves part (iv) of Theorem 1.1. _Proof._ Consider \[\mathcal{L}=\{M\in\mathcal{M}_{d}(\mathbb{R})\mid\sum_{i}M_{ij}=0,\ \sum_{j}M_{ij}=0\},\] which is a \((d-1)^{2}\)-dimensional real vector space. As a result of Eq. (3.2), one has \[\mathrm{Im}Q_{\varepsilon}:\mathcal{S}_{d}\to\mathcal{L}.\] Here, \(Q_{\varepsilon}(\,\cdot\,)\) is the KD distribution associated to \(U_{\varepsilon}\). Suppose that, for \(\varepsilon=0\), \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). Then, according to Proposition 3.2, \(\dim(\mathrm{Ker}(\mathrm{Im}Q))=2d-1\) and hence, since \(\dim\mathcal{S}_{d}=d^{2}\), it follows that \(\mathrm{Im}Q\) is surjective. We now show that, for sufficiently small \(\varepsilon\), \(\mathrm{Im}Q_{\varepsilon}\) is also surjective. Writing \[\mathcal{S}_{d}=\mathrm{Ker}(\mathrm{Im}Q)\oplus[\mathrm{Ker}(\mathrm{Im}Q)]^ {\perp},\] it follows that \(\widehat{\mathrm{Im}Q}:=\mathrm{Im}Q_{\left|[\mathrm{Ker}(\mathrm{Im}Q)]^{ \perp}\right.}\) is a linear isomorphism between \([\mathrm{Ker}(\mathrm{Im}Q)]^{\perp}\) and \(\mathcal{L}\). It therefore has an inverse \(\widehat{\mathrm{Im}Q}^{-1}\). Let us now write \[\delta U_{\varepsilon}:=U_{\varepsilon}-U,\] with \(\delta U_{\varepsilon}\to 0\) as \(\varepsilon\to 0\). 
Then, we consider \(\widehat{\mathrm{Im}Q_{\varepsilon}}:=\mathrm{Im}Q_{\varepsilon}|_{[\mathrm{Ker}(\mathrm{Im}Q)]^{\perp}}\) and set \[\widehat{\delta\mathrm{Im}Q_{\varepsilon}}:=\widehat{\mathrm{Im}Q_{\varepsilon}}- \widehat{\mathrm{Im}Q},\] with \(\widehat{\delta\mathrm{Im}Q_{\varepsilon}}\to 0\) as \(\varepsilon\to 0\), so that \[\widehat{\mathrm{Im}Q_{\varepsilon}}=\widehat{\mathrm{Im}Q}+\widehat{ \delta\mathrm{Im}Q_{\varepsilon}}:[\mathrm{Ker}(\mathrm{Im}Q)]^{\perp}\to \mathcal{L}.\] To conclude the proof, we show that \(\widehat{\mathrm{Im}Q_{\varepsilon}}\) is a linear isomorphism. One has \[\widehat{\mathrm{Im}Q_{\varepsilon}}=\widehat{\mathrm{Im}Q}(\widehat{ \mathbb{I}}+\widehat{\mathrm{Im}Q}^{-1}\widehat{\delta\mathrm{Im}Q_{\varepsilon }}).\] Since \((\widehat{\mathbb{I}}+\widehat{\mathrm{Im}Q}^{-1}\widehat{\delta\mathrm{Im}Q_{ \varepsilon}})\) is a small perturbation of the identity, it is invertible for \(\varepsilon\) small enough. So \(\widehat{\mathrm{Im}Q_{\varepsilon}}\) is invertible as the composition of two invertible maps. This implies that \(\mathrm{Im}Q_{\varepsilon}\) is surjective and, hence, that \(\dim\left(\mathrm{Ker}(\mathrm{Im}Q_{\varepsilon})\right)=2d-1\). Proposition 3.2 then implies that \(\mathcal{E}_{\mathrm{KD}+}^{\varepsilon}=\mathrm{conv}\left(\mathcal{A}_{ \varepsilon},\mathcal{B}_{\varepsilon}\right)\). **Remark.** We note that, in dimension \(d=2\), if there is a zero in \(U\), then \(U\) is, up to phase changes, either equal to \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\) or to \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\). In that case, the two bases are not distinct and all pure states are KD-positive, and thus all mixed states are also KD-positive. In higher dimensions \(d\geq 3\), the presence of zeroes in \(U\) considerably complicates the analysis. We conjecture that Theorem 1.1 (ii) is true in all dimensions \(d\geq 2\). We numerically checked this conjecture in dimensions \(d\) up to \(10\). 
For that purpose, we sampled random unitary matrices according to the Haar measure on the unitary group and computed numerically the rank of \(\mathrm{Im}Q\) for each such matrix. When it equals \((d-1)^{2}\), Proposition 3.2 (ib) guarantees that \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). We did not find any instance where this condition was not satisfied. We now prove Proposition 1.2, which can be seen as a first step in the proof of this conjecture. For convenience, we repeat it here: **Proposition 1.2**.: _Let \(d\geq 2\). There exists an open dense set of unitaries of probability \(1\) for which \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\)._ Proof.: We define \(\mathcal{V}\) to be the set of \(d\) by \(d\) unitary matrices \(U\) satisfying \(m_{\mathcal{A},\mathcal{B}}>0\) and \[\forall(k,k^{\prime},j,j^{\prime})\in[\![1,d]\!]^{4},k\neq k^{\prime},j\neq j ^{\prime},\ \phi_{k,j}-\phi_{k^{\prime},j}\neq\phi_{k,j^{\prime}}-\phi_{k^{\prime},j^{ \prime}}[2\pi]. \tag{4.9}\] Here we wrote, as before, \(U_{kj}=\langle a_{k}|b_{j}\rangle=A_{kj}\exp(i\phi_{kj})\), with \(A_{kj}>0\). We will first show that, if \(U\in\mathcal{V}\), then the only pure KD-positive states are the basis states. We proceed by contradiction. Suppose that there exists a pure KD-classical state \(|\psi\rangle\in\mathcal{H}\) that is not a basis state. By reordering the two bases, we can suppose that \(S_{\mathcal{A}}(\psi)=[\![1,n_{\mathcal{A}}(\psi)]\!]\) and \(S_{\mathcal{B}}(\psi)=[\![1,n_{\mathcal{B}}(\psi)]\!]\) with \(n_{\mathcal{A}}(\psi)\geqslant 2\) and \(n_{\mathcal{B}}(\psi)\geqslant 2\). 
Then, we change the phases of the basis states as follows: \(|a_{j}\rangle\) is changed to \(e^{-i\alpha_{j}}|a_{j}\rangle\) for \(j\in[\![1,n_{\mathcal{A}}(\psi)]\!]\) where \(\alpha_{j}=\arg(\langle a_{j}|\psi\rangle)\) and \(|b_{j}\rangle\) is changed to \(e^{i\beta_{j}}|b_{j}\rangle\) for \(j\in[\![1,n_{\mathcal{B}}(\psi)]\!]\) where \(\beta_{j}=\arg(\langle b_{j}|\psi\rangle)\). Thus, for all \((i,j)\in[\![1,n_{\mathcal{A}}(\psi)]\!]\times[\![1,n_{\mathcal{B}}(\psi)]\!]\), \[\langle a_{i}|\psi\rangle\in\mathbb{R}_{*}^{+},\langle\psi|b_{j}\rangle\in \mathbb{R}_{*}^{+}\ \mathrm{and}\ Q_{ij}(\psi)=A_{ij}e^{i(\phi_{ij}+\alpha_{i}+\beta_{j})} \langle a_{i}|\psi\rangle\langle\psi|b_{j}\rangle\in\mathbb{R}_{*}^{+}.\] Consequently, \[\forall(i,j)\in[\![1,n_{\mathcal{A}}(\psi)]\!]\times[\![1,n_{\mathcal{B}}( \psi)]\!],\phi_{ij}+\alpha_{i}+\beta_{j}=0\ [2\pi].\] As \(n_{\mathcal{A}}(\psi)\geqslant 2\) and \(n_{\mathcal{B}}(\psi)\geqslant 2\), it follows that \[\phi_{11}-\phi_{12}=\beta_{2}-\beta_{1}\ [2\pi]\ \mathrm{and}\ \phi_{21}-\phi_{22}=\beta_{2}-\beta_{1}\ [2\pi].\] Thus, \[\phi_{11}-\phi_{12}=\phi_{21}-\phi_{22}\ [2\pi],\] which is a contradiction because \(U\in\mathcal{V}\). Therefore, the only KD-classical pure states associated to \(U\in\mathcal{V}\) are the basis states. We now show that the set \(\mathcal{V}\) is an open and dense set. That it is open follows directly from its definition. It remains to show that it is dense. For that purpose, we will show below that the set \(\mathcal{Z}\) defined by \[\phi_{11}-\phi_{12}\neq\phi_{21}-\phi_{22}\ [2\pi]\ \text{and}\ m_{\mathcal{A}, \mathcal{B}}>0\] is dense. Reordering the basis elements, it then follows that all sets defined by \[\phi_{k,j}-\phi_{k^{\prime},j}\neq\phi_{k,j^{\prime}}-\phi_{k^{\prime},j^{ \prime}}\ [2\pi]\] are dense. One concludes that \(\mathcal{V}\) is dense as a finite intersection of open dense sets. It remains to prove that \(\mathcal{Z}\) is dense. 
Since the set of unitaries with \(m_{\mathcal{A},\mathcal{B}}>0\) is itself dense, it suffices to approximate an arbitrary \(U\notin\mathcal{Z}\) with \(m_{\mathcal{A},\mathcal{B}}>0\) by elements of \(\mathcal{Z}\); for such a \(U\), \(\phi_{11}-\phi_{12}=\phi_{21}-\phi_{22}\ [2\pi]\). We denote by \((C_{i})_{i\in\llbracket 1,d\rrbracket}\) the columns of \(U\). We define, for \(\theta\in\mathbb{R}\) \[C_{1}(\theta)=\begin{pmatrix}A_{11}e^{i(\phi_{11}+\theta)}\\ A_{21}e^{i\phi_{21}}\\ \vdots\\ A_{d1}e^{i\phi_{d1}}\end{pmatrix}\ \text{and}\ C_{2}(\theta)=\begin{pmatrix}A_{12}e^{ i(\phi_{12}-\theta)}\\ A_{22}e^{i\phi_{22}}\\ \vdots\\ A_{d2}e^{i\phi_{d2}}\end{pmatrix}. \tag{4.10}\] Note that \(C_{1}(\theta)\) and \(C_{2}(\theta)\) are normalized and orthogonal. By applying the Gram-Schmidt algorithm to \(\begin{pmatrix}C_{1}(\theta)&C_{2}(\theta)&C_{3}&\cdots&C_{d}\end{pmatrix}\), we obtain a unitary matrix \[U(\theta)=\begin{pmatrix}C_{1}(\theta)&C_{2}(\theta)&C_{3}(\theta)&\cdots&C_{ d}(\theta)\end{pmatrix}\] such that \(U(\theta)\to U\) when \(\theta\to 0\). Therefore, for \(\theta\neq 0\) small enough, \(m_{\mathcal{A},\mathcal{B}}(\theta)>0\) and \[\phi_{21}-\phi_{11}-\theta\neq\phi_{22}-\phi_{12}+\theta\ [2\pi],\] so that \(U(\theta)\in\mathcal{Z}\). This proves that \(\mathcal{Z}\), and hence \(\mathcal{V}\), is dense. We finally show that \(\mathcal{V}\) is a set of probability one for the unique normalized Haar measure on the unitary group. Note that the complement of \(\mathcal{V}\) is contained in the union of the subsets \[\phi_{k,j}-\phi_{k^{\prime},j}=\phi_{k,j^{\prime}}-\phi_{k^{\prime},j^{ \prime}}\ [2\pi]\] and of the subsets where one of the elements of \(U\) vanishes. Each of those subsets is a lower-dimensional submanifold of the unitary group. Since the Haar measure is known to be absolutely continuous with respect to Lebesgue measure in any local coordinate system on the unitary group [65], this implies that the Haar measure of these manifolds vanishes. The same property therefore holds for their union, so that indeed \(\mathcal{V}\) has measure \(1\). Proving Eq. (1.2) for a particular \(U\) can be hard, as the result on the DFT in prime dimensions (Theorem 1.1 (iii)) shows. 
It is certainly not always true. To see this, one may first note that for the DFT in non-prime dimensions, it is well known (see for example [24, 59]) that \(\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\text{KD+}}^{\text{pure}}\). We do not know, however, if in this case \(\mathcal{E}_{\text{KD+}}^{\text{pure}}\subsetneq\mathcal{E}_{\text{KD+}}^{ \text{ext}}\). In the next section we construct examples where \[\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\text{KD+}}^{\text{pure}} \subsetneq\mathcal{E}_{\text{KD+}}^{\text{ext}}.\] We further point out again that Eq. (1.2) is notably different from what happens for the continuous-variable Wigner positivity. Indeed, there exist Wigner positive states outside the convex hull of the pure Wigner positive states [2, 66]. We note also that the discrete-variable Wigner function in \(d=3\) has positive states outside the convex hull of its pure positive states [63]. This is not the case for the KD distribution associated to the DFT, as a result of Theorem 1.1 (iii). ## 5 Extreme KD-positive states are not necessarily pure Below, we construct examples where \[\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}} \subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\quad\text{or}\quad \mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq \mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}.\] In other words, in these cases there exist mixed extreme KD-positive states. In particular, when such states are viewed as a convex combination of pure states, at least one of those pure states must be KD-negative. In fact, the convex combination of any state in \[\mathcal{E}_{\mathrm{KD}+}\setminus\mathrm{conv}\left(\mathcal{E}_{\mathrm{KD }+}^{\mathrm{pure}}\right),\] must include pure KD-negative states. 
Proposition 3.2 states that \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}=\mathcal{A}\cup\mathcal{B}\) holds if and only if the space \(V_{\mathrm{KD}\mathrm{r}}\) of KD-real operators is of its smallest possible dimension: \(\dim V_{\mathrm{KD}\mathrm{r}}=2d-1\) (see Eq. (3.10)). In Section 5.1, we show that this is never the case if \(U\) is a real (hence orthogonal) matrix and \(d\geq 3\) (Lemma 5.1). Let us point out that this statement does not contradict Theorem 1.1 (ii), which states that in dimension \(d=3\), the equality \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) holds with probability one; nor does it contradict our conjecture that it holds for all \(d\geq 3\). Indeed, the space of unitary matrices is of dimension \(d^{2}\); the space of orthogonal matrices is only of dimension \(\frac{d(d-1)}{2}\), i.e., a "thin" subset. More formally, the space of orthogonal matrices is a submanifold of lower dimension with empty interior and is therefore of zero probability among all unitary matrices. The result of Lemma 5.1 allows us to construct examples of bases \(\mathcal{A}\) and \(\mathcal{B}\) for which \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD}+}^ {\mathrm{ext}}\). In Section 5.2, we provide an explicit example of an orthogonal matrix \(U^{\star}\) in \(d=3\) for which \[\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}} \subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}.\] In Section 5.3, we first show that the transition matrix between the bases of two different spin components of a spin-\(s\) system is always (equivalent to) a real transition matrix. We then show that the matrix \(U^{\star}\) constructed in Section 5.2 arises for a spin-\(1\) system, in which the two bases \(\mathcal{A}\) and \(\mathcal{B}\) correspond to the eigenvectors of the spin component in the \(z\) direction and another, specific direction, respectively. 
Finally, in Section 5.4, we show that there exist examples in all dimensions \(d=2^{n}\) (for integer \(n\)) and in all dimensions \(d=4m\) (for integer \(m\leq 166\)) for which \[\mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq \mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}.\] In these cases, the two bases are perturbations of real MUB bases. For completeness, we mention that we did not find any example where \[\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}} =\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}.\] ### Real transition matrices: structural results **Lemma 5.1**.: _If \(U\) is real and \(m_{\mathcal{A},\mathcal{B}}>0\), then \(V_{\mathrm{KD}\mathrm{r}}=\mathcal{S}_{d,\mathrm{r}}\), where \(\mathcal{S}_{d,\mathrm{r}}\) is the set of self-adjoint operators that have a real and symmetric matrix on the \(\mathcal{A}\) (and hence also on the \(\mathcal{B}\)) basis. Hence, \(\dim V_{\mathrm{KD}\mathrm{r}}=\frac{d(d+1)}{2}\). If in addition \(d\geq 3\), then \(\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\)._ Note that \(\frac{d(d+1)}{2}\) is strictly larger than \(2d-1\) for all \(d>2\). We can further interpret this lemma as follows. If \(U\) is real, then the observables \(F\) that have a real KD symbol are precisely those described by a real symmetric matrix on the \(\mathcal{A}\) and \(\mathcal{B}\) bases. This constitutes a concrete identification of \(V_{\mathrm{KD}\mathrm{r}}\), which is not available in general. In this situation, the kernel of \(\mathrm{Im}Q\), which is \(V_{\mathrm{KD}\mathrm{r}}\), is large and in particular, larger than \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). 
Proof.: Suppose that \(F\in V_{\mathrm{KDr}}\), so \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},\ \mathrm{Im}(Q_{ij}(F))=0.\] We know also that \(\forall(i,j)\in\llbracket 1,d\rrbracket^{2}\), \(Q_{ij}(F)=\langle b_{j}|a_{i}\rangle\langle a_{i}|F|b_{j}\rangle=\langle b_{j} |a_{i}\rangle\sum_{k=1}^{d}\langle a_{k}|b_{j}\rangle\langle a_{i}|F|a_{k}\rangle\). As \(\langle b_{j}|a_{i}\rangle\langle a_{k}|b_{j}\rangle\in\mathbb{R}\), we finally find that \[\mathrm{Im}(Q_{ij}(F))=\sum_{k=1,k\neq i}^{d}\langle b_{j}|a_{i}\rangle \langle a_{k}|b_{j}\rangle\mathrm{Im}\left(\langle a_{i}|F|a_{k}\rangle\right) =\langle b_{j}|a_{i}\rangle\left(\sum_{k=1,k\neq i}^{d}\mathrm{Im}\left( \langle a_{i}|F|a_{k}\rangle\right)\langle a_{k}|b_{j}\rangle\right)=0.\] Since this equation holds for all \(j\) and since \(\langle b_{j}|a_{i}\rangle\neq 0\), by fixing \(i\in\llbracket 1,d\rrbracket\) we can write \[\forall j\in\llbracket 1,d\rrbracket,\sum_{k=1,k\neq i}^{d}\mathrm{Im}\left( \langle a_{i}|F|a_{k}\rangle\right)\langle a_{k}|b_{j}\rangle=0\] and thus \[\sum_{k=1,k\neq i}^{d}\mathrm{Im}\left(\langle a_{i}|F|a_{k}\rangle\right)|a _{k}\rangle=0.\] So, for all \(k\in\llbracket 1,d\rrbracket\), \(\mathrm{Im}\left(\langle a_{i}|F|a_{k}\rangle\right)=0\) and thus, \(F\in\mathcal{S}_{d,\mathrm{r}}\). Conversely, the same expression for \(\mathrm{Im}(Q_{ij}(F))\) shows that any \(F\in\mathcal{S}_{d,\mathrm{r}}\) has a real KD symbol, so that \(V_{\mathrm{KDr}}=\mathcal{S}_{d,\mathrm{r}}\). For the last statement, note that, if \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}=\mathcal{A}\cup\mathcal{B}\), then \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) and hence, by Proposition 3.2, \(\dim V_{\mathrm{KDr}}=2d-1\), which contradicts the fact that \(\dim V_{\mathrm{KDr}}=\frac{d(d+1)}{2}\). As announced above, it is the goal of this section to exhibit mixed KD-positive states that cannot be written as convex combinations of the pure KD-positive states. In order to find and analyse such states, we will concentrate on dimension \(d=3\) where the analysis is tractable. 
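The dimension count of Lemma 5.1, and the contrast with a generic complex unitary (cf. Proposition 3.2), can be checked numerically. The following is a minimal NumPy sketch; the helper `kd_imag_map` and the explicit Hermitian basis are our own constructions, and the second assertion encodes the generic, probability-one behaviour observed numerically rather than a theorem about the particular sample.

```python
import numpy as np

def kd_imag_map(U):
    """Matrix of the real-linear map F -> Im Q(F) on the d^2-dimensional
    real space of Hermitian matrices, with A the standard basis and B the
    columns of U, so that Q_ij(F) = <b_j|a_i><a_i|F|b_j> = conj(U_ij)(FU)_ij."""
    d = U.shape[0]
    basis = []
    for i in range(d):                      # real diagonal elements
        E = np.zeros((d, d), dtype=complex)
        E[i, i] = 1.0
        basis.append(E)
    for i in range(d):                      # real symmetric / imaginary antisymmetric pairs
        for j in range(i + 1, d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
            E = np.zeros((d, d), dtype=complex)
            E[i, j], E[j, i] = 1.0j, -1.0j
            basis.append(E)
    return np.column_stack([np.imag(np.conj(U) * (F @ U)).ravel() for F in basis])

rng = np.random.default_rng(1)
d = 5

# Generic real orthogonal U: Lemma 5.1 gives dim Ker(Im Q) = d(d+1)/2.
O = np.linalg.qr(rng.standard_normal((d, d)))[0].astype(complex)
rank_orth = np.linalg.matrix_rank(kd_imag_map(O))
assert d * d - rank_orth == d * (d + 1) // 2

# Haar-random complex unitary: generically dim Ker(Im Q) = 2d - 1,
# i.e. rank (d-1)^2, as in the numerical check of the conjecture.
Z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
Qm, Rm = np.linalg.qr(Z)
Uc = Qm * (np.diag(Rm) / np.abs(np.diag(Rm)))  # phase fix for Haar uniformity
rank_unit = np.linalg.matrix_rank(kd_imag_map(Uc))
assert rank_unit == (d - 1) ** 2
```

The gap between the two kernel dimensions, \(d(d+1)/2\) versus \(2d-1\), is precisely what forces \(\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\) for real \(U\) and \(d\geq 3\).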
Note that, when \(d=3\), then \(\dim V_{\mathrm{KDr}}=6\) and \(\dim(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B}))=5\), so that \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) is a co-dimension \(1\) subspace of \(V_{\mathrm{KDr}}\). We characterize the one-dimensional subspace of \(V_{\mathrm{KDr}}\) perpendicular to \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\), as follows. We use the Hilbert-Schmidt inner product on \(\mathcal{S}_{d,\mathrm{r}}\). Then, \(F_{\perp}\in\mathcal{S}_{d,\mathrm{r}}\) is a unit vector orthogonal to \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) if and only if \[\forall i\in\llbracket 1,3\rrbracket,\ \mathrm{Tr}\left(F_{\perp}|a_{i} \rangle\langle a_{i}|\right)=0,\ \mathrm{Tr}\left(F_{\perp}|b_{i}\rangle\langle b_{i}|\right)=0,\ \mathrm{and}\ \mathrm{Tr}\,F_{\perp}^{2}=1.\] It follows that the matrix of \(F_{\perp}\) in the \(\mathcal{A}\)-basis is of the form \[F_{\perp}=\frac{1}{\sqrt{2}}\begin{pmatrix}0&f_{1}&f_{2}\\ f_{1}&0&f_{3}\\ f_{2}&f_{3}&0\end{pmatrix} \tag{5.1}\] with \(f=(f_{1},f_{2},f_{3})\in\mathbb{R}^{3}\) and \[f_{1}^{2}+f_{2}^{2}+f_{3}^{2}=1.\] The vector \(f\) is, up to a sign, uniquely determined by the conditions \(\mathrm{Tr}\left(F_{\perp}|b_{j}\rangle\langle b_{j}|\right)=0\) for all \(j\in\llbracket 1,3\rrbracket\). Since \(\mathrm{Tr}\,F_{\perp}=0\), \(F_{\perp}\) can be neither positive nor negative. We can assume that \(f_{-}<0\leq f_{0}\leq f_{+}\) are its three eigenvalues. Consequently, \[V_{\mathrm{KDr}}=\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\oplus \mathbb{R}F_{\perp}. \tag{5.2}\] Any \(F\in V_{\mathrm{KDr}}\) can then be decomposed uniquely as \(F=F_{\mathrm{b}}+xF_{\perp}\), with \(F_{\mathrm{b}}\in\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) and \(x\in\mathbb{R}\). Note that \[\mathrm{Tr}\,F=\mathrm{Tr}\,F_{\mathrm{b}},\quad\mathrm{Tr}\,F^{2}=\mathrm{ Tr}\,F_{\mathrm{b}}^{2}+x^{2}. 
\tag{5.3}\] We finally note that it follows from results in [24] that, when \(d=3\) and \(m_{\mathcal{A},\mathcal{B}}>0\), there is a finite set of pure KD-positive states. Further structural information on the set \(\mathcal{E}_{\mathrm{KD}+}\) and its extreme points, for real orthogonal \(U\), is given in Lemma B.1 and Proposition B.2. The above results can be summarized as follows. There exists a convex subset \(\mathcal{D}\) of \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\), real numbers \(x_{\min}\leq 0\leq x_{\max}\), a concave function \[x_{+}:\mathcal{D}\to[x_{\min},x_{\max}]\subset\mathbb{R},\] as well as a convex function \[x_{-}:\mathcal{D}\to[x_{\min},x_{\max}]\subset\mathbb{R},\] so that \[\mathcal{E}_{\mathrm{KD}+}=\{\sigma+[x_{-}(\sigma),x_{+}(\sigma)]F_{\perp} \mid\sigma\in\mathcal{D}\}.\] In other words, \(\mathcal{E}_{\mathrm{KD}+}\) can be seen as the intersection of the subgraph of \(x_{+}\) and of the supergraph of \(x_{-}\). As a result, the extreme points of \(\mathcal{E}_{\mathrm{KD}+}\) lie on the graphs of \(x_{-}\) and of \(x_{+}\): \[\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\subseteq x_{+}(\mathcal{D})\cup x_{ -}(\mathcal{D}).\] Since the precise form of the functions \(x_{\pm}\) as well as of the set \(\mathcal{D}\) depend in a nontrivial manner on the matrix \(U\), determining explicitly the nature of the set of extreme KD-positive states is far from straightforward for general real \(U\), even in dimension \(3\). In the next subsection, we study a special example where the nature of the extreme KD-positive states can be mapped out in more detail. In particular, we show that \(\mathcal{E}_{\mathrm{KD}+}\) is not necessarily a polytope. ### Extreme KD-positive states can be mixed: an example in \(d=3\). 
We now provide an example of an orthogonal \(U\) in \(d=3\) for which \[\mathcal{A}\cup\mathcal{B}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}.\] The first strict inclusion follows from Proposition 3.2 and says that there exist additional pure KD-positive states distinct from the basis states. In our example, there will, in addition, be mixed extreme KD-positive states. In other words, the polytope \(\operatorname{conv}\big{(}\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\big{)}\) does not exhaust all of \(\mathcal{E}_{\mathrm{KD}+}\). To motivate our choice of \(U\), we first recall that it was shown in [24] (Theorem 13) that, when \(d=3\), any \(U\) for which \(\frac{m_{\mathcal{A},\mathcal{B}}}{M_{\mathcal{A},\mathcal{B}}}>\frac{1}{\sqrt{2}}\) has the property that \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\). If there existed such a \(U\) among the real orthogonal matrices, this would imply by Lemma 5.1 that \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}\). However, such real orthogonal matrices do not exist. The largest value of \(\frac{m_{\mathcal{A},\mathcal{B}}}{M_{\mathcal{A},\mathcal{B}}}\) that can be obtained for real orthogonal matrices is \(\frac{1}{2}\), attained for the matrix \[U^{\star}=\frac{1}{3}\begin{pmatrix}-1&2&2\\ 2&-1&2\\ 2&2&-1\end{pmatrix}. \tag{5.4}\] We further know from [23] that in \(d=3\), a pure KD-positive state \(\ket{\psi}\) of any transition matrix with \(m_{\mathcal{A},\mathcal{B}}>0\) satisfies \(n_{\mathcal{A},\mathcal{B}}(\psi)=n_{\mathcal{A}}(\psi)+n_{\mathcal{B}}(\psi)=4\), where \[n_{\mathcal{A}}(\psi)=\sharp\{i\in\llbracket 1,d\rrbracket\mid\langle a_{i}|\psi\rangle\neq 0\},\text{ and }n_{\mathcal{B}}(\psi)=\sharp\{j\in\llbracket 1,d\rrbracket\mid\langle b_{j}|\psi\rangle\neq 0\}. \tag{5.5}\] Here, \(\sharp\{\cdot\}\) denotes the cardinality of \(\{\cdot\}\).
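As an aside, the operator \(F_{\perp}\) of Eq. (5.1) associated with \(U^{\star}\) can be computed numerically: with the \(\mathcal{A}\)-basis taken standard, the diagonal of \(F_{\perp}\) vanishes, and the off-diagonal vector \(f\) spans the null space of a small linear system built from the columns of \(U\). The following sketch is ours (the function name and the SVD-based null-space computation are implementation choices, not taken from the text):

```python
import numpy as np

def f_perp(U):
    """F_perp for a real orthogonal 3x3 transition matrix U (A-basis standard).

    The zero diagonal enforces Tr(F_perp |a_i><a_i|) = 0, and the vector f of
    Eq. (5.1) solves <b_j|F_perp|b_j> = 0 for j = 1, 2, 3, i.e. M f = 0 with
    M[j] = (U[0,j] U[1,j], U[0,j] U[2,j], U[1,j] U[2,j]).
    """
    M = np.array([[U[0, j] * U[1, j], U[0, j] * U[2, j], U[1, j] * U[2, j]]
                  for j in range(3)])
    # Last right-singular vector: a unit vector spanning the (generically
    # one-dimensional) null space of M; f is fixed only up to a sign.
    f = np.linalg.svd(M)[2][-1]
    F = np.zeros((3, 3))
    F[0, 1], F[0, 2], F[1, 2] = f
    return (F + F.T) / np.sqrt(2)    # Hilbert-Schmidt norm 1 since |f| = 1
```

For \(U^{\star}\), this reproduces, up to the sign ambiguity noted earlier, the matrix \(F_{\perp}\) computed explicitly below.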
One can construct all such states and check if they are KD-positive. Doing so, we find, in addition to the basis states, the following three KD-positive states: \[\ket{\varphi_{3}}=\frac{|a_{1}\rangle-|a_{2}\rangle}{\sqrt{2}},\ \ket{\varphi_{2}}=\frac{|a_{3}\rangle-|a_{1}\rangle}{\sqrt{2}},\text{ and }\ket{\varphi_{1}}=\frac{|a_{2}\rangle-|a_{3}\rangle}{\sqrt{2}}.\] Their associated KD distributions are \[Q(\varphi_{3})=\frac{1}{6}\begin{pmatrix}1&2&0\\ 2&1&0\\ 0&0&0\end{pmatrix},\ Q(\varphi_{2})=\frac{1}{6}\begin{pmatrix}1&0&2\\ 0&0&0\\ 2&0&1\end{pmatrix},\text{ and }Q(\varphi_{1})=\frac{1}{6}\begin{pmatrix}0&0&0\\ 0&1&2\\ 0&2&1\end{pmatrix},\] respectively. Here, to lighten the notation, we write \(Q(\varphi_{i}):=Q(|\varphi_{i}\rangle\langle\varphi_{i}|)\). The operator \(F_{\perp}\) in Eq. (5.1) is readily computed to be \[F_{\perp}=\frac{1}{\sqrt{6}}\begin{pmatrix}0&1&1\\ 1&0&1\\ 1&1&0\end{pmatrix}.\] A simple computation shows that \[|\varphi_{i}\rangle\langle\varphi_{i}|=F_{b}^{i}+x_{i}F_{\perp},\] with \(x_{i}=\langle\varphi_{i}|F_{\perp}|\varphi_{i}\rangle=-\frac{1}{\sqrt{6}}\) and \[F_{b}^{i}=\frac{5}{6}\sum_{k\neq i}|b_{k}\rangle\langle b_{k}|+\frac{1}{12}|b_{i}\rangle\langle b_{i}|-\frac{3}{4}|a_{i}\rangle\langle a_{i}|.\] Hence, the triangle with vertices \(|\varphi_{i}\rangle\langle\varphi_{i}|\) lies in a plane parallel to \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) at a distance \(|x_{i}|\) from it. It follows that \(\operatorname{conv}\big{(}\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\big{)}\) lies between \(x=-\frac{1}{\sqrt{6}}\) and \(x=0\).
Indeed, if \[\rho=\sum_{i=1}^{3}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i}\rangle\langle b_{i}|+\delta_{i}|\varphi_{i}\rangle\langle\varphi_{i}|\] with \(\sum_{i=1}^{3}\lambda_{i}+\mu_{i}+\delta_{i}=1\), where \(\lambda_{i}\geqslant 0,\mu_{i}\geqslant 0,\text{ and }\delta_{i}\geqslant 0\) for all \(i\in\llbracket 1,3\rrbracket\), then \[x_{\rho}=\operatorname{Tr}(\rho F_{\perp})=\sum_{i=1}^{3}\delta_{i}\langle\varphi_{i}|F_{\perp}|\varphi_{i}\rangle=-\frac{1}{\sqrt{6}}\sum_{i=1}^{3}\delta_{i}.\] Thus, \(-\frac{1}{\sqrt{6}}\leqslant x_{\rho}\leqslant 0\). It follows that any KD-positive state for which \(x>0\) cannot belong to \(\operatorname{conv}\big{(}\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\big{)}\). We now construct such states. We consider \(\rho(x)=\frac{1}{3}I_{d}+xF_{\perp}\). Its eigenvalues are \(\left\{\frac{2(1+x\sqrt{6})}{6},\frac{2-x\sqrt{6}}{6},\frac{2-x\sqrt{6}}{6}\right\}\), which are nonnegative if and only if \(x\in[-\frac{1}{\sqrt{6}},\frac{2}{\sqrt{6}}]\). Only in these cases is \(\rho(x)\) a quantum state. Moreover, \[Q(F_{\perp})=\frac{\sqrt{6}}{27}\begin{pmatrix}-2&1&1\\ 1&-2&1\\ 1&1&-2\end{pmatrix},\] and consequently \[Q(\rho(x))=\frac{1}{27}\begin{pmatrix}1-2x\sqrt{6}&4+x\sqrt{6}&4+x\sqrt{6}\\ 4+x\sqrt{6}&1-2x\sqrt{6}&4+x\sqrt{6}\\ 4+x\sqrt{6}&4+x\sqrt{6}&1-2x\sqrt{6}\end{pmatrix}.\] So \(\rho(x)\) is KD-positive if and only if \(x\in[-\frac{4}{\sqrt{6}},\frac{1}{2\sqrt{6}}]\). Finally, \(\rho(x)\) is a KD-positive state provided that \(x\in[-\frac{1}{\sqrt{6}},\frac{1}{2\sqrt{6}}]\). Thus, for all \(x\in\,]0,\frac{1}{2\sqrt{6}}]\), \(\rho(x)\) is a KD-positive mixed state and, from what precedes, it follows that \(\rho(x)\) is not in the convex hull of the KD-positive pure states. This construction therefore exhibits explicit KD-positive mixed states that are not in the convex hull of \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\).
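The distributions and intervals above are straightforward to cross-check numerically. The following sketch is our own (conventions: the \(\mathcal{A}\)-basis is standard, \(|b_{j}\rangle\) is the \(j\)-th column of \(U^{\star}\), and \(Q_{ij}(\rho)=\langle a_{i}|\rho|b_{j}\rangle\langle b_{j}|a_{i}\rangle\); the helper names are ours):

```python
import numpy as np

U = np.array([[-1., 2, 2], [2, -1, 2], [2, 2, -1]]) / 3   # U* of Eq. (5.4)
Fperp = (np.ones((3, 3)) - np.eye(3)) / np.sqrt(6)        # F_perp of Eq. (5.1)

def kd(rho):
    """KD distribution Q_ij(rho) = <a_i|rho|b_j><b_j|a_i> for real U."""
    return (rho @ U) * U

def is_kd_positive_state(rho, tol=1e-12):
    """True when rho is a density matrix with an entrywise nonnegative Q."""
    return np.linalg.eigvalsh(rho).min() >= -tol and kd(rho).min() >= -tol

# The pure state |phi_3> = (|a_1> - |a_2>)/sqrt(2) and its KD distribution.
phi3 = np.array([1., -1, 0]) / np.sqrt(2)
Q3 = kd(np.outer(phi3, phi3))

# The family rho(x) = I/3 + x F_perp.
def rho(x):
    return np.eye(3) / 3 + x * Fperp
```

Evaluating `is_kd_positive_state(rho(x))` at the endpoints confirms the window \([-1/\sqrt{6},\,1/(2\sqrt{6})]\) stated above.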
In addition, Lemma B.1 allows the generalisation of this construction, showing that, for every \(\rho\in\operatorname{Int}(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right))\), there exists a continuous family of states \(\rho(x)=\rho+xF_{\perp}\) which are not in the convex hull of KD-positive pure states. Here and elsewhere, \(\operatorname{Int}(A)\) stands for the interior of \(A\). Since \(\mathcal{E}_{\mathrm{KD+}}\) is a convex and compact set, it follows from the Krein-Milman Theorem [58] that it is the convex hull of its extreme points. We thus reach the conclusion that \(\mathcal{E}_{\mathrm{KD+}}\) has extreme points that are not pure and not in the convex hull of the pure KD-positive states. We identify some of them explicitly in Appendix B.2. It is known that mixed extreme states exist also for the discrete-variable Wigner function (at least in \(d=3\) [63]), as well as for the continuous-variable Wigner function [60, 61, 62]. However, to the best of our knowledge, such states have never been explicitly identified. **Comment.** We point out that, in the example above, the set \(\mathcal{E}_{\mathrm{KD}+}\) is not a polytope, since it has an infinite number of extreme points. To see this, note that, if \(\mathcal{E}_{\mathrm{KD}+}\) were a polytope, any 2-dimensional section of it would be a polygon. However, if we consider the 2-dimensional plane containing \(F_{\perp}\), \(\frac{1}{2}(|a_{1}\rangle\langle a_{1}|+|a_{2}\rangle\langle a_{2}|)\) and \(|a_{3}\rangle\langle a_{3}|\), and we intersect it with \(\mathcal{E}_{\mathrm{KD}+}\), then simple computations show that this intersection is not a polygon, as illustrated in Fig. 1. Hence, \(\mathcal{E}_{\mathrm{KD}+}\) is not a polytope.

### An application: the case of spin-\(s\)

In this subsection, we show how the transition matrix \(U^{\star}\) of the previous subsection arises naturally in a spin-1 system.
First, we show that the transition matrices between spin-component bases for spin-\(s\) systems are (equivalent to) real matrices. Let \(|z,m\rangle\) be the standard basis vectors of \(J_{z}\), with eigenvalues \(m=-s,\ldots,s\). Let \[R(\alpha,\beta,\gamma)=R_{z}(\alpha)R_{y}(\beta)R_{z}(\gamma)\] be the rotation matrix with Euler angles \(\alpha,\beta,\gamma\). Further, let \[\hat{R}(\alpha,\beta,\gamma)=\exp(-i\alpha J_{z})\exp(-i\beta J_{y})\exp(-i\gamma J_{z})\] be the irreducible unitary action of the rotation group on the spin-\(s\) space \(\mathcal{H}^{(s)}\). Then the states \[\hat{R}(\alpha,\beta,\gamma)|z,m\rangle\] are the eigenstates of \[\left(R(\alpha,\beta,\gamma)e_{z}\right)\cdot\vec{J},\] where \(e_{z}\) is the unit vector along the \(z\)-axis and \(\vec{J}=(J_{x},J_{y},J_{z})\) [67]. We define \[|a_{m}\rangle=\exp(-i\alpha m)|z,m\rangle,\quad|b_{m}\rangle=\exp(i\gamma m)\hat{R}(\alpha,\beta,\gamma)|z,m\rangle. \tag{5.6}\] Let us write \(\mathcal{E}_{\mathrm{KD}+}(\alpha,\beta,\gamma)\) for the corresponding space of KD-positive states. Then the transition matrix \(U(\alpha,\beta,\gamma)\) between these two bases has matrix elements \[U_{m^{\prime}m}(\alpha,\beta,\gamma)=d^{(s)}_{m^{\prime}m}(\beta),\] where \(d^{(s)}(\beta)\) is Wigner's small \(d\)-matrix for spin-\(s\) [67]. Note that the transition matrix depends only on the Euler angle \(\beta\), not on the other two. Consequently, the same is true for \(\mathcal{E}_{\mathrm{KD}+}(\alpha,\beta,\gamma)=\mathcal{E}_{\mathrm{KD}+}(\beta)\).

Figure 1: A point \((k,x)\) on this graph represents the state \(\frac{k}{2}(|a_{1}\rangle\langle a_{1}|+|a_{2}\rangle\langle a_{2}|)+(1-k)|a_{3}\rangle\langle a_{3}|+xF_{\perp}\). A state is KD-positive if and only if it lies inside the region drawn. Thus, this figure displays a 2-dimensional section of \(\mathcal{E}_{\mathrm{KD}+}\) that is not a polygon. This shows that \(\mathcal{E}_{\mathrm{KD}+}\) is not a polytope.
Wigner's small \(d\)-matrix is real-valued so that the theory of the previous subsections applies. In particular, one never has \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}(\alpha,\beta,\gamma)=\mathcal{A} \cup\mathcal{B}\) in this situation. Let us now concentrate on the case \(s=1\). One can then check that, with \(\alpha_{0}=-\gamma_{0}=\frac{\pi}{4}\) and \(\beta_{0}=\arccos(-1/3)\), \[R(\alpha_{0},\beta_{0},\gamma_{0})=\frac{1}{3}\begin{pmatrix}-1&2&2\\ 2&-1&2\\ 2&2&-1\end{pmatrix}.\] This unitary can be interpreted as a rotation by the angle \(\pi\) about the axis \(n=\frac{1}{\sqrt{3}}\begin{pmatrix}1\\ 1\\ 1\end{pmatrix}\). The \(\{|b_{m}\rangle\}\) therefore forms, for \(m=-1,0,1\), the eigenbasis of the observable \(e_{z^{\prime}}\cdot J\) where \[e_{z^{\prime}}=R(\alpha_{0},\beta_{0},\gamma_{0})e_{z},\] so that \(e_{z^{\prime}}\cdot J=\frac{1}{3}(2J_{x}+2J_{y}-J_{z})\). Hence, \[U(\beta_{0})=d^{(1)}(\beta_{0})=\begin{pmatrix}\frac{1+\cos(\beta_{0})}{2}&- \frac{\sin(\beta_{0})}{\sqrt{2}}&\frac{1-\cos(\beta_{0})}{2}\\ \frac{\sin(\beta_{0})}{\sqrt{2}}&\cos(\beta_{0})&-\frac{\sin(\beta_{0})}{\sqrt {2}}\\ \frac{1-\cos(\beta_{0})}{2}&\frac{\sin(\beta_{0})}{\sqrt{2}}&\frac{1+\cos( \beta_{0})}{2}\end{pmatrix}=\frac{1}{3}\begin{pmatrix}1&-2&2\\ 2&-1&-2\\ 2&2&1\end{pmatrix}.\] Furthermore, \[U(\beta_{0})=d^{(1)}(\beta_{0})=\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&-1\end{pmatrix}U^{\star}\begin{pmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\end{pmatrix},\] where \(U^{\star}\) is as in Eq. (5.4). In conclusion, it then follows from Eq. (2.3) that the matrix \(U(\beta_{0})\) is equivalent to the matrix \(U^{\star}\). As a result, if, for spin-1, the bases \(\mathcal{A}\) and \(\mathcal{B}\) are as in Eq. (5.6), with \(\beta=\beta_{0}\), then the set of KD-positive states \(\mathcal{E}_{\mathrm{KD}+}(\beta_{0})\) is as described in the previous subsection. In particular, there then exist mixed KD-positive states that are not mixtures of pure KD-positive states. 
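These matrix identities can be checked directly; the sketch below is ours, using \(\cos\beta_{0}=-1/3\) and \(\sin\beta_{0}=2\sqrt{2}/3\):

```python
import numpy as np

c, s = -1 / 3, 2 * np.sqrt(2) / 3   # cos(beta_0), sin(beta_0), beta_0 = arccos(-1/3)

# Wigner's small d-matrix d^(1)(beta_0) for spin 1.
d1 = np.array([[(1 + c) / 2, -s / np.sqrt(2), (1 - c) / 2],
               [s / np.sqrt(2), c, -s / np.sqrt(2)],
               [(1 - c) / 2, s / np.sqrt(2), (1 + c) / 2]])

Ustar = np.array([[-1., 2, 2], [2, -1, 2], [2, 2, -1]]) / 3   # Eq. (5.4)

# Diagonal sign matrices realizing the equivalence U(beta_0) ~ U*.
L, R = np.diag([1., -1, -1]), np.diag([-1., -1, 1])
```

Both the explicit value of \(d^{(1)}(\beta_{0})\) and its equivalence to \(U^{\star}\) follow from a direct evaluation.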
### Examples where \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)=\operatorname{conv}\left(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\right)\subsetneq\mathcal{E}_{\mathrm{KD}+}\)

In this subsection, we show that there exist bases \(\mathcal{A}\) and \(\mathcal{B}\) for which \[\mathcal{A}\cup\mathcal{B}=\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\subsetneq\mathcal{E}_{\mathrm{KD}+}^{\mathrm{ext}}, \tag{5.7}\] so that \[\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)=\operatorname{conv}\left(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\right)\subsetneq\mathcal{E}_{\mathrm{KD}+}. \tag{5.8}\] In words, in this situation, the only pure KD-positive states are the basis states, but those do not exhaust all extreme KD-positive states: there also exist mixed extreme KD-positive states. With the following proposition, we show the occurrence of such situations in a large class of examples in dimensions \(d=4n\) (with integer \(n\leq 166\)) or \(2^{m}\) with \(m\in\mathbb{N}^{\star}\) [68, 69]. These examples are less explicit than the three-dimensional example of the previous subsections. **Proposition 5.2**.: _Suppose that \(U\) is a real-valued transition matrix for MUB bases in dimension \(d\geqslant 4\). Then, there exists a real-valued and unitary matrix \(W\) close to \(U\), such that \(W\) satisfies Eq. (5.7)._ Proof.: We write \(U=(\langle a_{i}^{U}|b_{j}^{U}\rangle)_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\). We have that \[m_{\mathcal{A},\mathcal{B}}^{U}=\min_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\left|\langle a_{i}^{U}|b_{j}^{U}\rangle\right|=\frac{1}{\sqrt{d}}\] and \[M_{\mathcal{A},\mathcal{B}}^{U}=\max_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\left|\langle a_{i}^{U}|b_{j}^{U}\rangle\right|=\frac{1}{\sqrt{d}}\] because \(U\) is the transition matrix for MUB bases.
It follows from Theorem 13 and (the proof of) Theorem 5 in [24] that, for all \(\varepsilon>0\) small enough, there exists a real unitary matrix \(W=(\langle a_{i}^{W}|b_{j}^{W}\rangle)_{(i,j)\in\llbracket 1,d\rrbracket^{2}}\) such that \(\left|\left|U-W\right|\right|_{\infty}\leqslant\varepsilon\) and with the property that the only KD-positive pure states for \(W\) are the basis states: \(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}=\mathcal{A}\cup\mathcal{B}\). Thus, \(\mathrm{span}_{\mathbb{R}}(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}})=\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). On the other hand, since \(W\) is real, we know from Lemma 5.1 that \(V_{\mathrm{KDr}}=\mathcal{S}_{d,\mathrm{r}}\). Hence, \(\dim(V_{\mathrm{KDr}})=\frac{d(d+1)}{2}>2d-1=\dim(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B}))\), so that it follows from Proposition 3.2 that \(\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\subsetneq\mathcal{E}_{\mathrm{KD}+}\). Thus, Eq. (5.7) holds.

## 6 Conclusions and discussion

In recent years, the Kirkwood-Dirac quasiprobability distribution has emerged as a versatile and powerful tool to study and develop protocols in discrete-variable quantum-information processing. Given two bases \(\mathcal{A}\) and \(\mathcal{B}\) and a state \(\rho\), the associated KD distribution may or may not be a probability distribution. As reviewed in our Introduction, the existence of negative or nonreal entries in a KD distribution has been linked to quantum phenomena in several areas of quantum mechanics. This motivated us to investigate the divide between positive and nonpositive KD distributions. Previous studies [22, 23, 24] have mapped out necessary and sufficient conditions for a pure state to assume a nonpositive KD distribution. But, to the best of our knowledge, no previous work has provided such an analysis for mixed states. In this work, we have presented the first thorough analysis of the set of mixed states that assume positive KD distributions.
Our results can be grouped in two categories.

* Firstly, we have established that in several scenarios the set of KD-positive states equals the set of convex combinations of the basis states of \(\mathcal{A}\) and \(\mathcal{B}\). In particular, we have proven this to be the case for: \((i)\) any qubit (\(d=2\)) system, provided that \(m_{\mathcal{A},\mathcal{B}}>0\); \((ii)\) an open dense set of probability \(1\) of possible choices of bases \(\mathcal{A}\) and \(\mathcal{B}\) in dimension \(3\); \((iii)\) prime dimensions, when the unitary transition matrix between the two bases \(\mathcal{A}\) and \(\mathcal{B}\) is the discrete Fourier transform; and \((iv)\) any two bases that are sufficiently close to some other pair of bases for which the property holds. In addition to having shown that \(\mathcal{E}_{\mathrm{KD}+}=\mathrm{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) for randomly chosen bases in dimension \(d=3\), we conjecture that this is true also in higher dimensions \(d\geq 4\). We have given analytical and numerical evidence to that effect.

* Secondly, we have proven that there exist scenarios where the set of KD-positive states includes mixed states that cannot be written as convex combinations of pure KD-positive states: in other words, we have shown that in such cases \(\mathcal{E}_{\mathrm{KD}+}\neq\mathrm{conv}\left(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}}\right)\). This mirrors what happens for mixed Wigner-positive states, which are known not to all be mixtures of pure Wigner-positive states [60, 61, 62]. However, we go further by explicitly constructing, for a specific spin-\(1\) system, extreme KD-positive states that cannot be written as convex mixtures of pure KD-positive states. For the Wigner distribution, such extreme positive states have not yet been constructed.

Having a good understanding of the KD-positive states is a prerequisite for an efficient study of states for which the KD distribution takes negative or nonreal values.
The latter are known to be related to nonclassical phenomena in various applications. To analyze the connection between mixed nonpositive states and nonclassicality, one should investigate measures and monotones of KD nonpositivity. In an upcoming paper, currently in preparation, we use the findings of this work to analyze the so-called KD negativity [34] of mixed states. Furthermore, our follow-up paper extends to mixed states the characterization of KD-positive states via their support uncertainty, as was done for pure states in [22, 23, 24]. _Acknowledgments_: This work was supported in part by the Agence Nationale de la Recherche under grant ANR-11-LABX-0007-01 (Labex CEMPI), by the Nord-Pas de Calais Regional Council and the European Regional Development Fund through the Contrat de Projets Etat-Region (CPER), and by the CNRS through the MITI interdisciplinary programs. We thank Girton College, Cambridge, for support of this work. D.R.M. Arvidsson-Shukur thanks Nicole Yunger Halpern for useful discussions.

## Appendix A The geometric structure of \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\)

The polytope \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) has \(2d\) vertices and lies in the \(2(d-1)\)-dimensional affine subspace of \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) determined by the constraint \(\operatorname{Tr}F=1\). In this Appendix, we identify its interior and the facets making up its boundary. We will see that the interior is not empty, so that the polytope is \(2(d-1)\)-dimensional. Its facets are therefore \((2d-3)\)-dimensional. They are polytopes with \(2(d-1)\) vertices. In particular, when \(d=3\), \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\) is a four-dimensional object. Its boundary has \(9\) facets, which are three-dimensional tetrahedra. **Lemma A.1**.: _Let \(m_{\mathcal{A},\mathcal{B}}>0\)._
_Let \(\rho=\sum_{i}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j}\mu_{j}|b_{j}\rangle\langle b_{j}|\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\), with \(\lambda_{i}\geq 0,\mu_{j}\geq 0\). Then_ \[\rho\in\operatorname{Int}\left(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\right)\Leftrightarrow\lambda_{1}\lambda_{2}\ldots\lambda_{d}\neq 0\text{ or }\mu_{1}\mu_{2}\ldots\mu_{d}\neq 0. \tag{A.1}\] Note that the expression of \(\rho\) as a convex combination of the basis vectors is not unique. What we are saying is that, if \(\rho\) can be expressed in the manner stated, then it belongs to the interior of the polytope, and vice versa. Proof.: \(\Rightarrow\) We will show the contrapositive. Suppose therefore that \(\lambda_{1}\lambda_{2}\ldots\lambda_{d}=0=\mu_{1}\mu_{2}\ldots\mu_{d}\). We need to show \(\rho\not\in\operatorname{Int}(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right))\). We can assume, without loss of generality, that \(\lambda_{d}=0=\mu_{d}\). Then \(Q(\rho)_{dd}=0.\) Now consider, for \(\varepsilon>0\), \(\delta\rho=\varepsilon(|a_{1}\rangle\langle a_{1}|-|a_{d}\rangle\langle a_{d}|)\). Then, \(\operatorname{Tr}(\rho+\delta\rho)=1\), \(\rho+\delta\rho\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) and \[Q(\rho+\delta\rho)_{dd}=-\varepsilon|\langle a_{d}|b_{d}\rangle|^{2}<0.\] Hence \(\rho+\delta\rho\not\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\), so that \(\rho\) belongs to the boundary of \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). \(\Leftarrow\) We consider the case where \(\lambda_{1}\lambda_{2}\ldots\lambda_{d}\neq 0\), the other case being analogous. We can suppose without loss of generality that \(\lambda_{1}=\min\lambda_{i}>0\).
Then \[\sum_{i}\lambda_{i}|a_{i}\rangle\langle a_{i}|=\frac{\lambda_{1}}{2}|a_{1}\rangle\langle a_{1}|+\sum_{i\neq 1}(\lambda_{i}-\frac{\lambda_{1}}{2})|a_{i}\rangle\langle a_{i}|+\frac{\lambda_{1}}{2}\sum_{j}|b_{j}\rangle\langle b_{j}|.\] Hence \[\rho=\frac{\lambda_{1}}{2}|a_{1}\rangle\langle a_{1}|+\sum_{i\neq 1}(\lambda_{i}-\frac{\lambda_{1}}{2})|a_{i}\rangle\langle a_{i}|+\sum_{j}(\mu_{j}+\frac{\lambda_{1}}{2})|b_{j}\rangle\langle b_{j}|.\] Note that all coefficients are strictly positive. Now consider a perturbation \[\delta\rho=\sum_{i}\varepsilon_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j}\delta_{j}|b_{j}\rangle\langle b_{j}|\] with \(\varepsilon_{i},\delta_{j}\in\mathbb{R}\) and \(\operatorname{Tr}\delta\rho=0\); then \[\rho+\delta\rho\in\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\] provided the \(\varepsilon_{i},\delta_{j}\) are small enough. In other words, there is a small ball centered at \(\rho\) that is contained in \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\). As a consequence of the proof, we also have the following result: **Corollary A.2**.: _Let \(m_{\mathcal{A},\mathcal{B}}>0\) and let \(\rho\) be a density matrix. Then \(\rho\in\operatorname{Int}(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right))\) if and only if there exist \(\lambda_{i}>0,\mu_{j}>0\) so that \(\rho=\sum_{i}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\sum_{j}\mu_{j}|b_{j}\rangle\langle b_{j}|\)._ Let us introduce the notation \[[i_{1},i_{2},\ldots,i_{k};j_{1},j_{2},\ldots,j_{\ell}]=\operatorname{conv}\left(|a_{i_{1}}\rangle\langle a_{i_{1}}|,\ldots,|a_{i_{k}}\rangle\langle a_{i_{k}}|,|b_{j_{1}}\rangle\langle b_{j_{1}}|,\ldots,|b_{j_{\ell}}\rangle\langle b_{j_{\ell}}|\right),\] for any choice \(1\leq i_{1}<i_{2}<\cdots<i_{k}\leq d\), \(1\leq j_{1}<j_{2}<\cdots<j_{\ell}\leq d\). Also, for \(i\in\llbracket 1,d\rrbracket\), we write \(\bar{i}=\llbracket 1,d\rrbracket\setminus\{i\}\).
Then the Lemma implies that \[\partial(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right))=\cup_{i,j}[\bar{i};\bar{j}].\] When \(d=3\), this becomes \[\partial(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right))=\cup_{i_{1}<i_{2};j_{1}<j_{2}}[i_{1},i_{2};j_{1},j_{2}].\] The boundary is then the union of \(9\) tetrahedra. They are glued together along \(18\) triangles of one of the following forms: \([i_{1},i_{2};j]\) with \(i_{1}<i_{2}\) or \([i;j_{1},j_{2}]\) with \(j_{1}<j_{2}\).

## Appendix B The geometry of \(\mathcal{E}_{\mathrm{KD}+}\): the case of real orthogonal \(U\) in \(d=3\)

In this section, we give some more details about the geometry of the convex set \(\mathcal{E}_{\mathrm{KD}+}\) of all KD-positive states in the particular case where \(U\) is a real orthogonal matrix in dimension \(d=3\) (Section B.1). We then analyze in detail the example of Section 5.2 for which both \(\operatorname{conv}\left(\mathcal{A}\cup\mathcal{B}\right)\subsetneq\mathcal{E}_{\mathrm{KD}+}\) and \(\operatorname{conv}(\mathcal{E}_{\mathrm{KD}+}^{\mathrm{pure}})\subsetneq\mathcal{E}_{\mathrm{KD}+}\) (Section B.2).

### Identifying \(\mathcal{E}_{\mathrm{KD}+}\)

We recall that, as in Section 5, in dimension \(3\), if \(U\) is real, we can write \[V_{\mathrm{KDr}}=\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\oplus\mathbb{R}F_{\perp}.\] Here, \(F_{\perp}\) is orthogonal to \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). We denote by \(P_{\mathcal{A},\mathcal{B}}\) the orthogonal projection on \(\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) associated to this decomposition, and by \(T_{\perp}:F\in V_{\mathrm{KDr}}\mapsto\operatorname{Tr}(FF_{\perp})\) the associated linear form. Hence, the orthogonal projection of \(F\) on \(\mathbb{R}F_{\perp}\) is given by \(T_{\perp}(F)F_{\perp}\). The following technical lemma and proposition collect the main properties of the set \(\mathcal{E}_{\mathrm{KD}+}\) in this particular situation.
**Lemma B.1**.: _Let \(\sigma\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). Then we have either \((\sigma+\mathbb{R}F_{\perp})\cap\mathcal{E}_{\mathrm{KD}+}=\emptyset\) or there exists \(-\infty<x_{-}(\sigma)\leqslant x_{+}(\sigma)<+\infty\) such that_ \[(\sigma+\mathbb{R}F_{\perp})\cap\mathcal{E}_{\mathrm{KD}+}=\sigma+[x_{-}( \sigma),x_{+}(\sigma)]F_{\perp}.\] Proof.: If \(\sigma\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\), then \((\sigma+\mathbb{R}F_{\perp})\cap\mathcal{E}_{\mathrm{KD}+}\) is a compact convex set. Suppose the set is not empty. Therefore, as \(T_{\perp}\) is continuous, \(T_{\perp}((\sigma+\mathbb{R}F_{\perp})\cap\mathcal{E}_{\mathrm{KD}+})\) is a non-empty compact interval of \(\mathbb{R}\). This interval can be written as \((\sigma+\mathbb{R}F_{\perp})\cap\mathcal{E}_{\mathrm{KD}+}=\sigma+[x_{-}( \sigma),x_{+}(\sigma)]F_{\perp}\) with \(-\infty<x_{-}(\sigma)\leqslant x_{+}(\sigma)<+\infty\). Let \(\mathcal{D}=P_{\mathcal{A},\mathcal{B}}(\mathcal{E}_{\mathrm{KD+}})\) which is a subset of \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). Note that the second alternative of Lemma B.1 happens if and only if \(\sigma\in\mathcal{D}\). In other words, \(\mathcal{D}\) is the domain of definition of \(x_{-}\) and \(x_{+}\). We will designate by \(\mathrm{Int}(\mathcal{D})\) the interior of \(\mathcal{D}\) as a subset of \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\). **Proposition B.2**.: _We have the following properties:_ 1. _If_ \(\sigma\in\mathrm{Int}(\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B}))\)_,_ \(-\infty<x_{-}(\sigma)<0<x_{+}(\sigma)<+\infty\)_;_ 2. _If_ \(\sigma\in\mathcal{D}\) _and_ \(\sigma\notin\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\)_, then either_ \(0<x_{-}(\sigma)\leq x_{+}(\sigma)<+\infty\) _or_ \(-\infty<x_{-}(\sigma)\leq x_{+}(\sigma)<0\)_;_ 3. _If_ \(\sigma\in\mathcal{A}\cup\mathcal{B}\) _then_ \(x_{-}(\sigma)=x_{+}(\sigma)=0\)_;_ 4. 
_The function_ \(x_{+}\) _has a maximum_ \(x_{\max}\) _on_ \(\mathcal{D}\)_. Moreover, the extreme points of_ \[\mathcal{Y}_{+}=\{\sigma+x_{\max}F_{\perp}\mid\sigma\in\mathcal{D},x_{+}( \sigma)=x_{\max}\}\] _are extreme points of_ \(\mathcal{E}_{\mathrm{KD+}}\)_;_ 5. _The function_ \(x_{-}\) _has a minimum_ \(x_{\min}\) _on_ \(\mathcal{D}\) _. Moreover, the extreme points of_ \[\mathcal{Y}_{-}=\{\sigma+x_{\min}F_{\perp}\mid\sigma\in\mathcal{D},x_{-}( \sigma)=x_{\min}\}\] _are extreme points of_ \(\mathcal{E}_{\mathrm{KD+}}\)_;_ 6. _The function_ \(x_{+}\) _(resp._ \(x_{-}\)_) is concave (resp. convex) on_ \(\mathcal{D}\)_. Thus, it is continuous on_ \(\mathrm{Int}(\mathcal{D})\)_._ In particular, the proposition implies that \(\mathcal{E}_{\mathrm{KD+}}\) lies between the "bounding planes": \[\{F\in V_{\mathrm{KDr}}\mid\mathrm{Tr}\,FF_{\perp}=x_{\max}\}\quad\text{and} \quad\{F\in V_{\mathrm{KDr}}\mid\mathrm{Tr}\,FF_{\perp}=x_{\min}\}.\] Proof.: (i) Suppose \(\sigma\in\mathrm{Int}(\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B}))\), then, by Corollary A.2, \(\sigma=\sum_{i=1}^{d}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i} \rangle\langle b_{i}|\) with \(\lambda_{i}>0\) and \(\mu_{i}>0\) for all \(i\in\llbracket 1,d\rrbracket\). Thus, \(\min_{(i,j)\in\llbracket 1,d\rrbracket^{2}}Q_{i,j}(\sigma)>0\). Then, for \(\varepsilon>0\), \[\forall(i,j)\in\llbracket 1,d\rrbracket^{2},\;Q_{i,j}(\sigma+\varepsilon F_{ \perp})=Q_{i,j}(\sigma)+\varepsilon Q_{i,j}(F_{\perp})>\min_{(i,j)\in \llbracket 1,d\rrbracket^{2}}Q_{i,j}(\sigma)-\varepsilon\max_{(i,j)\in \llbracket 1,d\rrbracket^{2}}|Q_{i,j}(F_{\perp})|\;.\] Thus, there exists an \(\varepsilon_{1}>0\) such that for \(\varepsilon\in[0,\varepsilon_{1}]\), \(\sigma+\varepsilon F_{\perp}\in V_{\mathrm{KD+}}\). Here, we recall that \(V_{\mathrm{KD+}}\) is the set of self-adjoint operators with positive KD distributions. 
Moreover, for \(|\psi\rangle\in\mathcal{H}_{1}\), for all \(\varepsilon\in\mathbb{R}^{+}\), \[\langle\psi|\sigma+\varepsilon F_{\perp}|\psi\rangle\geqslant\langle\psi|\sigma|\psi\rangle-\varepsilon\max_{|\phi\rangle\in\mathcal{H}_{1}}|\langle\phi|F_{\perp}|\phi\rangle|\] and \[\begin{array}{rcl}\langle\psi|\sigma|\psi\rangle=\sum_{i=1}^{d}\lambda_{i}\,|\langle\psi|a_{i}\rangle|^{2}+\mu_{i}\,|\langle\psi|b_{i}\rangle|^{2}&\geqslant&\min_{i\in\llbracket 1,d\rrbracket}\{\lambda_{i},\mu_{i}\}\sum_{i=1}^{d}|\langle\psi|a_{i}\rangle|^{2}+|\langle\psi|b_{i}\rangle|^{2}\\ &\geqslant&2\min_{i\in\llbracket 1,d\rrbracket}\{\lambda_{i},\mu_{i}\}>0.\end{array}\] Consequently, there exists an \(\varepsilon_{2}>0\) such that for \(\varepsilon\in[0,\varepsilon_{2}]\), for all \(|\psi\rangle\in\mathcal{H}_{1}\), \(\langle\psi|\sigma+\varepsilon F_{\perp}|\psi\rangle\geqslant 0\). Therefore, for \(\varepsilon\in[0,\min(\varepsilon_{1},\varepsilon_{2})]\), \(\sigma+\varepsilon F_{\perp}\) is a density matrix with a positive KD distribution, so that \(0<x_{+}(\sigma)\). By changing \(\varepsilon\) to \(-\varepsilon\) in the previous lines, it follows that \(x_{-}(\sigma)<0\). (ii) If \(\sigma\notin\mathrm{conv}\,(\mathcal{A}\cup\mathcal{B})\), then by Lemma 3.1, \(\sigma\notin\mathcal{E}_{\mathrm{KD+}}\) and thus, either \(0<x_{-}(\sigma)\) or \(0>x_{+}(\sigma)\). (iii) Suppose \(\sigma=|a_{1}\rangle\langle a_{1}|\), then \(\mathrm{Tr}((\sigma+xF_{\perp})^{2})=1+x^{2}\). For \(x\neq 0\), \(\mathrm{Tr}((\sigma+xF_{\perp})^{2})>1\), implying that \(\sigma+xF_{\perp}\) is not a state. Hence, \(\sigma+xF_{\perp}\notin\mathcal{E}_{\mathrm{KD+}}\) for all \(x\neq 0\). Thus, \(x_{+}(\sigma)=0=x_{-}(\sigma)\). (iv) As \(\mathcal{E}_{\mathrm{KD+}}\) is a compact set, \(\mathrm{T}_{\perp}\) is bounded and attains its bounds on \(\mathcal{E}_{\mathrm{KD+}}\). In particular, it attains its maximum \(x_{\mathrm{max}}\), which is strictly positive.
Note that \[\mathcal{Y}_{+}=\{\rho\in\mathcal{S}_{d,+,1}\mid\mathrm{T}_{\perp}(\rho)=\mathrm{Tr}(F_{\perp}\rho)=x_{\mathrm{max}}\}\cap\mathcal{E}_{\mathrm{KD+}}. \tag{B.1}\] Thus, \(\mathcal{Y}_{+}\) is compact, convex and not empty, so it has an extreme point. Let \(\rho_{e}\) be such an extreme point. We show, by contradiction, that \(\rho_{e}\) is also an extreme point of \(\mathcal{E}_{\mathrm{KD+}}\). Suppose that \(\rho_{e}\) is not, and write \(\rho_{e}=\lambda\rho_{1}+(1-\lambda)\rho_{2}\) with \(\rho_{1},\rho_{2}\in\mathcal{E}_{\mathrm{KD+}}\) and \(\lambda\in(0,1)\). So, \(\mathrm{T}_{\perp}(\rho_{1})\leqslant x_{\mathrm{max}}\) and \(\mathrm{T}_{\perp}(\rho_{2})\leqslant x_{\mathrm{max}}\). Now, suppose \(\mathrm{T}_{\perp}(\rho_{1})<x_{\mathrm{max}}\). Then, \(\mathrm{T}_{\perp}(\rho_{e})<x_{\mathrm{max}}\), which is a contradiction. Thus, \(\mathrm{T}_{\perp}(\rho_{1})=\mathrm{T}_{\perp}(\rho_{2})=x_{\mathrm{max}}\), which shows that \(\rho_{1},\rho_{2}\in\mathcal{Y}_{+}.\) As \(\rho_{e}\) is an extreme point of \(\mathcal{Y}_{+}\), \(\rho_{1}=\rho_{2}=\rho_{e}\), and so \(\rho_{e}\) is an extreme point of \(\mathcal{E}_{\mathrm{KD+}}\). (v) The proof is analogous to that of (iv). (vi) We show that \(x_{+}\) is concave on its domain of definition. As \(\mathcal{D}\) is the projection of a compact convex set, it is a compact convex set. Take \(\sigma,\sigma^{\prime}\in\mathcal{D}\) and \(\lambda\in[0,1]\). We will show that \(x_{+}(\lambda\sigma+(1-\lambda)\sigma^{\prime})\geqslant\lambda x_{+}(\sigma)+(1-\lambda)x_{+}(\sigma^{\prime})\).
We have that \[\lambda\sigma+(1-\lambda)\sigma^{\prime}+\left[\lambda x_{+}(\sigma)+(1- \lambda)x_{+}(\sigma^{\prime})\right]F_{\perp}=\lambda(\sigma+x_{+}(\sigma)F_ {\perp})+(1-\lambda)(\sigma^{\prime}+x_{+}(\sigma^{\prime})F_{\perp}).\] As a convex combination of KD-positive states, it is itself a KD-positive state, so that \(\lambda x_{+}(\sigma)+(1-\lambda)x_{+}(\sigma^{\prime})\in[x_{-}(\lambda\sigma +(1-\lambda)\sigma^{\prime}),x_{+}(\lambda\sigma+(1-\lambda)\sigma^{\prime})]\). Therefore, \(\lambda x_{+}(\sigma)+(1-\lambda)x_{+}(\sigma^{\prime})\leqslant x_{+}(\lambda \sigma+(1-\lambda)\sigma^{\prime})\). Thus, \(x_{+}\) is a concave function on \(\mathcal{D}\). It is then continuous on \(\mathrm{Int}(\mathcal{D})\) [58]. The same argument shows that \(x_{-}\) is convex and thus also continuous on \(\mathrm{Int}(\mathcal{D})\).

### Identifying \(\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}\): an example of mixed extreme states

We now identify some extreme mixed states of \(\mathcal{E}_{\mathrm{KD+}}\) for the unitary matrix \[U=\frac{1}{3}\begin{pmatrix}-1&2&2\\ 2&-1&2\\ 2&2&-1\end{pmatrix},\] (B.2) introduced in Section 5.2, using Proposition B.2 (iv). Note that we identified all pure KD-positive states for \(U\) in Section 5.2 and that they all lie below \(\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\): if \(\rho\in\mathcal{E}_{\mathrm{KD+}}^{\mathrm{pure}}\), then \(\mathrm{Tr}\left(\rho F_{\perp}\right)\leq 0\). So any \(\rho\in\mathcal{E}_{\mathrm{KD+}}^{\mathrm{ext}}\) for which \(\mathrm{Tr}\left(\rho F_{\perp}\right)>0\) is a mixed extreme KD-positive state. We explicitly find some of those states as follows. We first determine in Lemma B.3 the maximum \(x_{\mathrm{max}}\) of the function \(x_{+}\), which is strictly positive. This allows us to give a precise description of the set \(\mathcal{Y}_{+}\) in Proposition B.4, and in particular of its extreme points.

**Lemma B.3**.: _Let \(U\) be as in Eq. (B.2). 
Then, the maximum \(x_{\mathrm{max}}\) of \(x_{+}\) on \(\mathcal{D}\) is_ \[x_{\mathrm{max}}=\max_{\sigma\in\mathcal{D}}x_{+}(\sigma)=\frac{1}{2\sqrt{6}}.\] _This value is reached for \(\sigma=\frac{1}{3}\mathbb{I}_{3}\in\mathrm{conv}\left(\mathcal{A}\cup\mathcal{ B}\right)\)._ Proof.: We set \(\sigma\in\mathrm{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\) with \(\mathrm{Tr}\left(\sigma\right)=1\) so that \[\sigma=\sum_{i=1}^{3}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i} \rangle\langle b_{i}|\text{ and }\sum_{i=1}^{3}\lambda_{i}+\mu_{i}=1.\] For all \(x\in\mathbb{R}\), we compute \[Q(\sigma+xF_{\perp})=\frac{1}{27}\begin{pmatrix}3(\mu_{1}+\lambda_{1})-2x \sqrt{6}&12(\mu_{2}+\lambda_{1})+x\sqrt{6}&12(\mu_{3}+\lambda_{1})+x\sqrt{6} \\ 12(\mu_{1}+\lambda_{2})+x\sqrt{6}&3(\mu_{2}+\lambda_{2})-2x\sqrt{6}&12(\mu_{3}+ \lambda_{2})+x\sqrt{6}\\ 12(\mu_{1}+\lambda_{3})+x\sqrt{6}&12(\mu_{2}+\lambda_{3})+x\sqrt{6}&3(\mu_{3}+ \lambda_{3})-2x\sqrt{6}\end{pmatrix}.\] Thus, if \[3(\mu_{1}+\lambda_{1})-2x\sqrt{6}<0\text{ or }3(\mu_{2}+\lambda_{2})-2x\sqrt{6}<0 \text{ or }3(\mu_{3}+\lambda_{3})-2x\sqrt{6}<0,\] or equivalently, if \[x>\frac{3}{2\sqrt{6}}\min_{i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i}),\] then \(\sigma+xF_{\perp}\) is not KD-positive. Hence, since \(\sigma+x_{+}(\sigma)F_{\perp}\) is KD-positive, \[x_{+}(\sigma)\leqslant\frac{3}{2\sqrt{6}}\min_{i\in[\![1,3]\!]}(\mu_{i}+ \lambda_{i}).\] (B.3) Thus, \[x_{+}(\sigma)\leqslant\frac{3}{2\sqrt{6}}\inf_{\{\lambda_{i},\mu_{i}\}}\min_{ i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i}),\] where the infimum is taken over all the \((\lambda_{i},\mu_{i})_{i\in[\![1,3]\!]}\) such that \(\sigma=\sum_{i=1}^{3}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i} \rangle\langle b_{i}|\). 
Moreover, \(\min_{i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i})\leqslant\frac{1}{3}\) for any \(\sigma=\sum_{i=1}^{3}\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i} \rangle\langle b_{i}|\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup \mathcal{B})\) with \(\operatorname{Tr}\left(\sigma\right)=1\). Indeed, suppose there exists a \(\sigma\) such that \(\min_{i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i})>\frac{1}{3}\); then \(\operatorname{Tr}(\sigma)\geqslant 3\min_{i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i})>1\), which is a contradiction. Thus, \[\inf_{\{\lambda_{i},\mu_{i}\}}\min_{i\in[\![1,3]\!]}(\mu_{i}+\lambda_{i}) \leqslant\frac{1}{3}.\] Therefore, \[x_{+}(\sigma)\leqslant\frac{1}{2\sqrt{6}}.\] Consequently, as the bound does not depend on \(\sigma\), we obtain that \[x_{\max}\leqslant\frac{1}{2\sqrt{6}}.\] It is proven, in Section 5.2, that \(x_{+}(\frac{1}{3}\mathbb{I}_{3})=\frac{1}{2\sqrt{6}}\). Consequently, \(x_{\max}=\frac{1}{2\sqrt{6}}\). 

**Proposition B.4**.: _The set \(\mathcal{Y}_{+}=\mathcal{E}_{\mathrm{KD}+}\cap\{\rho\in\mathcal{S}_{d,+,1}\mid \operatorname{Tr}(\rho F_{\perp})=\frac{1}{2\sqrt{6}}\}\) is of the form_ \[\left\{\frac{1}{3}\mathbb{I}_{3}+\frac{1}{2\sqrt{6}}F_{\perp}+\lambda_{1}(|a _{1}\rangle\langle a_{1}|-|b_{1}\rangle\langle b_{1}|)+\lambda_{2}(|a_{2} \rangle\langle a_{2}|-|b_{2}\rangle\langle b_{2}|)\right\},\] _with \(|\lambda_{2}-\lambda_{1}|\leqslant\frac{3}{8}\), \(|\lambda_{1}|\leqslant\frac{3}{8}\) and \(|\lambda_{2}|\leqslant\frac{3}{8}\). Its extreme points are obtained for the following values of \((\lambda_{1},\lambda_{2})\):_ \[\left\{\left(0,\frac{3}{8}\right),\left(0,-\frac{3}{8}\right),\left(\frac{3}{8 },0\right),\left(-\frac{3}{8},0\right),\left(\frac{3}{8},\frac{3}{8}\right), \left(-\frac{3}{8},-\frac{3}{8}\right)\right\}.\] (B.4) Proof.: Let \(\rho\in\mathcal{Y}_{+}\). Then there exists \(\sigma\in\mathcal{D}\) so that \(\rho=\sigma+\frac{1}{2\sqrt{6}}F_{\perp}\). In addition, \(x_{+}(\sigma)=\frac{1}{2\sqrt{6}}\). Then, Eq. 
(B.3) implies that there exist \(\lambda_{i},\mu_{i}\in\mathbb{R}\) so that \[\sigma=\sum_{i}(\lambda_{i}|a_{i}\rangle\langle a_{i}|+\mu_{i}|b_{i}\rangle \langle b_{i}|)\] and so that \(\min_{i\in[\![1,3]\!]}\mu_{i}+\lambda_{i}=\frac{1}{3}\). Indeed, if such a decomposition does not exist, as \(\sigma\in\operatorname{span}_{\mathbb{R}}(\mathcal{A}\cup\mathcal{B})\), all decompositions \[\sigma=\sum_{i}(\alpha_{i}|a_{i}\rangle\langle a_{i}|+\beta_{i}|b_{i}\rangle \langle b_{i}|)\] satisfy \(\min_{i\in\llbracket 1,3\rrbracket}(\alpha_{i}+\beta_{i})<\frac{1}{3}\). Thus, as shown in Eq (B.3), \[x_{+}(\sigma)\leqslant\frac{3}{2\sqrt{6}}\min_{i\in\llbracket 1,3\rrbracket}( \alpha_{i}+\beta_{i})<\frac{1}{2\sqrt{6}},\] which is a contradiction. Since \(\sum_{i}(\lambda_{i}+\mu_{i})=1\), this implies \(\lambda_{i}+\mu_{i}=\frac{1}{3}\) for \(i=1,2,3\). It follows that there exist \((\lambda_{i})_{i\in\llbracket 1,3\rrbracket}\) so that \[\sigma=\frac{1}{3}\mathbb{I}_{3}+\lambda_{1}(|a_{1}\rangle\langle a_{1}|-|b_{1 }\rangle\langle b_{1}|)+\lambda_{2}(|a_{2}\rangle\langle a_{2}|-|b_{2}\rangle \langle b_{2}|)+\lambda_{3}(|a_{3}\rangle\langle a_{3}|-|b_{3}\rangle\langle b _{3}|).\] As \(\sum_{i=1}^{3}|a_{i}\rangle\langle a_{i}|=\sum_{i=1}^{3}|b_{i}\rangle\langle b _{i}|\), we can simplify the expression to obtain \[\sigma=\frac{1}{3}\mathbb{I}_{3}+\lambda_{1}(|a_{1}\rangle\langle a_{1}|-|b_{ 1}\rangle\langle b_{1}|)+\lambda_{2}(|a_{2}\rangle\langle a_{2}|-|b_{2} \rangle\langle b_{2}|).\] (B.5) The KD distribution of \(\rho=\sigma+\frac{1}{2\sqrt{6}}F_{\perp}\) is \[Q(\sigma+\frac{1}{2\sqrt{6}}F_{\perp})=\frac{1}{27}\begin{pmatrix}0&4+12( \lambda_{1}-\lambda_{2})+\frac{1}{2}&4+12\lambda_{1}+\frac{1}{2}\\ 4-12(\lambda_{1}-\lambda_{2})+\frac{1}{2}&0&4+12\lambda_{2}+\frac{1}{2}\\ 4-12\lambda_{1}+\frac{1}{2}&4-12\lambda_{2}+\frac{1}{2}&0\end{pmatrix}.\] Since \(\rho\in\mathcal{Y}_{+}\), it is KD positive, which is equivalent to \[-4+12|\lambda_{1}|\leqslant\frac{1}{2},\ 
-4+12|\lambda_{2}|\leqslant\frac{1}{2},\ \text{and}\ -4+12|\lambda_{1}-\lambda_{2}|\leqslant\frac{1}{2},\] or \[|\lambda_{1}|\leqslant\frac{3}{8},|\lambda_{2}|\leqslant\frac{3}{8},\ \text{and}\ | \lambda_{1}-\lambda_{2}|\leqslant\frac{3}{8}.\] (B.6) The eigenvalues of \(\rho=\sigma+\frac{1}{2\sqrt{6}}F_{\perp}\) are: \[\left\{\frac{1}{4},\frac{3}{8}+\frac{1}{24}\sqrt{9+8^{3}(\lambda_{1}^{2}+ \lambda_{2}^{2}-\lambda_{1}\lambda_{2})},\frac{3}{8}-\frac{1}{24}\sqrt{9+8^{3} (\lambda_{1}^{2}+\lambda_{2}^{2}-\lambda_{1}\lambda_{2})}\right\}.\] Consequently, \(\rho=\sigma+\frac{1}{2\sqrt{6}}F_{\perp}\) is a positive operator if and only if \[\frac{3}{8}-\frac{1}{24}\sqrt{9+8^{3}(\lambda_{1}^{2}+\lambda_{2}^{2}-\lambda_ {1}\lambda_{2})}\geqslant 0.\] This condition is equivalent to \[\lambda_{1}^{2}+\lambda_{2}^{2}+(\lambda_{1}-\lambda_{2})^{2}\leqslant 2\left( \frac{3}{8}\right)^{2}.\] (B.7) We have therefore established that \(\rho\in\mathcal{Y}_{+}\) if and only if it can be written as \(\rho=\sigma+\frac{1}{2\sqrt{6}}F_{\perp}\) with \(\sigma\) as in Eq. (B.5) and with \(\lambda_{1},\lambda_{2}\) satisfying Eqs. (B.6)-(B.7). The set \(\mathcal{C}^{\prime}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}\mid\lambda_{1 }^{2}+\lambda_{2}^{2}+(\lambda_{1}-\lambda_{2})^{2}\leqslant 2\left(\frac{3}{8} \right)^{2}\}\) is bounded by an ellipse and is therefore convex. The set \(\mathcal{C}=\{(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}\mid|\lambda_{1}| \leqslant\frac{3}{8},|\lambda_{2}|\leqslant\frac{3}{8}\) and \(|\lambda_{1}-\lambda_{2}|\leqslant\frac{3}{8}\}\) is a convex hexagon. Its extreme points are identified to be those given in Eq. (B.4). Noting that these extreme points lie on the ellipse bounding \(\mathcal{C}^{\prime}\), we conclude that in fact \(\mathcal{C}\subsetneq\mathcal{C}^{\prime}\); see Fig. 2. This proves the proposition. 

## Appendix C Pure KD-positive states for MUBs

The pure KD-positive states of MUB bases can be characterized as follows. 
**Theorem C.1**.: _Suppose \(\mathcal{A}\) and \(\mathcal{B}\) are MUB bases. Then: (i) A pure state \(|\psi\rangle\) is KD positive iff \(n_{\mathcal{A}}(\psi)n_{\mathcal{B}}(\psi)=d\); (ii) If \(d\) is a prime number, then the only pure KD-positive states are the basis states._ This result is implicit in [59]. We provide a simple proof below. Note that (ii) follows directly from (i): when \(d\) is a prime number, \(n_{\mathcal{A}}(\psi)n_{\mathcal{B}}(\psi)=d\) forces \(n_{\mathcal{A}}(\psi)=1\) or \(n_{\mathcal{B}}(\psi)=1\), so the only pure KD-positive states of MUB bases are their basis states. This last result was proven for the DFT in [24], where the same result is also obtained for perturbations of MUB bases that are completely incompatible, a notion introduced in [23]. It is not known, to the best of our knowledge, whether, under the hypotheses of the theorem, there also exist mixed KD-positive states. Proof.: Suppose \(|\psi\rangle\) is a KD-positive state. By permuting the order and changing the phases of basis states, we can suppose that \[S_{\mathcal{A}}(\psi)=\llbracket 1,n_{\mathcal{A}}(\psi)\rrbracket,S_{ \mathcal{B}}(\psi)=\llbracket 1,n_{\mathcal{B}}(\psi)\rrbracket,\left( \left\langle a_{i}|\psi\right\rangle\right)_{i\in\llbracket 1,d\rrbracket} \in\left(\mathbb{R}^{+}\right)^{d}\text{ and }\left(\left\langle b_{j}|\psi \right\rangle\right)_{j\in\llbracket 1,d\rrbracket}\in\left(\mathbb{R}^{+}\right)^{d}.\] Here \(S_{\mathcal{A}}(\psi)=\{i\in\llbracket 1,d\rrbracket,\langle a_{i}|\psi \rangle\neq 0\}\) and \(n_{\mathcal{A}}(\psi)=\sharp S_{\mathcal{A}}(\psi)\). The same definitions hold for \(\mathcal{B}\). Hence, since \(U\) is the transition matrix for MUB bases and since \(Q(\psi)\in\left(\mathbb{R}^{+}\right)^{d^{2}}\), one concludes that \(\forall(i,j)\in\llbracket 1,n_{\mathcal{A}}(\psi)\rrbracket\times \llbracket 1,n_{\mathcal{B}}(\psi)\rrbracket,\langle a_{i}|b_{j}\rangle=\frac{1}{ \sqrt{d}}\). 
By construction, one has \[\forall i\in\llbracket 1,n_{\mathcal{A}}(\psi)\rrbracket,\langle a_{i}|\psi \rangle=\sum_{j=1}^{n_{\mathcal{B}}(\psi)}\langle a_{i}|b_{j}\rangle\langle b _{j}|\psi\rangle=\frac{1}{\sqrt{d}}\sum_{j=1}^{n_{\mathcal{B}}(\psi)} \langle b_{j}|\psi\rangle\] which is independent of \(i\). Similarly, \[\forall j\in\llbracket 1,n_{\mathcal{B}}(\psi)\rrbracket,\langle b_{j}|\psi \rangle=\sum_{i=1}^{n_{\mathcal{A}}(\psi)}\langle b_{j}|a_{i}\rangle\langle a _{i}|\psi\rangle=\frac{1}{\sqrt{d}}\sum_{i=1}^{n_{\mathcal{A}}(\psi)}\langle a _{i}|\psi\rangle\] which is independent of \(j\), so that \[\langle b_{1}|\psi\rangle=\frac{1}{\sqrt{d}}\sum_{i=1}^{n_{\mathcal{A}}(\psi )}\langle a_{i}|\psi\rangle=\frac{1}{d}\sum_{i=1}^{n_{\mathcal{A}}(\psi)} \sum_{j=1}^{n_{\mathcal{B}}(\psi)}\langle b_{j}|\psi\rangle=\frac{\langle b_{ 1}|\psi\rangle}{d}\sum_{i=1}^{n_{\mathcal{A}}(\psi)}\sum_{j=1}^{n_{\mathcal{B }}(\psi)}1=\frac{\langle b_{1}|\psi\rangle}{d}n_{\mathcal{A}}(\psi)n_{ \mathcal{B}}(\psi).\] As \(\langle b_{1}|\psi\rangle\neq 0\), one finds that \[n_{\mathcal{A}}(\psi)n_{\mathcal{B}}(\psi)=d.\] In particular, when \(d\) is prime, this implies that \(n_{\mathcal{A}}(\psi)=1\) or \(n_{\mathcal{B}}(\psi)=1\). In either case, \(|\psi\rangle\) is a basis state.
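The counting argument above can be checked numerically. The following sketch (ours, not part of the paper) builds the computational basis \(\mathcal{A}\) and the discrete-Fourier basis \(\mathcal{B}\) in dimension \(d=4\), which are mutually unbiased, and verifies that the state \((|a_{1}\rangle+|a_{3}\rangle)/\sqrt{2}\) satisfies \(n_{\mathcal{A}}(\psi)n_{\mathcal{B}}(\psi)=2\cdot 2=d\) and has a nonnegative KD distribution, with the convention \(Q(\psi)_{ij}=\langle b_{j}|a_{i}\rangle\langle a_{i}|\psi\rangle\langle\psi|b_{j}\rangle\) consistent with the matrices computed in Appendix B.

```python
import numpy as np

d = 4
A = np.eye(d, dtype=complex)  # a_i = A[:, i], the computational basis
# b_j = B[:, j], the discrete Fourier basis; <a_i|b_j> = exp(2*pi*1j*i*j/d)/sqrt(d)
B = np.exp(2j * np.pi * np.outer(range(d), range(d)) / d) / np.sqrt(d)

def kd_distribution(psi):
    """Q[i, j] = <b_j|a_i> <a_i|psi> <psi|b_j>."""
    return np.array([[(B[:, j].conj() @ A[:, i]) * (A[:, i].conj() @ psi)
                      * (psi.conj() @ B[:, j])
                      for j in range(d)] for i in range(d)])

psi = (A[:, 0] + A[:, 2]) / np.sqrt(2)            # n_A(psi) = 2 (0-based indices)
n_B = int(np.sum(np.abs(B.conj().T @ psi) > 1e-12))  # nonzero Fourier coefficients
Q = kd_distribution(psi)

print(n_B)                                        # 2, so n_A * n_B = 4 = d
print(bool(np.all(Q.real >= -1e-12)))             # True: KD-positive state
```

Running the same check for a prime dimension (say \(d=3\)) confirms that no superposition of basis states passes it, in line with point (ii) of the theorem.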
2309.09141
Event-based Compositional Reasoning of Information-Flow Security for Concurrent Systems
High assurance of information-flow security (IFS) for concurrent systems is challenging. A promising way for formal verification of concurrent systems is the rely-guarantee method. However, existing compositional reasoning approaches for IFS concentrate on language-based IFS. It is often not applicable for system-level security, such as multicore operating system kernels, in which secrecy of actions should also be considered. On the other hand, existing studies on the rely-guarantee method are basically built on concurrent programming languages, by which semantics of concurrent systems cannot be completely captured in a straightforward way. In order to formally verify state-action based IFS for concurrent systems, we propose a rely-guarantee-based compositional reasoning approach for IFS in this paper. We first design a language by incorporating ``Event'' into concurrent languages and give the IFS semantics of the language. As a primitive element, events offer an extremely neat framework for modeling system and are not necessarily atomic in our language. For compositional reasoning of IFS, we use rely-guarantee specification to define new forms of unwinding conditions (UCs) on events, i.e., event UCs. By a rely-guarantee proof system of the language and the soundness of event UCs, we have that event UCs imply IFS of concurrent systems. In such a way, we relax the atomicity constraint of actions in traditional UCs and provide a compositional reasoning way for IFS in which security proof of systems can be discharged by independent security proof on individual events. Finally, we mechanize the approach in Isabelle/HOL and develop a formal specification and its IFS proof for multicore separation kernels as a study case according to an industrial standard -- ARINC 653.
Yongwang Zhao, David Sanan, Fuyuan Zhang, Yang Liu
2023-09-17T02:57:05Z
http://arxiv.org/abs/2309.09141v1
# Event-based Compositional Reasoning of Information-Flow Security for Concurrent Systems

###### Abstract

High assurance of information-flow security (IFS) for concurrent systems is challenging. A promising way for formal verification of concurrent systems is the rely-guarantee method. However, existing compositional reasoning approaches for IFS concentrate on language-based IFS. It is often not applicable for system-level security, such as multicore operating system kernels, in which secrecy of actions should also be considered. On the other hand, existing studies on the rely-guarantee method are basically built on concurrent programming languages, by which semantics of concurrent systems cannot be completely captured in a straightforward way. In order to formally verify state-action based IFS for concurrent systems, we propose a rely-guarantee-based compositional reasoning approach for IFS in this paper. We first design a language by incorporating "Event" into concurrent languages and give the IFS semantics of the language. As a primitive element, events offer an extremely neat framework for modeling system and are not necessarily atomic in our language. For compositional reasoning of IFS, we use rely-guarantee specification to define new forms of unwinding conditions (UCs) on events, i.e., event UCs. By a rely-guarantee proof system of the language and the soundness of event UCs, we have that event UCs imply IFS of concurrent systems. In such a way, we relax the atomicity constraint of actions in traditional UCs and provide a compositional reasoning way for IFS in which security proof of systems can be discharged by independent security proof on individual events. Finally, we mechanize the approach in Isabelle/HOL and develop a formal specification and its IFS proof for multicore separation kernels as a study case according to an industrial standard - ARINC 653. 
Information-flow security, Noninterference, Compositional Reasoning, Rely-guarantee, Multicore, Separation Kernel, ARINC 653

## 1 Introduction

Information-flow security (IFS) [25] deals with the problem of preventing improper release and modification of information in complex systems. It has been studied at multiple levels of abstraction, such as the application level, the operating system level, and the hardware level. Nowadays critical and high-assurance systems are designed for multi-core architectures where multiple subsystems are running in parallel. For instance, recent microkernels like XtratuM [6] are shared-variable concurrent systems, where the scheduler and system services may be executed simultaneously on different cores of a processor. Information-flow security of concurrent systems is an increasingly important and challenging problem. Traditionally, language-based IFS [25] at the application level defines security policies of computer programs and concerns data confidentiality, i.e., preventing information leakage from _High_ variables to _Low_ ones. However, language-based IFS is often not applicable for system-level security, because (1) in many cases it is impossible to classify _High_ and _Low_ variables; (2) data confidentiality is a weak property and is not enough for system-level security; and (3) language-based IFS is not able to deal with intransitive policies straightforwardly. Therefore, state-action based IFS [24, 28], which can deal with data confidentiality and secrecy of actions together, is usually adopted in formal verification of microkernels [15], separation kernels [23, 27, 8, 32], and microprocessors [29]. The state-action based IFS is defined on a state machine, and the security proof is discharged by proving a set of unwinding conditions (UCs) [24] that examine individual transitions of the state machine. 
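To make the state-action setting concrete, here is a toy sketch (ours, purely illustrative; the machine, domains, and actions are hypothetical) of what unwinding conditions look like: a two-domain state machine with the policy that domain `hi` must not interfere with domain `lo`, on which local respect and step consistency are checked exhaustively over a finite state space.

```python
from itertools import product

# Hypothetical two-domain machine: state = (hi, lo); each action has a domain.
# Policy: every domain may interfere with itself, and lo may interfere with hi,
# but hi must NOT interfere with lo.
dom = {"inc_hi": "hi", "inc_lo": "lo"}
interferes = {("hi", "hi"), ("lo", "lo"), ("lo", "hi")}

def step(state, action):
    hi, lo = state
    return (hi + 1, lo) if action == "inc_hi" else (hi, lo + 1)

def equiv(u, s, t):
    # Observational equivalence for domain u: lo observes only the lo component.
    return s == t if u == "hi" else s[1] == t[1]

states = list(product(range(3), repeat=2))
domains = ("hi", "lo")

# Unwinding conditions, checked transition by transition:
#   local respect:    a step of a non-interfering action is invisible to u;
#   step consistency: u-equivalent states stay u-equivalent under any action.
for a, s, t, u in product(dom, states, states, domains):
    if (dom[a], u) not in interferes:
        assert equiv(u, s, step(s, a)), "local respect violated"
    if equiv(u, s, t):
        assert equiv(u, step(s, a), step(t, a)), "step consistency violated"
print("unwinding conditions hold")
```

The point of such conditions is exactly the one exploited later in the paper: security of the whole machine reduces to a local check on each individual transition.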
Although compositional reasoning of language-based IFS has been studied [17, 20], the lack of compositional reasoning of state-action based IFS prevents applying this approach to formally verifying large concurrent systems. The rely-guarantee method [12, 31] represents a fundamental compositional method for correctness proofs of concurrent systems with shared variables. However, the existing studies on the rely-guarantee method concentrate on concurrent programs (e.g. [16, 22, 31]), which are basically represented in imperative languages with extensions of concurrency. Concurrent systems are not just concurrent programs; for example, the occurrence of exceptions/interrupts from hardware is beyond the scope of programs. The existing languages and their associated proof systems do not provide a straightforward way to specify and reason about concurrent systems. Moreover, the formalization of concurrent programs in existing rely-guarantee methods is at the source-code level. Choosing the right level of abstraction instead of the low-level programs allows both precise information flow analysis and high-level programmability. Finally, IFS and its formal verification on multicore separation kernels are challenging. As an important sort of concurrent systems, multicore separation kernels establish an execution environment, which enables different criticality levels to share a common set of physical resources, by providing to their hosted applications spatial/temporal separation and controlled information flow. The security of separation kernels is usually achieved by the Common Criteria (CC) [21] evaluation, in which formal verification of IFS is mandated for high assurance levels. Although formal verification of IFS on monocore microkernels and separation kernels has been widely studied (e.g. [8, 10, 18, 19, 23, 27, 32]), to the best of our knowledge, there is no related work on compositional reasoning of IFS on multicore operating systems in the literature. 
To address the above problems, we propose a rely-guarantee-based compositional reasoning approach for verifying information-flow security of concurrent systems in this paper. We first propose an event-based concurrent language, \(\pi\)-Core, which combines elements of concurrent programming languages and system specification languages. In \(\pi\)-Core, an event system represents a single-processing system and is defined by a set of _events_, each of which defines the state transition that can occur under certain circumstances. A concurrent system is defined as a parallel event system on shared states, which is the parallel composition of event systems. Due to the shared states and concurrent execution of event systems, the execution of events in a parallel event system is in an interleaved manner. Then, we define the IFS semantics of \(\pi\)-Core, which includes IFS properties and an unwinding theorem to show that UCs examining small-step and atomic actions imply the IFS. In order to compositionally verify IFS of \(\pi\)-Core, we provide a rely-guarantee proof system for \(\pi\)-Core and prove its soundness. Next, we use rely-guarantee specification to define new forms of UCs on events, i.e., event UCs, which examine big-step, non-atomic events. A soundness theorem for event UCs shows that event UCs imply the small-step UCs, and thus the IFS. In such a way, we provide a compositional reasoning approach for IFS in which the security proof of a system can be discharged by local security proofs on events. In detail, we make the following contributions: * We propose an event-based language \(\pi\)-Core and its operational semantics by incorporating "Event" into concurrent programming languages. The language could be used to create formal specification of concurrent systems as well as to design and implement the system. Besides the semantics of the software parts, the behavior of the hardware parts of systems can be specified. 
* We define the IFS semantics of \(\pi\)-Core on a state machine, which is transformed from \(\pi\)-Core. A transition of the state machine represents an atomic execution step of a parallel event system. A set of IFS properties and small-step UCs are defined on the state machine. We prove an unwinding theorem, i.e., small-step UCs imply the IFS of concurrent systems. * We build a rely-guarantee proof system for \(\pi\)-Core and prove its soundness. This work is the first effort to study the rely-guarantee method for system-level concurrency in the literature. We provide proof rules for both parallel composition of event systems and nondeterministic occurrence of events. Although we use the proof system for compositional reasoning of IFS in this paper, it is possible to use the proof system for the functional correctness and safety of concurrent systems. * We propose a rely-guarantee-based approach to compositionally verifying IFS of \(\pi\)-Core. Based on the rely-guarantee specification of events, we define new forms of UCs on big-step and non-atomic events. We prove the soundness, i.e., event UCs imply the small-step UCs of \(\pi\)-Core, and thus the security. This work is the first effort to study compositional reasoning of state-action based IFS. * We formalize the \(\pi\)-Core language, the IFS semantics, the rely-guarantee proof system, and compositional reasoning of IFS in the Isabelle/HOL theorem prover 1. All results have been proved in Isabelle/HOL. We also create a concrete syntax for \(\pi\)-Core which is convenient to specify and verify concurrent systems. Footnote 1: The source files in Isabelle are available as supplementary material. The official web address will be available in the camera-ready version. * By the compositional approach and its implementation in Isabelle/HOL, we develop a formal specification and its IFS proof of multicore separation kernels according to the ARINC 653 standard. 
This work is the first effort to formally verify the IFS of multicore separation kernels in the literature. In the rest of this paper, we first give an informal overview in Section 2, which includes the background, problems and challenges in this work, and an overview of our approach. Then we define the \(\pi\)-Core language in Section 3 and its IFS semantics in Section 4. The rely-guarantee proof system is presented in Section 5. In Section 6, we discuss the rely-guarantee approach of IFS. The study case of multicore separation kernels is presented in Section 7. Finally we discuss related work and conclude in Section 8.

## 2 Informal Overview

In this section, we first present technical background, problems and challenges in this work. Then, we overview our approach.

### Background

_Rely-guarantee method._ Rely-guarantee [12, 31] is a compositional proof system that extends the specification of concurrent programs with rely and guarantee conditions. The two conditions are predicates over a pair of states and characterize, respectively, how the environment interferes with the program under execution and what the program guarantees to the environment. Therefore, the specification of a program is a quadruple \((p,R,G,q)\), where \(p\) and \(q\) are pre- and post-conditions, and \(R\) and \(G\) are rely and guarantee conditions. A program satisfies its specification if, given an initial state satisfying \(p\) and an environment whose transitions satisfy \(R\), each atomic transition made by the program satisfies \(G\) and the final state satisfies \(q\). A main benefit of this method is compositionality, i.e., the verification of large concurrent programs can be reduced to the independent verification of individual subprograms. _Information-flow security._ The notion _noninterference_ is introduced in [9] in order to provide a formal foundation for the specification and analysis of IFS policies. 
The idea is that a security domain \(u\) is noninterfering with a domain \(v\) if no action performed by \(u\) can influence the subsequent outputs seen by \(v\). Language-based IFS [25] defines security policies of programs and handles two-level domains: _High_ and _Low_. The variables of programs are assigned either _High_ or _Low_ labels. Security here concerns data confidentiality to prevent information leakage, i.e., variations of the _High_-level data should not cause a variation of the _Low_-level data. Intransitive policies [24] cannot be addressed by traditional language-based IFS [28]. This problem is solved in [24], where noninterference is defined in a state-action manner. The state-action based noninterference concerns the visibility of _actions_, i.e., the secrets that actions introduce in the system state. It is usually chosen for verifying system-level security, such as general purpose operating systems and separation kernels [18]. Language-based IFS is generalized to arbitrary multi-domain policies in [28] as a new state-action based notion _nonleakage_. In [28], nonleakage and the classical noninterference are combined as a new notion _noninfluence_, which considers both the data confidentiality and the secrecy of actions. These properties have been instantiated for operating systems in [18] and formally verified on the seL4 monocore microkernel [19].

### Problems and Challenges

_Rely-guarantee languages are not straightforward for systems._ The studies on the rely-guarantee method focus on compositional reasoning of concurrent programs. Hence, the languages used in rely-guarantee methods (e.g. [16, 22, 31]) basically extend imperative languages by parallel composition. The semantics of a system cannot be completely captured by these programming languages. For instance, interrupt handlers (e.g., system calls and scheduling) in microkernels are programmed in the C language. 
When and how these handlers are triggered is beyond the scope of the C language. However, it is necessary to capture this kind of system behavior for the security of microkernels. The languages in the rely-guarantee method do not provide a straightforward way to specify and verify such behavior in concurrent systems. Jones et al. [13] mention that incorporating "Actions" [4] or "Events" [2] into rely-guarantee can offer an extremely neat framework for modelling systems. On the other hand, nondeterminism is also necessary for system specification at higher abstraction levels, which is likewise not supported by languages in the rely-guarantee method. _Incorporating languages and state machines for IFS._ The rely-guarantee method defines a concurrent programming language and a set of proof rules w.r.t. semantics of the language. The rely/guarantee condition is a set of state pairs, where the action triggering the state transition is not taken into account. This mirrors language-based IFS, which defines security based on state traces. However, state-action based IFS is defined on a state machine and takes actions into account for secrecy of actions. Rely-guarantee-based compositional reasoning of state-action based IFS requires a connection between the programming language and the state machine. We must therefore relate program executions and rely/guarantee conditions to the actions. _Compositionality of state-action based IFS is unclear._ Language-based IFS concerns information leakage among state variables and is a weaker property than state-action based IFS. Compositional verification of language-based IFS has been studied (e.g. [17, 20]) before. As a strong security property, compositionality of state-action based IFS for concurrent systems is still unclear. The standard proof of state-action based IFS is discharged by proving a set of unwinding conditions that examine individual transitions of the system. 
Here, the individual transition is executed in an atomic manner. Directly applying the unwinding conditions to concurrent systems may lead to an explosion of the proof space due to the interleaving. The atomicity of actions on which unwinding conditions are defined has to be relaxed for compositional reasoning, such that unwinding conditions can be defined at a more coarse-grained level of granularity. _Verifying IFS of multicore microkernels is difficult._ Formal verification of IFS on monocore microkernels has been widely studied (e.g. [8, 10, 18, 19, 23, 27, 32]). IFS of seL4 assumes that interrupts are disabled in kernel mode to avoid in-kernel concurrency [19]. This assumption simplifies the security proof by only examining big-step actions (e.g., system calls and scheduling). In multicore microkernels, the kernel code is concurrently executed on different processor cores with shared memory. The verification approaches for monocore microkernels are not applicable to the multicore case.

### Our Approach

In order to provide a rely-guarantee proof system for concurrent systems, we first introduce _events_ into programming languages in the rely-guarantee method. An example of events in the concrete syntax is shown in Fig. 1. An event is actually a non-atomic and parametrized state transition of systems with a set of guard conditions to constrain the type and value of parameters and the current state. The body of an event defines the state transition and is represented by imperative statements. We provide a special parameter \(\kappa\) for events to indicate the execution context of an event, i.e., on which single-processing system the event is executing. For instance, \(\kappa\) could be used to indicate the current processor core in multicore systems. An event system represents the behavior of a single-processing system and has two forms of event composition, i.e., _event sequence_ and _event set_. The event sequence models the sequential execution of events. 
The event set models the nondeterministic occurrence of events, i.e., an event in this set can occur when its guard condition is satisfied. The parallel composition of event systems is fine-grained, since small-step actions in events are interleaved in the semantics of \(\pi\)-Core. This relaxes the atomicity constraint of events in other approaches (e.g. Event-B [2]). It is obvious that concurrent programs represented by the languages in [16, 22, 31] can be represented in \(\pi\)-Core too. State-action based IFS is defined and proved on a state machine. We construct a state machine from a parallel event system in \(\pi\)-Core. Each action of the machine is a small-step action of events. To relate small steps to actions, each transition rule in the operational semantics of \(\pi\)-Core carries an action label to indicate the kind of the transition. The action label shows the type of the action and in which event system the action executes. On the other hand, we add a new element, the event context, to the configurations in the semantics. The event context is a function indicating which event is currently executing in each event system. Then, IFS of \(\pi\)-Core is defined on the state machine. In this paper, we use two levels of unwinding conditions, i.e. small-step and event unwinding conditions. The small-step UCs examine small steps in events, which are atomic. The unwinding theorem shows that satisfaction of the small-step UCs implies security. This is the IFS semantics of \(\pi\)-Core, following traditional IFS. The problem of directly applying the unwinding theorem is the explosion of the proof space due to the interleaving and the fine granularity of the conditions. A solution is to enlarge the granularity to the event level, and thus we define the event UCs of \(\pi\)-Core. Since the guarantee condition of an event characterizes how the event modifies the environment, the event UCs are defined based on the guarantee conditions of events. 
Finally, the compositionality of state-action based IFS means that if all events defined in a concurrent system satisfy the event UCs and the system is closed, then the system is secure. We conclude this from the soundness of the event UCs, i.e., the event UCs imply the small-step UCs in \(\pi\)-Core.

## 3 The \(\pi\)-Core Language

This section introduces the \(\pi\)-Core language including its abstract syntax, operational semantics, and computations.

### Abstract Syntax

By introducing "Events" into concurrent programming languages, we create a language with four levels of elements, i.e., _programs_ represented by programming languages, _events_ constructed from programs, _event systems_ composed of events, and _parallel event systems_ composed of event systems.

Figure 1: An Example of Event

The abstract syntax of \(\pi\)-Core is shown in Fig. 2. The syntax of programs is intuitive and is used to describe the behavior of events. The **Basic** \(f\) command represents an atomic state transformation, for example an assignment or the **Skip** command. The **Await** \(b\) \(P\) command executes program \(P\) atomically whenever the boolean condition \(b\) holds. The **Nondt** \(r\) command defines the potential next states via the state relation \(r\); it can be used to model nondeterministic choice. The rest are well known. An event is actually a parametrized program representing a state change of an event system. In an event, \(\alpha\) with the type \((p\times\mathcal{K})\rightarrow(g\times P)\) is an event specification, where \(p\) is the parameters, \(\mathcal{K}\) indicates the label of an event system, \(g\) is the guard condition of the event, and \(P\) is a program which is the body of the event. An event **BasicEvt** \(\alpha\) can occur under concrete parameters \(p\) in event system \(\kappa\) when its guard condition (i.e. \(fst(\alpha(p,\kappa))\)) is true in the current state. Then, it behaves as the anonymous event **AnonyEvt** \((snd(\alpha(p,\kappa)))\). 
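The occurrence rule for **BasicEvt** can be read operationally: evaluate the guard \(fst(\alpha(p,\kappa))\) in the current state and, if it holds, continue as the anonymous event wrapping the body \(snd(\alpha(p,\kappa))\). A minimal Python sketch; the concrete event, its state encoding, and all names here are hypothetical illustrations, not part of \(\pi\)-Core:

```python
# Sketch: an event specification alpha maps (parameters, kappa) to a
# (guard, body) pair, mirroring alpha : (p x K) -> (g x P).
def alpha(p, kappa):
    guard = lambda s: s["owner"][p] == kappa        # g = fst(alpha(p, kappa))
    body = lambda s: {**s, "val": s["val"] + 1}     # P = snd(alpha(p, kappa))
    return guard, body

def occur_basic_evt(alpha, p, kappa, s):
    """BasicEvt alpha may occur under parameters p in event system kappa
    only when its guard holds in state s; it then behaves as the
    anonymous event wrapping the body."""
    g, body = alpha(p, kappa)
    return ("AnonyEvt", body) if g(s) else None     # None: guard is false
```

In this toy guard, the event can only occur on the \(\kappa\) that "owns" slot \(p\); on any other execution context the guard is false and the event cannot occur.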
An anonymous event is actually a wrapper of a program, representing the intermediate specification during the execution of events. An event system indeed constitutes a kind of state transition system. It has two forms of event composition, i.e. _event sequence_ and _event set_. For an event set, when the guard conditions of some events are true, one of the corresponding events necessarily occurs and the state is modified accordingly. When the occurred event has finished, the guard conditions are checked again, and so on. For an event sequence \(\mathcal{E};\mathcal{S}\), when the guard condition of event \(\mathcal{E}\) is true, \(\mathcal{E}\) necessarily occurs and the state is modified accordingly; afterwards it behaves as the event system \(\mathcal{S}\). A concurrent system is modeled by a parallel event system, which is the parallel composition of event systems. The parallel composition is a function from \(\mathcal{K}\) to event systems. Note that it is not mandatory for a model to eventually terminate; as a matter of fact, most of the systems we study run forever. We introduce an auxiliary function to query all events defined in event systems and parallel event systems as follows. \[\begin{cases}evts(\mathcal{E}_{0}\ \oplus\ \mathcal{E}_{1}\ \oplus\ ...\ \oplus\ \mathcal{E}_{n})=\{\mathcal{E}_{0},\mathcal{E}_{1},...,\mathcal{E}_{n}\}\\ evts(\mathcal{E};\mathcal{S})=\{\mathcal{E}\}\cup evts(\mathcal{S})\\ evts(\mathcal{PS})=\bigcup_{\kappa}evts(\mathcal{PS}(\kappa))\end{cases}\]

### Operational Semantics

Semantics of \(\pi\)-Core is defined via transition rules between configurations. A configuration \(\mathcal{C}\) is defined as a triple \((\sharp,s,x)\), where \(\sharp\) is a specification (e.g., a program, an event, an event system, or a parallel event system), \(s\) is a state, and \(x:\mathcal{K}\rightarrow\mathcal{E}\) is an event context. The event context indicates which event is currently executed in each event system. 
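The \(evts\) query and the configuration triple admit a direct encoding. A sketch, assuming tagged tuples for the two forms of event composition and a dict for the parallel composition (these encodings are illustrative choices, not the paper's notation):

```python
from typing import NamedTuple, Any

# Sketch: event systems as tagged tuples,
#   ("set", [e1, ..., en])  for the event set  e1 (+) ... (+) en
#   ("seq", e, rest)        for the event sequence  e ; rest
# and a parallel event system as a dict from kappa to event systems.
def evts(es):
    if es[0] == "set":
        return set(es[1])
    if es[0] == "seq":
        return {es[1]} | evts(es[2])
    raise ValueError("unknown event system form")

def evts_par(ps):
    # union of the events of all event systems in the parallel composition
    return set().union(*(evts(s) for s in ps.values()))

class Config(NamedTuple):
    spec: Any   # a program, event, event system, or parallel event system
    s: Any      # the state
    x: dict     # event context: kappa -> currently executing event
```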
We use \(\sharp_{\mathcal{C}}\), \(s_{\mathcal{C}}\), and \(x_{\mathcal{C}}\) to denote the three parts of a configuration \(\mathcal{C}\) respectively. A system can perform two kinds of transitions: _action transitions_, performed by the system itself, and _environment transitions_, performed by a different system of the parallel composition or by an arbitrary environment. A transition rule of actions has the form \((\sharp_{1},s_{1},x_{1})\overset{\delta}{\longrightarrow}(\sharp_{2},s_{2},x_{2})\), where \(\delta=t@\kappa\) is a label indicating the kind of the transition. Here \(t:=c\ |\ \mathcal{E}\), where \(c\) is a program action and \(\mathcal{E}\) is the occurrence of event \(\mathcal{E}\); \(@\kappa\) means that the action \(\delta\) occurs in event system \(\kappa\). A rule of environment transitions has the form \((\sharp,s,x)\overset{e}{\longrightarrow}(\sharp,s^{\prime},x^{\prime})\), where \(e\) is the label of environment transitions. Intuitively, a transition made by the environment may change the state and the event context but not the specification. Transition rules of actions are shown in Fig. 3. The transition rules of programs are mostly standard. The \(\overset{c^{*}}{\longrightarrow}\) in the Await rule is the reflexive transitive closure of \(\overset{c}{\longrightarrow}\). Program actions modify the state but not the event context. The execution of **AnonyEvt** \(P\) mimics program \(P\). The BasicEvt rule shows the occurrence of an event: the currently executing event of event system \(\kappa\) in the event context is updated. The EvtSet, EvtSeq1, and EvtSeq2 rules mean that when an event occurs in an event set, the event executes until it finishes in the event system. The Par rule shows that the execution of a parallel event system is modeled by a nondeterministic interleaving of the atomic executions of its event systems. 
\(\mathcal{PS}(\kappa\mapsto\mathcal{S}^{\prime})\) is the function derived from \(\mathcal{PS}\) by mapping \(\kappa\) to \(\mathcal{S}^{\prime}\).

### Computation

A _computation_ of \(\pi\)-Core is a sequence of transitions of the form \[\mathcal{C}_{0}\overset{\mathbf{t}_{0}}{\longrightarrow}\mathcal{C}_{1}\overset{\mathbf{t}_{1}}{\longrightarrow}...\overset{\mathbf{t}_{n-1}}{\longrightarrow}\mathcal{C}_{n}\overset{\mathbf{t}_{n}}{\longrightarrow}...,\ (where\ \mathbf{t}:=\delta\ |\ e)\] We define the set of computations of parallel event systems, \(\Psi_{\mathcal{PS}}\), as the set of lists of configurations inductively defined as follows, where \(\#\) is the concatenation operator on lists. A one-element list of configurations is always a computation. Two consecutive configurations are part of a computation if they are the initial and final configurations of an environment or action transition. \[\begin{cases}[(\mathcal{PS},s,x)]\in\Psi_{\mathcal{PS}}\\ (\mathcal{PS},s_{1},x_{1})\#cs\in\Psi_{\mathcal{PS}}\\ \qquad\qquad\Longrightarrow(\mathcal{PS},s_{2},x_{2})\#(\mathcal{PS},s_{1},x_{1})\#cs\in\Psi_{\mathcal{PS}}\\ (\mathcal{PS}_{2},s_{2},x_{2})\overset{\delta}{\longrightarrow}(\mathcal{PS}_{1},s_{1},x_{1})\wedge(\mathcal{PS}_{1},s_{1},x_{1})\#cs\in\Psi_{\mathcal{PS}}\\ \qquad\qquad\Longrightarrow(\mathcal{PS}_{2},s_{2},x_{2})\#(\mathcal{PS}_{1},s_{1},x_{1})\#cs\in\Psi_{\mathcal{PS}}\end{cases}\] The computations of programs, events, and event systems are defined in a similar way. We use \(\Psi(\mathcal{PS})\) to denote the set of computations of a parallel event system \(\mathcal{PS}\). 
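The inductive cases above can be rephrased as a membership check on a finite list of configurations: adjacent configurations must be related either by an environment transition (which preserves the specification but may change state and event context) or by an action transition of the semantics. A sketch, with action transitions supplied as an oracle predicate:

```python
# Sketch: cfgs is a list of (spec, state, event_context) triples with the
# first configuration first; is_action_step(c1, c2) is an oracle for the
# action transitions of the operational semantics.
def is_computation(cfgs, is_action_step):
    if len(cfgs) == 1:
        return True                      # a one-element list is a computation
    c1, c2 = cfgs[0], cfgs[1]
    env_step = c1[0] == c2[0]            # environment: spec unchanged,
                                         # state and context arbitrary
    return (env_step or is_action_step(c1, c2)) \
        and is_computation(cfgs[1:], is_action_step)
```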
The function \(\Psi(\mathcal{PS},s,x)\) denotes the computations of \(\mathcal{PS}\) executing from an initial state \(s\) and event context \(x\). The computations of programs, events, and event systems are also denoted by the \(\Psi\) function. For each computation \(\varpi\in\Psi(\mathcal{PS})\), we use \(\varpi_{i}\) to denote the configuration at index \(i\). For convenience, we use \(\varpi\) to denote computations of programs, events, and event systems too. We say that a parallel event system \(\mathcal{PS}\) is a _closed system_ when there is no environment transition in the computations of \(\mathcal{PS}\). We define an equivalence relation on computations as follows. Here, we are concerned with the states, event contexts, and transitions, but not the specifications of configurations.

Figure 2: Abstract Syntax of the \(\pi\)-Core Language

**Definition 1** (**Simulation of Computations**). A computation \(\varpi_{1}\) is a simulation of \(\varpi_{2}\), denoted as \(\varpi_{1}\asymp\varpi_{2}\), if

* \(len(\varpi_{1})=len(\varpi_{2})\)
* \(\forall i<len(\varpi_{1})-1.\ s_{\varpi_{1_{i}}}=s_{\varpi_{2_{i}}}\wedge x_{\varpi_{1_{i}}}=x_{\varpi_{2_{i}}}\wedge(\varpi_{1_{i}}\overset{\delta}{\longrightarrow}\varpi_{1_{i+1}})=(\varpi_{2_{i}}\overset{\delta}{\longrightarrow}\varpi_{2_{i+1}})\)

## 4 Information-flow Security of \(\pi\)-Core

This section discusses state-action based IFS of the \(\pi\)-Core language. We consider the security of parallel event systems that are closed. We first introduce the security policies. Then, we construct a state machine from \(\pi\)-Core. Based on the state machine, we present the security properties and the unwinding theorem.

### IFS Configuration

In order to discuss the security of a parallel event system \(\mathcal{PS}\), we assume a set of security domains \(\mathcal{D}\) and a security policy \(\leadsto\) that restricts the allowable flow of information among those domains. 
The security policy \(\leadsto\) is a reflexive relation on \(\mathcal{D}\). \(d_{1}\leadsto d_{2}\) means that actions performed by \(d_{1}\) can influence subsequent outputs seen by \(d_{2}\). \(\not\leadsto\) is the complement relation of \(\leadsto\). We call \(\leadsto\) and \(\not\leadsto\) the _interference_ and _noninterference_ relations respectively. Each event has an execution domain. Traditional formulations of state-action based IFS assume a static mapping from events to domains, such that the domain of an event can be determined solely from the event itself [24, 28]. For flexibility, we use a dynamic mapping, represented by a function \(dom\_e:S\times\mathcal{K}\times\mathcal{E}\to\mathcal{D}\), where \(S\) is the set of system states. The \(\mathcal{PS}\) is _view-partitioned_ if, for each domain \(d\in\mathcal{D}\), there is an equivalence relation \(\overset{d}{\sim}\) on \(S\). For convenience, we define \(\mathcal{C}_{1}\overset{d}{\sim}\mathcal{C}_{2}\triangleq s_{\mathcal{C}_{1}}\overset{d}{\sim}s_{\mathcal{C}_{2}}\). The observation of a domain \(d\) in a state \(s\) is given by an observation function \(ob(s,d)\). For convenience, we define \(ob(\mathcal{C},d)\triangleq ob(s_{\mathcal{C}},d)\).

### State Machine Representation of \(\pi\)-Core

The IFS semantics of \(\pi\)-Core considers small-step actions of systems. A small-step action in the machine is identified by the label of a transition, the event that the action belongs to, and the domain that triggers the event. We construct a nondeterministic state machine for a parallel event system as follows.

**Definition 2**. A state machine of a closed \(\mathcal{PS}\) executing from an initial state \(s_{0}\) and an initial event context \(x_{0}\) is a quadruple \(\mathcal{M}=\langle\Delta,A,step,\mathcal{C}_{0}\rangle\), where

* \(\Delta\) is the set of configurations.
* \(A\) is the set of actions. 
An action is a triple \(a=\langle\delta,ev,d\rangle\), where \(\delta\) is a transition label, \(ev\) is an event, and \(d\) is a domain.
* \(step:A\rightarrow\mathbb{P}(\Delta\times\Delta)\) is the transition function, where \(step(a)=\{(\mathcal{C},\mathcal{C}^{\prime})\mid\mathcal{C}\overset{\delta_{a}}{\longrightarrow}\mathcal{C}^{\prime}\wedge((\delta_{a}=ev_{a}@\kappa\wedge dom\_e(s_{\mathcal{C}},\kappa,ev_{a})=d_{a})\vee(\delta_{a}=c@\kappa\wedge ev_{a}=x_{\mathcal{C}}(\kappa)\wedge dom\_e(s_{\mathcal{C}},\kappa,ev_{a})=d_{a}))\}\).
* \(\mathcal{C}_{0}=\langle\mathcal{PS},s_{0},x_{0}\rangle\) is the initial configuration.

Based on the function \(step\), we define the function \(run\) as shown in Fig. 4 to represent the execution of a sequence of actions. We prove the following lemma to ensure that the state machine is an equivalent representation of the \(\pi\)-Core language.

**Lemma 1**. The state machine defined in Definition 2 is an equivalent representation of \(\pi\)-Core, i.e.,

* If \((\mathcal{C}_{1},\mathcal{C}_{2})\in run(as)\), then \(\exists\varpi\in\Psi_{\mathcal{PS}}.\ \varpi_{0}=\mathcal{C}_{1}\wedge last(\varpi)=\mathcal{C}_{2}\wedge(\forall j<len(\varpi)-1.\ \varpi_{j}\overset{\delta_{as_{j}}}{\longrightarrow}\varpi_{j+1})\), and
* If \(\varpi\in\Psi_{\mathcal{PS}}\wedge\varpi_{0}=\mathcal{C}_{1}\wedge last(\varpi)=\mathcal{C}_{2}\wedge(\forall j<len(\varpi)-1.\ \neg(\varpi_{j}\overset{e}{\longrightarrow}\varpi_{j+1}))\), then \(\exists as.\ (\mathcal{C}_{1},\mathcal{C}_{2})\in run(as)\wedge(\forall j<len(\varpi)-1.\ \varpi_{j}\overset{\delta_{as_{j}}}{\longrightarrow}\varpi_{j+1})\).

Since we consider closed parallel event systems, there is no environment transition in the computations of \(\mathcal{PS}\), i.e., \(\forall j<len(\varpi)-1.\ \neg(\varpi_{j}\overset{e}{\longrightarrow}\varpi_{j+1})\). 
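Although Fig. 4's exact definition of \(run\) is not reproduced here, the usual reading is relational composition of \(step\) along the action sequence, with \(run\) of the empty sequence being the identity relation. A sketch over a finite configuration set (the toy step relation used below is hypothetical):

```python
# Sketch: step maps an action to a set of configuration pairs; run
# composes these relations in order, starting from the identity.
def run(actions, step, configs):
    rel = {(c, c) for c in configs}                  # run([]) = Id
    for a in actions:
        rel = {(c0, c2) for (c0, c1) in rel
                        for (d1, c2) in step(a) if c1 == d1}
    return rel
```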
### Information-flow Security Properties

We now discuss the IFS properties based on the state machine constructed above. Following the security properties in [28], we define the _noninterference_, _nonleakage_, and _noninfluence_ properties in this work. The auxiliary functions used by IFS are defined in detail in Fig. 4. The function \(execution(\mathcal{C},as)\) (denoted as \(\mathcal{C}\triangleright as\)) returns the set of final configurations obtained by executing a sequence of actions \(as\) from a configuration \(\mathcal{C}\), where \(\lhd\) is the domain restriction of a relation.

Figure 3: Operational Semantics of the \(\pi\)-Core Language

By the function \(execution\), the reachability of a configuration \(\mathcal{C}\) from the initial configuration \(\mathcal{C}_{0}\) is defined as \(reachable(\mathcal{C})\) (denoted as \(\mathcal{R}(\mathcal{C})\)). The essence of intransitive noninterference is that a domain \(d\) cannot distinguish the final states between executing a sequence of actions \(as\) and executing its purged sequence. In the intransitively purged sequence (\(ipurge(as,d)\) in Fig. 4), the actions of domains that are not allowed to pass information to \(d\), directly or indirectly, are removed. In order to express the allowed information flows for intransitive policies, we use a function \(sources(as,d)\), shown in Fig. 4, which yields the set of domains that are allowed to pass information to a domain \(d\) when an action sequence \(as\) executes. The observational equivalence of executions is denoted as \(\mathcal{C}_{1}\triangleright as_{1}\overset{d}{\simeq}\mathcal{C}_{2}\triangleright as_{2}\), which means that a domain \(d\) makes identical observations on any two final states after executing \(as_{1}\) from \(\mathcal{C}_{1}\) (\(\mathcal{C}_{1}\triangleright as_{1}\)) and executing \(as_{2}\) from \(\mathcal{C}_{2}\). The classical intransitive noninterference [24] is defined as the _noninterference_ property as follows. 
\[noninterference\triangleq\forall\;as,d.\;\mathcal{C}_{0}\triangleright as\overset{d}{\simeq}\mathcal{C}_{0}\triangleright ipurge(as,d)\] The above definition of noninterference is based on the initial configuration \(\mathcal{C}_{0}\), but concurrent systems usually support _warm_ or _cold start_ and may start to execute from a non-initial configuration. Therefore, we define a more general version, \(noninterference\_r\), as follows, based on the function \(reachable\). This general noninterference requires that the system be secure starting from any reachable configuration. It is obvious that this noninterference implies the classical noninterference since \(\mathcal{R}(\mathcal{C}_{0})=True\). \[noninterference\_r\triangleq\] \[\forall\;as,d,\mathcal{C}.\;\mathcal{R}(\mathcal{C})\longrightarrow\mathcal{C}\triangleright as\overset{d}{\simeq}\mathcal{C}\triangleright ipurge(as,d)\] The intuitive meaning of _nonleakage_ is that if data are not leaked initially, data should not be leaked during the execution of a sequence of actions. Concurrent systems are said to preserve nonleakage when, for any pair of reachable configurations \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) and an observing domain \(d\), if \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are equivalent for all domains that may (directly or indirectly) interfere with \(d\) during the execution of \(as\), i.e. \(\mathcal{C}_{1}\overset{sources(as,d)}{\approx}\mathcal{C}_{2}\), then \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are observationally equivalent for \(d\) and \(as\). Noninfluence is the combination of nonleakage and classical noninterference. Noninfluence ensures that there is no secret data leakage and that secret actions are not visible according to the information-flow security policies. The two security properties are defined as follows. We have that _noninfluence_ implies _noninterference_. 
\[nonleakage\triangleq \forall\,\mathcal{C}_{1},\mathcal{C}_{2},d,as.\;\mathcal{R}(\mathcal{C}_{1})\wedge\mathcal{R}(\mathcal{C}_{2})\] \[\longrightarrow\mathcal{C}_{1}\overset{sources(as,d)}{\approx}\mathcal{C}_{2}\longrightarrow\mathcal{C}_{1}\triangleright as\overset{d}{\simeq}\mathcal{C}_{2}\triangleright as\] \[noninfluence\triangleq\forall\,\mathcal{C}_{1},\mathcal{C}_{2},d,as.\;\mathcal{R}(\mathcal{C}_{1})\wedge\mathcal{R}(\mathcal{C}_{2})\] \[\longrightarrow\mathcal{C}_{1}\overset{sources(as,d)}{\approx}\mathcal{C}_{2}\longrightarrow\mathcal{C}_{1}\triangleright as\overset{d}{\simeq}\mathcal{C}_{2}\triangleright ipurge(as,d)\]

### Small-step Unwinding Conditions and Theorem

The standard proof of IFS is discharged by proving a set of unwinding conditions [24] that examine individual execution steps of the system. This paper also follows this approach. We first define the small-step unwinding conditions as follows.

**Definition 3** (**Observation Consistent - OC**). For a parallel event system \(\mathcal{PS}\), the equivalence relations \(\sim\) are said to be _observation consistent_ if \[\forall\mathcal{C}_{1},\mathcal{C}_{2},d.\;\mathcal{C}_{1}\overset{d}{\sim}\mathcal{C}_{2}\longrightarrow ob(\mathcal{C}_{1},d)=ob(\mathcal{C}_{2},d)\]

**Definition 4** (**Locally Respects - LR**). A parallel event system \(\mathcal{PS}\) locally respects \(\leadsto\) if \[\forall a,d,\mathcal{C}.\;\mathcal{R}(\mathcal{C})\longrightarrow d_{a}\not\leadsto d\longrightarrow\] \[(\forall\mathcal{C}^{\prime}.\;(\mathcal{C},\mathcal{C}^{\prime})\in step(a)\longrightarrow\mathcal{C}\overset{d}{\sim}\mathcal{C}^{\prime})\]

**Definition 5** (**Step Consistent - SC**). A parallel event system \(\mathcal{PS}\) is step consistent if \[\forall a,d,\mathcal{C}_{1},\mathcal{C}_{2}.\;\mathcal{R}(\mathcal{C}_{1})\wedge\mathcal{R}(\mathcal{C}_{2})\longrightarrow\mathcal{C}_{1}\overset{d}{\sim}\mathcal{C}_{2}\longrightarrow\] \[((d_{a}\leadsto d)\longrightarrow\mathcal{C}_{1}\overset{d_{a}}{\sim}\mathcal{C}_{2})\longrightarrow\] \[(\forall\mathcal{C}^{\prime}_{1},\mathcal{C}^{\prime}_{2}.\;(\mathcal{C}_{1},\mathcal{C}^{\prime}_{1})\in step(a)\wedge(\mathcal{C}_{2},\mathcal{C}^{\prime}_{2})\in step(a)\] \[\longrightarrow\mathcal{C}^{\prime}_{1}\overset{d}{\sim}\mathcal{C}^{\prime}_{2})\]

The locally respects condition means that an action \(a\) executed in a configuration \(\mathcal{C}\) can affect only those domains to which the domain executing \(a\) is allowed to send information. The step consistent condition says that the observation by a domain \(d\) after an action \(a\) occurs can depend only on \(d\)'s observation before \(a\) occurs, as well as on the observation by the domain executing \(a\) before \(a\) occurs, if that domain is allowed to send information to \(d\). We prove the small-step unwinding theorems for _noninfluence_ and _nonleakage_ as follows.

**Theorem 1** (**Unwinding Theorem of Noninfluence**). \[OC\wedge LR\wedge SC\Longrightarrow noninfluence\]

**Theorem 2** (**Unwinding Theorem of Nonleakage**). \[OC\wedge LR\wedge SC\Longrightarrow nonleakage\]

## 5 Rely-Guarantee Proof System for \(\pi\)-Core

For the purpose of compositional reasoning about IFS, we propose a rely-guarantee proof system for \(\pi\)-Core in this section. We first introduce the rely-guarantee specification and its validity. Then, a set of proof rules and their soundness for compositionality are discussed.

### Rely-Guarantee Specification

A rely-guarantee specification for a system is a quadruple \(RGCond=\langle pre,R,G,pst\rangle\), where \(pre\) is the pre-condition, \(R\) is the rely condition, \(G\) is the guarantee condition, and \(pst\) is the post-condition.

Figure 4: Auxiliary Functions of Information-flow Security

The assumption and commitment functions are defined in the standard way as follows. 
\[A(pre,R)\triangleq\{\varpi\mid s_{\varpi_{0}}\in pre\wedge(\forall i<len(\varpi)-1.\\ (\varpi_{i}\overset{e}{\longrightarrow}\varpi_{i+1})\longrightarrow(s_{\varpi_{i}},s_{\varpi_{i+1}})\in R)\}\] \[C(G,pst)\triangleq\{\varpi\mid(\forall i<len(\varpi)-1.\\ (\varpi_{i}\overset{\delta}{\longrightarrow}\varpi_{i+1})\longrightarrow(s_{\varpi_{i}},s_{\varpi_{i+1}})\in G)\\ \wedge(\sharp_{last(\varpi)}=\textbf{None}\longrightarrow s_{last(\varpi)}\in pst)\}\] For an event, the commitment function is similar, but with the condition \(\sharp_{last(\varpi)}=\textbf{AnonyEvt None}\). Since event systems and parallel event systems execute forever, their commitment function is defined as follows; we relax the condition on the final state. \[C(G,pst)\triangleq\{\varpi\mid(\forall i<len(\varpi)-1.\\ (\varpi_{i}\overset{\delta}{\longrightarrow}\varpi_{i+1})\longrightarrow(s_{\varpi_{i}},s_{\varpi_{i+1}})\in G)\}\] Validity of a rely-guarantee specification for a parallel event system means that the system satisfies the specification, which is precisely defined as follows. Validity for programs, events, and event systems is defined in a similar way.

**Definition 6** (**Validity of Rely-Guarantee Specification**). A parallel event system \(\mathcal{PS}\) satisfies its specification \(\langle pre,R,G,pst\rangle\), denoted as \(\models\mathcal{PS}\ \textbf{sat}\ \langle pre,R,G,pst\rangle\), iff \(\forall s,x.\ \Psi(\mathcal{PS},s,x)\cap A(pre,R)\subseteq C(G,pst)\).

### Proof Rules

We present the proof rules in Fig. 5, which give us a relational proof method for concurrent systems. \(UNIV\) is the universal set. The proof rules for programs are mostly standard [22, 31]. For **Nondt** \(r\), any state change in \(r\) requires that \(pst\) hold immediately after the action transition and that the transition be in the \(G\) relation. 
Before and after this action transition there may be a number of environment transitions; \(stable(pre,R)\) and \(stable(pst,R)\) ensure that \(pre\) and \(pst\) hold during any number of environment transitions in \(R\) before and after the action transition, respectively. An anonymous event is just a wrapper of a program, and they have the same states and event contexts in their computations according to the AnonyEvt transition rule in Fig. 3. Therefore, **AnonyEvt** \(P\) satisfies a rely-guarantee specification iff the program \(P\) satisfies the specification. A basic event is actually a parametrized program with a list of parameters \(p\) and an execution context \(\kappa\). A basic event satisfies its rely-guarantee specification if, for any \(p\) and \(\kappa\), the program it maps to satisfies the rely-guarantee condition with the pre-condition augmented by the guard condition of the event. Since the occurrence of an event does not change the state (BasicEvt rule in Fig. 3), we require that \(\forall s.\ (s,s)\in G\). Moreover, there may be a number of environment transitions before the event occurs; \(stable(pre,R)\) ensures that \(pre\) holds during these environment transitions. We now introduce the proof rules for event systems. The EvtSeq rule is similar to Seq and is intuitive. Recall that when an event occurs in an event set, the event executes until it finishes in the event system; then, the event system behaves as the event set again. Thus, events in an event system do not execute in an interleaving manner. To prove that an event set satisfies its rely-guarantee specification \(\langle pre,R,G,pst\rangle\), we have to prove eight premises (EvtSet rule in Fig. 5). The first one requires that each event together with its specification be derivable in the system. The second one requires that the pre-condition for the event set imply all the events' pre-conditions. The third one is a constraint on the rely condition of event \(i\). 
An environment transition for event \(i\) corresponds to a transition from the environment of the event set. The fourth one imposes a relation between the guarantee conditions of the events and that of the event set: since an action transition of the event set is performed by one of its events, the guarantee condition \(Gs_{i}\) of each event must be in the guarantee condition of the event set. The fifth one requires that the post-condition of each event be in the overall post-condition. Since the event set behaves as itself after an event finishes, the sixth premise says that the post-condition of each event should imply the pre-condition of each event. The meanings of the last two premises are the same as mentioned before. The Conseq rule allows us to strengthen the assumptions and weaken the commitments. The meaning of the Par rule is also standard.

### Soundness

The soundness of the rules for events is straightforward and is based on the rules for programs, which are proved in the same way as in [31]. To prove the soundness of the rules for event systems, we first show how to decompose a computation of an event system into computations of its events.

**Definition 7** (**Serialization of Events**). A computation \(\varpi\) of an event system is a serialization of a set of events \(\{\mathcal{E}_{1},\mathcal{E}_{2},...,\mathcal{E}_{n}\}\), denoted by \(\varpi\ll\{\mathcal{E}_{1},\mathcal{E}_{2},...,\mathcal{E}_{n}\}\), iff there exists a set of computations \(\varpi_{1},...,\varpi_{m}\), where for each \(1\leq i\leq m\) there exists \(1\leq k\leq n\) such that \(\varpi_{i}\in\Psi(\mathcal{E}_{k})\), such that \(\varpi\asymp\varpi_{1}\#\varpi_{2}\#...\#\varpi_{m}\).

**Lemma 2**. For any computation \(\varpi\) of an event system \(\mathcal{S}\), we have \(\varpi\ll evts(\mathcal{S})\).

The soundness of the EvtSeq rule is proved by two cases. 
For any computation \(\varpi\) of "\(\mathcal{E};\mathcal{S}\)", the first case is that the execution of event \(\mathcal{E}\) does not finish in \(\varpi\). In this case, \(\varpi\ll\{\mathcal{E}\}\), and by the first premise of the rule we can prove the soundness. In the second case, the execution of event \(\mathcal{E}\) finishes in \(\varpi\). In this case, we have \(\varpi=\varpi_{1}\#\varpi_{2}\), where \(\varpi_{1}\ll\{\mathcal{E}\}\) and \(\varpi_{2}\ll evts(\mathcal{S})\), and by the two premises of the rule we can prove the soundness. The soundness of the EvtSet rule is more complicated. From Lemma 2, we have that for any computation \(\varpi\) of the event set, \(\varpi\asymp\varpi_{1}\#\varpi_{2}\#...\#\varpi_{m}\), where for each \(1\leq i\leq m\) there exists \(1\leq k\leq n\) such that \(\varpi_{i}\in\Psi(\mathcal{E}_{k})\). When \(\varpi\) is in \(A(pre,R)\), from \(\forall i\leq n,j\leq n.\ psts_{i}\subseteq pres_{j}\), \(\forall i\leq n.\ pre\subseteq pres_{i}\), and \(\forall i\leq n.\ R\subseteq Rs_{i}\), we have that for each \(\varpi_{i}\) there is a \(k\) such that \(\varpi_{i}\) is in \(A(pres_{k},Rs_{k})\). By the first premise of the EvtSet rule, \(\varpi_{i}\) is in \(C(Gs_{k},psts_{k})\). Finally, with \(\forall i\leq n.\ Gs_{i}\subseteq G\) and \(\forall i\leq n.\ psts_{i}\subseteq pst\), we have that \(\varpi\) is in \(C(G,pst)\). The soundness theorem of the rule for parallel composition is then as follows.

**Theorem 3** (**Soundness of Parallel Composition Rule**). \[\vdash\ \mathcal{PS}\ \textbf{sat}\ \langle pre,R,G,pst\rangle\Longrightarrow\ \models\ \mathcal{PS}\ \textbf{sat}\ \langle pre,R,G,pst\rangle\]

To prove this theorem, we first use the _conjoin_ of computations to decompose a computation of a parallel event system into computations of its event systems. 
**Definition 8** (**Conjoin of Computations**). A computation \(\varpi\) of a parallel event system \(\mathcal{PS}\) and a set of computations \(\widehat{\varpi}:\mathcal{K}\rightarrow\Psi_{\mathcal{S}}\) conjoin, denoted by \(\varpi\propto\widehat{\varpi}\), iff

* \(\forall\kappa.\ len(\varpi)=len(\widehat{\varpi}(\kappa))\).
* \(\forall\kappa,j<len(\varpi).\ s_{\varpi_{j}}=s_{\widehat{\varpi}(\kappa)_{j}}\wedge x_{\varpi_{j}}=x_{\widehat{\varpi}(\kappa)_{j}}\).
* \(\forall\kappa,j<len(\varpi).\ \sharp_{\varpi_{j}}(\kappa)=\sharp_{\widehat{\varpi}(\kappa)_{j}}\).
* for \(j<len(\varpi)-1\), one of the following two cases holds:
  * \(\varpi_{j}\overset{e}{\longrightarrow}\varpi_{j+1}\), and \(\forall\kappa.\ \widehat{\varpi}(\kappa)_{j}\overset{e}{\longrightarrow}\widehat{\varpi}(\kappa)_{j+1}\).
  * \(\varpi_{j}\overset{t@\kappa_{1}}{\longrightarrow}\varpi_{j+1}\), \(\widehat{\varpi}(\kappa_{1})_{j}\overset{t@\kappa_{1}}{\longrightarrow}\widehat{\varpi}(\kappa_{1})_{j+1}\), and \(\forall\kappa\neq\kappa_{1}.\ \widehat{\varpi}(\kappa)_{j}\overset{e}{\longrightarrow}\widehat{\varpi}(\kappa)_{j+1}\).

**Lemma 3**. The semantics of \(\pi\)-Core is compositional, i.e., \(\Psi(\mathcal{PS},s,x)=\{\varpi\mid\exists\widehat{\varpi}.\ (\forall\kappa.\ \widehat{\varpi}(\kappa)\in\Psi(\mathcal{PS}(\kappa),s,x))\wedge\varpi\propto\widehat{\varpi}\}\).

We define the new forms of the locally respects and step consistent conditions on events as follows. We assume a function \(\Gamma:evts(\mathcal{PS})\to RGCond\), where \(RGCond\) is the type of rely-guarantee specifications, to assign a rely-guarantee specification to each event in \(\mathcal{PS}\). \(G_{\Gamma(ev)}\) is the guarantee condition in the rely-guarantee specification of the event \(ev\). Since the observation consistent condition has nothing to do with actions, we do not define a new form of this condition. 
**Definition 9** (Locally Respects on Events - LRE).: A parallel event system \(\mathcal{PS}\) locally respects \(\leadsto\) on events if \[\forall ev\ d\ s\ s^{\prime}\ \kappa.\ ev\in evts(\mathcal{PS})\wedge(s,s^{\prime})\in G_{\Gamma(ev)}\] \[\longrightarrow(dom\_e(s,\kappa,ev)\not\leadsto d)\longrightarrow s\stackrel{{d}}{{\sim}}s^{\prime}\]

**Definition 10** (Step Consistent on Events - SCE).: A parallel event system \(\mathcal{PS}\) is step consistent on events if \[\forall ev,d,s_{1},s_{2},\kappa.\ ev\in evts(\mathcal{PS})\wedge s_{1}\stackrel{{d}}{{\sim}}s_{2}\longrightarrow\] \[((dom\_e(s_{1},\kappa,ev)\leadsto d)\longrightarrow(s_{1}\stackrel{{dom\_e(s_{1},\kappa,ev)}}{{\sim}}s_{2}))\longrightarrow\] \[(\forall s_{1}^{\prime},s_{2}^{\prime}.\ (s_{1},s_{1}^{\prime})\in G_{\Gamma(ev)}\wedge(s_{2},s_{2}^{\prime})\in G_{\Gamma(ev)}\] \[\longrightarrow s_{1}^{\prime}\stackrel{{d}}{{\sim}}s_{2}^{\prime})\]

The locally respects condition requires that when an event \(ev\) executes, the modification of \(ev\) to the environment can affect only those domains to which the domain executing \(ev\) is allowed to send information. The step consistent condition requires that the observation by a domain \(d\) when executing an event \(ev\) can depend only on \(d\)'s observation before \(ev\) occurs, as well as on the observation by the domain executing \(ev\) before \(ev\) occurs if that domain is allowed to send information to \(d\). Unlike the small-step UCs in Subsection 4.4, which examine each action in events, the event UCs consider the effect of events on the environment. To prove the compositionality, we first show two lemmas. Lemma 4 shows the consistency of the event context in computations of a closed \(\mathcal{PS}\). Lemma 5 shows the compositionality of the guarantee conditions of events in a valid and closed parallel event system.
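Definitions 9 and 10 above can be phrased as executable checks over a finite model. In the sketch below (a hypothetical encoding, not the paper's mechanization), `G[ev]` is the guarantee relation of event `ev`, `leads` is the interference relation \(\leadsto\), `obs(s, d)` is domain \(d\)'s observation of state \(s\), and `dom_e` gives the domain executing the event.

```python
def locally_respects_events(evts, doms, G, dom_e, leads, obs, kappas):
    """Definition 9 on a finite model: if the executing domain may not send
    to d, a guaranteed step of ev must leave d's observation unchanged."""
    for ev in evts:
        for (s, s2) in G[ev]:
            for kappa in kappas:
                src = dom_e(s, kappa, ev)
                for d in doms:
                    if (src, d) not in leads and obs(s, d) != obs(s2, d):
                        return False
    return True

def step_consistent_events(evts, states, doms, G, dom_e, leads, obs, kappas):
    """Definition 10 on a finite model: d's observations after two guaranteed
    steps of ev agree whenever the premises on the pre-states hold."""
    for ev in evts:
        for kappa in kappas:
            for s1 in states:
                for s2 in states:
                    for d in doms:
                        if obs(s1, d) != obs(s2, d):
                            continue            # premise s1 ~d s2 fails
                        src = dom_e(s1, kappa, ev)
                        if (src, d) in leads and obs(s1, src) != obs(s2, src):
                            continue            # premise on src's view fails
                        for (a, s1p) in G[ev]:
                            for (b, s2p) in G[ev]:
                                if a == s1 and b == s2 and obs(s1p, d) != obs(s2p, d):
                                    return False
    return True

# toy model: states are (hi, lo) pairs; "lo" may send to "hi" but not back
states = [(h, l) for h in (0, 1) for l in (0, 1)]
leads = {("hi", "hi"), ("lo", "lo"), ("lo", "hi")}
obs = lambda s, d: s[0] if d == "hi" else s[1]
dom_e = lambda s, kappa, ev: "hi"               # every event runs in "hi"
G = {"inc_hi": {(s, ((s[0] + 1) % 2, s[1])) for s in states},
     "leak":   {(s, (s[0], s[0])) for s in states}}   # copies hi into lo
```

In the toy model, `inc_hi` satisfies both conditions, while `leak` violates LRE because a step of the "hi" domain changes what "lo" observes even though "hi" may not send to "lo".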
**Lemma 4**.: For any closed \(\mathcal{PS}\), if events in \(\mathcal{PS}\) are basic events, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(is\_basic(ev)\), then for any computation \(\varpi\) of \(\mathcal{PS}\), we have \[\forall i<len(\varpi)-1,\kappa.\ (\exists t.\ \varpi_{i}\stackrel{{t@\kappa}}{{\longrightarrow}}\varpi_{i+1})\] \[\longrightarrow(\exists ev\in evts(\mathcal{PS}).\ x_{\varpi_{i}}(\kappa)=ev)\]

Figure 5: Rely-guarantee Proof Rules for \(\pi\)-Core

**Lemma 5**.: For any \(\mathcal{PS}\), if

* events in \(\mathcal{PS}\) are basic events, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(is\_basic(ev)\).
* events in \(\mathcal{PS}\) satisfy their rely-guarantee specification, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(\vdash ev\) **sat** \(\Gamma(ev)\).
* \(\vdash\mathcal{PS}\) **sat** \(\langle\{s_{0}\},\varnothing,UNIV,UNIV\rangle\).

then for any computation \(\varpi\in\Psi(\mathcal{PS},s_{0},x_{0})\), we have \[\forall i<len(\varpi)-1,\kappa.\ (\exists t.\ \varpi_{i}\stackrel{{t@\kappa}}{{\longrightarrow}}\varpi_{i+1})\] \[\longrightarrow(s_{\varpi_{i}},s_{\varpi_{i+1}})\in G_{\Gamma(x_{\varpi_{i}}(\kappa))}\]

Based on these two lemmas, we have the following lemma for the soundness of the event UCs, i.e., the conditions imply the small-step ones.

**Lemma 6** (Soundness of Unwinding Conditions on Events).
For any \(\mathcal{PS}\), if

* \(\mathcal{C}_{0}=(\mathcal{PS},s_{0},x_{0})\).
* events in \(\mathcal{PS}\) are basic events, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(is\_basic(ev)\).
* events in \(\mathcal{PS}\) satisfy their rely-guarantee specification, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(\vdash ev\) **sat** \(\Gamma(ev)\).
* \(\vdash\mathcal{PS}\) **sat** \(\langle\{s_{0}\},\varnothing,UNIV,UNIV\rangle\).

then \(\mathcal{M}=\langle\Delta,A,step,\mathcal{C}_{0}\rangle\), which is constructed according to Definition 2, satisfies \[LRE\Longrightarrow LR\quad and\quad SCE\Longrightarrow SC\]

We require that all events in \(\mathcal{PS}\) are basic events to ensure that the event context in computations of \(\mathcal{PS}\) is consistent. This is reasonable since anonymous events are only used to represent intermediate specifications during the execution of events. The last assumption is a highly relaxed condition and is easy to prove. First, we only consider closed concurrent systems starting from the initial state \(s_{0}\); thus, the pre-condition contains only the initial state and the rely condition is empty. Second, we are concerned with the effect of an event on the environment of other events, not with its overall modification; thus, the guarantee condition is the universal set. Third, IFS concerns only the action transitions, not the final state; thus, the post-condition is the universal set. From this lemma and the small-step unwinding theorems (Theorems 1 and 2), we have the compositionality of IFS as follows.

**Theorem 4** (Compositionality of IFS).
For any \(\mathcal{PS}\), if

* \(\mathcal{C}_{0}=(\mathcal{PS},s_{0},x_{0})\).
* events in \(\mathcal{PS}\) are basic events, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(is\_basic(ev)\).
* events in \(\mathcal{PS}\) satisfy their rely-guarantee specification, i.e., \(\forall ev\in evts(\mathcal{PS})\). \(\vdash ev\) **sat** \(\Gamma(ev)\).
* \(\vdash\mathcal{PS}\) **sat** \(\langle\{s_{0}\},\varnothing,UNIV,UNIV\rangle\).

then \(\mathcal{M}=\langle\Delta,A,step,\mathcal{C}_{0}\rangle\), which is constructed according to Definition 2, satisfies \[OC\wedge LRE\wedge SCE\Longrightarrow noninfluence\] and \[OC\wedge LRE\wedge SCE\Longrightarrow nonleakage\]

By this theorem and Lemma 1, we provide a compositional approach to IFS for \(\pi\)-Core.

## 7 Verifying IFS of Multicore Separation Kernels

By the proposed compositional approach for verifying IFS and its implementation in Isabelle/HOL, we develop a formal specification of multicore separation kernels and its IFS proof in accordance with the ARINC 653 standard. In this section, we use the concrete syntax created in Isabelle to represent the formal specification.

### Architecture of Multicore Separation Kernels

The ARINC 653 standard - Part 1 in Version 4 [3], released in 2015, specifies the baseline operating environment for application software used within Integrated Modular Avionics on a multicore platform. It defines the _system functionality_ and requirements of _system services_ for separation kernels. As shown in Fig. 6, separation kernels in multicore architectures virtualise the available CPUs, offering virtual CPUs to the partitions. A partition can use one or more virtual CPUs to execute its internal code. Separation kernels schedule partitions in a fixed, cyclic manner. Information-flow security of separation kernels assures that there are no channels for information flows between partitions other than those explicitly provided.
The security policy used by separation kernels is the _Inter-Partition Flow Policy_ (IPFP), which is intransitive. It is expressed abstractly in a partition flow matrix \(\textbf{partition\_flow}:partition\times partition\to mode\), whose entries indicate the mode of the flow. For instance, \(\textbf{partition\_flow}(P_{1},P_{2})=SAMPLING\) means that a partition \(P_{1}\) is allowed to send information to a partition \(P_{2}\) via a sampling-mode channel, which supports multicast messages.

### System Specification

As a case study, the formal specification only considers the partitions, partition scheduling, and inter-partition communication (IPC) by sampling channels. We assume that the processor has two cores, \(\kappa_{0}\) and \(\kappa_{1}\). A partition is basically the same as a program in a single application environment. Partitions have access to channels via _ports_, which are the endpoints of channels. A significant characteristic of ARINC 653 is that the basic components are statically configured at build-time. The configuration is defined in Isabelle as follows. We create a constant \(conf\) used in events. \(c2s\) is the mapping from cores to schedulers and is bijective. \(p2s\) is the deployment of partitions to schedulers, and a partition can execute on several cores concurrently. A set of configuration constraints is defined to ensure the correctness of the system configuration. The kernel state defined below concerns the states of schedulers and channels. The state of a scheduler shows which partition is currently executing. The state of a channel is mainly about the messages in its one-size buffer.
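As an illustration of the IPFP above, the partition flow matrix can be encoded as a partial map, with the induced interference relation being reflexive but intransitive. The partition names and the queuing entry below are purely illustrative (the paper's case study only uses sampling channels).

```python
# Hypothetical configuration: P1 -> P2 by a sampling channel, P2 -> P3 by a
# queuing channel; all other pairs have no configured flow.
NONE, SAMPLING, QUEUING = "none", "sampling", "queuing"

partition_flow = {
    ("P1", "P2"): SAMPLING,
    ("P2", "P3"): QUEUING,
}

def interferes(src, dst):
    """src may send information to dst; reflexive but NOT transitive."""
    return src == dst or partition_flow.get((src, dst), NONE) != NONE
```

Note that `interferes("P1", "P2")` and `interferes("P2", "P3")` hold while `interferes("P1", "P3")` does not, which is exactly the intransitivity the policy is meant to express.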
**record** _Config_ =
  c2s :: Core \(\Rightarrow\) Sched
  p2s :: Part \(\Rightarrow\) Sched
  p2p :: Port \(\Rightarrow\) Part

Figure 6: Architecture of Multicore Separation Kernels

**EVENT** _Schedule_ ps @ \(\kappa\) **WHERE** ps typeof [] **THEN** cur := cur ((c2s conf) \(\kappa\) := SOME p. (c2s conf) \(\kappa\) = (p2s conf) p) **END**

**EVENT** _Write_Sampling_Message_ ps @ \(\kappa\) **WHERE** ps typeof [PORT, MSG] \(\wedge\) is_src_sampport conf (ps!0) \(\wedge\) (p2p conf) (ps!0) = cur (gsch conf \(\kappa\)) **THEN** schan := schan (ch_srcsampport conf (ps!0) := Some (ps!1)) **END**

**EVENT** _Read_Sampling_Message_ ps @ \(\kappa\) **WHERE** ps typeof [PORT] \(\wedge\) is_dest_sampport conf (ps!0) \(\wedge\) (p2p conf) (ps!0) = cur (gsch conf \(\kappa\)) **THEN** **Skip** **END**

**EVENT** _Core_Init_ ps @ \(\kappa\) **WHERE** True **THEN** **Skip** **END**

### Security Proof

According to Theorem 4, to show the information-flow security of our formal specification, we only need to prove the assumptions of this theorem and that the events satisfy the event UCs. The first assumption is satisfied on the state machine straightforwardly. The second one is trivial. The third and fourth ones are proved by the rely-guarantee proof rules defined in Fig. 5. Next, we have to show the satisfaction of the event UCs in the formal specification: for each event, we prove that it satisfies the event UCs.

## 8 Related Work and Conclusion

_Rely-guarantee method._ Initially, the rely-guarantee method for shared-variable concurrent programs was used to establish a post-condition for the final states of terminated computations [31].
The languages used in rely-guarantee methods (e.g., [12, 16, 22, 31]) are basically imperative programming languages with concurrent extensions (e.g., parallel composition and the \(await\) statement). In this paper, we propose a rely-guarantee proof system for an event-based language, which incorporates the elements of system specification languages into existing rely-guarantee languages. We employ "Events" [2] in rely-guarantee reasoning and provide event systems and their parallel composition to model single-processing and concurrent systems, respectively. Our proposed language enables rely-guarantee-based compositional reasoning at the system level. Event-B [2] is a refinement-based formal method for system-level modeling and analysis. In an Event-B machine, the execution of an event, which describes a certain observable transition of the state variables, is considered to be atomic and takes no time. The parallel composition of Event-B models is based on shared events [26], which can be considered a form of message passing. In [11], the authors extend Event-B to mimic rely-guarantee style reasoning for concurrent programs, but do not provide a rely-guarantee framework for Event-B. In this paper, \(\pi\)-Core is a language for shared-variable concurrent systems. \(\pi\)-Core provides a more expressive language than Event-B for the body of events. The execution of events in \(\pi\)-Core is not necessarily atomic, and we provide a rely-guarantee proof system for events. _Formal verification of information-flow security._ Formal verification of IFS has attracted many research efforts in recent years. Language-based IFS [25] defines security policies on programming languages and concerns the data confidentiality among program variables. The compositionality of language-based IFS has been studied (e.g. [17, 20]). But for security at the system level, such as for operating system kernels, the secrecy of actions is also necessary.
State-action based IFS is formalized in [24] on a state machine and generalized and extended in [28] by nondeterminism. The IFS properties in [24, 28] are defined and verified on the seL4 microkernel. However, the compositionality of state-action based IFS [18, 24, 28] has not been studied in the literature. Recently, formal verification of microkernels and separation kernels is considered a promising way toward high-assurance systems [14]. Information-flow security has been formally verified on the seL4 microkernel [19], PROSPER hypervisor [8], ED separation kernel [10], ARINC 653 standard [32], and INTEGRITY-178 [23], etc. In [18, 19, 32], the IFS properties are dependent on separation kernels, i.e., there is a specific security domain (_scheduler_) in the definition of the properties. In our paper, the IFS properties are more general, and we do not need to redefine new IFS properties in our case study. On the other hand, all these efforts are enforced on monocore kernels. The latest efforts on this topic aim at interruptible OS kernels, e.g., [7, 30]. However, formal verification of multicore kernels is still challenging. Although the formal specification is very abstract, we present the first effort in the literature of using the rely-guarantee method for compositional verification of multicore kernels. _Discussion._ Although we only show the compositional reasoning of IFS by the rely-guarantee proof system in this paper, it is possible to use the proof system for the functional correctness and safety of concurrent systems. Invariants of concurrent systems could be compositionally verified by the rely-guarantee specification of events in the system. Deadlock-freedom of a concurrent system could be verified by the pre- and post-conditions of events. For functional correctness, we may extend the superposition refinement [5] by considering the rely-guarantee specification to show that a concrete event preserves the refined one.
This is part of our future work. Through the implicit transition system in the semantics of an event system in \(\pi\)-Core, events provide a concise way to define system behavior. Abrial [1] introduces a method to represent sequential programs by event-based languages. Based on this method and the concurrent statements in \(\pi\)-Core, concurrent programs in other rely-guarantee methods can also be expressed in \(\pi\)-Core. By the state machine representation of \(\pi\)-Core, any state-action based IFS properties can be defined and verified in \(\pi\)-Core. In this paper, we create a nondeterministic state machine from \(\pi\)-Core, but we use the deterministic forms of the IFS properties in [28] since the nondeterministic forms are not refinement-closed. This is also followed in [18] for seL4. _Conclusion and future work._ In this paper, we propose a rely-guarantee-based compositional reasoning approach for verifying information-flow security of concurrent systems. We design the \(\pi\)-Core language, which incorporates the concept of "Events" into concurrent programming languages. We define the information-flow security and develop a rely-guarantee proof system for \(\pi\)-Core. For the compositionality of IFS, we relax the atomicity constraint on the unwinding conditions and define new forms of them on the level of events. Then, we prove that the new unwinding conditions imply the security of \(\pi\)-Core. The approach proposed in this paper has been mechanized in the Isabelle/HOL theorem prover. Finally, we create a formal specification for multicore separation kernels and prove its information-flow security. In the future, we would like to further study refinement in \(\pi\)-Core and the preservation of information-flow security during refinement.
Then, we will create a complete formal specification for multicore separation kernels according to ARINC 653 and use the refinement to create a model at the design level. ## Acknowledgments We would like to thank Jean-Raymond Abrial and David Basin of ETH Zurich, Gerwin Klein and Ralf Huuck of NICTA, Australia for their suggestions.
2309.09627
Electrolaryngeal Speech Intelligibility Enhancement Through Robust Linguistic Encoders
We propose a novel framework for electrolaryngeal speech intelligibility enhancement through the use of robust linguistic encoders. Pretraining and fine-tuning approaches have proven to work well in this task, but in most cases, various mismatches, such as the speech type mismatch (electrolaryngeal vs. typical) or a speaker mismatch between the datasets used in each stage, can deteriorate the conversion performance of this framework. To resolve this issue, we propose a linguistic encoder robust enough to project both EL and typical speech in the same latent space, while still being able to extract accurate linguistic information, creating a unified representation to reduce the speech type mismatch. Furthermore, we introduce HuBERT output features to the proposed framework for reducing the speaker mismatch, making it possible to effectively use a large-scale parallel dataset during pretraining. We show that compared to the conventional framework using mel-spectrogram input and output features, using the proposed framework enables the model to synthesize more intelligible and naturally sounding speech, as shown by a significant 16% improvement in character error rate and 0.83 improvement in naturalness score.
Lester Phillip Violeta, Wen-Chin Huang, Ding Ma, Ryuichi Yamamoto, Kazuhiro Kobayashi, Tomoki Toda
2023-09-18T09:58:36Z
http://arxiv.org/abs/2309.09627v2
# Electrolaryngeal Speech Intelligibility Enhancement through Robust Linguistic Encoders

###### Abstract

We propose a novel framework for electrolaryngeal speech intelligibility enhancement through the use of robust linguistic encoders. Pretraining and fine-tuning approaches have proven to work well in this task, but in most cases, various mismatches, such as the speech type mismatch (electrolaryngeal vs. typical) or a speaker mismatch between the datasets used in each stage, can deteriorate the conversion performance of this framework. To resolve this issue, we propose a linguistic encoder robust enough to project both EL and typical speech in the same latent space, while still being able to extract accurate linguistic information, creating a unified representation to reduce the speech type mismatch. Furthermore, we introduce HuBERT output features to the proposed framework for reducing the speaker mismatch, making it possible to effectively use a large-scale parallel dataset during pretraining. We show that compared to the conventional framework using mel-spectrogram input and output features, using the proposed framework enables the model to synthesize more intelligible and naturally sounding speech, as shown by a significant 16% improvement in character error rate and 0.83 improvement in naturalness score.

Lester Phillip Violeta\({}^{1}\), Wen-Chin Huang\({}^{1}\), Ding Ma\({}^{1}\), Ryuichi Yamamoto\({}^{1}\), Kazuhiro Kobayashi\({}^{1,2}\), Tomoki Toda\({}^{1}\)

\({}^{1}\)Nagoya University, Japan, \({}^{2}\)TARVO, Inc., Japan

**Index Terms**: Intelligibility enhancement, electrolaryngeal speech, atypical speech

## 1 Introduction

Voice conversion (VC) [1], the task known as changing the speaker information while keeping linguistic information unchanged, has had rapid improvements in the age of deep learning. One of its sub-applications, intelligibility enhancement [2, 3, 4], has made way for atypical speakers to regain the ability to speak like typical speakers.
Atypical speakers have difficulties in producing phoneme sounds and speak at a slower rate, making daily communication a tedious task for them. One type of atypical speech, electrolaryngeal (EL) speech, is produced by speakers diagnosed with a disrupted larynx, the organ responsible for generating the source excitation. While an electrolarynx [5] is used as a replacement for the larynx, the resulting speech becomes unnatural due to the electrolarynx producing robotic-like source excitation and being unable to produce natural pitch variation. For pitch-based languages like Japanese, changing the pitch throughout a sentence as well as the use of voiced/unvoiced sounds is essential to infer the meaning of different words, making this an important task. Several previous works in intelligibility enhancement have found that an effective solution is to first learn the alignments between typical and atypical speech through a parallel dataset. Since EL speakers speak at a slower rate and are unable to pronounce some phoneme sounds, learning the alignment between the two is important in this task. For example, [6] does this by using a strong sequence model such as a Transformer [7, 8]. Due to data scarcity and the data-hungry nature of Transformer-based models, several works [4, 9] have emphasized the effectiveness of pretraining on a large-scale typical speech dataset and fine-tuning it on a small-scale atypical speech dataset. However, a major problem in this naive pretraining and fine-tuning framework is that the typical and EL speech types are vastly different from each other. Thus, although a simple pretraining and fine-tuning approach brings improvements, there is a performance ceiling in such an approach. Our previous work [10] resolved this by observing that fine-tuning first with large-scale synthetic speech can effectively soften the mismatch between the speech types and speakers, making the pretraining and fine-tuning approach more effective. 
However, there is still a lot of room for improvement in further reducing the speech type and speaker mismatches, as the synthesis performance is still far from human-level speech. We resolve the speech type and speaker mismatch issues encountered in pretraining and fine-tuning approaches by introducing a new framework which uses recognition, alignment, and synthesis modules. Specifically, we use strong recognition modules containing dense linguistic information (such as phonetic posteriorgrams [11] and HuBERT [12, 13] features) as input and output features of the alignment module, effectively allowing the alignment module to focus on solely learning linguistic features. Through the recognition module allowing focus on modeling linguistic information, we effectively remove the speech type and speaker mismatches occurring between each stage during pretraining and fine-tuning, resulting in better performance compared to the baseline. Moreover, with the use of a Diffusion-based [14] synthesis decoder to generate the target speaker mel-spectrogram from the HuBERT output features, we shift the burden of synthesizing the target speaker's voice to this module, owing to its strong generation capabilities, improving the generation quality of the waveform. Finally, the proposed framework optimizes the use of parallel VC pretraining to further improve performance. Our contributions are as follows:

* We propose a novel framework for electrolaryngeal speech intelligibility enhancement, composed of recognition, alignment, and synthesis modules. We show that using this framework can synthesize speech with a 16% CER improvement and a 0.83 higher naturalness score compared to the baseline.
* We resolve the speech type mismatch issues by developing a linguistic encoder robust to both EL and typical speech types. Through a unified representation being used as inputs, the alignment module can focus on solely modeling the linguistic features, resulting in significantly more intelligible speech.
* We show the effect of the other important components of the framework, such as the HuBERT output features and parallel VC pretraining, in ablation studies.

## 2 Conventional Framework

We use our previous work [10] as our baseline, which uses the Transformer [7, 8] to transform the mel-spectrogram of an EL speech utterance into a mel-spectrogram of a typical speech utterance. A pre-training technique using text-to-speech (TTS) and autoencoder (AE) was used to efficiently learn linguistic information from large-scale typical speech data. The process is done by training a TTS model with the target speaker. Then, an AE-style pretraining is conducted by using the decoder of the TTS model as initialization parameters, and reconstructing the target speaker by also using it as inputs. The decoder parameters are frozen, such that the encoder is efficiently pretrained. The network is first fine-tuned on the parallel synthetic EL and typical speech. We found that fine-tuning first on synthetic EL speech (even with lots of mispronunciations in synthesis) softens the speech type and speaker mismatches when fine-tuning. Then, the network is fine-tuned on the target EL and typical speech data. Moreover, since the TTS pretraining technique uses text information as inputs and models strong linguistic information [8], such a speaker-independent pretraining style was beneficial in reducing the speech type and speaker mismatches when fine-tuning. Although bringing large improvements, the framework is still limited, as its output remains far from human-level speech. One main problem is that mel-spectrograms contain a lot of information related to the speech type and speaker, which degrades the performance due to the speech type and speaker mismatches between the datasets used in fine-tuning in each stage. One way to resolve this issue is by using linguistic encoders, which have shown success in several works in speech synthesis [11, 1].
By using a linguistic encoder to extract dense linguistic information from speech and using these as the input and output features, the focus during conversion can be on the linguistic-related features, reducing the speech type and speaker mismatches. This has been applied to intelligibility enhancement, where works such as [15] use an automatic speech recognition (ASR) model fine-tuned on the atypical speech; however, this approach does not use the pretraining and fine-tuning framework used in the majority of works. Thus, although the ASR model can effectively extract linguistic features from the atypical speech, the pretraining on the large-scale typical speech dataset becomes less effective, as the ASR model fine-tuned on the atypical speech cannot properly decode typical speech. We further investigate how to develop such a robust linguistic encoder and its performance when used as input and output features to improve intelligibility.

## 3 Proposed Framework

An overview of the entire framework can be seen in Fig. 1. We detail the task of each module of the proposed framework below.

### Recognition module

The recognition module uses a linguistic encoder estimating the phonetic posteriorgrams (PPGs) from the bottleneck features of an ASR encoder to extract the linguistic information. In our previous work [16], we showed that an effective approach to improving ASR model performance for EL speech is through a three-stage training framework. First, the model was pretrained on a large-scale typical speech dataset. Next, we fine-tuned the network on synthesized EL speech in an intermediate fine-tuning stage. Due to the limited data in training an EL speech synthesis model, the synthesized speech also contained lots of mispronunciations. However, we found that since the model used this stage to learn the EL speech characteristics instead of the linguistic information, it was sufficient for the synthetic EL speech to only represent the EL speech features.
Finally, we fine-tuned the network on the ground truth EL speech to learn the linguistic features and decode at high accuracy. We adopt this framework as the backbone of the recognition module. The goal of the linguistic encoder now is to be robust enough to remove the speech type features from both typical and EL speech, while also accurately extracting linguistic information. With a unified representation, the performance of a pretraining and fine-tuning framework becomes robust to the speech type mismatches. Although this has been an easy task in typical VC, several ASR works have found developing speaker-independent models [17, 18, 19, 20] for atypical speakers a difficult task due to the high variance in their speech. A naive approach to resolve this would be to simply fine-tune the ASR model on both the EL and typical speech such that the model is not only optimized for EL speech. However, similar to previous works, we found that fine-tuning the ASR model on both types of speech at the same time causes degradations. To improve this, we simply introduce a speech type ID loss \(L_{\text{SID}}\) during training. Since our previous work discovered that the intermediate fine-tuning focuses on learning speech type identity features, we make the network learn both speech types during this stage. Let \(X=\{X_{\text{TYP}},X_{\text{EL}}\}\) be the training data, which is composed of a typical and an EL dataset \(X_{\text{TYP}}\) and \(X_{\text{EL}}\). The speech type ID loss \(L_{\text{SID}}\) identifies whether the speaker is a typical or EL speaker and is optimized using a binary cross-entropy loss. Since we use both EL and typical data, we mask the outputs from the typical speech inputs during the calculation of the CTC/Attention losses \(L_{\text{ctc}}\) and \(L_{\text{attn}}\)[21]. The masking avoids making the model learn two highly variant types of speech, improving decoding performance.
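The masking just described can be sketched as a toy batch-level loss, assuming the per-utterance loss terms have already been computed by the ASR model (the function and argument names are illustrative, and the per-term averaging is our assumption, not the paper's exact implementation):

```python
# Sketch of the intermediate fine-tuning loss: typical utterances contribute
# only the speech-type ID term; their CTC/attention terms are masked out.
def intermediate_ft_loss(sid_losses, ctc_losses, attn_losses, is_el):
    """is_el[i] is True for EL utterances in the batch."""
    n = len(is_el)
    l_sid = sum(sid_losses) / n                       # over the whole batch
    el = [i for i in range(n) if is_el[i]]            # EL utterances only
    if not el:
        return l_sid
    l_ctc = sum(ctc_losses[i] for i in el) / len(el)
    l_attn = sum(attn_losses[i] for i in el) / len(el)
    return l_sid + l_ctc + l_attn
```

With this masking, a typical utterance with arbitrarily large CTC/attention terms cannot perturb the recognition objective; it only trains the speech-type classifier.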
Through this approach, we effectively optimize the ASR model for EL speech, while also ensuring that it does not forget how to decode typical speech. We show in Eq. 1 the detailed loss calculation during intermediate fine-tuning. After the intermediate fine-tuning stage, we fine-tune on \(X_{\text{EL}}\) with the CTC/Attention losses \(L_{\text{ctc}}\) and \(L_{\text{attn}}\)[21] as usual.

\[L_{\text{ASR}}=L_{\text{SID}}(X)+L_{\text{ctc}}(X_{\text{EL}})+L_{\text{attn}}(X_{\text{EL}}) \tag{1}\]

Figure 1: An overview of the proposed framework. The framework contains three main modules to convert from EL to typical speech: the recognition, alignment, and synthesis modules. Note that each module is trained separately.

### Alignment module

The alignment module resolves the intelligibility enhancement aspect. To improve intelligibility, the alignment module needs to fulfill two tasks. First, due to the different temporal structure of EL speech, the model needs to increase the speaking rate to that of a typical speaker. Next, since EL speakers cannot produce certain phonemes, the alignment module also needs to correct the phoneme pronunciation. Similar to the baseline described in Section 2, we adopt the use of a Transformer [7, 8] sequence-to-sequence model to resolve these issues. We also follow the same fine-tuning procedure with synthetic data and then the target data due to its success. We improve this framework by using the PPG features produced by the recognition module as the inputs. These PPG features would further reduce the mismatches in speech type and speakers during pretraining and fine-tuning, as the linguistic encoder allows the alignment module to solely focus on modeling linguistic information. To further reduce the burden on the alignment network, we also use HuBERT as the output features.
Aside from HuBERT providing dense linguistic information, using a variant of HuBERT with soft features [13] has also been found successful in removing speaker features and in cross-lingual settings, which would further improve the performance of the synthesis module described later. Moreover, the TTS/AE pretraining described in Section 2 was initially used in our baseline due to the unavailability of a large-scale parallel dataset; however, with the release of [22], we first verify whether parallel VC is indeed better. Although using parallel VC pretraining would directly model the fine-tuning task, this would not contain the speaker-independent properties of the TTS/AE pretraining, which might cause more degradations in the multiple fine-tuning stages due to the speech type and speaker mismatches. However, owing to the proposed framework focusing solely on linguistic features, we remove this possibility. ### Synthesis module As our goal is to force the alignment module to focus only on modeling linguistic information, the task of synthesizing into the target speaker is placed on a synthesis module. Since the typical dataset used as a target speaker is also limited in size, we use a Diffusion model [14] as the decoder of this module, as this framework has been proven effective in synthesizing speech in a target speaker even in few-shot settings [23]. To improve the few-shot performance of the Diffusion model, similar to [23], we pretrain first on a large-scale multi-speaker dataset with classifier-free guidance [24] and use fixed speaker embeddings and HuBERT features as conditioning features. Then, we adapt the model to the few-shot data for another set of iterations. To train the model, we iteratively add noise for \(N\) timesteps to the mel-spectrogram and predict the noise at timestep \(n\) during training by using the noisy mel-spectrogram at \(n-1\) as input along with the conditioning features. 
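The forward-noising and noise-prediction objective just described can be sketched in the standard DDPM formulation. The linear noise schedule below is an assumption for illustration; the actual model additionally conditions on HuBERT features and speaker embeddings, which are omitted here.

```python
import math, random

# Minimal DDPM-style sketch of the synthesis decoder's training objective
# (assumed linear beta schedule; conditioning features omitted).
N = 100
betas = [1e-4 + (0.02 - 1e-4) * n / (N - 1) for n in range(N)]
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= (1.0 - b)          # cumulative product of (1 - beta_n)
    alpha_bars.append(prod)

def noised(x0, n, eps):
    """Closed-form forward process: x_n = sqrt(abar_n)*x0 + sqrt(1-abar_n)*eps."""
    a = alpha_bars[n]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * e for x, e in zip(x0, eps)]

def training_example(x0, predict_noise):
    """One training step: sample a timestep and noise, corrupt the
    mel-spectrogram frame x0, and score the network's noise prediction."""
    n = random.randrange(N)
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    x_n = noised(x0, n, eps)
    pred = predict_noise(x_n, n)               # the network's job
    return sum((p - e) ** 2 for p, e in zip(pred, eps)) / len(x0)  # MSE
```

At inference time, the reverse process starts from pure Gaussian noise and iteratively denoises for \(N\) steps using `predict_noise`, matching the description in the text.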
During inference, we pass in Gaussian noise and predict the mel-spectrogram after \(N\) iterations. Finally, to synthesize the audio waveforms from the predicted mel-spectrograms, we use HiFiGAN (V1) [25] as the vocoder. ## 4 Experimental settings ### Datasets The EL dataset is spoken in Japanese; thus, unless otherwise stated, the described dataset is also in Japanese. For the recognition module, we followed the same training framework as in [16] to train a linguistic encoder. We first pretrained on a large-scale typical speech dataset containing around 2k hours of speech data [26]. Next, we fine-tuned the network on a total of 27k utterances of synthetic EL data and typical speech. Finally, we fine-tuned the network on our privately acquired EL speech data. We evaluated the performance of the linguistic encoder using the aforementioned EL data, and its parallel counterpart spoken by a typical speaker. We used a 116/40/40 split for the train, dev, and test data. We also conducted ablation studies on the performance when using a larger dataset of 15 speakers, containing both their simulated EL (by using an external electrolarynx) and typical speech. For the alignment module, we first pretrained our network on HiFiCaptain [22], a large-scale parallel dataset of typical speakers, containing around 18k utterances in total. We used the female speaker as the source and the male speaker as the target. Then, we used the same setup as in [10] where we first fine-tuned on synthetic EL, synthesized from text from the JSUT [27] corpus. We then fine-tuned the model on our target parallel EL and typical speech data in Table 1. Note that compared to our previous work in [10], this current split is different, as we composed the evaluation data with longer utterances to show the effectiveness of the proposed method. We used the same 116/40/40 split used in the recognition module, so the test data is unseen by both the recognition and alignment modules. 
For the synthesis decoder, we used the JVS dataset [28], a dataset containing 30 hours of speech from 100 speakers, to pretrain the model before fine-tuning it on the target typical speech. For the synthesis vocoder, we used a pretrained model on VCTK [29], an English dataset with 44 speakers of around 40 hours in total. For speaker information, we used a pretrained WavLM model1 (which was fine-tuned for speaker verification) as speaker embeddings and fused it to each residual block using conditional layer normalization [32]. We set the number of diffusion steps \(N\) to 100. No changes were made in HiFiGAN (V1). Footnote 1: [https://huggingface.co/microsoft/wavlm-base-plus-sv](https://huggingface.co/microsoft/wavlm-base-plus-sv) ### Evaluation metrics For objective evaluations, we measured the synthesis quality through metrics such as character error rate (CER), mel-cepstral distortion (MCD), log F0 root mean square error (F0 RMSE), and log F0 correlation (F0 CORR). For CER, we used the same Conformer model in Table 2 trained on the large-scale typical speech data. For subjective evaluations, we recruited 15 native Japanese speakers to measure the naturalness of the synthesized speech using a 5-scale mean opinion score (MOS) test2. Footnote 2: Demo: lesterphillip.github.io/icassp2024_el_sie ## 5 Results and Discussion ### Validating the recognition module We first present how to develop a robust linguistic encoder. We investigate different training setups as shown in Table 2. First, we see the difficulty in using a speaker-independent model, as optimizing on either EL or typical speech results in degradations in the other. Using a model optimized just on EL speech would degrade the large-scale pretraining stage. Next, we see that fine-tuning with multiple EL and typical speakers to make the model more generalized is also ineffective, owing to the high variance between these speakers.
Finally, we show that simply fine-tuning the model on both EL and typical speech can be effective but not fully optimized, as there is still a performance gap from the speaker-dependent setups. We show that our proposed method of using a speech type ID loss and masking the typical speech during CTC-Attention loss calculation makes the model learn to decode both EL and typical speech. This is because the model learns how to decode EL speech, while also not forgetting the typical speech features learned during pretraining through the speech type ID loss. To verify this, we note that removing the masking of typical speech results in slightly worse scores. Through this, we can decode both EL and typical speech at an accuracy similar to the speaker-dependent setups. ### Comparison of input/output features We show the effectiveness of the proposed linguistic encoder in this task. As seen in Table 3, our proposed method of using the PPG/HuBERT features (Sys. 4) can significantly improve the synthesized speech, with a 16% improvement in CER and a 0.83 higher naturalness score over Sys. 1, the baseline that uses mel-spectrograms as inputs. This proves our initial hypothesis that the proposed linguistic encoder can effectively remove speech type information while also extracting accurate linguistic information. We also conducted a study by using mel-spectrogram outputs. As shown in Sys. 5, using HuBERT instead of mel-spectrograms as outputs helps further stabilize the model, as HuBERT also contains dense linguistic information similar to the PPGs. Aside from this, Sys. 3 and 4, which used HuBERT features and the synthesis decoder, had the top naturalness scores, further showing the effectiveness and necessity of a synthesis decoder over directly predicting the mel-spectrogram.
### Comparison of pretraining techniques In Section 3.2, we discussed that the TTS/AE pretraining also helps in resolving the speech type and speaker mismatches during pretraining and fine-tuning through its speaker-independent pretraining style [8]. However, upon comparing the baseline techniques, Sys. 2 has slightly better scores than Sys. 1 except in MOS. Thus, TTS/AE pretraining alone does not suffice to create a speaker-independent property. Through the proposed approach in Sys. 4, we can directly model the fine-tuning task by using parallel VC pretraining, while also being able to implement a speaker-independent property by using PPG/HuBERT as input and output features, which reduces the mismatches during each fine-tuning stage. It is important to note that although Sys. 3 used both speaker-independent training styles, since the input (PPG) and output (HuBERT) features were different, it was not able to fully utilize the effectiveness of AE pretraining. Moreover, we find that compared to the other systems, Sys. 4 has the highest F0 RMSE score and the second lowest F0 CORR score, showing that the proposed method truly allowed the alignment module to focus on modeling linguistic information, but caused a small tradeoff in modeling pitch. ## 6 Conclusions We proposed the use of robust linguistic encoders that remove speech type features from both EL and typical speech. The major benefit that this brings is that it creates a unified representation for both EL and typical speech, reducing the speech type mismatches between each dataset in a pretraining and fine-tuning framework. The proposed method allows the model to focus on modeling intelligibility, where it outperforms the baseline with a 16% improvement in CER and a 0.83 higher naturalness score. **Acknowledgements** This work was partly supported by AMED under Grant Number JP21dk0310114, Japan, and by JST CREST under Grant Number JPMJCR19A3.
\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline **(System) Description** & **Inputs** & **Outputs** & **Pretraining method** & **MCD (\(\downarrow\))** & **CER (\(\downarrow\))** & **F0 RMSE (\(\downarrow\))** & **F0 CORR (\(\uparrow\))** & **MOS (\(\uparrow\))** \\ \hline (1) Baseline [10] & mel & mel & TTS/AE & 7.78 & 35.0 & 51.19 & 0.30 & 2.42 \(\pm\) 0.17 \\ (2) Baseline (ablation) & mel & mel & Parallel VC & 7.70 & 33.5 & **49.95** & 0.35 & 2.38 \(\pm\) 0.18 \\ (3) Proposed & PPG & HuBERT & TTS/AE & 7.45 & 32.2 & 50.39 & 0.28 & 2.90 \(\pm\) 0.17 \\ **(4) Proposed** & **PPG** & **HuBERT** & **Parallel VC** & **7.14** & **19.0** & 52.41 & 0.29 & **3.25 \(\pm\) 0.15** \\ (5) Proposed (ablation) & PPG & mel & Parallel VC & 7.54 & 29.1 & 51.16 & **0.37** & 2.78 \(\pm\) 0.18 \\ \hline Ground truth & - & - & - & - & 4.3 & - & - & 4.85 \(\pm\) 0.07 \\ \hline \hline \end{tabular} \end{table} Table 3: Objective and subjective evaluation results on the synthesized speech from different systems, along with the ground truth recorded speech. MOS is calculated with a 95% confidence interval. We detail the input and output features, along with the pretraining method used in the alignment module.
2309.00165
From Flow to Jamming: Lattice Gas Automaton Simulations in Granular Materials
We introduce the first extension of a Lattice Gas Automaton (LGA) model to accurately replicate observed emergent phenomena in granular materials with a special focus on previously unexplored jamming transitions by incorporating gravitational effects, energy dissipation in particle collisions, and wall friction. We successfully reproduce flow rate evolution, density wave formation, and jamming transition observed in experiments. We also explore the critical density at which jamming becomes probable. This research advances our understanding of granular dynamics and offers insights into the jamming behavior of granular materials.
M. Gaber, Raquel H. Ribeiro, J. Kozicki
2023-08-31T23:04:29Z
http://arxiv.org/abs/2309.00165v1
# From Flow to Jamming: Lattice Gas Automaton Simulations in Granular Materials ###### Abstract We introduce the first extension of a Lattice Gas Automaton (LGA) model to accurately replicate observed emergent phenomena in granular materials with a special focus on previously unexplored jamming transitions by incorporating gravitational effects, energy dissipation in particle collisions, and wall friction. We successfully reproduce the flow rate evolution, density wave formation, and jamming transition observed in experiments. We also explore the critical density at which jamming becomes probable. This research advances our understanding of granular dynamics and offers insights into the jamming behavior of granular materials. + Footnote †: journal: Physical Review Letters ## 1 Introduction Granular materials, composed of solid particles like sand or grains, challenge conventional matter classifications [1; 2; 3; 4]. Unlike regular substances, they mimic solid [5; 6], liquid [2; 4; 7], or gas states [8] depending on the environmental conditions [4; 9; 5], even forming a distinct matter phase (or a transitional bridge) between the primary phases [3]. Their unique attributes (such as insensitivity to temperature changes and nonlinear friction [10]) challenge predictive modeling [11], and yet offer the promise of innovative applications in the energy and pharmaceutics sectors [3]. The boundary between solid- and fluid-like states in granular flow is marked by the _jamming transition_, a well-studied shift from a flowing to a static state [12; 13; 14; 15; 16], triggered by critical control parameters: density, temperature, and shear stress [15]. This process resembles the glass transition in amorphous materials, from dynamic to ordered states [15]. Increased density naturally halts dynamics in granular materials, leading to the static, "jammed" state [14; 15; 16; 17; 18; 19]. Teitel et al. linked density to viscosity, showing that viscosity rises with density until a critical point, triggering the solid jammed state [16].
Shear stress and temperature further influence this state--when either falls below a critical threshold, the reduced mobility leads to the system becoming "jammed." Density waves emerge as spatial density variations that reveal flow dynamics and mechanical traits [5; 20], and are therefore especially challenging to model. They are triggered by vibrations and shear forces imprinting intricate patterns and structures [21; 22], and collectively contributing to particle motion. This study leverages Lattice Gas Automaton (LGA) simulations to extend prior experimental work on density waves and jamming transition phenomena. Our novel contribution lies in the simulation of the jamming process by specifying appropriate model rules, addressing an unexplored gap in prior work and triggering potential practical applications [23]. ## 2 Model Description Modeling fluid flow dynamics requires solving the Navier-Stokes equations--a well-known computationally intensive task [24; 25; 26; 27]. In contrast, the Lattice Gas Automaton (LGA) model introduced by Frisch et al. [28] offers a more efficient approach. Unlike traditional methods that require the definition of macroscopic variables and solving partial differential equations, LGA uses rules valid at microscopic scales to predict macroscopic behavior. This bottom-up approach simplifies continuous equations into discrete rules, rendering LGA an accurate model for fluid simulation [29]. Notably, LGA effectively models granular materials, aligning closely with experimental observations [22; 30; 31; 32]. In this section, we present a modified LGA variant tailored for simulating density waves and jamming transitions in granular materials. ### Model Configuration The LGA model comprises a \(2D\) lattice with \(L\times L\) hexagonal cells, each having six neighboring cells. Each cell can be occupied by one or more particles, empty, or act as a wall. The particles possess velocity vectors indicating their movement directions. 
The velocity vector consists of six binary elements, with only one element set to 1 representing movement in the direction of that element (Fig. 1). ### Adding Gravitational Effects We follow the approach by Kozicki et al. [30] and introduce a parameter \(g\), ranging from 0 to 1, which represents the probability of particles changing their velocity vector direction toward gravity at each iteration. For example, if the particle's direction is 0 (Fig. 1), its new direction value will be 4 with probability \(g\). If the particle's direction were 2, its new direction would be 3 with probability \(g\). This modification allows for parabolic behavior in individual particle velocities, consistent with the findings of Kozicki et al. [30]. ### Adding Energy Dissipation In LGA models, particle interactions are typically assumed to be elastic collisions, which works well to model ideal gases but not granular materials [33]. Several research experiments have found that granular particle collision can be elastic or show energy dissipation due to particle crushability or grain roughness that varies from one particle to another [22; 23; 31]. To address this possibility, we adapt the collision model proposed by Herrmann et al. [6] and introduce an energy dissipation parameter, \(p\), ranging from 0 to 1--a value of 0 corresponds to perfectly elastic collisions, while a value of 1 signals fully inelastic collisions with complete energy dissipation. Since the roughness of particles is not known in advance, an additional parameter \(p\) accounts for probabilistic energy dissipation in collisions, following the rules depicted in Figs. 2-4. Additionally, we implement mirror deflection to simulate the system's behavior when particles collide with walls. ### The Onset of the Jamming Transition Experimental studies have consistently demonstrated the critical role of the formation of an arch structure at the narrow opening of a hopper in initiating jamming transitions [34]. 
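The gravity rule of Section 2.2 can be sketched as a probabilistic remap of a particle's direction label. Only the two remappings stated in the text (0 → 4 and 2 → 3) are taken from the paper; the remaining entries depend on the direction labeling of Fig. 1, which is not fully recoverable here, and are left as identity placeholders.

```python
import random

# Partial mapping toward the gravity direction; only 0 -> 4 and 2 -> 3 are
# given in the text.  All other entries are hypothetical identity placeholders.
TOWARD_GRAVITY = {0: 4, 2: 3}

def apply_gravity(direction, g, rng=random):
    """With probability g, bias the particle's direction toward gravity;
    otherwise leave the direction unchanged."""
    if rng.random() < g:
        return TOWARD_GRAVITY.get(direction, direction)
    return direction

random.seed(1)
print(apply_gravity(0, g=1.0))  # always remapped when g = 1, so 4
print(apply_gravity(2, g=1.0))  # 3
print(apply_gravity(0, g=0.0))  # never remapped when g = 0, so 0
```

Applying this remap once per iteration is what produces the parabolic single-particle trajectories reported by Kozicki et al. [30].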
This arch structure results from the friction between the particles and the hopper walls, creating a barrier that obstructs the material flow. To model friction within a lattice gas system, we have carefully designed a friction rule that governs the interaction between particles and their surrounding environment. Firstly, the walls exert a force in the opposite direction to the movement of particles. Secondly, particles can transmit frictional forces when they come into contact. However, for the friction force to take effect, particles must be compressed from both sides, creating a state of compression. Figure 5a shows particles arranging themselves into an arch-like structure, with arrows indicating the upward-pointing frictional forces due to the compression of particles toward the walls. The friction force is transmitted upwards through the compressed particles, ultimately leading to the structure acting as a barrier, impeding the flow of particles.

Figure 1: The cells are tagged with the particle's velocity. A value of 0 indicates a particle at rest, while values \(1-6\) represent particles in movement.

Figure 2: Head-on particle collisions result in one of three deflection outcomes, each with a 1/3 probability.

Figure 3: Collisions between particles at an angle yield different outcomes depending on the energy dissipation parameter, \(p\). For \(p=0\), collision conservation laws govern energy and momentum. With \(p>0\), one of the particles loses velocity through energy dissipation. Note that given the hexagonal grid, the collision angles are equal to the rebound angles.

Figure 4: A three-particle collision event can lead to different outcomes depending on the value of \(p\). Collisions conserve energy and momentum in the absence of energy dissipation (\(p=0\)). However, when \(p>0\), there are three possible outcomes, each with a probability of \(p/3\). In each outcome, one of the particles experiences energy dissipation. Note that the hexagonal lattice configuration limits the direction of movement to the given array of possibilities.

## 3 Simulation Results

### Entropy of the LGA Model

In the absence of gravitational and frictional effects, the model is suitable for simulating standard fluid flows and is expected to adhere to the principles of thermodynamics. Notably, the second law of thermodynamics should apply, indicating that the system's entropy, when out of equilibrium, should increase with time. This implies that the number of possible microstates available to the system should progressively grow. We initiated the simulation with a low entropy state to verify this, as shown in Fig. 6a. The parameters used for simulation in this section and for the following sections are listed in Tab. 1. The system, depicted in Figure 6, starts with a low entropy state, where all particles are placed in one corner of the box. As the system evolves, the particles will collide and occupy a larger portion of the available volume (Figs. 6b-f). Consequently, the number of possible microstates at equilibrium will significantly exceed the restricted microstates confined to the \(20\times 20\) region, increasing entropy. To quantify the entropy of this system, we employ the Shannon entropy, defined as follows: \[H(x)=-\sum_{i=1}^{n}P(x_{i})\log(P(x_{i})) \tag{1}\] In the context of information entropy, \(x\) represents a random variable, while \(x_{i}\) represents the possible outcomes associated with that random variable. The term \(P(x_{i})\) denotes the probability of a specific outcome \(x_{i}\). In the present study, the equation is applied by making \(x\) akin to a random indicator variable: 1, indicating the cell is occupied by a particle, or 0, showing the cell is empty. To compute the probability \(P(x_{i})\), we conduct 50 distinct simulations, each spanning 500 steps. This generates an ensemble of 50 microstate configurations for each iteration.
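This ensemble entropy estimate can be sketched as follows, on toy grids instead of the full lattice. One assumption worth flagging: both outcomes (occupied and empty) of the binary indicator enter the sum, which is one reading of Eq. (1); cells whose occupancy is certain contribute zero.

```python
import math

def ensemble_entropy(runs):
    """Shannon entropy of cell occupancy at one iteration.

    `runs` is a list of grids (one per independent simulation); each grid is a
    list of rows of 0/1 occupancy flags.  P(x_i) is estimated as the fraction
    of runs in which cell i is occupied; both outcomes of the binary indicator
    are summed, and p in {0, 1} contributes nothing.
    """
    n_runs = len(runs)
    rows, cols = len(runs[0]), len(runs[0][0])
    h = 0.0
    for r in range(rows):
        for c in range(cols):
            p = sum(run[r][c] for run in runs) / n_runs
            for q in (p, 1.0 - p):
                if 0.0 < q < 1.0:
                    h -= q * math.log(q)
    return h

# Two 1x2 toy runs: cell 0 always occupied (p = 1, contributes 0),
# cell 1 occupied in half the runs (p = 0.5, contributes log 2).
runs = [[[1, 1]], [[1, 0]]]
print(ensemble_entropy(runs))  # ≈ 0.6931 (= log 2)
```

Repeating this over every iteration of the 50-run ensemble yields an entropy-versus-time curve of the kind shown in Fig. 7.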
The probability \(P(x_{i})\) for a given cell \(x_{i}\) in a particular iteration is then computed as the average occupancy frequency of that cell over the 50 distinct simulations. This average is determined by dividing the number of times cell \(x_{i}\) is occupied by a particle by the total number of simulations (50). After computing the probabilities \(P(x_{i})\) for all cells, they are summed using equation (1). This procedure is repeated for each iteration in the model's evolution. In Fig. 7, we show the time evolution of entropy in the system that accompanies the stages depicted in Fig. 6. A transient decline in entropy is allowed--rapid entropy shifts can occur in out-of-equilibrium states. We further confirm the finding by Tribel and Boon that a monotonous increase in LGA entropy is not guaranteed [35]. This fluctuating behavior persists until equilibrium is reached, after which a homogeneous distribution of particles across the entire box and a high entropy ensue.

Figure 5: Illustration of an arch-like structure formation as a result of the upward frictional forces caused by the compression from both sides of the walls in 5a. In contrast, 5b illustrates a scenario where the particle does not experience friction from the wall due to the absence of compression from the left side.

Figure 6: The time evolution of the particles after being confined to a small region in a box. A video of the simulation can be consulted here.

Figure 7: The entropy over time for the system initialized with a non-equilibrium state (Fig. 6).

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \(g\) & \(p\) & Grid H\(\times\)W & Density \\ \hline Particles in a box (Figs. 6–7) & 0 & 0 & 100\(\times\)100 & 10\% \\ Density Waves (Figs. 8–9) & 0.2 & 0.1 & 200\(\times\)50 & 70\% \\ Narrow Pipe (Figs. 10–11) & 0.2 & 0.1 & 200\(\times\)10 & 85\% \\ Jamming (Figs. 12–13) & 0.2 & 0.1 & 200\(\times\)20 & 5\%–100\% \\ \hline \hline \end{tabular} \end{table} Table 1: Parameter Values for Each Simulation

### Density Waves in a hopper flow

Behringer et al. initially documented density waves in 1989 [5]. Employing a Plexiglas hopper with adjustable width, the team captured gravity-driven sand flow using X-ray imaging through digital fluoroscopy. They examined two sand types: rough (resulting in inelastic collisions due to crushability and shear strain) and smooth (with predominant elastic collisions). These experiments established insights into density wave formation and behavior in granular materials. We successfully reproduced the phenomenon of density waves observed in the experimental study through simulations. As depicted in Fig. 8, the flow forms a central low-density region positioned directly above the outlet. This central region serves as the primary pathway for the flow, while the adjacent regions near the walls exhibit limited movement. We also observed the emergence of low-density regions propagating upwards between the central region and the walls, contrary to the downward sand flow. These propagating density waves persist and are followed by additional waves until they reach the system's top, eventually collapsing. This simulation-based observation aligns with the previous experimental findings on the propagation of density waves [5]. The flow rate evolution at the hopper outlet, measured by counting the number of particles leaving the outlet every iteration, is depicted in Fig. 9. It is characterized by a consistent flow rate during the particle discharge, followed by a non-linear reduction towards the end, which can be attributed to the decreasing particle density. The observed flow rate behavior aligns with experimental findings [9; 36]. ### Density Waves in narrow pipes In 1994, Peng et al. provided a distinctive perspective on density wave propagation, relevant to industrial applications [37]. Their study, utilizing narrow pipes instead of conventional hoppers, examined smooth and rough sand.
Density wave propagation exclusively occurred with rough sand. The underlying reason is the inelastic collisions experienced by rough sand particles during descent. These collisions lead to energy dissipation, causing certain particles to lose velocity. Consequently, these decelerated particles lag behind, triggering successive collisions and more energy loss. This cascading effect resembles shock waves in traffic, where one car's abrupt stop triggers a chain reaction of deceleration among preceding vehicles. Our model successfully replicated these results when we adjusted the simulation settings to a narrow pipe and replaced the hopper outlet with periodic boundary conditions. This adjustment allowed particles that flowed through the bottom of the pipe to re-enter the system through the top, ensuring a continuous flow. The simulation results are presented in Fig. 10, which illustrates a notable phenomenon--the emergence of a shock wave of high density propagating in an upward direction. This behavior is prominently observed in narrow pipes, as the occurrence of inelastic collisions halts numerous particles from flowing, consequently leading to the formation of the density wave (or shock wave). Figure 11 provides a comprehensive visualization of the temporal propagation of density waves, following a methodology similar to that employed by Peng et al. [37]. Multiple snapshots were taken from the narrow pipe at different time intervals to generate the figure. Each snapshot involved dividing the pipe into small vertical segments and calculating the density within each segment. The density values were then represented using a grayscale scheme, with darker regions indicating higher densities. These snapshots were sequentially arranged from left to right, creating the impression of multiple narrow pipes aligned horizontally, with time progressing from left to right. The graph serves as a clear demonstration of the formation and movement of density waves. The initial state, depicted on the left side of the graph, portrays a semi-uniform density distribution. As time elapses from left to right, the density waves' emergence and upward propagation become increasingly evident. It is worth noting that once a density wave reaches the top of the graph, it reappears at the bottom due to the periodic boundary representation implemented in the simulation. After \(10,000\) iterations, near the right side of the figure, the number of density waves reduces as the voids merge into larger voids.

Figure 8: Snapshots of the sand flow in a hopper taken from the model simulation. The three consecutive snapshots, separated by a few iterations, illustrate the temporal evolution of the particle discharge process. A video of the simulation can be consulted here.

Figure 9: Flow rate of the granular materials measured leaving the hopper outlet. The data is averaged over ten iterations, showing a roughly constant flow rate (indicated by the black horizontal line) until a decrease occurs towards the end.

### Jamming Transition

As mentioned above, the jamming transition occurs when an arch structure forms at the narrow opening of a hopper, obstructing the particle flow. This structure originates from the friction between the particles and the hopper walls, effectively blocking the granular flow. Multiple experimental studies, such as the one by Kiwing et al. [34], have explored the underlying mechanisms governing this process, highlighting the role of the specific hopper structure. The emergence of this configuration is stochastic in nature and dependent on the particles' phase space. Furthermore, these investigations shed light on the role of density in the jamming transition.
Higher densities are associated with an increased likelihood of the jamming transition, as they entail more particles and a more extensive set of possible microstates, which significantly enhances the probability that one of the configurations will lead to the formation of an arch and subsequent jamming. Consequently, the jamming transition is not solely contingent upon reaching a critical density value but becomes more likely above this critical density threshold. This distinction sets the jamming transition apart from traditional phase transitions, such as the water-ice transition, which typically occur at specific critical values of control parameters like temperature or pressure. Our simulations identified a critical density threshold above which the jamming transition is more likely to occur, as illustrated in Fig. 13. The simulation depicted in Fig. 12 involved initializing a narrow hopper with varying packing densities of particles, followed by measuring the resulting flow rate. To achieve this, a predetermined number of particles corresponding to each density was randomly distributed within the hopper. The simulation was then executed for 5000 iterations. At each density, the flow rate was quantified as the average number of particles exiting through the hopper outlet per unit of time. To ensure the robustness of the results, this process was repeated 30 times for each density. The reported flow rate in Fig. 13 is the average flow rate from the 30 simulations for each density value, with error bars indicating the 95% confidence intervals. As expected, Fig. 13 demonstrates a persistent linear relationship between the particle density and the flow rate until reaching a density of approximately 20%, beyond which the system contains enough particles to increase the probability of arch formation, reducing the flow rate. As the density approaches 100%, the box becomes densely packed with particles, resulting in a near-certain arch formation and a complete halt in particle flow.
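The error bars described here can be reproduced schematically with a normal-approximation confidence interval over the 30 repetitions; the z-value of 1.96 for a 95% interval is our assumption, as the paper does not state how the intervals were computed.

```python
import math
from statistics import mean, stdev

def flow_rate_ci(samples, z=1.96):
    """Mean flow rate over repeated runs at one density, with a
    normal-approximation 95% confidence interval (z = 1.96 assumed)."""
    m = mean(samples)
    half = z * stdev(samples) / math.sqrt(len(samples))
    return m, m - half, m + half

# Toy example: 30 repetitions at one density (values are illustrative).
rates = [10.0, 12.0, 11.0, 9.0, 13.0, 11.0] * 5
m, lo, hi = flow_rate_ci(rates)
print(m)  # 11.0
```

Computing `(m, lo, hi)` for each density value yields the points and error bars of a flow-rate-versus-density plot like Fig. 13.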
It should be noted that, statistically, the flow rate will not reach zero, as a few particles may still exit the system before the arch forms. We also explored jamming and flowing outcomes for multiple particle densities. Jamming is relatively rare for densities below 20%, with infrequent flowing events for densities above 20%. This probabilistic nature of jamming indicates that higher densities increase the likelihood of its occurrence, again corroborating previous experimental findings [34].

Figure 10: The set of figures, from left to right, shows the evolution of the flow in a narrow pipe and under periodic boundary conditions. Note the formation of shock waves. A link to the simulation video can be consulted here.

Figure 11: Sequential snapshots illustrating density wave propagation in a narrow pipe over time. Each snapshot represents a vertical segment of the pipe, with density depicted using grayscale colors (darker shades indicating higher densities).

## 4 Conclusion

This study reports on the first successful simulation of granular materials using an LGA framework. The model accurately replicates critical empirical observations by incorporating gravitational effects, energy dissipation in particle collisions, and frictional interactions. The main novelty of this research lies in the reproduction of the jamming transition, previously only observed experimentally. We recognized the probabilistic nature of the critical density threshold for arch formation, demonstrating how jamming is more likely at higher densities. Our tailored LGA model also succeeds in reproducing the flow rate evolution and density wave formation at the hopper outlet. The simulated non-linear reduction in flow rate towards the end of the discharge process suggests the role of density fluctuations in arch formations. These findings further our understanding of granular dynamics, providing valuable insights into the complex behavior of granular materials.
Based on these results, one natural direction to follow up would be to delve into how the jamming transition in granular materials might affect the propagation and behavior of density waves. Investigating the interplay between the jamming transition and the emergence of density waves could unveil intriguing connections between these two phenomena, potentially shedding light on the underlying mechanisms that govern both processes. This venue of research could contribute to a more comprehensive understanding of the complex behaviors exhibited by granular materials near the jamming transition and their implications for various practical applications.
2309.17419
Enumerating minimal solution sets for metric graph problems
Problems from metric graph theory like Metric Dimension, Geodetic Set, and Strong Metric Dimension have recently had a strong impact in parameterized complexity by being the first known problems in NP to admit double-exponential lower bounds in the treewidth, and even in the vertex cover number for the latter, assuming the Exponential Time Hypothesis. We initiate the study of enumerating minimal solution sets for these problems and show that they are also of great interest in enumeration. Specifically, we show that enumerating minimal resolving sets in graphs and minimal geodetic sets in split graphs are equivalent to enumerating minimal transversals in hypergraphs (denoted Trans-Enum), whose solvability in total-polynomial time is one of the most important open problems in algorithmic enumeration. This provides two new natural examples to a question that emerged in recent works: for which vertex (or edge) set graph property $\Pi$ is the enumeration of minimal (or maximal) subsets satisfying $\Pi$ equivalent to Trans-Enum? As very few properties are known to fit within this context -- namely, those related to minimal domination -- our results make significant progress in characterizing such properties, and provide new angles to approach Trans-Enum. In contrast, we observe that minimal strong resolving sets can be enumerated with polynomial delay. Additionally, we consider cases where our reductions do not apply, namely graphs with no long induced paths, and show both positive and negative results related to the enumeration and extension of partial solutions.
Benjamin Bergougnoux, Oscar Defrain, Fionn Mc Inerney
2023-09-29T17:27:42Z
http://arxiv.org/abs/2309.17419v2
# Enumerating minimal solution sets for metric graph problems

###### Abstract

Problems from metric graph theory such as Metric Dimension, Geodetic Set, and Strong Metric Dimension have recently had a strong impact on the field of parameterized complexity by being the first problems in NP to admit double-exponential lower bounds in the treewidth, and even in the vertex cover number for the latter. We initiate the study of enumerating minimal solution sets for these problems and show that they are also of great interest in enumeration. More specifically, we show that enumerating minimal resolving sets in graphs and minimal geodetic sets in split graphs are equivalent to hypergraph dualization, arguably one of the most important open problems in algorithmic enumeration. This provides two new natural examples to a question that emerged in different works this last decade: for which vertex (or edge) set graph property \(\Pi\) is the enumeration of minimal (or maximal) subsets satisfying \(\Pi\) equivalent to hypergraph dualization? As only very few properties are known to fit within this context--namely, properties related to minimal domination--our results make significant progress in characterizing such properties, and provide new angles of approach for tackling hypergraph dualization. In a second step, we consider cases where our reductions do not apply, namely graphs with no long induced paths, and show these cases to be mainly tractable.

**Keywords:** algorithmic enumeration, hypergraph dualization, metric dimension, geodetic sets, strong metric dimension, resolving sets, matroids.

## 1 Introduction

Metric graph theory is a central topic in mathematics and computer science that is the subject of many books and a vast number of articles, with far-reaching applications such as in group theory [11, 12], matroid theory [1], computational learning theory [13, 14, 15, 16], and computational biology [1], to name a few. 
Two very well-studied metric graph problems that arise in the context of network design and network monitoring are the metric dimension [17, 15] and geodetic set [12] problems from the 1970s and 1990s, respectively. There is a rich literature concerning these two problems, in particular since their non-local nature has given rise to difficult and intriguing algorithmic questions and results. Furthermore, this has spurred the development of many interesting variants of these problems, such as [17, 13, 1, 1], with applications being found in various domains like network verification [2], chemistry [14], and genomics [15]. In particular, the strong metric dimension problem [16] was introduced in 2004, and has begun to gather momentum of late. In this paper, we study the algorithmic enumeration of minimal solution sets for the metric dimension, geodetic set, and strong metric dimension problems. In the Metric Dimension problem, given a graph \(G\) and a positive integer \(k\), the question is whether there exists a subset \(S\subseteq V(G)\) of at most \(k\) vertices such that, for any pair of vertices \(u,v\in V(G)\), there exists a vertex \(w\in S\) with \(\mathsf{dist}(u,w)\neq\mathsf{dist}(v,w)\). A set of vertices \(S\subseteq V(G)\) that satisfies the latter property is known as a _resolving set_ of \(G\). This is one of the problems that was shown to be \(\mathsf{NP}\)-complete in Garey and Johnson's book [1]. In the last 10 years, the complexity of Metric Dimension was greatly refined, with it being shown that it is \(\mathsf{NP}\)-complete in unit disk graphs [15], bipartite graphs, co-bipartite graphs, line graphs of bipartite graphs, and split graphs [10], bounded-degree planar graphs [11], and interval and permutation graphs of diameter 2 [14]. 
On the positive side, while it was known for a long time that Metric Dimension is linear-time solvable in trees [17], in the last decade it was shown that it is also linear-time solvable in cographs [10], chain graphs [13], cactus block graphs [12], and bipartite distance-hereditary graphs [15]. Moreover, it was shown to be polynomial-time solvable in outerplanar graphs [11].

The last decade also witnessed the emergence and subsequent thorough study of the parameterized complexity of Metric Dimension, with the seminal paper on this topic showing it to be \(\mathsf{W}[2]\)-hard parameterized by the solution size \(k\), even in subcubic bipartite graphs [16]. On the tractable side, Metric Dimension admits an \(\mathsf{XP}\) algorithm parameterized by the feedback edge set number [10], and \(\mathsf{FPT}\) algorithms parameterized by the max leaf number [14], the modular-width and the combined parameter treelength plus maximum degree [1], the treedepth and the combined parameter clique-width plus diameter [1], and the distance to cluster (co-cluster, respectively) [1, 1]. Further, the \(\mathsf{FPT}\) result for the combined parameter clique-width plus diameter in [1] uses Courcelle's theorem, while an explicit \(\mathsf{FPT}\) algorithm for Metric Dimension parameterized by the combined parameter treewidth plus diameter was given in [1], with the latter algorithm building on ideas from an \(\mathsf{FPT}\) algorithm parameterized by the treewidth in chordal graphs [1]. In contrast, Metric Dimension is \(\mathsf{W}[1]\)-hard parameterized by the combined parameter pathwidth plus maximum degree [1] and the combined parameter feedback vertex set number plus pathwidth [1, 1], and para-\(\mathsf{NP}\)-hard parameterized by the pathwidth [1]. Lastly, unless the \(\mathsf{ETH}\) fails, Metric Dimension cannot be solved in time \(2^{o(n)}\), even on bipartite graphs, nor in time \(2^{o(\sqrt{n})}\) on planar bipartite graphs [1]. 
In the Geodetic Set problem, given a graph \(G\) and a positive integer \(k\), the question is whether there exists a subset \(S\subseteq V(G)\) of at most \(k\) vertices such that every vertex in \(G\) is on a shortest path between two vertices of \(S\). A set of vertices \(S\subseteq V(G)\) that satisfies the latter property is known as a _geodetic set_ of \(G\). Like Metric Dimension, Geodetic Set is also \(\mathsf{NP}\)-complete in co-bipartite graphs [1], interval graphs [1], line graphs, and graphs of diameter \(2\)[1]. However, it is polynomial-time solvable in well-partitioned chordal graphs (a generalization of split graphs) [1], outerplanar graphs [10], distance-hereditary graphs (includes cographs) [11], block-cactus graphs, and proper interval graphs [1]. The parameterized complexity of Geodetic Set has also recently been thoroughly investigated, with the main contributions coming from [11]. In [11], they first observed that the reduction from [12] implied that Geodetic Set is \(\mathsf{W}[2]\)-hard parameterized by the solution size \(k\), even in chordal bipartite graphs. They then proved that it is \(\mathsf{W}[1]\)-hard parameterized by the combined parameter solution size plus feedback vertex set number plus pathwidth [11]. They complemented this hardness result by showing that Geodetic Set is \(\mathsf{FPT}\) parameterized by the treedepth, the combined parameter clique-width plus diameter, and the feedback edge set number [11]. As with Metric Dimension, Geodetic Set is also \(\mathsf{FPT}\) parameterized by the treewidth in chordal graphs [1]. In the Strong Metric Dimension problem, given a graph \(G\) and a positive integer \(k\), the question is whether there exists a subset \(S\subseteq V(G)\) of at most \(k\) vertices such that, for any pair of vertices \(u,v\in V(G)\), there exists a vertex \(w\in S\) with either \(u\) belonging to a shortest \(w\)-\(v\) path or \(v\) belonging to a shortest \(w\)-\(u\) path. 
A set of vertices \(S\subseteq V(G)\) that satisfies the latter property is known as a _strong resolving set_ of \(G\). Arguably the most significant result concerning this problem is that there exists a polynomial-time reduction from an instance \((G,k)\) of Strong Metric Dimension to an instance \((G^{\prime},k)\) of Vertex Cover, where \(V(G)=V(G^{\prime})\) and the edges of \(G^{\prime}\) are between pairs of vertices that are so-called "mutually maximally distant" in \(G\)[12]. The relationship between these two problems was further studied in [13]. This connection implies that certain positive algorithmic results for Vertex Cover carry over to Strong Metric Dimension, e.g., they are both \(\mathsf{FPT}\) parameterized by the solution size \(k\). Unsurprisingly, many hardness results for Vertex Cover can be passed on to Strong Metric Dimension[14]. Recently, Metric Dimension, Geodetic Set, and Strong Metric Dimension were shown to be important well beyond the field of metric graph theory by being the first problems in \(\mathsf{NP}\) to admit conditional double-exponential lower bounds in the treewidth (\(\mathsf{tw}\)), and even the vertex cover number (\(\mathsf{vc}\)) for Strong Metric Dimension[15]. In particular, they proved that, unless the \(\mathsf{ETH}\) fails, these problems do not admit \(2^{2^{o(\mathsf{tw})}}\cdot n^{O(1)}\)-time algorithms, even in bounded diameter graphs, and that this lower bound holds even for the vertex cover number for Strong Metric Dimension[15]. Further, they proved that, unless the \(\mathsf{ETH}\) fails, Metric Dimension and Geodetic Set do not admit \(2^{o(\mathsf{vc}^{2})}\cdot n^{O(1)}\)-time algorithms [15]. 
The lower bounds concerning the vertex cover number parameterizations yielded that, unless the \(\mathsf{ETH}\) fails, Metric Dimension and Geodetic Set do not admit kernelization algorithms that reduce the solution size and output a kernel with \(2^{o(k+\mathsf{vc})}\) vertices, and Strong Metric Dimension does not admit a kernelization algorithm that outputs a kernel with \(2^{o(\mathsf{vc})}\) vertices [15]. Notably, to the best of our knowledge, kernelization lower bounds of this kind were previously only known for two other problems: Edge Clique Cover[12] and Biclique Cover[16]. Further, this improved upon the result of [17], which ruled out a polynomial kernel for Metric Dimension parameterized by \(k+\mathsf{vc}\), unless the polynomial hierarchy collapses to its third level. It is essential to note that all of the above lower bounds from [15] were complemented by matching upper bounds in the same paper. Importantly, the technique the authors of [15] developed and used to obtain these lower bound results has already proved fruitful in obtaining similar results for other problems in \(\mathsf{NP}\), such as a machine teaching problem [16].

Despite these problems being well-studied from an algorithmic complexity point of view, surprisingly they have yet to be studied from the perspective of enumeration. We remedy this by initiating the study of enumerating minimal solution sets--the gold standard for enumeration--for the following problems.

**Minimal Resolving Sets Enumeration (MinResolving)**
**Input:** A graph \(G\).
**Output:** The set of (inclusion-wise) minimal resolving sets of \(G\).

**Minimal Strong Resolving Sets Enumeration (MinStrongResolving)**
**Input:** A graph \(G\).
**Output:** The set of (inclusion-wise) minimal strong resolving sets of \(G\).

**Minimal Geodetic Sets Enumeration (MinGeodetic)**
**Input:** A graph \(G\).
**Output:** The set of (inclusion-wise) minimal geodetic sets of \(G\). 
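As a concrete baseline for these enumeration problems, the sketch below lists all inclusion-wise minimal subsets satisfying a monotone property (being a resolving set, strong resolving set, geodetic set, or transversal is monotone: supersets of a solution are solutions). The function names are ours, and this exponential-time brute force is purely illustrative, not an efficient enumeration algorithm.

```python
from itertools import combinations

def minimal_satisfying_sets(universe, holds):
    """Brute-force all inclusion-wise minimal subsets S of `universe`
    with holds(S) True, assuming `holds` is monotone (every superset of
    a satisfying set also satisfies it).  Exponential time."""
    universe = list(universe)
    found = []
    for k in range(len(universe) + 1):
        for sub in combinations(universe, k):
            s = set(sub)
            # By increasing size: s is minimal iff it satisfies the
            # property and contains no previously found (smaller) solution.
            if holds(s) and not any(t <= s for t in found):
                found.append(s)
    return found

# Example: minimal transversals of the hypergraph {{1, 2}, {2, 3}}.
edges = [{1, 2}, {2, 3}]
hits_all = lambda s: all(s & e for e in edges)
tr = minimal_satisfying_sets({1, 2, 3}, hits_all)
```

Here `tr` contains exactly the minimal transversals \(\{2\}\) and \(\{1,3\}\), matching the Trans-Enum semantics recalled below.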
In the same manner that these problems had a strong impact in parameterized complexity [12], we show that they are also of great interest in enumeration. Specifically, they relate to very classical problems such as the enumeration of maximal independent sets in graphs, which admits a polynomial-delay algorithm [16], and the enumeration of minimal transversals in hypergraphs, which is arguably one of the most important open problems in algorithmic enumeration [1, 1]. In the minimal transversals enumeration problem, usually denoted Trans-Enum and also known as hypergraph dualization, we are given a hypergraph \(\mathcal{H}\) and the goal is to list all (inclusion-wise) minimal subsets of vertices that hit every edge of \(\mathcal{H}\). To date, the best-known algorithm for Trans-Enum performs in incremental1 quasi-polynomial time by generating the \(i^{\text{th}}\) minimal transversal of \(\mathcal{H}\) in time \(N^{o(\log N)}\), where \(N=|\mathcal{H}|+i\)[12]. Since then, a lot of effort has been made to solve the problem in total-polynomial time in restricted cases.2 Most notably, polynomial-delay algorithms have been obtained for \(\beta\)-acyclic hypergraphs [1] and hypergraphs of bounded degeneracy [1] or without small holes [13]. Incremental-polynomial time algorithms have been exhibited for bounded conformality hypergraphs [14] and geometric instances [1]. Footnote 1: The different notions from enumeration complexity are defined in Section 2. Footnote 2: It should be noted that a total-polynomial time algorithm was recently claimed by Wild in an arXiv preprint [23]; the proof, however, contains a major flaw (Claim (4), Section 3) that, to the best of our knowledge, has not been corrected since. 
Due to the inherently difficult nature of the problem, and since no substantial progress has been made in the general case since [12], over time Trans-Enum has gained the status of a "landmark problem" in terms of tractability, in between problems admitting total-polynomial algorithms, and those for which the existence of such algorithms is impossible unless \(\mathsf{P}=\mathsf{NP}\). This has motivated the study of particular cases for problems that have been proved to be at least as hard3 as Trans-Enum; see, e.g., [1, 1, 10]. One of the most successful examples is the case of minimal dominating sets enumeration, with many particular cases shown to admit total-polynomial time algorithms [1, 1, 13, 14, 15]. On the other hand, for problems that are notably harder than Trans-Enum and for which the existence of total sub-exponential algorithms is open, adapting the algorithm of [11] as in [16, 17], or using it as a subroutine as in [1, 20, 21] has also proved to be fruitful.

Footnote 3: That is, a total-polynomial algorithm for the first problem implies a total-polynomial algorithm for the second; if the reverse direction holds as well, then the problems are said to be (polynomially) _equivalent_.

In light of these results, a line of research that emerged from [13] consists of exploring the following question; see, e.g., the survey [14] or the introduction in [10] for explicit mentions of this emergence.

**Question 1.1**.: _For which vertex (or edge) set graph property \(\Pi\) is the enumeration of minimal (or maximal) subsets satisfying \(\Pi\) equivalent to Trans-Enum?_

In this paper, we make progress on Question 1.1 by first showing that Trans-Enum is equivalent to both MinResolving (in general graphs) and MinGeodetic on split graphs. Notably, this adds two new natural problems to the very short list of problems known to exhibit this property. 
Surprisingly, this contrasts with the complexity status of MinStrongResolving, which we show to be solvable with polynomial delay using a relationship established in [1]. Interestingly, we in addition show that MinGeodetic is a particular case of enumerating the minimal flats of the graphic matroid associated to \(K_{n}\) that are transversals of a given \(n\)-vertex hypergraph. To the best of our knowledge, the latter problem is open, and thus, this sandwiches MinGeodetic between two generation problems whose complexity statuses are unsettled to date. Hence, disproving the equivalence between MinGeodetic and Trans-Enum by, e.g., showing that the problem does not admit a total-polynomial time algorithm unless \(\mathsf{P}=\mathsf{NP}\),4 would imply that the aforementioned variant of flats enumeration is intractable, which is currently unknown.

Footnote 4: As \(\mathsf{NP}\)-hard problems are believed not to admit quasi-polynomial time algorithms.

Finally, we observe that the difficulty of the problems we study is tightly related to the maximum length of an induced path in the graph at hand. This motivates the study of these problems on graphs that do not contain long induced paths. While enumerating minimal geodetic and resolving sets is harder than Trans-Enum on \(P_{5}\)-free and \(P_{6}\)-free graphs, respectively, we show that they admit linear-delay algorithms in \(P_{4}\)-free graphs using a variant of Courcelle's theorem for enumeration and clique-width [15].

## 2 Preliminaries

We begin with the definitions of the relevant notions from enumeration complexity. We say that an algorithm runs in _total-polynomial_ time if it outputs every solution and stops in a time which is polynomial in the size of the input plus the output. Moreover, if the algorithm outputs the \(i^{\text{th}}\) solution in a time which is polynomial in the size of the input plus \(i\), then it is said to be _incremental-polynomial_. 
An enumeration algorithm is said to be running with _polynomial delay_ if before the first output, between two consecutive outputs, and after the last output it runs in a time which is polynomial in the size of the input. Clearly, an algorithm running with polynomial delay is incremental-polynomial, and an incremental-polynomial algorithm is total-polynomial; we refer the reader to [1, 10] for more details on enumeration complexity. We assume the reader is familiar with graph and hypergraph terminologies and refer to [1] and [1] for the definitions that are not recalled below. A _hypergraph_\(\mathcal{H}\) is a set of vertices \(V(\mathcal{H})\) together with a family of edges \(E(\mathcal{H})\subseteq 2^{V(\mathcal{H})}\). It is called a _graph_ when each of its edges has size precisely two, and _Sperner_ if no two distinct edges \(E,F\in\mathcal{H}\) are such that \(E\subseteq F\). A _transversal_ of \(\mathcal{H}\) is a subset \(T\subseteq V(\mathcal{H})\) such that \(E\cap T\neq\emptyset\) for all \(E\in E(\mathcal{H})\). It is called _minimal_ if it is inclusion-wise minimal. The set of minimal transversals of \(\mathcal{H}\) is denoted by \(Tr(\mathcal{H})\), and the problem of listing \(Tr(\mathcal{H})\) given \(\mathcal{H}\) by Trans-Enum. Given a graph \(G\) and two vertices \(x,y\), we note \(\mathsf{dist}(x,y)\) the length of a shortest \(x\)-\(y\) path in \(G\). The _diameter_ of a graph is the maximum distance among all pairs of vertices. Two vertices \(u,v\) are called _false twins_ if their open neighborhoods \(N(u)\) and \(N(v)\) are equal, and _twins_ if in addition they are adjacent. Given an integer \(k\), we call \(P_{k}\) an induced subgraph of \(G\) isomorphic to a path on \(k\) vertices. We say that a vertex is _complete_ (resp. _anti-complete_) to a subset \(S\subseteq V(G)\) if it is adjacent (resp. non-adjacent) to every vertex in \(S\). We now recall the definitions from the introduction. 
A _resolving set_ in \(G\) is a subset \(S\subseteq V(G)\) such that, for any pair of vertices \(a,b\) in \(G\), there exists \(x\in S\) such that \(\mathsf{dist}(a,x)\neq\mathsf{dist}(b,x)\). We will also say that \(x\) _distinguishes_ the pair \(a,b\), a terminology that will come in handy in the rest of the paper. A _strong resolving set_ of \(G\) is a subset \(S\subseteq V(G)\) such that, for any pair of distinct vertices \(a,b\) in \(G\), there exists \(x\in S\) such that either \(a\) lies on an \(x\)-\(b\) shortest path, or \(b\) lies on an \(x\)-\(a\) shortest path. Note that any strong resolving set is a resolving set, while the opposite is not true in general. A _geodetic set_ of \(G\) is a subset \(S\subseteq V(G)\) such that any vertex of \(G\) lies on an \(x\)-\(y\) shortest path for some \(x,y\in S\). Analogously to resolving sets, we will say that the pair \(x,y\) _distinguishes_ the vertex \(v\) whenever \(v\) lies on an \(x\)-\(y\) shortest path. We say that a resolving set (resp. strong resolving set, geodetic set) is minimal if it is (inclusion-wise) minimal.

Given a hypergraph \(\mathcal{H}\) on vertex set \(\{v_{1},\ldots,v_{n}\}\) and edge set \(\{E_{1},\ldots,E_{m}\}\), the _incidence bipartite graph_ of \(\mathcal{H}\) is the bipartite graph with bipartition \(V=\{v_{1},\ldots,v_{n}\}\) and \(H=\{e_{1},\ldots,e_{m}\}\), with an edge between \(v_{i}\in V\) and \(e_{j}\in H\) if \(v_{i}\) belongs to \(E_{j}\). The _non-incidence bipartite graph_ of \(\mathcal{H}\) is the graph with the same vertices, but where there is an edge between \(v_{i}\in V\) and \(e_{j}\in H\) if \(v_{i}\) does _not_ belong to \(E_{j}\). Finally, the _(non-)incidence co-bipartite graph_ of \(\mathcal{H}\) is the (non-)incidence bipartite graph of \(\mathcal{H}\) where \(V\) and \(H\) are completed into cliques. We refer to the next sections and their reductions for illustrations of these constructions. 
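To make the recalled definitions concrete, here is a small sketch (naming is ours) that computes single-source BFS distances and tests the resolving-set property directly; a graph is given as a dict mapping each vertex to its set of neighbours.

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph {vertex: set of neighbours}."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def is_resolving_set(adj, S):
    """S resolves G iff every pair a, b is distinguished by some x in S,
    i.e. dist(a, x) != dist(b, x)."""
    d = {x: bfs_dist(adj, x) for x in S}
    vs = list(adj)
    for i, a in enumerate(vs):
        for b in vs[i + 1:]:
            if all(d[x][a] == d[x][b] for x in S):
                return False
    return True

# Path 0 - 1 - 2: one endpoint resolves the whole path, the centre does not.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
```

On `p3`, `is_resolving_set(p3, {0})` holds since the endpoint sees distances 0, 1, 2, while `{1}` fails because both endpoints are at distance 1 from the centre.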
## 3 Resolving sets

In this section, we prove that Trans-Enum and MinResolving are equivalent, and show our reductions to preserve polynomial delay. As for MinStrongResolving, we show that it is equivalent to the enumeration of maximal independent sets in graphs, and hence, that it admits a polynomial-delay algorithm.

We first deal with the reduction from MinResolving. It is clear from the definition of distinguishing a pair of vertices that the resolving sets of a graph \(G\) are exactly the transversals of the hypergraph \(\mathcal{H}\) with the same vertex set and with an edge \(E_{ab}:=\{v\in V(G):\mathsf{dist}(a,v)\neq\mathsf{dist}(b,v)\}\) for every pair \(a,b\) of distinct vertices in \(G\). Since \(\mathcal{H}\) has \(n\) vertices and \(O(n^{2})\) edges, and as it can be constructed in polynomial time in \(n\), we derive the following.

**Theorem 3.1**.: _There is a polynomial-delay algorithm for MinResolving whenever there is one for Trans-Enum._

Let us now deal with the reduction from Trans-Enum. Let \(\mathcal{H}\) be a hypergraph on vertex set \(\{v_{1},\ldots,v_{n}\}\) and edge set \(\{E_{1},\ldots,E_{m}\}\). For convenience in our proof, we will furthermore assume that \(n\) and \(m\) are powers of \(2\) greater than \(2\), and that no edge of \(\mathcal{H}\) contains the full set of vertices. Note that these assumptions can be made without loss of generality, in particular since \(\mathcal{H}\) can be assumed to be Sperner and an edge containing the full set of vertices would imply that it is the only edge of \(\mathcal{H}\). We describe the construction of a graph on \(O(n+m)\) vertices and \(O(n^{2}+m^{2})\) edges whose set of minimal resolving sets can be partitioned into two families, where the first one has size \(O(nm^{2})\), and where the second roughly consists of \(O(nm)\) copies of the minimal transversals of \(\mathcal{H}\). See Figure 1 for an illustration of the construction. 
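The hypergraph of distinguishing sets \(E_{ab}\) behind Theorem 3.1 can be built directly from BFS distances. The sketch below (function names ours) checks on a path that the minimal transversals of this hypergraph are exactly the minimal resolving sets; the transversal enumeration is an illustrative brute force, not an actual Trans-Enum algorithm.

```python
from collections import deque
from itertools import combinations

def dists(adj):
    """All-pairs distances by BFS; adj maps vertex -> set of neighbours."""
    d = {}
    for s in adj:
        d[s] = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d[s]:
                    d[s][w] = d[s][u] + 1
                    q.append(w)
    return d

def resolving_hypergraph(adj):
    """One edge E_ab = {v : dist(a, v) != dist(b, v)} per pair a, b."""
    d = dists(adj)
    vs = list(adj)
    return [{v for v in vs if d[a][v] != d[b][v]}
            for i, a in enumerate(vs) for b in vs[i + 1:]]

def minimal_transversals(vs, edges):
    """Brute force, by increasing size; exponential, illustration only."""
    found = []
    for k in range(len(vs) + 1):
        for sub in combinations(vs, k):
            s = set(sub)
            if all(s & e for e in edges) and not any(t <= s for t in found):
                found.append(frozenset(s))
    return set(found)

# Path 0 - 1 - 2: its minimal resolving sets are the two endpoints.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
mins = minimal_transversals(list(p3), resolving_hypergraph(p3))
```

For the path, the edges \(E_{ab}\) are \(\{0,1,2\}\), \(\{0,2\}\), and \(\{0,1,2\}\), whose minimal transversals are \(\{0\}\) and \(\{2\}\), i.e. precisely the minimal resolving sets found by direct checking.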
We start from the non-incidence co-bipartite graph of \(\mathcal{H}\) with bipartition \(V:=\{v_{1},\ldots,v_{n}\}\) and \(H:=\{e_{1},\ldots,e_{m}\}\), to which we add a clique \(H^{\prime}:=\{e^{\prime}_{1},\ldots,e^{\prime}_{m}\}\) that we make complete to \(V\). Then \(V\), \(H\), and \(H^{\prime}\) are cliques, \(v_{i}\) is adjacent to every \(e^{\prime}\in H^{\prime}\), and it is adjacent to \(e_{j}\) if and only if \(v_{i}\not\in E_{j}\). We construct two additional sets \(U:=\{u_{1},u^{\prime}_{1},\ldots,u_{\log n+1},u^{\prime}_{\log n+1}\}\) and \(W:=\{w_{1},w^{\prime}_{1},\ldots,w_{\log m+1},w^{\prime}_{\log m+1}\}\) on \(2\log n+2\) and \(2\log m+2\) vertices, respectively. We complete \(U\) into a clique minus each of the edges \(u_{i}u^{\prime}_{i}\), \(i\in\{1,\ldots,\log n+1\}\), and add to \(W\) each of the edges \(w_{j}w^{\prime}_{j}\), \(j\in\{1,\ldots,\log m+1\}\). For an integer \(j\in\mathbb{N}\), we shall note \(I(j)\) the set of indices (starting from 1) of bits of value 1 in the binary representation of \(j\). Then, we connect each \(v_{i}\), \(i\in\{1,\ldots,n\}\), to the vertices \(u_{k}\) and \(u^{\prime}_{k}\) for every \(k\in I(i)\), and each of \(e_{j}\) and \(e^{\prime}_{j}\), \(j\in\{1,\ldots,m\}\) to the vertices \(w_{k}\) and \(w^{\prime}_{k}\) for every \(k\in I(j)\). Observe that, by the nature of the binary coding, no element of \(V\) is complete or anti-complete to \(U\), and the same can be said for \(H\cup H^{\prime}\) and \(W\). Note that this binary representation gadget is derived from ideas used in [6, 7]. Finally, we connect every vertex of \(U\) to every vertex of \(H\cup H^{\prime}\), and connect every vertex of \(W\) to every vertex of \(V\). This concludes the construction of our graph \(G\). 
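The binary coding used to attach \(V\) to \(U\) (and \(H\cup H^{\prime}\) to \(W\)) can be sketched as follows; the helper name `I` follows the paper, while the explicit `bits` parameter is our addition. With \(\log n+1\) bit positions, every index \(1\leq i\leq n\) receives a nonempty code distinct from the full set of positions, which is exactly why no vertex of \(V\) is complete or anti-complete to \(U\).

```python
def I(j, bits):
    """1-based positions of the 1-bits in the binary representation of j."""
    return {k for k in range(1, bits + 1) if (j >> (k - 1)) & 1}

# Example with n = 4 vertices, hence log n + 1 = 3 bit positions.
n, bits = 4, 3
codes = [I(i, bits) for i in range(1, n + 1)]
# The codes {1}, {2}, {1,2}, {3} are pairwise distinct, nonempty,
# and never the full set {1, 2, 3}.
```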
Figure 1: Illustration of the reduction from Trans-Enum to MinResolving with \(\mathcal{H}\) consisting of \(E_{1}=\{v_{1},v_{2}\}\), \(E_{2}=\{v_{2},v_{3},v_{4}\}\), \(E_{3}=\{v_{3},v_{5}\}\), and \(E_{4}=\{v_{4},v_{5},v_{6},v_{7},v_{8}\}\). Dashed lines represent non-edges, and a bold line between two sets of vertices \(A,B\) means that \(A\) is complete to \(B\). For the sake of legibility, we do not represent the edges of \(G[U]\), which is almost a clique, nor the edges of the cliques \(H\), \(H^{\prime}\), and \(V\). We only represent the non-edges between \(V\) and \(H\). We also do not fully represent some of the edges incident to the vertices \(u^{\prime}_{i}\) and \(w^{\prime}_{j}\). The set of white filled vertices is one of the \(O(nm)\) minimal resolving sets associated to the minimal transversal \(\{v_{1},v_{3},v_{5}\}\) of \(\mathcal{H}\). The set of squared vertices is one of the \(O(nm^{2})\) minimal resolving sets not associated with any minimal transversal.

We start with easy observations.

**Lemma 3.2**.: _Let \(S\) be a minimal resolving set of \(G\). Then, \(S\) intersects each of \(\{u_{i},u^{\prime}_{i}\}\) \((i\in\{1,\ldots,\log n+1\})\) and \(\{w_{j},w^{\prime}_{j}\}\) \((j\in\{1,\ldots,\log m+1\})\) on one vertex._

Proof.: First, note that each of the sets \(\{u_{i},u^{\prime}_{i}\}\) \((i\in\{1,\ldots,\log n+1\})\) and \(\{w_{j},w^{\prime}_{j}\}\) \((j\in\{1,\ldots,\log m+1\})\) defines a distinct pair of (false or not) twin vertices in \(G\). As two (false or not) twins share the same distances to every other vertex in the graph, it is easily seen that any resolving set must intersect them in order to distinguish them, and that it intersects them on precisely one element whenever it is minimal. 
In the following, let us consider arbitrary \[X\in\left\{\{x_{1},\ldots,x_{\log n+1}\}:(x_{1},\ldots,x_{\log n +1})\in\prod_{i=1}^{\log n+1}\{u_{i},u^{\prime}_{i}\}\ \right\},\] \[Y\in\left\{\{y_{1},\ldots,y_{\log m+1}\}:(y_{1},\ldots,y_{\log m +1})\in\prod_{j=1}^{\log m+1}\{w_{j},w^{\prime}_{j}\}\right\},\] and note \(\mathcal{Z}\) the set of all possible unions \(Z:=X\cup Y\). Note that there are \(4nm\) possible choices for \(Z\), and that by Lemma 3.2, any resolving set contains one such \(Z\) as a subset. We characterize the pairs of vertices of \(G\) that are distinguished by these sets. **Lemma 3.3**.: _Let \(Z\in\mathcal{Z}\) and let \(P\) be the set of all pairs \(\{e_{i},e^{\prime}_{i}\}\) with \(i\in\{1,\ldots,m\}\). Then, \(Z\) distinguishes a pair \((a,b)\) of distinct vertices if and only if \((a,b)\notin P\)._ Proof.: We consider cases depending on the nature of each pair \(a,b\) of distinct vertices in \(G\) to show that they are distinguished by \(Z\). Clearly if one of \(a\) or \(b\) belongs to \(Z\), then the pair is distinguished. We will thus assume \(a\) and \(b\) to be disjoint from \(Z\) in the rest of the case analysis. We first consider \(a\in U\). Then, \(a\in\{u_{i},u^{\prime}_{i}\}\) for some \(i\in\{1,\ldots,\log n+1\}\). As by assumption \(a\not\in Z\), \(a\neq x_{i}\). Then, \(\mathsf{dist}(a,x_{i})=2\), while \(\mathsf{dist}(b,x_{i})=1\) for any \(b\) in \(U\) or \(H\cup H^{\prime}\). If \(b\) belongs to \(V\) or \(W\), then there is some \(y_{j}\in Y\) such that \(\mathsf{dist}(b,y_{j})\leq 1\) and \(\mathsf{dist}(a,y_{j})\geq 2\). Hence, the pair \(a,b\) is distinguished in that case. We now consider \(a\in W\). Then, \(a\in\{w_{j},w^{\prime}_{j}\}\) for some \(j\in\{1,\ldots,\log m+1\}\) and as by assumption \(a\not\in Z\), \(a\neq y_{j}\). If \(b\) belongs to \(W\), then \(\mathsf{dist}(a,y_{j})=1\) and \(\mathsf{dist}(b,y_{j})\geq 2\). 
If \(b\) belongs to \(H\cup H^{\prime}\), then \(\mathsf{dist}(a,x_{i})\geq 2\) and \(\mathsf{dist}(b,x_{i})=1\) for some \(x_{i}\in X\). The same holds when \(b\) belongs to \(V\) as every element \(v\in V\) is adjacent to some \(x_{i}\) by the nature of the binary coding between \(U\) and \(V\). The case \(b\in U\) was handled above. We conclude that \(a,b\) is distinguished. Let us now assume that \(a\in V\) and \(b\in H\cup H^{\prime}\). Recall that, by the nature of the binary coding between \(U\) and \(V\), for every such \(a\), there exists \(x_{i}\in X\) such that \(\mathsf{dist}(a,x_{i})\geq 2\). Now, since \(\mathsf{dist}(b,x_{i})=1\), we get that \(a,b\) is distinguished by such an \(x_{i}\) and the case follows. We are left with \(\{a,b\}\) being a subset of \(V\) or \(H\cup H^{\prime}\). In each of these cases, \(a\) and \(b\) have distinct adjacencies with respect to \(U\) or \(W\) as their indices within \(V\) or \(H\cup H^{\prime}\) are distinct. In particular, this is true when \(a\in H\) and \(b\in H^{\prime}\) or vice versa since \(\{a,b\}\notin P\). Then, there exists \(z\in Z\) such that \(\mathsf{dist}(a,z)\neq\mathsf{dist}(b,z)\), and hence, \(a,b\) is distinguished, concluding the case and the proof. Since by Lemma 3.2 every minimal resolving set contains a choice of \(Z\) as above, we get that the non-trivial part of minimal resolving sets in \(G\) is dedicated to distinguishing pairs in \(P\). We characterize these non-trivial parts in the following. **Lemma 3.4**.: _If \(S\) is a minimal resolving set of \(G\) such that \(S\cap(H\cup H^{\prime})\neq\emptyset\), then \(S=Z\cup\{e\}\) for some \(Z\in\mathcal{Z}\) and \(e\in H\cup H^{\prime}\)._ Proof.: Recall that, by Lemma 3.2 there exists \(Z\in\mathcal{Z}\) such that \(S\cap(U\cup W)=Z\). By Lemma 3.3, only pairs \(\{a,b\}\in P\) are not distinguished by \(Z\). 
Since there is no edge between \(H\) and \(H^{\prime}\), and as these sets are cliques, picking \(e\) in any of these sets will satisfy \(\mathsf{dist}(a,e)\neq\mathsf{dist}(b,e)\). Hence, \(Z\cup\{e\}\) is a resolving set for every \(e\in H\cup H^{\prime}\), and the lemma follows by minimality. **Lemma 3.5**.: _If \(S\) is a minimal resolving set of \(G\) such that \(S\cap(H\cup H^{\prime})=\emptyset\), then \(S=Z\cup T\) for some \(Z\in\mathcal{Z}\) and some minimal transversal \(T\) of \(\mathcal{H}\)._ Proof.: Let \(\{e_{j},e^{\prime}_{j}\}\in P\). As by assumption \(S\cap(H\cup H^{\prime})=\emptyset\), we get from Lemma 3.2 that \(S=Z\cup T\) for some \(T\subseteq V\). Now, for \(v\in V\) to distinguish \(e_{j}\) from \(e^{\prime}_{j}\), it must be that \(v\) is non-adjacent to \(e_{j}\) since \(v\) is complete to \(H^{\prime}\). By construction, we deduce that \(v\in E_{j}\) in that case. Since by Lemma 3.3 every pair in \(P\) needs to be distinguished by \(T\), we derive that \(T\) is a transversal of \(\mathcal{H}\). The minimality of \(S\) implies that, for every \(v\in V\), there exists at least one pair in \(P\) that is distinguished by \(v\) but not by \(T\setminus\{v\}\). Hence, \(T\) is a minimal transversal of \(\mathcal{H}\). **Lemma 3.6**.: _If \(T\) is a minimal transversal of \(\mathcal{H}\), then \(Z\cup T\) is a minimal resolving set of \(G\) for any \(Z\in\mathcal{Z}\)._ Proof.: Since \(T\) is a transversal of \(\mathcal{H}\), every pair of \(P\) is distinguished by a vertex of \(T\). By Lemma 3.2 and Lemma 3.3, we conclude that \(Z\cup T\) is a resolving set. It is minimal as, for every \(v_{i}\in T\), there is some \(E_{j}\) in \(\mathcal{H}\) such that \((T\setminus\{v_{i}\})\cap E_{j}=\emptyset\), and hence, the pair \(\{e_{j},e^{\prime}_{j}\}\) is not distinguished by \(T\setminus\{v_{i}\}\), and by Lemma 3.3 this pair is not distinguished by \((Z\cup T)\setminus\{v_{i}\}\). 
Note that to every \(T\in Tr(\mathcal{H})\) corresponds \(4nm\) distinct minimal resolving sets in \(G\) obtained by extending \(T\) with every possible \(Z\in\mathcal{Z}\). We show that our reduction still preserves polynomial delay at the cost of (potentially exponential) space using a folklore trick on regularizing the outputs. **Theorem 3.7**.: _There is a polynomial-delay algorithm for Trans-Enum whenever there is one for MinResolving._ Proof.: Let \(\mathsf{A}\) be an algorithm for MinResolving running with polynomial delay \(f(n)\) for some function \(f:\mathbb{N}\to\mathbb{N}\), where \(n\) is the number of vertices in \(G\). We first describe an incremental-polynomial time algorithm \(\mathsf{B}\) for Trans-Enum generating the \(i^{\text{th}}\) solution in \(O(i\cdot(nm^{2}\cdot f(n)))\) time. We start by constructing \(G\) as above. Clearly, this can be done in polynomial time in \(n+m\). Then, we simulate \(\mathsf{A}\) on \(G\). Each time \(\mathsf{A}\) produces a set of the form \(S=Z\cup T\) with \(|T|\geq 2\), we check whether \(T\) has already been obtained before by keeping every such \(T\) in memory, and output it as a solution for Trans-Enum if not. This concludes the description of \(\mathsf{B}\). Its correctness follows from Lemmas 3.4, 3.5, and 3.6. Let us analyze the complexity of \(\mathsf{B}\). By Lemma 3.4, \(\mathsf{A}\) generates at most \(nm^{2}\) solutions in total time \(O(nm^{2}\cdot f(n))\) before generating a first solution of the form \(Z\cup T\) with \(T\in Tr(\mathcal{H})\). Hence, the first solution of \(\mathsf{B}\) is obtained in \(O(nm^{2}\cdot f(n))\) time, as required. Suppose now that \(\mathsf{B}\) has produced \(i\) solutions \(T_{1},\ldots,T_{i}\in Tr(\mathcal{H})\) in \(O(i\cdot(nm^{2}\cdot f(n)))\) time. 
By Lemmas 3.4 and 3.5, whenever \(\mathsf{B}\) produces the \((i+1)^{\text{th}}\) solution for Trans-Enum, the simulation of \(\mathsf{A}\) has generated at most \(i\cdot(4nm-1)+nm^{2}\) solutions of the form \(Z\cup\{e\}\), \(e\in H\cup H^{\prime}\) or \(Z\cup T\) for \(T\in\{T_{1},\ldots,T_{i}\}\). This takes \(O\big{(}(i\cdot(4nm-1)+nm^{2})\cdot f(n)\big{)}\) time by assumption, after which \(\mathsf{B}\) produces the next solution. Hence, in total, \(\mathsf{B}\) has spent \(O(i\cdot 4nm\cdot f(n)+nm^{2}\cdot f(n))\) time outputting the \((i+1)^{\text{th}}\) solution of Trans-Enum as desired. Note that in the incremental time of \(\mathsf{B}\), the dependence on \(i\) is linear. Then, using a folklore trick on regularizing the delay of such kinds of algorithms (see, e.g., [11, Proposition 3]), we can regularize our algorithm \(\mathsf{B}\) to polynomial-delay by keeping every new set \(T\) in a queue, and pulling a new set from the queue every \(nm^{2}\cdot f(n)\) steps. This concludes the proof. We note that the space needed for the reduction of Theorem 3.7 to hold is potentially exponential, as every obtained minimal transversal is stored in a queue. However, it can be seen using another folklore trick (see, e.g., [1, Section 3.3]) on checking whether solutions have been already obtained before by running the same algorithm on the same number of steps minus one, that the reduction can be made to preserve incremental-polynomial time and polynomial space at the cost of a worse dependence on the number of solutions. We end this section by dealing with MinStrongResolving, and argue that a reduction from [1] implies the following theorem. **Theorem 3.8**.: MinStrongResolving _can be solved with polynomial delay._ Proof.: In [1, Theorem 2.1] it was proven that, given any graph \(G\), another graph \(G^{\prime}\) such that the vertex covers of \(G^{\prime}\) are exactly the strong resolving sets of \(G\) can be constructed in polynomial time. 
Furthermore, the size of the obtained graph is polynomial in the size of \(G\), namely it satisfies \(V(G^{\prime})=V(G)\). Since the vertex covers of a graph are exactly the complements of its independent sets, we deduce a polynomial-delay algorithm for MinStrongResolving using the algorithm of Tsukiyama et al. [12].

## 4 Geodetic sets

In this section, we prove that Trans-Enum and MinGeodetic on split graphs are equivalent, and show that our reductions preserve polynomial delay. As for the general case, we show it to be a particular case of enumerating all the minimal flats of the graphic matroid associated to \(K_{n}\) that are transversals of a given \(n\)-vertex hypergraph, whose complexity status is unsettled to date. We first deal with the reduction from Trans-Enum. Let \(\mathcal{H}\) be a hypergraph on vertex set \(\{v_{1},\ldots,v_{n}\}\) and edge set \(\{E_{1},\ldots,E_{m}\}\). We furthermore assume that \(n,m\geq 1\) and that no vertex of \(\mathcal{H}\) appears in every edge. Note that these assumptions can be made without loss of generality. In particular, if a vertex \(v\) appears in every edge, then \(Tr(\mathcal{H})\) consists of \(\{v\}\) and the minimal transversals of \(\mathcal{H}^{\prime}:=\{E\setminus\{v\}:E\in\mathcal{H}\}\), and solving Trans-Enum on \(\mathcal{H}\) is equivalent to solving it on \(\mathcal{H}^{\prime}\); thus, we can recursively remove all such vertices. We describe the construction of a split graph \(G\) on \(O(n+m)\) vertices and \(O(n^{2}m^{2})\) edges whose set of minimal geodetic sets is partitioned into two families, where the first has size \(O(m)\) and the second is in bijection with the set of minimal transversals of \(\mathcal{H}\). See Figure 2 for an illustration of the construction.
We start from the non-incidence bipartite graph of \(\mathcal{H}\) with bipartition \(V:=\{v_{1},\ldots,v_{n}\}\) and \(H:=\{e_{1},\ldots,e_{m}\}\), to which we add a set of vertices \(U:=\{u_{1},\ldots,u_{m}\}\) with \(u_{j}\) only adjacent to \(e_{j}\) for every \(1\leq j\leq m\). We then complete \(U\cup V\) into a clique, add a vertex \(e^{*}\) adjacent to every vertex in \(V\), and another vertex \(u^{*}\) adjacent to every other vertex (including the vertex \(e^{*}\)) in \(G\). This completes the construction. We note that \(G\) is a split graph with clique \(K:=U\cup V\cup\{u^{*}\}\) and independent set \(I:=H\cup\{e^{*}\}\). Observe that since \(u^{*}\) is a universal vertex of \(G\), the diameter of \(G\) is at most \(2\), and we may reformulate \(x\) being on a shortest \(a\)-\(b\) path with \(a\neq x\neq b\) as \(x\) being the middle vertex of a \(P_{3}\) in \(G\) (as an induced subgraph). We derive easy observations. **Lemma 4.1**.: _The set \(I\) is contained in every geodetic set of \(G\)._ Proof.: This is a direct consequence of the fact that no vertex in \(I\) is the middle vertex of a \(P_{3}\) in \(G\). **Lemma 4.2**.: _Only the elements in \(U\) are not distinguished by \(I\)._ Proof.: Clearly, all the elements of \(I\) are self-distinguished. Recall that \(\mathcal{H}\) is assumed to contain at least one edge, and no vertex of \(\mathcal{H}\) appears in every edge. Thus, if \(x\in V\cup\{u^{*}\}\), then there exists \(e\in H\) adjacent to \(x\), and the pair \(e,e^{*}\) distinguishes it. Now, since no \(P_{3}\) having both endpoints in \(I\) has a vertex of \(U\) as its middle vertex, we conclude that only the vertices in \(U\) are not distinguished by \(I\), as desired. **Lemma 4.3**.: _The set \(I\cup\{u\}\) is a minimal geodetic set of \(G\) for every \(u\in U\)._ Proof.: This follows by Lemmas 4.1 and 4.2 by observing that, for any \(u^{\prime}\in U\) with \(u\neq u^{\prime}\), we have a \(P_{3}\)\(euu^{\prime}\) for \(e\) the unique neighbor of \(u\) in \(H\).
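For concreteness, the split graph \(G\) can be built mechanically from \(\mathcal{H}\); the following is a minimal sketch (function and label names are ours, not the paper's), instantiated on the hypergraph of Figure 2:

```python
from itertools import combinations

def build_split_graph(n, edges):
    """Split graph G for the reduction: the non-incidence bipartite graph of
    the hypergraph (vertices 1..n, hyperedges `edges`), plus the vertices U,
    the vertex e* complete to V, and the universal vertex u*."""
    m = len(edges)
    V = [f"v{i}" for i in range(1, n + 1)]
    H = [f"e{j}" for j in range(1, m + 1)]
    U = [f"u{j}" for j in range(1, m + 1)]
    adj = {x: set() for x in V + H + U + ["e*", "u*"]}
    def add(a, b):
        adj[a].add(b); adj[b].add(a)
    for i in range(1, n + 1):            # v_i ~ e_j iff v_i is NOT in E_j
        for j, E in enumerate(edges, start=1):
            if i not in E:
                add(f"v{i}", f"e{j}")
    for j in range(1, m + 1):            # u_j is only adjacent to e_j in H
        add(f"u{j}", f"e{j}")
    for a, b in combinations(U + V, 2):  # complete the union of U and V into a clique
        add(a, b)
    for v in V:                          # e* is adjacent to every vertex of V
        add("e*", v)
    for x in list(adj):                  # u* is adjacent to every other vertex
        if x != "u*":
            add("u*", x)
    return adj

G = build_split_graph(6, [{1, 2}, {2, 3, 4}, {3, 5}, {4, 5, 6}])  # Figure 2
I = {"e1", "e2", "e3", "e4", "e*"}
assert all(b not in G[a] for a in I for b in I if a != b)  # I is independent
```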
We may now characterize minimal geodetic sets that are of interest as far as the transversality of \(\mathcal{H}\) is concerned.

Figure 2: Illustration of the reduction from Trans-Enum to MinGeodetic with \(\mathcal{H}\) consisting of \(E_{1}=\{v_{1},v_{2}\}\), \(E_{2}=\{v_{2},v_{3},v_{4}\}\), \(E_{3}=\{v_{3},v_{5}\}\), and \(E_{4}=\{v_{4},v_{5},v_{6}\}\). Dashed lines represent non-edges and the bold lines incident to \(u^{*}\) and \(e^{*}\) mean these two vertices are complete to \(I\) and \(V\), respectively. For the sake of readability, we do not represent the edges of the clique \(K\). The square vertices belong to any geodetic set. The set of white filled vertices is a minimal geodetic set obtained from the minimal transversal \(\{v_{1},v_{3},v_{5}\}\) of \(\mathcal{H}\).

**Lemma 4.4**.: _Let \(S\) be a minimal geodetic set of \(G\) such that \(S\cap U=\emptyset\). Then, \(S\cap K\) is a minimal transversal of \(\mathcal{H}\)._ Proof.: By Lemmas 4.1 and 4.2, we have that \(I\subseteq S\), and that only the elements in \(U\) are not distinguished by \(I\). Let \(T:=S\cap K\). Since \(S\cap U=\emptyset\) and as \(u^{*}\) is adjacent to every vertex in the graph, we derive \(T\subseteq V\). Now, in order to distinguish \(u_{j}\) (\(j\in\{1,\ldots,m\}\)) it must be that a vertex of \(T\) is not adjacent to \(e_{j}\), as every \(P_{3}\) having \(u_{j}\) as a middle vertex contains \(e_{j}\). Hence, \(T\) defines a transversal of \(\mathcal{H}\) whenever it distinguishes every such \(u_{j}\). If it were not minimal, then for some vertex \(v\), the set \(T\setminus\{v\}\) would still intersect every edge of \(\mathcal{H}\), and thus would still distinguish every \(u\in U\), a contradiction. **Lemma 4.5**.: _If \(T\) is a minimal transversal of \(\mathcal{H}\), then \(T\cup I\) is a minimal geodetic set of \(G\)._ Proof.: By Lemma 4.2, \(I\) distinguishes every vertex of \(G\) except for those in \(U\).
Now, since \(T\) is a transversal, for every \(E_{j}\in\mathcal{H}\), there exists \(v\in T\) such that \(v\in E_{j}\), and it follows that \(v\) is not adjacent to \(e_{j}\) and \(e_{j}u_{j}v\) defines a \(P_{3}\). Thus, every \(u_{j}\) is distinguished and we conclude that \(S:=T\cup I\) is a geodetic set. Let us assume that it is not minimal and let \(x\in S\) such that \(S\setminus\{x\}\) is still a geodetic set. Then, for every \(u_{j}\) (\(j\in\{1,\ldots,m\}\)), there exists a pair \(e_{j},v\) with \(v\in S\setminus\{x\}\) such that \(e_{j}u_{j}v\) forms a \(P_{3}\). Hence, for every \(E_{j}\in\mathcal{H}\), there exists \(v\in T\setminus\{x\}\) such that \(v\in E_{j}\), a contradiction to the minimality of \(T\). **Theorem 4.6**.: _There is a polynomial-delay algorithm for Trans-Enum whenever there is one for_ MinGeodetic _on split graphs._ Proof.: This is a consequence of the fact that the graph \(G\) can be constructed in polynomial time in the size of \(\mathcal{H}\), has polynomial size, and that it contains \(m\) minimal geodetic sets \(I\cup\{u\}\) with \(u\in U\). All the other minimal geodetic sets of \(G\) are of the form \(I\cup T\) where \(T\) is a minimal transversal of \(\mathcal{H}\). Hence, a polynomial-delay algorithm for MinGeodetic would take at most \(m\) times its delay between two consecutive minimal geodetic sets of the form \(I\cup T\). We now argue that a polynomial-delay algorithm for Trans-Enum yields one for MinGeodetic on split graphs. Let \(G\) be a split graph of bipartition \((K,I)\) with \(K\) the clique and \(I\) the independent set. Among all such partitions we consider one that maximizes the size of \(I\) and we may furthermore assume that \(|I|\geq 2\), as otherwise the instance is trivial. As in Lemma 4.1, let us first note that \(I\subseteq S\) for any geodetic set \(S\) of \(G\) as the neighborhood of every vertex \(x\in I\) is a clique. 
By the maximality of \(I\), every vertex \(v\in K\) that is not distinguished by a pair of \(I\) has precisely one neighbor \(u\in I\), and every other vertex of \(I\) is at distance two from \(u\). Indeed, if this were not the case, then \(v\) would be distinguished by the pair \(u,w\) for \(w\) a vertex of \(I\) at distance three from \(u\). Then, to distinguish \(v\) we must either pick \(v\) or intersect \(K\setminus N(u)\), the non-neighborhood of \(u\) in \(K\). We identify all such vertices \(v_{1},\ldots,v_{k}\) and their only neighbors \(u_{1},\ldots,u_{k}\) in \(I\) to construct a hypergraph \(\mathcal{H}\) on vertex set \(K\) with an edge \(E_{i}=(K\setminus N(u_{i}))\cup\{v_{i}\}\) for every \(1\leq i\leq k\). We note that possibly \(u_{i}=u_{j}\) for distinct \(i,j\in\{1,\ldots,k\}\), which is of no concern in the following. Clearly, the construction can be achieved in polynomial time in the size of \(G\), and by the above remarks we obtain a bijection between the minimal transversals of \(\mathcal{H}\) and the minimal geodetic sets of \(G\). This leads to the next theorem. **Theorem 4.7**.: _There is a polynomial-delay algorithm for MinGeodetic on split graphs whenever there is one for Trans-Enum._ We now end the section by showing that the general case of MinGeodetic reduces to enumerating all the minimal flats of the graphic matroid associated to the clique \(K_{n}\) that are transversals of a given \(n\)-vertex hypergraph. Let us start with the construction. We consider a graph \(G\) and construct a hypergraph \(\mathcal{H}\) whose vertices are (unordered) pairs of distinct vertices of \(G\), denoted \(uv\) instead of \(\{u,v\}\) for convenience, and where every vertex \(v\) of \(G\) gives rise to an edge \(E_{v}:=\{xy\,:\,v\text{ is on a shortest $x$-$y$ path}\}\) in \(\mathcal{H}\). To avoid ambiguity, we shall refer to the vertices of \(\mathcal{H}\) as _nodes_, and use variables \(r,s,t\) for nodes in the following.
Then, \(\mathcal{H}\) has \(O(n^{2})\) nodes and \(O(n)\) edges. Clearly, every transversal \(T\) of \(\mathcal{H}\) induces a geodetic set \(\bigcup_{t\in T}t\) of \(G\), as every \(E_{v}\) is hit by some node \(t\in T\), and such a pair of vertices distinguishes \(v\). Unfortunately, minimal transversals of \(\mathcal{H}\) do not necessarily define minimal geodetic sets of \(G\) in that way, and not every minimal geodetic set of \(G\) defines a minimal transversal of \(\mathcal{H}\) by considering all the pairs of elements contained in it. Consider for example the graph \(G\) obtained from a triangle \(abc\) by adding a pendant vertex \(d\) adjacent to \(c\). Then, \(\{a,b,d\}\) is the only minimal geodetic set of \(G\), while \(\{ab,ad,bd\}\) is easily verified to be a transversal of \(\mathcal{H}\) that is _not_ minimal as it contains \(\{ad,bd\}\) as a subset. On the other hand, \(\{ac,bc,cd\}\) can be checked to be a minimal transversal of \(\mathcal{H}\), while \(\{a,b,c,d\}\) is not. We nevertheless show that consistent sets that are transversals of \(\mathcal{H}\) are in bijection with the geodetic sets of \(G\) for an appropriate notion of consistency. In the following, we call a subset \(U\) of nodes of \(\mathcal{H}\) _consistent_ if, whenever two distinct nodes \(r,s\in U\) are such that \(r\cap s\neq\emptyset\), then the unique other node \(t\) such that \(r\cup s=s\cup t=r\cup t\) is also part of \(U\). As an example, a subset \(U\) containing \(ab\) and \(bc\) but not \(ac\) is not consistent, while the set \(U=\{ab,bc,ac,bd,cd\}\) or the set of all nodes of \(\mathcal{H}\) are consistent. More generally, the family of all pairs of a given set is consistent. The aforementioned correspondence is the following.
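Before stating it, the pair hypergraph and the example above can be re-checked mechanically. Below is a brute-force sketch (helper names are ours, not the paper's) that rebuilds \(\mathcal{H}\) from a graph via the standard criterion that \(v\) lies on a shortest \(x\)-\(y\) path iff \(\mathsf{dist}(x,v)+\mathsf{dist}(v,y)=\mathsf{dist}(x,y)\), with endpoints counted as lying on the path:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    # Single-source BFS distances in an unweighted graph.
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def pair_hypergraph(adj):
    # E_v = { {x, y} : v lies on a shortest x-y path }.
    d = {v: bfs_dist(adj, v) for v in adj}
    return {v: {frozenset((x, y)) for x, y in combinations(adj, 2)
                if d[x][v] + d[v][y] == d[x][y]}
            for v in adj}

# Triangle a, b, c with a pendant vertex d attached to c.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
H = pair_hypergraph(adj)
ad, bd = frozenset('ad'), frozenset('bd')
# {ad, bd} already hits every E_v, so {ab, ad, bd} is not minimal.
print(all(E & {ad, bd} for E in H.values()))  # True
```

Since endpoints lie on their own shortest paths, each \(E_{v}\) contains in particular every pair involving \(v\).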
**Lemma 4.8**.: _There is a bijection between the minimal geodetic sets of \(G\) and the minimal consistent subsets of nodes that are transversals of \(\mathcal{H}\)._ Proof.: Let \(S\) be a minimal geodetic set of \(G\) and consider the set \(T\) of all pairs of vertices in \(S\). Since every vertex \(v\) in \(G\) is distinguished by a pair of elements in \(S\), every edge \(E_{v}\) in \(\mathcal{H}\) is hit by a pair of \(T\). As \(T\) is consistent by construction, we conclude that it is a consistent transversal of \(\mathcal{H}\). Let us assume toward a contradiction that it is not minimal with that property, and let \(T^{\prime}\) be a minimal consistent proper subset of \(T\) that is a transversal of \(\mathcal{H}\). Let \(S^{\prime}\) be the union of all pairs in \(T^{\prime}\). As \(T^{\prime}\subset T\) and both \(T\) and \(T^{\prime}\) are consistent, \(S^{\prime}\subset S\). Then, by the minimality of \(S\), there must be a vertex \(v\) in \(G\) that is not distinguished by any pair of \(S^{\prime}\). As \(T^{\prime}\) consists only of pairs of elements of \(S\), we conclude that \(E_{v}\) is not intersected by \(T^{\prime}\), and hence, that it is not a transversal, a contradiction. Let \(T\) be a minimal consistent transversal of \(\mathcal{H}\) and consider the union \(S\) of all pairs in \(T\). Since every edge \(E_{v}\) in \(\mathcal{H}\) is hit by a pair \(t\) in \(T\), we have that each vertex \(v\) in \(G\) is distinguished by a pair of vertices in \(S\). Thus, \(S\) is a geodetic set of \(G\). Let us assume that it is not minimal and let \(x\) be such that \(S^{\prime}:=S\setminus\{x\}\) is a geodetic set. Consider the family \(T^{\prime}\) of pairs of \(S^{\prime}\). Since \(S^{\prime}\) is a geodetic set, every edge \(E_{v}\) in \(\mathcal{H}\) is hit by a pair in \(T^{\prime}\).
However, by the construction of \(T^{\prime}\) and since \(T\) is consistent, we derive that \(T^{\prime}\subset T\), which contradicts the minimality of \(T\). Let us now discuss the implications of Lemma 4.8. We observe that the consistency of a subset of vertices of \(\mathcal{H}\) as defined above may as well be expressed as satisfying a set of implications \(\Sigma:=\{r,s\to t:r\cup s=s\cup t=r\cup t\}\) in the sense that any subset containing the premise of an implication in \(\Sigma\) must contain its conclusion. It is well known that consistent sets in that context are the closed sets of a lattice [1, 2]. In the very particular case of the rules defined above, the lattice is in fact known to be the lattice of flats of the graphic matroid associated to the clique \(K_{n}\), or equivalently, to be the lattice of partitions of a finite \(n\)-element set [1]. Consequently, listing consistent transversals in our context may be reformulated as the enumeration of the minimal flats of the matroid associated to the clique \(K_{n}\) that are transversals of \(\mathcal{H}\). To the best of our knowledge, no output quasi-polynomial time algorithm is known for that problem. It should however be noted that in the more general setting where \(\Sigma\) is allowed to contain any implications with premises of size at most two, the enumeration is intractable as it generalizes the dualization in lattices given by implicational bases of size at most two [1].

## 5 Graphs with no long induced paths

In the previous sections, we showed that MinResolving and MinGeodetic are tough problems as they are at least as hard as Trans-Enum, arguably one of the most challenging open problems in algorithmic enumeration to date. Furthermore, these reductions hold for graphs with no long induced paths. Namely, it can be easily checked that Theorem 3.7 holds for \(P_{6}\)-free graphs, while Theorem 4.6 holds for \(P_{5}\)-free graphs.
This motivates the study of these problems on instances that do not contain long induced paths. We show MinGeodetic and MinResolving to be tractable on \(P_{4}\)-free graphs using a variant of Courcelle's theorem for enumeration and clique-width [13]. We assume the reader to be familiar with MSO logic and clique-width, and refer the reader to [1] for an introduction. **Theorem 5.1**.: _Both MinGeodetic and MinResolving restricted to \(P_{4}\)-free graphs admit linear-delay algorithms with a preprocessing using time \(O(n\log n)\)._ Proof.: We argue that our theorem is a consequence of the meta-theorem from [13, Corollary 2] stating that:

* given a monadic second-order formula \(\phi(X_{1},\ldots,X_{k})\), and
* a clique-expression of width \(p\) expressing a graph \(G\),

we can enumerate in linear delay all the tuples \((A_{1},\ldots,A_{k})\) of subsets of \(V(G)\) such that \(G\models\phi(A_{1},\ldots,A_{k})\) after a preprocessing using time \(O(n\log n)\). Now, observe that, for every \(d\in\mathbb{N}\), there exists a first-order formula \(\phi_{d}(x,y)\) of size \(O(d^{2})\) testing whether \(\mathsf{dist}(x,y)=d\) by testing whether there exists a path of length \(d\) between \(x\) and \(y\) and none of length at most \(d-1\). Hence, for every \(\Delta\in\mathbb{N}\), the monadic second-order formula \(\psi(X)=\psi^{\prime}(X)\wedge(\forall X^{\prime}\subset X\,\neg\psi^{\prime}(X^{\prime}))\), where \[\psi^{\prime}(X):=\forall y,z\,(y\neq z)\implies\exists x\in X\bigvee_{i\in \{0,\ldots,\Delta\}}\phi_{i}(x,y)\wedge\neg\phi_{i}(x,z)\] has size \(O(\Delta^{2})\), and, for any graph \(G\) whose connected components have diameter at most \(\Delta\) and for every \(S\subseteq V(G)\), we have \(G\models\psi(S)\) if and only if \(S\) is a minimal resolving set of \(G\).
We obtain a similar monadic second-order formula for minimal geodetic sets by replacing \(\psi^{\prime}(X)\) by the following: \[\forall y\,\exists x,z\in X\bigvee_{i\in\{0,\ldots,\Delta\}}\phi_{i}(x,z) \wedge\phi_{i}(x,y,z),\] where \(\phi_{d}(x,y,z)\) of size \(O(d)\) tests whether \(y\) is on a path of length \(d\) between \(x\) and \(z\). Hence, the meta-theorem from [13, Corollary 2] leads to the following claim. **Claim 5.2**.: _Given a clique-width expression of bounded width that defines a graph \(G\) whose connected components have bounded diameter, we can solve MinGeodetic and MinResolving with linear delay after a preprocessing using time \(O(n\log n)\)._ Now, we observe that \(P_{4}\)-free graphs--a.k.a. cographs--have clique-width at most \(2\) [13], and that a clique-expression of width at most \(2\) can be computed in linear time [12]. Moreover, every connected component of a \(P_{4}\)-free graph has diameter at most \(2\). Hence, this theorem is a direct consequence of Claim 5.2. Interestingly, Theorem 5.1 outlines a dichotomy for MinGeodetic in the sense that the problem is tractable for \(P_{k}\)-free graphs, \(k\leq 4\), and that it is harder than Trans-Enum otherwise. This relates to similar behaviors and a line of research that emerged in [1] on classifying forbidden induced subgraphs for which the enumeration of minimal dominating sets is tractable, or harder than Trans-Enum.

## 6 Perspectives for further research

We investigated a number of problems related to the metric dimension that connect to problems of huge interest in algorithmic enumeration. Except for MinStrongResolving that can be solved with polynomial delay on general graphs, we showed that MinResolving is equivalent to Trans-Enum and that the same holds for MinGeodetic when restricted to split graphs. Moreover, the general case of MinGeodetic may be seen as an intriguing variant of enumerating the flats of a matroid for which the complexity status is unsettled.
The results presented in this work showed that the difficulty of MinResolving and MinGeodetic is tightly related to the maximum length of an induced path in the graph at hand. This motivates the study of these problems on \(P_{k}\)-free graphs for small values of \(k\). Except for MinGeodetic, which we completely characterized with respect to Trans-Enum, the case of MinResolving for \(k=5\) is yet to be classified. Subcases of interest include split and co-bipartite graphs, which define proper subclasses of \(P_{5}\)-free graphs. We note that the case of co-bipartite graphs is also open for MinGeodetic. For these graph classes, we were not able to devise total-polynomial time algorithms; we however note that the extension problem for MinGeodetic is hard on co-bipartite graphs, which suggests that the generation is non-trivial in that case, a point that is discussed in Appendix A. Other open directions are to know whether MinGeodetic admits a total quasi-polynomial time algorithm, or to know how it relates to problems that are known to be harder than Trans-Enum yet do admit sub-exponential algorithms. Problems of interest include the dualization of products of posets [10] or the dualization in distributive lattices [10]. Concerning the candidates for Question 1.1, we can also mention minimal connected dominating sets for which the question is open [12, 13]. This case was however conjectured not to be equivalent to Trans-Enum by Kante at the 2015 Lorentz Workshop on enumeration algorithms [1].

Acknowledgements. The second author is thankful to Arnaud Mary for pointing out the flaw in the arXiv preprint [14], and to Simon Vilmin for extensive discussions on the links between minimal geodetic sets enumeration and the enumeration of the flats of a matroid.
2309.10376
Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks
Graph Neural Networks (GNNs) have become popular in Graph Representation Learning (GRL). One fundamental application is few-shot node classification. Most existing methods follow the meta learning paradigm, showing the ability of fast generalization to few-shot tasks. However, recent works indicate that graph contrastive learning combined with fine-tuning can significantly outperform meta learning methods. Despite the empirical success, there is limited understanding of the reasons behind it. In our study, we first identify two crucial advantages of contrastive learning compared to meta learning, including (1) the comprehensive utilization of graph nodes and (2) the power of graph augmentations. To integrate the strength of both contrastive learning and meta learning on the few-shot node classification tasks, we introduce a new paradigm: Contrastive Few-Shot Node Classification (COLA). Specifically, COLA employs graph augmentations to identify semantically similar nodes, which enables the construction of meta-tasks without the need for label information. Therefore, COLA can utilize all nodes to construct meta-tasks, further reducing the risk of overfitting. Through extensive experiments, we validate the essentiality of each component in our design and demonstrate that COLA achieves new state-of-the-art on all tasks.
Hao Liu, Jiarui Feng, Lecheng Kong, Dacheng Tao, Yixin Chen, Muhan Zhang
2023-09-19T07:24:10Z
http://arxiv.org/abs/2309.10376v1
# Graph Contrastive Learning Meets Graph Meta Learning: A Unified Method for Few-shot Node Tasks

###### Abstract

Graph Neural Networks (GNNs) have become popular in Graph Representation Learning (GRL). One fundamental application is few-shot node classification. Most existing methods follow the meta learning paradigm, showing the ability of fast generalization to few-shot tasks. However, recent works indicate that graph contrastive learning combined with fine-tuning can significantly outperform meta learning methods. Despite the empirical success, there is limited understanding of the reasons behind it. In our study, we first identify two crucial advantages of contrastive learning compared to meta learning, including (1) the comprehensive utilization of graph nodes and (2) the power of graph augmentations. To integrate the strength of both contrastive learning and meta learning on the few-shot node classification tasks, we introduce a new paradigm--**C**ontrastive Few-Shot Node **C**lassification (**COLA**). Specifically, COLA employs graph augmentations to identify semantically similar nodes, which enables the construction of meta-tasks without the need for label information. Therefore, COLA can utilize all nodes to construct meta-tasks, further reducing the risk of overfitting. Through extensive experiments, we validate the essentiality of each component in our design and demonstrate that COLA achieves new state-of-the-art on all tasks.

## 1 Introduction

Graph Neural Networks (GNNs) [1; 2] have emerged as the predominant encoders for Graph Representation Learning (GRL) in recent studies, with node classification being a crucial area of investigation. Most research focuses on examining GNNs in supervised or semi-supervised settings [3; 4], which rely on large amounts of annotated data.
Nevertheless, acquiring high-quality labels is challenging in many scenarios, leading to growing interest in exploring few-shot transductive node classification (FSNC), where only a few labeled samples are provided for each class. The majority of current studies on FSNC [5; 6; 7; 8; 9; 10; 11] follow the meta learning [12; 13] paradigm. Specifically, to tackle a few-shot problem with \(N\) classes and \(k\) samples per class, meta learning gains knowledge through multiple training episodes with \(N\)-way \(k\)-shot meta-tasks generated from training classes. Each meta-task consists of a support set and a query set, both sampled from nodes belonging to a fixed number (\(N\)) of classes. The objective is to develop an algorithm that can perform well on the query set by training on only a few support samples. This procedure enables the model to learn the latent distribution of tasks and thus to be easily transferred to tasks with unseen classes. Self-supervised learning (SSL) can also effectively handle downstream few-shot tasks outside graph learning domains like computer vision [14; 15; 16]. Such capability demonstrates the importance of transferable and discriminative representations [17] in few-shot learning. Observing the success of SSL in other areas, a recent study [18] on few-shot node classification used pre-trained node embeddings learned from existing Graph Contrastive Learning (GCL) methods [19; 20] to train a linear classifier for few-shot tasks. Even without label information, its best results significantly outperform the previous state-of-the-art (SOTA) supervised meta learning approaches. To understand the success behind contrastive learning (CL), we analyze and validate two critical factors contributing to contrastive learning's exceptional performance through extensive experiments. The first factor is the use of data augmentation.
It helps the model to learn discriminative embeddings with minimal redundant information from the graph, which is essential for few-shot tasks. Secondly, CL methods explicitly incorporate node embeddings from validation/test classes in the contrastive loss, reducing the likelihood of model overfitting. In contrast, meta learning relies on node labels to construct meta-tasks, and thus can only use nodes from training classes, losing much graph information. Hence, one natural question emerges: Can we leverage the advantages of contrastive learning to enhance the current meta learning framework? To address this question, we propose a new paradigm for few-shot node classification termed **C**ontrastive Few-Shot Node **C**lassification (**COLA**). Unlike original meta-tasks, which require nodes within the same class to construct support sets, COLA constructs meta-tasks without labels. Specifically, the selection of support and query sets is the core of \(N\)-way \(k\)-shot meta-task construction. We start by randomly sampling \(N\) query nodes and regard them as \(N\) different ways. The main challenge is how to find \(k\) semantically similar samples to each query node, without label information. We first use GNNs as graph encoders to get node embeddings in three augmented graphs. Leveraging the idea that similar nodes should maintain semantic similarity in perturbed graphs, the nodes that have similar embeddings to the query node across different augmented graphs are selected to construct the support set. To train an effective graph encoder, we generate the query embeddings from one augmented graph using a trainable GNN and obtain the query embeddings from another augmented graph by a momentum GNN, whose weights are the moving average of the trainable GNN's.
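The momentum encoder update mentioned above is an exponential moving average of the trainable weights; a minimal sketch follows (plain floats stand in for GNN parameters, and the coefficient `m` is an assumed hyperparameter, not a value from the paper):

```python
def momentum_update(momentum_params, online_params, m=0.99):
    # theta_momentum <- m * theta_momentum + (1 - m) * theta_online,
    # applied parameter-wise; the momentum encoder thus changes slowly.
    return [m * t + (1.0 - m) * o
            for t, o in zip(momentum_params, online_params)]

params = [1.0, -2.0]
for _ in range(3):  # the momentum weights drift toward the online weights
    params = momentum_update(params, [0.0, 0.0], m=0.5)
print(params)  # [0.125, -0.25]
```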
Our framework has several advantages: (1) We utilize the invariant information among three augmented graphs to construct semantically correct meta-tasks without label information; (2) Data augmentation allows the generation of a diverse range of meta-tasks over episodes and helps the GNN encoder learn a discriminative data representation; (3) The construction of meta-tasks enables the utilization of all nodes in training, further incorporating more graph information; (4) We take a slowly updated encoder to create a more stable support set candidate pool, which is less prone to noise compared to a rapidly updated encoder. We conduct tests on six real-world datasets, examining the necessity of each framework component. Our results demonstrate that the proposed framework outperforms SOTA approaches, highlighting its effectiveness and potential for application.

## 2 Notations and Preliminaries

We first introduce some preliminary concepts and notations. In this work, we consider an undirected attributed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{A},X)\), where \(\mathcal{V}=\{v_{1},\cdots,v_{|\mathcal{V}|}\}\) is the set of nodes and \(\mathcal{E}=\{e_{1},\cdots,e_{|\mathcal{E}|}\}\) is the set of edges. The adjacency matrix \(\mathbf{A}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) describes the graph structure, with \(\mathbf{A}_{ij}=1\) indicating an edge between nodes \(v_{i}\) and \(v_{j}\) and \(\mathbf{A}_{ij}=0\) otherwise. The feature matrix \(X\in\mathbb{R}^{|\mathcal{V}|\times d}\) contains the node features, where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) represents the feature of node \(v_{i}\) and \(d\) is the feature dimension. In our work, we focus on the node classification problem, where each node \(i\) has a label \(y_{i}\in C\) and \(C\) is the set of labels with \(|C|\) different classes.
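As a toy illustration of this notation (made-up numbers, not from the paper), an undirected attributed graph with \(|\mathcal{V}|=4\) nodes and feature dimension \(d=3\) can be written down as:

```python
import numpy as np

A = np.zeros((4, 4), dtype=int)               # adjacency matrix A
for i, j in [(0, 1), (1, 2), (2, 3), (0, 2)]:
    A[i, j] = A[j, i] = 1                     # undirected edges => symmetric A
X = np.arange(12, dtype=float).reshape(4, 3)  # feature matrix, x_i = X[i]
y = np.array([0, 0, 1, 1])                    # node labels y_i in C, |C| = 2
print(A.sum() // 2)  # 4 edges
```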
**Few-shot Node Classification.** In node classification, nodes are usually divided into train, validation, and test sets, denoted as \(X_{train}\), \(X_{val}\), and \(X_{test}\), respectively. However, unlike supervised node classification where the node labels of train/validation/test sets are sampled from the same label set \(C\), the labels of nodes in few-shot learning are sampled from non-overlapping label sets for the train/validation/test sets, denoted as \(C_{train}\), \(C_{val}\), and \(C_{test}\). Further, it holds that \(C_{train}\cap C_{test}=\emptyset\). Few-shot Learning typically deals with \(N\)-way \(k\)-shot tasks, where the objective is to classify nodes into one of \(N\) distinct classes using only \(k\) labeled samples per class. **Meta Learning.** Meta learning [12; 13] tries to solve the few-shot problems by designing a novel training strategy. The overall process of meta learning can be divided into meta-train and meta-test phases. During the meta-train phase, the model is trained to simulate the few-shot learning environment. It enables the model to quickly adapt to new few-shot tasks with limited labeled data during the meta-test phase. Specifically, at each training episode, meta learning constructs an \(N\)-way \(k\)-shot task using samples from the training set \(X_{train}\). To form an \(N\)-way \(k\)-shot task, meta learning first randomly selects a set \(C_{meta}\) with \(N\) classes from \(C_{train}\) and then generates a **support set**\(\mathcal{S}=\{(\mathbf{x}_{i},y_{i})|y_{i}\in C_{meta},i=1,\cdots,N\times k\}\) and a **query set**\(\mathcal{Q}=\{(\mathbf{x}_{i},y_{i})|y_{i}\in C_{meta},i=1,\cdots,N\times q\}\) (\(\mathcal{S}\cap\mathcal{Q}=\emptyset\)) by sampling \(k\) support and \(q\) query samples from each class in \(C_{meta}\), respectively. The objective is to train on the support set so that it can perform well on the query set.
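The episode construction just described can be sketched as follows (a minimal sketch; function and variable names are ours, not the paper's):

```python
import random

def sample_episode(labels, C_train, N, k, q, rng):
    """Sample an N-way k-shot meta-task: pick N classes from C_train, then
    k support and q query nodes per class, with S and Q kept disjoint."""
    C_meta = rng.sample(sorted(C_train), N)
    support, query = [], []
    for c in C_meta:
        nodes = [v for v, y in labels.items() if y == c]
        picked = rng.sample(nodes, k + q)   # disjoint by construction
        support += [(v, c) for v in picked[:k]]
        query += [(v, c) for v in picked[k:]]
    return support, query

labels = {v: v % 3 for v in range(30)}      # toy labels over 3 training classes
S, Q = sample_episode(labels, {0, 1, 2}, N=2, k=3, q=2, rng=random.Random(0))
print(len(S), len(Q))  # 6 4
```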
In the meta-test phase, the \(N\)-way \(k\)-shot tasks are constructed with samples in \(X_{test}\) in a similar way.

## 3 Contrastive Few-Shot Node Classification (COLA)

In this section, we first identify two critical components that contribute greatly to the success of contrastive learning on FSNC but are absent from meta learning. Next, we introduce our new paradigm COLA, which leverages the strengths of both contrastive learning and meta learning. Our key idea is to construct meta-tasks without labels, where the invariant information among three augmented graphs is utilized to construct semantically correct meta-tasks. The supervised contrastive loss [21] is used to learn the meta-tasks.

### Analysis on Success of Contrastive Learning in Few-Shot Node Classification

Although most current works on transductive FSNC follow the meta learning framework (details will be discussed in Section 4), a recent study, TLP [18], highlights the effectiveness of graph contrastive learning combined with fine-tuning. The authors conducted experiments using various existing graph contrastive learning methods and fine-tuned a linear classifier on top of the learned representation, which resulted in significant performance improvements on few-shot node classification tasks compared to SOTA supervised meta learning methods. To explain the strong performance of contrastive learning, we begin by analyzing the differences between contrastive learning and meta learning. Both techniques strive to bring the embeddings of semantically similar nodes closer and separate embeddings of semantically dissimilar ones. However, the definition of "semantically similar" differs between the two methods. Meta learning regards all node embeddings from the same class as similar and node embeddings from different classes as dissimilar. In contrast, self-supervised contrastive learning only considers the embeddings of the same node in different augmented graphs as similar.
A direct advantage of this is that contrastive learning can explicitly utilize all node embeddings in a given graph without worrying about label-leaking issues. However, meta learning can only rely on samples from the training set, which may increase the likelihood of overfitting to the training classes and limit the model's ability to transfer knowledge to test classes. Further, leveraging the graph augmentation technique is another difference between contrastive learning and meta learning, and it is already known to be effective in learning discriminative representations [17]. We conjecture that the above two differences contribute most to the success of contrastive learning in FSNC. We then conduct extensive ablation studies to validate our speculations. We present one experimental result in Figure 1 and include other results in Appendix C. The experiment is conducted on a 2-way 5-shot task from the Cora [22] dataset, and the node embeddings pre-trained by a GCL model named GRACE [23] are used to train a classifier for few-shot tasks. We control the nodes used for pretraining to be sampled from \(C_{train}\), \(C_{train}\cup C_{val}\), \(C_{test}\), and the whole graph. \(C_{train}\), \(C_{val}\), and \(C_{test}\) contain 3, 2, and 2 non-overlapping classes, respectively. We then assess the model on few-shot tasks sampled from \(C_{test}\). The results reveal several insights: although the number of nodes belonging to \(C_{train}\cup C_{val}\) far exceeds the number of nodes in \(C_{test}\), only using samples from \(C_{test}\) to pretrain achieves better results than the other two settings. This experiment validates that explicitly leveraging test-class samples during training can effectively avoid overfitting. Besides, using all nodes can maximize the utilization of graph information. Another observation is that eliminating augmentation leads to a performance decrease. Thus, the discriminative representation acquired by contrastive learning through data augmentation techniques is also crucial for few-shot tasks. From the experimental results, we can see that the explicit use of all nodes and data augmentation are indeed crucial to the performance of contrastive learning. These insights inspire us to propose a more robust meta learning framework that can effectively leverage the discriminative representation learned by contrastive learning while also benefiting from the generalization capabilities of meta learning.

Figure 1: 2-way 5-shot task on Cora using GRACE+finetune. Accuracy of four situations w/ and w/o augmentations.

### Meta-task Construction without Labels

In this section, we introduce our framework COLA; the overall framework is illustrated in Figure 2 and Algorithm 1. COLA aims to construct meta-tasks without labels, such that all nodes can be explicitly used during training. We will first introduce the process to generate three embeddings and explain how they function together to construct meta-tasks. For a graph \(\mathcal{G}\), let \(\mathcal{A}(\mathcal{G})\) denote the distribution of graph data augmentations of \(\mathcal{G}\). These augmentations [24] typically involve one or more operations, such as node dropping, edge perturbation, and attribute masking. For the given graph represented as \((X,\mathbf{A})\), we apply three different data augmentations \(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3}\sim\mathcal{A}(\mathcal{G})\) and generate corresponding augmented graphs \((X_{1},\mathbf{A}_{1}),(X_{2},\mathbf{A}_{2}),(X_{3},\mathbf{A}_{3})\). We then use GNNs to generate Lookup, Support, and Query Embeddings from the augmented graphs. Formally,

\[L:=f_{\text{ema}}(X_{1},\mathbf{A}_{1}),\;S:=f_{\text{ema}}(X_{2},\mathbf{A}_{2}),\;Q:=g(f(X_{3},\mathbf{A}_{3})), \tag{1}\]

where the Lookup Embedding \(L\) and Support Embedding \(S\) are generated by a momentum encoder \(f_{\text{ema}}\), and the Query Embedding \(Q\) is generated by a trainable graph encoder \(f\) with a projection head \(g\).
Weights of \(f_{\text{ema}}\) are a moving average of those of \(f\); details about the momentum encoder will be discussed later. We now present the process to construct meta-tasks. Inspired by contrastive learning, we first sample \(N\) nodes and regard them as \(N\) classes to form the query set \(\mathcal{Q}\). Denote the query set \(\mathcal{Q}=\{v_{1},\cdots,v_{N}\}\), where \(v_{i}\) is the query node of the \(i\)-th way. To construct an \(N\)-way \(k\)-shot meta-task, the support set \(\mathcal{S}\) should include, for each of the \(N\) ways, \(k\) samples that have similar semantics to the query sample. How to find semantically similar samples is then the main challenge. We first get the query nodes' embeddings from the Lookup Embedding \(L\) and denote them as \(\{L_{v_{1}},\cdots,L_{v_{N}}\}\). For each \(i\in[1,\cdots,N]\), we then measure the similarity between \(L_{v_{i}}\) and all node embeddings \(\{S_{1},\cdots,S_{|\mathcal{V}|}\}\) in the Support Embedding \(S\). The \(k\) embeddings in \(S\) with the highest similarity score will be selected as the support set, leading to \(Nk\) samples in the support set. We denote them as \(\{S_{v_{i}^{1}},\cdots,S_{v_{i}^{k}}\}_{i=1}^{N}\), where \(S_{v_{i}^{j}}\) is the \(j\)-th support sample of the \(i\)-th query node. Finally, we get the query nodes' embeddings from the Query Embedding \(Q\), denote them as \(\{Q_{v_{1}},\cdots,Q_{v_{N}}\}\), and use them as the query set to construct a meta-task together with the support set. The task \(\mathcal{T}\) can be represented as \(\mathcal{T}=\{Q_{v_{i}},\{S_{v_{i}^{j}}\}_{j=1}^{k}\}_{i=1}^{N}\).

Figure 2: An overview of the COLA Framework. The construction of a 2-way 3-shot meta-task is illustrated. Two nodes 2 and 5 are sampled as the query set. The query node’s embedding in Lookup Embedding matches with all node embeddings in Support Embedding. Top-\(k\) similar embeddings are selected for the support set. Supervised contrastive loss is calculated for each task.
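The matching step above (comparing a query node's lookup embedding against all rows of the Support Embedding and keeping the top-\(k\)) can be sketched as follows; the dot-product similarity and all names are illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def build_support_set(L, S, query_nodes, k):
    """For each query node, select the k rows of S most similar to its row in L.

    L, S: (|V|, d) Lookup and Support Embeddings (both from the momentum
    encoder); similarity here is a plain dot product, an illustrative choice.
    Returns an (N, k) array of support node indices, one row per way.
    """
    sims = L[query_nodes] @ S.T                # (N, |V|) similarity scores
    return np.argsort(-sims, axis=1)[:, :k]    # indices of the k best matches

rng = np.random.default_rng(0)
L = rng.normal(size=(10, 4))
L /= np.linalg.norm(L, axis=1, keepdims=True)  # unit rows make self-match the top hit
S = L.copy()                                   # pretend the two augmented views agree exactly
supp = build_support_set(L, S, query_nodes=[2, 7], k=3)
```

With \(S=L\) and unit-norm rows, each query node retrieves itself first; with real augmented views the two embeddings only approximately agree, which is exactly the invariance the construction exploits.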
Actually, our method can be regarded as a process of matching and optimizing. We use the fact that the most essential graph information should be invariant across different augmented views. **Given a query node \(v_{i}\), the \(k\) embeddings (in \(S\)) that are most similar to \(v_{i}\)'s embedding from one augmented view should have comparable similarity to its embedding from another augmented view.** Consequently, after identifying the top \(k\) embeddings most similar to the query node's lookup embedding \(L_{v_{i}}\), we maximize the similarity between these \(k\) embeddings and the query node's query embedding \(Q_{v_{i}}\). The momentum encoder is another component of meta-task construction. Formally, denoting the parameters of \(f_{\text{ema}}\) by \(\theta_{\text{ema}}\) and the parameters of \(f\) by \(\theta\), \(\theta_{\text{ema}}\) is updated by an exponential moving average (EMA), \(\theta_{\text{ema}}=m\theta_{\text{ema}}+(1-m)\theta\), where \(m\) is the momentum coefficient controlling to what degree the history is preserved. By employing a momentum encoder instead of the same trainable GNN encoder, the support set candidate pool (\(S\)) remains consistent across episodes and is less susceptible to noise or non-informative signals from the rapidly changing encoder. The Lookup Embedding and Support Embedding share the same momentum encoder, allowing for more accurate and consistent matches. COLA is a new paradigm that constructs meta-tasks without labels. Including all nodes in meta-task construction effectively avoids overfitting to the training classes. Unlike other graph meta learning methods, where support and query sets are derived from the original graph's embeddings, COLA constructs these sets from the embeddings of two distinct augmented views. By doing so, COLA can learn a discriminative representation. Since no label information is used, our framework uses exactly the same information as graph contrastive learning methods.
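The EMA update of the momentum encoder's parameters can be sketched with plain floats standing in for weight tensors (a minimal illustration of \(\theta_{\text{ema}}=m\theta_{\text{ema}}+(1-m)\theta\)):

```python
def ema_update(theta_ema, theta, m=0.9):
    """Exponential moving average update for the momentum encoder's weights.

    Implements theta_ema <- m * theta_ema + (1 - m) * theta, applied
    parameter-wise; plain floats here stand in for weight tensors.
    """
    return {name: m * theta_ema[name] + (1 - m) * theta[name] for name in theta}

theta_ema = {"w": 1.0}
theta = {"w": 0.0}
for _ in range(3):                     # three training steps against a frozen f
    theta_ema = ema_update(theta_ema, theta, m=0.9)
# after three steps: w_ema = 0.9 ** 3 = 0.729
```

With \(m\) near 1 the candidate pool drifts slowly (here \(0.9^{3}=0.729\) after three steps), whereas \(m=0\) simply copies the trainable encoder at every step.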
It is important to note that within this framework, the roles of the first and second augmented graphs can be interchanged. We perform extensive ablation studies to verify our designs and discuss the limitations in Appendix E.

### Meta-Train with Supervised Contrastive Loss

**Meta-Train Phase.** To train the model, we employ the supervised contrastive loss [21]. In our setting, for each way \(i\), the query embedding is treated as the anchor sample. The support embeddings \(\{S_{v_{i}^{1}},\cdots,S_{v_{i}^{k}}\}\) are considered as positive samples, while support embeddings \(\{S_{v_{i^{\prime}}^{1}},\cdots,S_{v_{i^{\prime}}^{k}}\}_{i^{\prime}\neq i}\) from other ways are viewed as negative samples. Formally, the pseudo-supervised contrastive loss for each meta-task can be expressed as follows:

\[L_{sup}(\{Q_{v_{i}},\{S_{v_{i}^{j}}\}_{j=1}^{k}\}_{i=1}^{N})=-\sum_{i=1}^{N}\frac{1}{k}\sum_{j=1}^{k}\log\frac{\exp(Q_{v_{i}}\cdot S_{v_{i}^{j}}/\tau)}{\sum_{\mathbf{v}\in S_{t}}\exp(Q_{v_{i}}\cdot\mathbf{v}/\tau)}, \tag{2}\]

where \(Q_{v_{i}}\) is the query sample of the \(i\)-th way, and \(S_{v_{i}^{j}}\) is the \(j\)-th support sample of \(Q_{v_{i}}\). \(S_{t}\) denotes all the support embeddings in the current meta-task and \(\tau\) is the temperature parameter. Finally, the loss function of each meta-train episode is the average loss over multiple meta-tasks.

**Meta-Test Phase.** During the meta-test phase, we discard the momentum encoder and retain the GNN encoder. A linear classifier is then trained on top of the learned node embeddings from the GNN encoder. To elaborate, we initially select \(N\) classes from \(C_{test}\) and sample \(k\) labeled nodes from each class. The embeddings of these samples then undergo supervised training to fit a linear classifier. In the final step, we evaluate the performance using \(q\) nodes from each of the \(N\) classes.
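A self-contained sketch of the per-task loss in Eq. (2), using plain Python lists and dot-product similarity (the toy embeddings below are invented for illustration):

```python
import math

def sup_con_loss(Q, S, tau=0.5):
    """Supervised contrastive loss of Eq. (2) for one N-way k-shot meta-task.

    Q: list of N query embeddings; S: N lists of k support embeddings
    (S[i][j] is the j-th support of way i). Embeddings are plain float lists;
    the dot product plays the role of similarity.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    all_support = [v for way in S for v in way]          # S_t: every support sample
    loss = 0.0
    for i, q in enumerate(Q):
        denom = sum(math.exp(dot(q, v) / tau) for v in all_support)
        for s in S[i]:                                   # positives: same-way supports
            loss -= math.log(math.exp(dot(q, s) / tau) / denom) / len(S[i])
    return loss

# 2-way 1-shot toy task: each query matches its own way's support exactly.
Q = [[1.0, 0.0], [0.0, 1.0]]
S = [[[1.0, 0.0]], [[0.0, 1.0]]]
loss = sup_con_loss(Q, S, tau=0.5)
```

For the toy task above the loss equals \(2\log(1+e^{-2})\approx 0.254\); it shrinks as each query aligns with its own way's supports and separates from the other ways' supports.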
## 4 Related Work

**Graph Few-shot Learning.** While GNNs for node classification are generally semi-supervised [2], considerable efforts were spent on removing the labeling dependency [25; 1; 3]. However, they cannot handle unseen classes during the test phase. This inspired research on the few-shot node classification problem. The majority of research employs a meta learning paradigm. Meta-GNN [10] adapts the optimization-based meta learning method MAML [12] to graph data. GFL [26] enables few-shot classification on unseen graphs with seen node classes. GPN [5] uses ProtoNet [13], a metric-based meta learning method, and refines prototypes with the weights learned by a GCN [2]. G-Meta [7] leverages subgraph information and achieves good performance on both transductive and inductive FSNC. RALE [11] assigns relative and absolute locations to each node within meta-tasks. TENT [6] applies node-level, class-level, and task-level adaptations in each task to mitigate the impact of task variance. Recently, TLP [18], inspired by graph contrastive learning, trains a few-shot classifier using pre-trained node embeddings, thereby significantly enhancing the performance over existing meta learning approaches. Its success prompts us to delve further into the potential of contrastive learning.

**Graph Contrastive Learning.** Contrastive learning methods [15; 14; 16] have been adapted to the graph domain. DGI [27] learns node representations by maximizing mutual information (MI) between local and global graph features. GRACE [23] maximizes node-level agreement between two corrupted views. MVGRL [28] maximizes the MI between node representations of one view and graph representations of another view. GraphCL [24] applies various data augmentation techniques to the graph and then employs a contrastive loss function to move the representations of augmented views of the same graph closer.
MERIT [20] leverages bootstrapping within a Siamese network and multi-scale graph contrastive learning to enhance node representation learning. SUGRL [19] employs node embeddings from an MLP as anchors and takes advantage of structural and neighbor information to obtain two kinds of positive samples. Different from previous methods, SUGRL adopts a combination of triplet losses instead of the InfoNCE loss [29]. BGRL [30] extends the non-contrastive setting [16], which does not need negative samples, to the graph domain. Among the graph contrastive learning methods integrated by TLP, SUGRL consistently delivers superior performance on few-shot tasks.

**Few-shot Learning with Contrastive Learning.** Recent works in computer vision show that meta learning and contrastive learning can benefit from each other. Some recent few-shot auxiliary learning works [31; 32; 33] view few-shot learning as the main task and combine the few-shot loss with self-supervised auxiliary tasks. Liu et al. [34] employ supervised contrastive learning on meta-tasks, where support images and query images are processed with different data augmentations to construct hard samples. CPLAE [35] represents support and query samples using concatenated embeddings of both the original and augmented versions. It then regards prototypes of support samples as the anchor samples in contrastive learning. PsCo [36] uses a momentum network with a queue, like MoCo [14], to improve pseudo labeling in the unsupervised meta learning setting. MetaContrastive [37] proposes a meta learning framework to enhance contrastive learning by transforming the contrastive learning setup into meta-tasks.
**However, in the field of graph learning, there is no work that enhances meta learning with the advantages of contrastive learning, and it is challenging to tailor these previous methods from the image domain to graphs.**

## 5 Experiment

In this section, we demonstrate that COLA outperforms all the baselines in each task and provide an ablation study to validate the significance of each model component.

### Datasets, Setup, and Baselines

**Datasets.** We conducted our experiments on six benchmark datasets: Cora [22], CiteSeer [22], Amazon-Computer [38] (Computer), CoraFull [39], Coauthor-CS [38] (CS), and ogbn-arxiv [40]. In each run for the same dataset, the classes were randomly divided into three subsets: \(C_{train}\), \(C_{val}\), and \(C_{test}\). The setting of the split ratio follows previous works [18] and a detailed description of these datasets is provided in Appendix A.

**Implementation Details.** We utilized Graph Convolutional Networks [2] (GCNs) as the encoder, and a multi-layer perceptron (MLP) as the projection head. Our data augmentation combines edge and feature dropout. The number of training tasks for calculating the average loss function is set to 20. We report the mean accuracy and the 95% confidence interval over 20 runs for both COLA and baseline models for a fair comparison. All models were tested on a single NVIDIA A100 80GB GPU. The detailed setting of hyperparameters is reported in Appendix B.

**Baselines.** We compared our model with two groups of baselines: meta learning and graph contrastive learning with finetuning (proposed by TLP [18]). For meta learning, we first evaluate two plain meta learning models without a GNN [2] backbone: MAML [12] and ProtoNet [13]; then we evaluate several meta learning works for few-shot node classification: Meta-GNN [10], GPN [5], G-Meta [7], and TENT [6].
For TLP methods, we adhered to their settings and evaluated different graph contrastive learning methods, covering both contrastive and non-contrastive GCL: MVGRL [28], GraphCL [24], GRACE [23], MERIT [20], SUGRL [19], and BGRL [30].

### Main Results

Evaluations were made under 2-way 1-shot/5-shot settings on Cora, CiteSeer, and Computer datasets due to the limited number of available classes. CoraFull, CS, and ogbn-arxiv datasets were evaluated under 5-way 1-shot/5-shot settings. We present the main results on Cora, CiteSeer, and CoraFull datasets in Table 1 and include results on other datasets in Appendix C. **Our method COLA outperforms all the other baselines in every task.** Compared with meta learning methods, COLA achieves at least 11.18% and up to 20.56% absolute accuracy improvement. The results demonstrate that the utilization of all nodes and a discriminative data representation indeed benefit the learning on few-shot tasks. Thus, even when constructing meta-tasks without label information, COLA achieves excellent performance over traditional meta learning methods. Graph contrastive learning methods benefit from the learned discriminative representations and show excellent ability to deal with downstream few-shot tasks.

\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Dataset & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{CiteSeer} & \multicolumn{2}{c}{CoraFull} \\ \cline{2-7} Task & 2-way 1-shot & 2-way 5-shot & 2-way 1-shot & 2-way 5-shot & 5-way 1-shot & 5-way 5-shot \\ \hline \hline \multicolumn{7}{c}{Meta learning} \\ \hline MAML [12] & 52.59 \(\pm\) 2.28 & 56.45 \(\pm\) 2.41 & 51.77 \(\pm\) 2.28 & 54.21 \(\pm\) 2.30 & 22.47 \(\pm\) 1.21 & 26.58 \(\pm\) 1.32 \\ ProtoNet [13] & 51.69 \(\pm\) 2.17 & 55.00 \(\pm\) 2.39 & 51.43 \(\pm\) 2.12 & 53.23 \(\pm\) 2.28 & 34.17 \(\pm\) 1.74 & 46.86 \(\pm\) 1.74 \\ Meta-GNN [10] & 57.87 \(\pm\) 2.52 & 57.35 \(\pm\) 2.30 & 55.12 \(\pm\) 2.62 & 60.59 \(\pm\) 3.26 & 55.36 \(\pm\) 2.49 & 71.42 \(\pm\) 2.02 \\ GPN [5] & 56.09 \(\pm\) 2.08 & 63.83 \(\pm\) 2.86 & 59.33 \(\pm\) 2.23 & 65.60 \(\pm\) 2.47 & 56.48 \(\pm\) 2.72 & 71.23 \(\pm\) 2.11 \\ G-Meta [7] & 66.15 \(\pm\) 3.00 & 82.85 \(\pm\) 1.19 & 54.33 \(\pm\) 2.02 & 61.47 \(\pm\) 2.37 & 58.47 \(\pm\) 2.37 & 72.03 \(\pm\) 1.88 \\ TENT [6] & 54.33 \(\pm\) 2.10 & 58.97 \(\pm\) 2.40 & 60.06 \(\pm\) 3.01 & 66.31 \(\pm\) 2.45 & 49.83 \(\pm\) 2.02 & 64.23 \(\pm\) 1.75 \\ \hline \hline \multicolumn{7}{c}{Graph Contrastive Learning + Finetune} \\ \hline BGRL [30] & 59.16 \(\pm\) 2.48 & 81.31 \(\pm\) 1.89 & 54.33 \(\pm\) 2.14 & 66.74 \(\pm\) 2.13 & 40.82 \(\pm\) 1.95 & 69.98 \(\pm\) 1.67 \\ MVGRL [28] & 74.96 \(\pm\) 2.94 & 91.32 \(\pm\) 1.47 & 63.39 \(\pm\) 2.69 & 79.73 \(\pm\) 1.92 & 66.40 \(\pm\) 2.31 & 83.99 \(\pm\) 1.51 \\ MERIT [20] & 70.63 \(\pm\) 3.11 & 91.00 \(\pm\) 1.22 & 65.64 \(\pm\) 2.94 & 78.54 \(\pm\) 2.43 & 65.17 \(\pm\) 1.96 & 84.74 \(\pm\) 1.44 \\ GraphCL [24] & 74.32 \(\pm\) 3.26 & 90.43 \(\pm\) 1.21 & 71.39 \(\pm\) 3.17 & 79.60 \(\pm\) 1.89 & 66.76 \(\pm\) 2.75 & 84.55 \(\pm\) 1.48 \\ GRACE [23] & 71.50 \(\pm\) 1.42 & 88.49 \(\pm\) 1.44 & 67.43 \(\pm\) 2.51 & 82.09 \(\pm\) 1.64 & 62.05 \(\pm\) 2.22 & 81.54 \(\pm\) 1.52 \\ SUGRL [19] & 81.52 \(\pm\) 2.09 & 92.49 \(\pm\) 1.02 & 72.43 \(\pm\) 2.42 & 86.58 \(\pm\) 1.19 & 73.95 \(\pm\) 2.13 & 83.07 \(\pm\) 1.21 \\ \hline COLA (ours) & **84.58 \(\pm\) 1.96** & **94.03 \(\pm\) 1.48** & **76.54 \(\pm\) 2.02** & **86.87 \(\pm\) 1.49** & **74.36 \(\pm\) 2.37** & **86.59 \(\pm\) 2.26** \\ \hline \hline \end{tabular}
\end{table} Table 1: Results on Cora, CiteSeer and CoraFull datasets. (Top rows) Meta Learning. (Middle rows) Graph Contrastive Learning with fine-tuning. (Bottom row) COLA (our method). All scores are averaged over 20 runs. Evaluation metrics were scaled to 100 for readability purposes. In bold are methods with the best results for each task. In blue are methods with the best results in each group.
SUGRL achieves the best performance on most few-shot tasks. COLA outperforms SUGRL in each task with a maximum relative accuracy improvement of 5.93%. This demonstrates that the use of \(N\)-way \(k\)-shot task construction in COLA makes it more suitable for few-shot problems compared to contrastive learning methods.

### Model Design Component Analysis

#### 5.3.1 Query, Support, Lookup Embeddings

First, we examine the primary design elements of COLA: the Query (\(Q\)), Support (\(S\)), and Lookup (\(L\)) Embeddings, and present the results in Table 2. To understand the distinct function of each one, we investigate three alternative scenarios. In the first scenario, we only use the Query Embedding. The query sample \(Q_{v_{i}}\) is extracted from the Query Embedding and has to align with all nodes within the Query Embedding itself to identify the support set. The second scenario omits the Lookup Embedding. Here, the query sample \(Q_{v_{i}}\) is compared with all nodes from the Support Embedding to find the top-\(k\) similar ones in \(S\). The third scenario excludes the Support Embedding, so we use the query embedding \(L_{v_{i}}\) from the Lookup Embedding to compare with all node embeddings in the Query Embedding. Compared with COLA, the first and second scenarios regard the query embedding as its own lookup tool. This reduces the amount of information in the meta-task, leading to suboptimal results. In the second scenario, the use of the Support Embedding further deteriorates the performance, since the inconsistency between the Query and Support Embeddings' encoders leads to a mismatch. The third scenario involves the Lookup Embedding, but both the query and support set are derived from the Query Embedding, which means the model cannot take advantage of the extra information gained from two different augmented views. We also find that even some of these suboptimal setups can still outperform meta learning methods, underscoring the importance of using all available nodes.
Our COLA model significantly outperforms the three scenarios, illustrating the importance of each component in its design. In essence, we benefit from the invariant information among these three augmented graphs to construct meta-tasks, such that the support set selected by the Lookup Embedding has very similar semantics to the query set. This ensures our model keeps constructing semantically correct meta-tasks.

\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline & & & \multicolumn{3}{c}{Cora} & \multicolumn{3}{c}{CiteSeer} \\ \cline{4-9} \(Q\) & \(S\) & \(L\) & 2-way 1-shot & 2-way 3-shot & 2-way 5-shot & 2-way 1-shot & 2-way 3-shot & 2-way 5-shot \\ \hline ✓ & & & 61.90 \(\pm\) 1.26 & 84.12 \(\pm\) 2.24 & 88.24 \(\pm\) 1.89 & 56.03 \(\pm\) 1.73 & 71.46 \(\pm\) 2.97 & 74.69 \(\pm\) 2.22 \\ ✓ & ✓ & & 75.79 \(\pm\) 2.75 & 75.20 \(\pm\) 2.68 & 79.44 \(\pm\) 2.01 & 59.48 \(\pm\) 2.83 & 63.73 \(\pm\) 2.48 & 69.10 \(\pm\) 2.31 \\ ✓ & & ✓ & 76.24 \(\pm\) 3.68 & 86.47 \(\pm\) 1.45 & 85.78 \(\pm\) 2.57 & 64.42 \(\pm\) 2.34 & 69.33 \(\pm\) 3.15 & 73.13 \(\pm\) 2.27 \\ ✓ & ✓ & ✓ & **84.58 \(\pm\) 1.96** & **92.29 \(\pm\) 1.71** & **94.03 \(\pm\) 1.48** & **76.54 \(\pm\) 2.02** & **80.26 \(\pm\) 2.72** & **86.87 \(\pm\) 1.49** \\ \hline \hline \end{tabular}
\end{table} Table 2: Component Analysis of Query (\(Q\)), Support (\(S\)), Lookup (\(L\)) Embeddings on Cora and CiteSeer datasets. The first three rows control different components in meta-task construction. The last row is COLA’s setting. In bold are the best results, and underlines are the second best ones.

#### 5.3.2 Momentum Encoder

To generate the Support and Lookup Embeddings, COLA uses a momentum GNN encoder \(f_{\text{ema}}\), whose weights are an exponential moving average of the weights of the trained GNN encoder. The momentum encoder is more stable than the trainable GNN encoder. We test different values of the momentum variable from 0 to 1 and present the results in Table 3. Value 0 means the encoder is updated to the trained GNN encoder at each step and value 1 means the encoder is never updated. The results show that using the shared-weight encoder (momentum=0) harms the model performance. A static encoder (momentum=1) always contains the exact same information and constrains the information the support embeddings can bring. A larger momentum (around 0.9) helps the momentum encoder memorize historical information, contributing to a consistent and stable Support Embedding.

\begin{table}
\begin{tabular}{l l l l l l} \hline \hline momentum & 0 & 0.5 & 0.8 & 0.9 & 1 \\ \hline 2-way 1-shot & 70.46 & 74.47 & 75.13 & 76.54 & 54.85 \\ 2-way 5-shot & 78.05 & 81.09 & 83.34 & 86.87 & 65.43 \\ \hline \hline \end{tabular}
\end{table} Table 3: Relationship between the momentum parameter and accuracy on CiteSeer.

### Deep Investigation of COLA

**True Label Ratio.** To evaluate the quality of task construction, we measure the true label ratio in each task. The true label ratio is calculated by \(R_{true}=n_{t}/(Nk)\), where \(n_{t}\) is the number of selected support samples that indeed have the same label as the corresponding query sample, and \(Nk\) is the total number of support samples. To better visualize the trend, we only present the true label ratio within 50 epochs in Figures 3(a) and 3(b). Note that \(R_{true}\) still increases after epoch 50. The trend of \(R_{true}\) reflects that the model gradually selects more and more support nodes that have exactly the same label as the query node. For example, the initial true label ratio for Cora's 2-way 5-shot problem is around 0.41 and it steadily increases to 0.8, indicating that only around 2 selected support samples in this task have false labels. This measure verifies that the proposed method can construct semantically correct meta-tasks even without label information.
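The true label ratio \(R_{true}=n_{t}/(Nk)\) can be computed as in the following sketch (the toy labels and task are fabricated; as in the paper, ground-truth labels are used only for this diagnostic, never for task construction):

```python
def true_label_ratio(tasks, labels):
    """R_true = n_t / (N * k): fraction of selected support nodes whose label
    matches their query node's label. `tasks` maps each query node to its
    selected support nodes; `labels` is the held-out ground truth.
    """
    n_t = total = 0
    for query, supports in tasks.items():
        n_t += sum(labels[s] == labels[query] for s in supports)
        total += len(supports)
    return n_t / total

# 2-way 5-shot toy task: 8 of the 10 selected supports carry the true label.
labels = {0: "a", 1: "b", 2: "a", 3: "a", 4: "a", 5: "a", 6: "b",
          7: "b", 8: "b", 9: "b", 10: "a", 11: "b"}
tasks = {0: [2, 3, 4, 5, 11], 1: [6, 7, 8, 9, 10]}
ratio = true_label_ratio(tasks, labels)   # (4 + 4) / 10 = 0.8
```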
**Analysis of the number of negatives.** Contrastive learning methods benefit from both data augmentation and a large number of negative samples. In COLA, although we adopt the supervised contrastive loss, the number of negative samples is relatively small. This is because all the negative samples of a node only come from the support sets of the other ways, i.e., \((N-1)k\) for an \(N\)-way \(k\)-shot problem. Consequently, we examine whether the meta-tasks constructed by COLA benefit from a large number of negative samples just like contrastive learning does. Thus, we vary the number of negatives from \((N-1)k\) to \(|\mathcal{V}|\) (the number of nodes in the graph) and present the result in Figure 3(c). We reach a conclusion that is contrary to expectations: the performance of our model is negatively impacted by increasing the number of negative samples in each case. We conjecture that the advantages contrastive learning gains from a high number of negative samples do not transfer well to few-shot tasks. Consequently, this underscores the need for a unified method (COLA) that is more suitable for FSNC.

**Using all nodes and data augmentation indeed contribute to the success of COLA.** We evaluate whether the utilization of all nodes and data augmentation is helpful in our model and show the results on the CiteSeer dataset in Table 4. From the results, we can conclude that training without all nodes leads to a performance decrease, especially when nodes from \(C_{test}\) are not involved. Data augmentation is also important for our method, since the meta-task construction relies on invariant graph information across the three augmented views. These findings underscore the fact that COLA significantly benefits from data augmentation, enabling the construction of meta-tasks that optimally leverage graph information.

## 6 Conclusion

In this paper, we focus on transductive few-shot node classification.
We first identify several key components behind the success of contrastive learning on FSNC, including the comprehensive use of graph nodes and the power of graph augmentations. We then introduce a new paradigm, **Contrastive Few-Shot Node Classification (COLA)**. Unlike traditional meta learning methods that require label information, COLA finds semantically similar node embeddings to construct meta-tasks by leveraging the invariant information across three augmented graphs. COLA combines the advantages of both contrastive learning and meta learning on few-shot node classification tasks. Through extensive experiments, we validate the necessity of each component in our design and demonstrate that COLA achieves a new state-of-the-art on all tasks. One limitation of our method is the increased computational cost due to the sorting operation used to find the support set, though this increase is linear in \(|\mathcal{V}|\) and has no significant negative impact in practice. We believe our research will bring new insights to the FSNC field.

\begin{table}
\begin{tabular}{l c c c} \hline \hline & \(C\backslash C_{test}\) & \(C_{test}\) & All nodes \\ \hline w/ aug & 68.43 & 72.19 & 86.97 \\ w/o aug & 65.18 & 61.02 & 74.51 \\ \hline \hline \end{tabular}
\end{table} Table 4: Results of w/ and w/o augmentations and nodes from \(C_{test}\).

Figure 3: (a) and (b): true label ratio that measures the ratio of the selected support samples actually having the same label as the query sample. (c): Performance drops with extra negative samples.
2309.08539
On the size of Bruhat intervals
For affine Weyl groups and elements associated to dominant coweights, we present a convex geometry formula for the size of the corresponding lower Bruhat intervals. Extensive computer calculations for these groups have led us to believe that a similar formula exists for all lower Bruhat intervals.
Federico Castillo, Damian de la Fuente, Nicolas Libedinsky, David Plaza
2023-09-15T17:03:30Z
http://arxiv.org/abs/2309.08539v1
# On the size of Bruhat intervals

###### Abstract

For affine Weyl groups and elements associated to dominant coweights, we present a convex geometry formula for the size of the corresponding lower Bruhat intervals. Extensive computer calculations for these groups have led us to believe that a similar formula exists for all lower Bruhat intervals.

## 1 Introduction

**1.1 Generalities.** While calculating with indecomposable Soergel bimodules [19] and Kazhdan-Lusztig polynomials [5], [20] for affine Weyl groups, it became apparent that finding formulas for the cardinalities of lower Bruhat intervals played a crucial role. Surprisingly, little is known apart from length 2 (general) intervals [7, Lemma 2.7.3], lower intervals for smooth elements in Weyl groups [25], [22] and related results for affine Weyl groups [29], [8]. Although the known cases are particularly important from a geometric viewpoint, to the best of the authors' knowledge there is no program for finding cardinalities of all lower intervals. This motivated us to embark on a series of papers aimed at bridging this gap in the case of affine Weyl groups. In this paper, we relate the Bruhat order with convex geometry. With ideas similar to those presented here, we expect to go beyond lower intervals into the realm of general intervals in future work. In this paper we study, for any affine Weyl group, the lower interval for the element \(\theta(\lambda)\) (see Definition 2.1) associated to a dominant coweight \(\lambda\). These are the elements that originally appeared in the research on Soergel bimodules and Kazhdan-Lusztig polynomials mentioned above; they are intimately related to representation theory (character formulas for Lie groups, geometric Satake equivalence, quantum groups, among others). The main result of this paper is a formula relating the cardinality of the lower interval \([\mathrm{id},\theta(\lambda)]\) and the volumes of the faces of a certain polytope.
We guessed this formula by examining the \(\widetilde{A}_{2}\) case and assuming that certain phenomena that hold there continue to hold in higher dimensions. Although these phenomena turned out to be low-rank accidents, the formula miraculously survived. This paper makes apparent that Bruhat intervals for affine Weyl groups are intricately connected to Euclidean geometry. Another manifestation of this connection is the observation in the work in progress [10] that, for an affine Weyl group, possibly all "non-silly" isomorphisms of Bruhat intervals (of length bigger than the order of the finite Weyl group) are just Euclidean translations of the connected components of the interval. A different kind of connection between these two worlds, but this time for the symmetric group, was developed in [18] and further explored in [32]: the _Bruhat interval polytopes_ consisting of the convex hull of permutations in an arbitrary Bruhat interval.

**1.2 The \(\widetilde{A}_{2}\) case.** Let \(W\) be the affine Weyl group of type \(\widetilde{A}_{2}\), and consider the usual identification between elements in \(W\) and triangles (alcoves) in the tessellation of the plane by equilateral triangles. If \(x\) is an element of \(W\), when we write \(x\subset\mathbb{R}^{2}\), we mean the set of points in the closure of the alcove corresponding to \(x\) (the closed triangle). In Figure 1 we have the simple roots \(\alpha_{1}\) and \(\alpha_{2}\) in blue and in red, and the fundamental weights \(\varpi_{1}\) and \(\varpi_{2}\). For a dominant weight \(\lambda\in X^{+}:=\mathbb{Z}_{\geq 0}\varpi_{1}+\mathbb{Z}_{\geq 0}\varpi_{2}\) (depicted by a white dot in Figure 1), let \(\theta(\lambda)\in W\) denote the \(\lambda\)-translate of the opposite of the fundamental alcove: those are the grey triangles. Let also \(Y_{\lambda}\) denote \(\operatorname{Conv}(W_{f}\cdot\lambda)\), the convex hull of the orbit of \(\lambda\) under the finite Weyl group \(W_{f}\).
For \(\lambda=\varpi_{1}+2\varpi_{2}\), it is the yellow hexagon in Figure 2. The faces of \(Y_{\lambda}\) containing \(\lambda\) are \[F_{J}:=Y_{\lambda}\cap(\lambda+\sum_{i\in J}\mathbb{R}\alpha_{i})\quad J\subset\{1,2\}.\]

Figure 1

Figure 2

For \(x\in W\) we will denote \(\leq x:=\{w\in W\,|\,w\leq x\}\). In Figure 3 we draw the set \(\leq\theta(\lambda)\) (with \(\lambda\) as before). It is the union of all the colored sets.

Figure 3

Figure 4

Let us suppose only in this introduction, to simplify the formulas, that the volume of one alcove is \(1\), so that the volume of \(\leq\theta(\lambda)\subset\mathbb{R}^{2}\) is equal to the cardinality of \(\leq\theta(\lambda)\subseteq W\). Our initial observation was that there are four real numbers \(\mu_{J}\) (independent of \(\lambda\)) with \(J\subseteq\{1,2\}\) such that \[|\leq\theta(\lambda)|=\mu_{1,2}\text{Area}(F_{1,2})+\mu_{1}\text{Length}(F_{1})+\mu_{2}\text{Length}(F_{2})+\mu_{\emptyset}\text{Card}(F_{\emptyset}). \tag{1.1}\]

**Remark 1.1**.: The reader may notice that the formula presented here bears strong similarities to Pick's theorem (see (5.5)). For the proof of Theorem B, which generalizes formula (1.1) to any root system, we use a generalized version of Pick's theorem developed by Berline and Vergne. For more details see Example 5.6.

In Figure 4 there is a partition of the plane into 13 parts, one for each face of \(Y_{\lambda}\), such that when intersected with \(\leq\theta(\lambda)\), one obtains Figure 3. That division can be done for any convex polytope and is called the set of normal cones. For \(C\) a face of \(Y_{\lambda}\) we call \(\text{Nor}(C)\) the corresponding region. The low-rank accidents behind the formula, mentioned in Section 1.1, are the following.
First, that \[Y_{\lambda}\ \subseteq\ \leq\theta(\lambda)\ \subseteq\ \mathbb{R}^{2}.\] Second, that in Figure 3 the number \(\mu_{1,2}\text{Area}(F_{1,2})\) is the volume of the yellow part, \(\mu_{1}\text{Length}(F_{1})\) is the volume of the blue part, \(\mu_{2}\text{Length}(F_{2})\) is the volume of the red part and finally that \(\mu_{\emptyset}\text{Card}(F_{\emptyset})\) is the volume of the light blue part. In general, it is that \(\mu_{J}\text{Vol}(F_{J})\) is the volume of \(W_{f}\cdot(\text{Nor}(F_{J})\,\cap\,\leq\theta(\lambda))\). Although these low-rank accidents are correct for \(\widetilde{A}_{2}\) and \(\widetilde{A}_{3}\), they fail in higher ranks. The first one already fails for \(\widetilde{A}_{4}\). The second one fails in \(\widetilde{A}_{24}\), because there is a \(\mu_{J}<0\) (see Remark 6.5), so \(\mu_{J}\text{Vol}(F_{J})\) cannot be the volume of some set.

### Results

For any root system \(\Phi\) one has an associated affine Weyl group \(W_{a}\), and one can define concepts similar to those of the last section. For example, \(\theta(\lambda)\) corresponds to the alcove touching \(\lambda\) in the direction of \(\rho\) (the sum of the fundamental weights). The following theorem builds the bridge between Coxeter combinatorics and convex geometry.

**Theorem A** (Lattice Formula).: _For every dominant coweight \(\lambda\), we have_ \[|\leq\theta(\lambda)|=|W_{f}|\ |\text{Conv}(W_{f}\cdot\lambda)\cap(\lambda+\mathbb{Z}\Phi^{\vee})|.\]

This formula is a key step in proving our main theorem below, but it is also interesting in its own right, as we now explain. In [27] Postnikov studied permutohedra of general types. Among them, one of the most remarkable is the regular permutohedron of type \(A_{n}\). The number of integer points of that polytope can be interpreted [31, Section 3] as the number of forests on \(\{1,2,\ldots,n\}\).
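In low rank, Theorem A can also be checked directly by machine. The following sketch is our own illustration, not taken from the paper: it counts the lattice points of \(\operatorname{Conv}(W_{f}\cdot\lambda)\) in type \(A_{2}\), realized in the sum-zero hyperplane of \(\mathbb{R}^{3}\), using the classical fact (going back to Rado) that a point of \(\lambda+\mathbb{Z}\Phi^{\vee}\) lies in \(\operatorname{Conv}(S_{3}\cdot\lambda)\) exactly when it is majorized by \(\lambda\).

```python
from itertools import product

# Type A_2 in the sum-zero hyperplane of R^3; W_f = S_3 permutes coordinates.
# To stay in integer arithmetic we work with 3*lambda, since the fundamental
# coweights have coordinates in (1/3)Z.

def majorized(v, lam):
    """v lies in Conv(S_3 . lam) iff v is majorized by lam (Rado's theorem)."""
    v, lam = sorted(v, reverse=True), sorted(lam, reverse=True)
    return sum(v) == sum(lam) and all(
        sum(v[:k]) <= sum(lam[:k]) for k in (1, 2))

def lattice_count(m1, m2):
    """|Conv(W_f . lam) ∩ (lam + ZPhi^vee)| for lam = m1*w1^vee + m2*w2^vee."""
    lam3 = (2 * m1 + m2, -m1 + m2, -m1 - 2 * m2)   # 3*lambda, integer entries
    a1, a2 = (3, -3, 0), (0, 3, -3)                # 3*alpha_1, 3*alpha_2
    count = 0
    # Every lattice point of the polytope is lam - a*alpha_1 - b*alpha_2
    # with 0 <= a, b <= m1 + m2.
    for a, b in product(range(m1 + m2 + 1), repeat=2):
        v = tuple(lam3[i] - a * a1[i] - b * a2[i] for i in range(3))
        if majorized(v, lam3):
            count += 1
    return count

# |W_f| = 6 in type A_2, so Theorem A predicts |<= theta(lam)| = 6 * count.
assert lattice_count(0, 0) == 1     # lam = 0: only the origin
assert lattice_count(1, 1) == 7     # hexagon around rho: 6 vertices + center
assert lattice_count(1, 2) == 12    # the running example lam = w1 + 2*w2
```

For \(\lambda=\varpi_{1}+2\varpi_{2}\) this gives \(|\leq\theta(\lambda)|=6\cdot 12=72\); we stress that these numerical values are our own computation, offered only as a consistency check.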
There are other interpretations for the integer points of the regular permutohedron of type \(A_{n}\), for instance, [2, Proposition 4.1.3] gives one as certain orientations of the complete graph. We remark that these interpretations are only for the regular permutohedron of type \(A_{n}.\) For non-regular permutohedra of any type, before the present paper, there was no interpretation of the integer points. Theorem A gives a first interpretation of this sort, and it is also of a different nature than the pre-existing ones in that it is not related to graph theory but to Coxeter theory. This theorem also gives an interesting new insight. For a generic permutohedron (i.e. \(\operatorname{Conv}(W_{f}\cdot\lambda)\) for some \(\lambda\in\mathbb{Z}_{>0}\varpi_{1}+\cdots+\mathbb{Z}_{>0}\varpi_{n}\)), the set of vertices is in bijection with the finite Weyl group \(W_{f}=\{w\leq_{R}w_{0}\}\), where \(\leq_{R}\) is the right weak Bruhat order on \(W_{f}\) and \(w_{0}\) is the longest element. The Hasse diagram of \(\leq_{R}\) on \(\{w\leq_{R}w_{0}\}\) corresponds to the graph of the polytope. Theorem A (or, more precisely Proposition 3.1) says that if we consider the strong Bruhat order, the set \(\leq\theta(\lambda)\) can be obtained from the lattice points inside the polytope. Heuristically, the weak Bruhat order gives the vertices of the polytope and the strong Bruhat order gives the lattice points inside the polytope. Now we can present the main result of this paper. For \(J\subseteq\{1,2,\ldots,n\}\), one can define the face \(F_{J}=\operatorname{Conv}(W_{J}\cdot\lambda)\). See Section 2 for more details.
**Theorem B** (Geometric Formula).: _For every rank \(n\) root system \(\Phi\), there are unique \(\mu_{J}^{\Phi}\in\mathbb{R}\) such that for any dominant coweight \(\lambda\),_ \[|\leq\theta(\lambda)|=\sum_{J\subset\{1,\ldots,n\}}\mu_{J}^{\Phi}\mathrm{Vol}(F_{J}).\]

Theorem B is proved by combining Theorem A with a particular formula for computing the number of lattice points developed by Berline-Vergne [6] and Pommersheim-Thomas [26]. The construction we use is part of a bigger family of formulae relating the number of lattice points of a polytope with the volumes of its faces, see [4, Section 6]. In [27], Postnikov gives several formulas for the volumes \(\mathrm{Vol}(F_{J})\) for any \(\Phi\). When \(\Phi\) is the root system of type \(A_{n}\), we give in Section 6 formulas for \(\mu_{J}^{\Phi}\) if \(J\) is connected. For example, if \(J=\{1,\ldots,l\}\), then \[\mu_{J}^{A_{n}}=\frac{l!}{\sqrt{l+1}}(n+1)\begin{bmatrix}n+1\\ l+1\end{bmatrix}, \tag{1.2}\] where the square bracket on the right denotes the Stirling number of the first kind. The volumes are polynomials in the coordinates \(m_{1},\ldots,m_{n}\) of \(\lambda\) in the coweight basis. As a consequence of Theorem B, we obtain that the size of the lower Bruhat interval generated by \(\theta(\lambda)\) is a polynomial function of the coordinates of \(\lambda\).

**Perspectives:** As mentioned in the abstract, robust computational evidence leads us to believe that one can partition the affine Weyl group 1, with one of the parts being \(\{\theta(\lambda)\}_{\lambda\in X^{+}}\), such that in each of these parts one will have a formula similar to that in Theorem B but with different coefficients.

Footnote 1: Technically one should be able to partition the affine Weyl group minus the set of parabolic subgroups isomorphic to the finite Weyl group. The problem of finding cardinalities of intervals in the finite Weyl group seems to be a different kind of beast.
Indeed, in the paper [12, Thm 1.4] it is proven that the computation of lower intervals with respect to the weak Bruhat order is #P-complete, so for the weak Bruhat order no such partition of the finite Weyl group should be possible. For the sets \(\ngeqslant x:=\{w\in W\,|\,w\ngeqslant x\}\) we possess limited computational evidence, with the exception of the treacherous case of \(\widetilde{A}_{2}.\) However, this does not prevent us from dreaming that a similar phenomenon (at least the polynomiality part) may occur for these sets, which are similar to star-shaped non-convex polytopes. If this were true, we would likely be on the verge of producing a formula for any interval, as \[|[x,y]|=|\leq y|-|\ngeqslant x|+P(x,y),\] where \(P(x,y)\) denotes the number of alcoves within a collection of simplices, easily computable in practice.

### Structure of the paper.

Section 2 contains a recollection of definitions concerning affine Weyl groups and alcove geometry. Additionally, we establish the maximality of \(\theta(\lambda)\) within a suitable double coset and provide the normalization for the volumes used in this paper. In Section 3 we present the proof of the Lattice Formula. Section 4 focuses on proving various results concerning the volumes of the \(F_{J}\). These findings are then employed to establish the Geometric Formula in Section 5, while in Section 6, we compute the \(\mu_{J}^{A_{n}}\) for connected \(J\). This last result relies on a formula by Luis Ferroni [14].

### Acknowledgments.

We would like to thank Gaston Burrull, Stephane Gaussent, Maria Ines Icaza, Jose Samper, Joel Kamnitzer, Anthony Licata and Geordie Williamson for their helpful comments. Thanks to Daniel Juteau for helping with the writing. Special thanks to Leonardo Patimo for some important insights. FC was partially supported by FONDECYT-ANID grant 1221133. NL was partially supported by FONDECYT-ANID grant 1230247. DP was partially supported by FONDECYT-ANID grant 1200341.
## 2 Preliminaries

In this section we introduce the essential objects needed to state the Lattice Formula and the Geometric Formula. We refer to [9, 16] for more details about Weyl groups and to [33] for more information about polytopes.

### Affine Weyl groups.

Let \(\Phi\) be an irreducible (reduced, crystallographic) root system of rank \(n\), and let \(V\) be the ambient (real) Euclidean space spanned by \(\Phi\), with inner product \((-,-):V\times V\to\mathbb{R}\). Let \(I_{n}\coloneqq\{1,\ldots,n\}\). We fix a set \(\Delta=\{\alpha_{i}\mid i\in I_{n}\}\) of simple roots, and let \(\Phi^{+}\) be the corresponding set of positive roots. Let \(\alpha^{\vee}=2\alpha/(\alpha,\alpha)\) be the coroot corresponding to \(\alpha\in\Phi\). The _fundamental coweights_ \(\varpi_{i}^{\vee}\) are defined by the equations \(\left(\varpi_{i}^{\vee},\alpha_{j}\right)=\delta_{ij}\). They form a basis of \(V\). A _coweight_ is an integral linear combination of the fundamental coweights, and a _dominant coweight_ is a coweight whose coordinates in this basis are non-negative. We denote by \(\Lambda^{\vee}\) and \((\Lambda^{\vee})^{+}\) the set of coweights and dominant coweights, respectively. We define \[C^{+}=\{\lambda\in V\,|\,(\lambda,\alpha_{i})\geq 0,\ \text{ for all }i\in I_{n}\}.\] We refer to \(C^{+}\) as the _dominant region_. We notice that \((\Lambda^{\vee})^{+}=\Lambda^{\vee}\cap C^{+}\). We denote by \(\leq\) the _dominance order_ on \(\Lambda^{\vee}\), that is, \(\mu\leq\lambda\) if \(\lambda-\mu\) can be written as a non-negative integral linear combination of simple coroots. Let \(H_{\alpha}\) be the hyperplane of \(V\) orthogonal to a root \(\alpha\), and let \(s_{\alpha}\) denote the reflection through \(H_{\alpha}\). For \(\alpha_{i}\in\Delta\) we write \(s_{i}=s_{\alpha_{i}}\). The group \(W_{f}\) of orthogonal transformations of \(V\) generated by \(S_{f}=\{s_{i}\mid i\in I_{n}\}\) is the _(finite) Weyl group_ of \(\Phi\).
The pair \((W_{f},S_{f})\) is a Coxeter system, with length function \(\ell\) and Bruhat order \(\leq\). We denote by \(w_{0}\) the longest element of \(W_{f}\). We also consider the _affine Weyl group_ \(W_{a}\). It is the group of affine transformations of \(V\) generated by \(W_{f}\) and translations by elements of \(\mathbb{Z}\Phi^{\vee}\), where \(\Phi^{\vee}\) is the coroot system. We have \(W_{a}\cong\mathbb{Z}\Phi^{\vee}\rtimes W_{f}\). The group \(W_{a}\) can also be realized as the group generated by the affine reflections \(s_{\alpha,k}\) along the hyperplanes \[H_{\alpha,k}=\{\lambda\in V\mid(\lambda,\alpha)=k\},\text{ where }\alpha\in\Phi,k\in\mathbb{Z}.\] Removing all these hyperplanes from \(V\) leaves an open set whose connected components are called alcoves. We choose the alcove \[A_{\text{id}}\coloneqq\{\lambda\in V\mid-1<(\lambda,\alpha)<0,\ \forall\alpha\in\Phi^{+}\}\] to be the _fundamental alcove_. The map \(w\mapsto wA_{\mathrm{id}}\) defines a bijection between \(W_{a}\) and the set of alcoves, so we define \(A_{w}:=wA_{\mathrm{id}}\) for each \(w\in W_{a}\). We define the vertices of an alcove \(A_{w}\) as the vertices of its closure \(\overline{A_{w}}\). The walls of \(A_{\mathrm{id}}\) are the hyperplanes \(H_{\alpha}\) with \(\alpha\in\Delta\), together with \(H_{\widetilde{\alpha},-1}\), where \(\widetilde{\alpha}\) is the highest root of \(\Phi\). We put \(s_{0}\coloneqq s_{\widetilde{\alpha},-1}\) and \(S\coloneqq S_{f}\cup\{s_{0}\}\). Then the pair \((W_{a},S)\) is a Coxeter system, with length function \(\ell\) and Bruhat order \(\leq\). For affine Weyl groups, we have a beautiful interpretation of the length function in terms of hyperplanes that separate a given alcove from the fundamental alcove. More precisely, for \(w\in W_{a}\) we have \[\ell(w)=\#\{H=H_{\alpha,k}\mid\text{$H$ separates $A_{w}$ from $A_{\mathrm{id}}$}\}.
\tag{2.1}\] The _extended affine Weyl_ group, \(W_{e}\), is the subgroup of affine transformations of \(V\) generated by \(W_{f}\) and \(\Lambda^{\vee}\) (acting as translations). We have \(W_{e}\cong\Lambda^{\vee}\rtimes W_{f}\). In general, \(W_{e}\) is not a Coxeter group. However, for every \(w\in W_{e}\), \(wA_{\mathrm{id}}\) is still an alcove, so that, as in (2.1), one can define its length \(\ell(w)\) by counting how many hyperplanes \(H_{\alpha,k}\) separate \(A_{\mathrm{id}}\) and \(wA_{\mathrm{id}}\). Let \(\Omega\) be the subgroup of \(W_{e}\) of length \(0\) elements. Equivalently, \(\Omega\) consists of the \(\sigma\in W_{e}\) such that \(\sigma A_{\mathrm{id}}=A_{\mathrm{id}}\). Thus the elements of \(\Omega\) permute the walls of the fundamental alcove, so that conjugation by \(\Omega\) permutes the simple reflections in \(W_{a}\). In this way, \(\Omega\) can be seen as a group of automorphisms of the corresponding completed Dynkin diagram. We define \(W_{\sigma}\coloneqq\sigma W_{f}\sigma^{-1}\), which is isomorphic to \(W_{f}\). We set \(s_{\sigma}\coloneqq\sigma s_{0}\sigma^{-1}\), so that \(W_{\sigma}\) is the maximal (finite) parabolic subgroup of \(W_{a}\) generated by \(S\setminus\{s_{\sigma}\}\). Another equivalent realization of this group is as a quotient: \(\Omega\cong\Lambda^{\vee}/\mathbb{Z}\Phi^{\vee}\) (see [17, SS1.7]). We will define a specific system of representatives of \(\Lambda^{\vee}/\mathbb{Z}\Phi^{\vee}\). Write the highest root as a combination of simple roots: \[\widetilde{\alpha}=\eta_{1}\alpha_{1}+\cdots+\eta_{n}\alpha_{n}. \tag{2.2}\] One has that \(\eta_{i}\in\mathbb{Z}_{>0}\). For \(i\in I_{n}\cup\{0\}\), it is not hard to check that the intersection of the reflecting hyperplanes corresponding to \(S\setminus\{s_{i}\}\) is \(v_{i}=-\varpi_{i}^{\vee}/\eta_{i}\) for \(i\neq 0\), and \(v_{0}=\mathbf{0}\) for \(i=0\), where \(\mathbf{0}\) is the origin of \(V\).
The set \(\{v_{0},\ldots,v_{n}\}\) is precisely the set of vertices of the fundamental alcove \(A_{\mathrm{id}}\). A fundamental coweight \(\varpi_{i}^{\vee}\) is called _minuscule_ if \((\varpi_{i}^{\vee},\widetilde{\alpha})=\eta_{i}=1\). Let \(M\subset I_{n}\) be the index set of the minuscule fundamental coweights. Both \(\{\mathbf{0},-\varpi_{i}^{\vee}\mid i\in M\}\) and \(\{\mathbf{0},\varpi_{i}^{\vee}\mid i\in M\}\) are complete systems of representatives of \(\Lambda^{\vee}/\mathbb{Z}\Phi^{\vee}\). It is known that for every \(\sigma\in\Omega\setminus\{\mathrm{id}\}\), the vector \(-\sigma(\mathbf{0})\) is a minuscule fundamental coweight. Furthermore, \(\sigma\mapsto\sigma(\mathbf{0})\) is a bijection from \(\Omega\) to the representatives \(\{\mathbf{0},-\varpi_{i}^{\vee}\mid i\in M\}\) of \(\Lambda^{\vee}/\mathbb{Z}\Phi^{\vee}\) (see [9, Prop VI.2.3.6]). Using the notation \(v_{i}\) from the paragraph above, if \(\sigma(\mathbf{0})=v_{i}\) with \(\sigma\in\Omega\) then \(s_{\sigma}=s_{i}\in S\), which is the unique simple reflection that does not fix \(v_{i}\). We will use this identification and put \(\sigma\) instead of \(\sigma(\mathbf{0})\), by abuse of notation. The group \(\Lambda^{\vee}\) contains \(\mathbb{Z}\Phi^{\vee}\) as a subgroup of finite index; this index is called the _index of connection_. One can use it to compute the order of \(W_{f}\) (see [16, SS4.9]): \[|W_{f}|=n!\,\eta_{1}\cdots\eta_{n}\,[\Lambda^{\vee}:\mathbb{Z}\Phi^{\vee}]. \tag{2.3}\]

### Maximal elements in double cosets

In this section we introduce the main protagonists of this paper, namely the elements \(\theta(\lambda)\) for \(\lambda\in(\Lambda^{\vee})^{+}\). We also study some of their properties.

**Definition 2.1**.: Let \(\lambda\) be a dominant coweight. Since \(A_{w_{0}}+\lambda\) is an alcove, there exists a unique element \(\theta(\lambda)\in W_{a}\) such that \(A_{\theta(\lambda)}=A_{w_{0}}+\lambda\). See Figure 1 for an example.
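Formula (2.3) is easy to sanity-check against the known orders of the finite Weyl groups. The sketch below is our own check; the marks \(\eta_{i}\) of the highest root and the indices of connection are standard table data (Bourbaki plates), assumed here rather than derived in this paper.

```python
from math import factorial, prod

# (type, [eta_1, ..., eta_n], [Lambda^vee : ZPhi^vee], known |W_f|)
# Table data assumed from the standard references, not from this paper.
cases = [
    ("A3", [1, 1, 1],    4, factorial(4)),         # |W(A_n)| = (n+1)!
    ("B4", [1, 2, 2, 2], 2, 2**4 * factorial(4)),  # |W(B_n)| = 2^n n!
    ("G2", [3, 2],       1, 12),
    ("F4", [2, 3, 4, 2], 1, 1152),
]

for name, etas, index, order in cases:
    n = len(etas)
    # formula (2.3): |W_f| = n! * eta_1 ... eta_n * [Lambda^vee : ZPhi^vee]
    assert factorial(n) * prod(etas) * index == order, name
print("formula (2.3) matches the known orders on all test cases")
```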
For any subset \(J\subset S\), the subgroup \(W_{J}\) of \(W_{a}\) generated by \(J\) is called a _parabolic subgroup_. We identify subsets of \(S\) with subsets of \(\{0,1,\ldots,n\}\). In the following lemma we record some useful facts about parabolic double cosets in \(W_{a}\) (see [13, Lemma 2.12]).

**Lemma 2.2**.: _Let \(I\) and \(J\) be proper subsets of \(S\) and let \(p\) be in \(W_{I}\backslash W_{a}/W_{J}\). Then,_

(i) \(p\) _is an interval. That is, there exist_ \(\underline{p},\overline{p}\in p\) _such that_ \(p=\{x\in W\mid\underline{p}\leq x\leq\overline{p}\}\)_. In particular,_ \(p\) _has a longest element._

(ii) _The longest element_ \(\overline{p}\in p\) _is uniquely determined by the conditions_
* \(I\subset\{s\in S\mid\ell(s\overline{p})<\ell(\overline{p})\}\)_._
* \(J\subset\{s\in S\mid\ell(\overline{p}s)<\ell(\overline{p})\}\)_._

**Definition 2.3**.: For any \(X\subset W_{a}\) we define \[A(X)\coloneqq\bigsqcup_{x\in X}A_{x}.\]

**Lemma 2.4**.: _Let \(\lambda\) be a dominant coweight and let \(\sigma\in\Omega\) be such that \(\lambda\in\sigma+\mathbb{Z}\Phi^{\vee}\). Then,_

(i) \(A(W_{\sigma})=A(W_{f})+\sigma\)_._

(ii) \(A(\theta(\lambda)W_{\sigma})=A(W_{f})+\lambda\)_._

(iii) \(\theta(\lambda)\) _is maximal with respect to the Bruhat order in its right coset_ \(W_{f}\theta(\lambda)\)_._

(iv) \(\theta(\lambda)\) _is maximal with respect to the Bruhat order in its left coset_ \(\theta(\lambda)W_{\sigma}\)_._

(v) \(\theta(\lambda)\) _is maximal with respect to the Bruhat order in its double coset_ \(W_{f}\theta(\lambda)W_{\sigma}\)_._

Proof.: (i) It is known that the alcoves corresponding to \(W_{f}\) are precisely the alcoves having the origin \(\mathbf{0}\) as one of their vertices. Since \(\sigma\in\Lambda^{\vee}\) (under the identification \(\sigma\mapsto\sigma(\mathbf{0})\)), we get \[A(W_{f})+\sigma=\{\text{alcoves that have $\sigma$ as one of their vertices}\}.\] Now let \(w\in W_{f}\).
Note that \(\sigma w\sigma^{-1}A_{\text{id}}=\sigma wA_{\text{id}}\), so that \(A(W_{\sigma})=\sigma(A(W_{f}))\), by definition of \(W_{\sigma}\). It follows that \[A(W_{\sigma})\subset\{\text{alcoves that have $\sigma$ as one of their vertices}\}.\] That is, \(A(W_{f})+\sigma\) is a collection of \(|W_{f}|\) alcoves containing \(A(W_{\sigma})\). Since \(W_{\sigma}\cong W_{f}\), the set \(A(W_{\sigma})\) also has exactly \(|W_{f}|\) alcoves. Thus \(A(W_{\sigma})=A(W_{f})+\sigma\).

(ii) Write \(\lambda=\sigma+\mu\) for \(\mu\in\mathbb{Z}\Phi^{\vee}\). Let \(t_{\mu}\in W_{a}\) be the translation by \(\mu\). We notice that \[A_{t_{\mu}^{-1}\theta(\lambda)}=t_{\mu}^{-1}\theta(\lambda)(A_{\text{id}})=t_{\mu}^{-1}\left(A_{w_{0}}+\lambda\right)=(A_{w_{0}}+\lambda)-\mu=A_{w_{0}}+\sigma.\] It follows that \(A_{t_{\mu}^{-1}\theta(\lambda)}\in A(W_{f})+\sigma\). By (i) we conclude that \(t_{\mu}^{-1}\theta(\lambda)\in W_{\sigma}\). Thus, \[A(\theta(\lambda)W_{\sigma})=A(t_{\mu}W_{\sigma})=t_{\mu}A(W_{\sigma})=A(W_{\sigma})+\mu=A(W_{f})+\sigma+\mu=A(W_{f})+\lambda,\] where the last two equalities use (i) and \(\lambda=\sigma+\mu\).

(iii) We will use (2.1) in order to show that \(\ell(s\theta(\lambda))<\ell(\theta(\lambda))\) for all \(s\in S_{f}\). Notice that the claim then follows by applying Lemma 2.2 for \(I=S_{f}\) and \(J=\emptyset\). We will prove that if there is a hyperplane \(H_{\alpha,k}\) separating \(A_{\mathrm{id}}\) from \(A_{s\theta(\lambda)}\), then \(sH_{\alpha,k}\) separates \(A_{\mathrm{id}}\) from \(A_{\theta(\lambda)}\). Let \(H=H_{\alpha,k}\) be a hyperplane that separates \(A_{\mathrm{id}}\) from \(A_{s\theta(\lambda)}\). Suppose that \(sH\) does not separate \(A_{\mathrm{id}}\) from \(A_{\theta(\lambda)}\). Then, \(sH\) separates \(A_{\mathrm{id}}\) from \(A_{s}\). Since \(H_{\alpha_{s}}\) is the unique hyperplane that separates \(A_{\mathrm{id}}\) from \(A_{s}\), we conclude \(sH=H=H_{\alpha_{s}}\). However, since \(A_{\theta(\lambda)}\subset C^{+}\) we know that \(H_{\alpha_{s}}\) separates \(A_{\mathrm{id}}\) from \(A_{\theta(\lambda)}\).
This contradiction proves our claim.

(iv) For \(x\in W_{f}\) we denote by \(x^{\prime}\) the unique element \(x^{\prime}\in W_{a}\) such that \(A_{x^{\prime}}=A_{x}+\lambda\). We claim that \(\ell(x^{\prime})\leq\ell(w_{0}^{\prime})\) for all \(x\in W_{f}\). We will prove this by showing that each hyperplane \(H_{\alpha,k}\) that separates \(A_{x^{\prime}}\) from \(A_{\mathrm{id}}\) must also separate \(A_{w_{0}^{\prime}}\) from \(A_{\mathrm{id}}\). We proceed by contradiction. Suppose there is a hyperplane \(H^{\prime}\) as above that separates \(A_{x^{\prime}}\) from \(A_{\mathrm{id}}\) but does not separate \(A_{w_{0}^{\prime}}\) from \(A_{\mathrm{id}}\). Thus \(H^{\prime}\) separates \(A_{w_{0}^{\prime}}\) from \(A_{x^{\prime}}\), but these alcoves share the vertex \(\lambda\), so that \(\lambda\in H^{\prime}\). Then, the hyperplane \(H=H^{\prime}-\lambda\) passes through the origin and separates \(A_{x}\) from \(A_{w_{0}}\). As any \(H_{\alpha,0}\) separates \(A_{\mathrm{id}}\) from \(A_{w_{0}}\), the alcoves \(A_{x}\) and \(A_{\mathrm{id}}\) are on the same side of \(H\). Therefore, \(A_{x^{\prime}}\) and \(A_{\mathrm{id}^{\prime}}\) are on the same side of \(H^{\prime}\). Since \(\lambda\) is dominant, \(A_{\mathrm{id}}\) and \(A_{\mathrm{id}^{\prime}}\) are on the same side of \(H^{\prime}\). Thus \(A_{x^{\prime}}\) and \(A_{\mathrm{id}}\) are on the same side of \(H^{\prime}\), which contradicts our choice of \(H^{\prime}\) and proves our claim. Since \(\theta(\lambda)=w_{0}^{\prime}\) we conclude from (ii) that \(\ell(x^{\prime})\leq\ell(\theta(\lambda))\) for all \(x^{\prime}\in\theta(\lambda)W_{\sigma}\). The result now follows by applying Lemma 2.2 for \(I=\emptyset\) and \(J=S\setminus\{s_{\sigma}\}\).

(v) This follows by combining Lemma 2.2 for \(I=S_{f}\) and \(J=S\setminus\{s_{\sigma}\}\) together with (iii) and (iv).
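The objects of this subsection can be experimented with directly. The sketch below is our own illustration in type \(\widetilde{A}_{2}\): it computes \(\ell(\theta(\lambda))\) from formula (2.1) by counting the hyperplanes \(H_{\alpha,k}\) separating \(A_{\mathrm{id}}\) from \(A_{w_{0}}+\lambda\). The choice of interior points and the bound on \(k\) are ad hoc, and \(\lambda\) is taken in \(\mathbb{Z}\Phi^{\vee}\) so that \(\theta(\lambda)\in W_{a}\).

```python
from fractions import Fraction as F

# Type A_2 in the sum-zero hyperplane of R^3 (cf. Section 2.3.1).
POS_ROOTS = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]

def dot(u, v):
    return sum(F(a) * F(b) for a, b in zip(u, v))

def length_theta(lam):
    """ell(theta(lam)) via (2.1): count hyperplanes H_{alpha,k} separating
    A_id from A_{w_0} + lam.  Here lam must lie in ZPhi^vee."""
    p = (F(-1, 3), F(0), F(1, 3))                          # interior point of A_id
    q = tuple(c + F(l) for c, l in zip(reversed(p), lam))  # w_0 reverses coordinates
    bound = 3 * max(abs(l) for l in lam) + 2               # crude bound on relevant k
    return sum(
        1
        for alpha in POS_ROOTS
        for k in range(-bound, bound + 1)
        if (dot(p, alpha) - k) * (dot(q, alpha) - k) < 0)

assert length_theta((0, 0, 0)) == 3      # theta(0) = w_0 and ell(w_0) = 3
assert length_theta((1, 0, -1)) == 7     # lam = alpha_1 + alpha_2
```

Exact rational arithmetic (`fractions.Fraction`) avoids any boundary ambiguity: for integral \(\lambda\) the chosen interior points never land on a hyperplane.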
### Polytopes and their volumes

It is a classic result that there exists a unique translation invariant measure \(\mu\) on \(\mathbb{R}^{m}\) up to scaling. The Euclidean volume on \(\mathbb{R}^{m}\) is a translation invariant measure \(\mathrm{Vol}_{m}\) normalized so that \(\mathrm{Vol}_{m}([0,1]^{m})=1\), where \([0,1]^{m}\) is the Cartesian product of \(m\) unit segments, i.e. the unit cube. In the present paper we assume that every vector space \(V\) lives inside some \(\mathbb{R}^{m}\) that comes equipped with a Euclidean volume. The volume in \(\mathbb{R}^{m}\) has the property that \(|\det(v_{1},\dots,v_{m})|=\mathrm{Vol}_{m}(\Pi_{v_{1},\dots,v_{m}})\), where \[\Pi_{v_{1},\dots,v_{m}}=\{a_{1}v_{1}+\dots+a_{m}v_{m}\in\mathbb{R}^{m}\mid 0\leq a_{i}\leq 1,\,\forall\,\,1\leq i\leq m\}\] is the parallelepiped spanned by the vectors \(\{v_{1},\cdots,v_{m}\}\).

**Definition 2.5**.: A lattice is a discrete subgroup \(\Gamma\) of \(V\) that spans \(V\) as a vector space. As a group, a lattice is always isomorphic to \(\mathbb{Z}^{d}\) where \(d=\dim(V)\) [3, Theorem 10.4]. We define the _determinant_ of \(\Gamma\) as the volume of \(\Pi_{v_{1},\dots,v_{d}}\) for any integral basis \(v_{1},\dots,v_{d}\) of \(\Gamma\). The determinant of \(\Gamma\) does not depend on the integral basis [3, Theorem 10.8].

**Example 2.6**.: The alcove \(A_{\mathrm{id}}\) is a simplex with vertices \(\mathbf{0},-\varpi_{1}^{\vee}/\eta_{1},\dots,-\varpi_{n}^{\vee}/\eta_{n}\), where the numbers \(\eta_{i}\) are defined in Equation (2.2). Thus, we have \[\mathrm{Vol}(A_{\mathrm{id}})=\frac{\det(\Lambda^{\vee})}{n!\eta_{1}\cdots\eta_{n}}. \tag{2.4}\]

A priori the volume of a subset contained in a proper subspace of \(\mathbb{R}^{m}\) is zero. However, we can consider an induced volume on any subspace \(V\) as follows. Let \(\{v_{1},\dots,v_{k}\}\) be a basis for the subspace \(V\).
The Euclidean volume induces a measure \(\mathrm{Vol}_{k}\) (also called Euclidean volume, by abuse of notation) on \(V\), defined by the property that \(\mathrm{Vol}_{k}(\Pi_{v_{1},\ldots,v_{k}})=|\det(v_{1},\ldots,v_{k},u_{k+1},\ldots,u_{m})|\), where \(\{u_{k+1},\ldots,u_{m}\}\) is an orthonormal basis for the orthogonal complement of \(V\) in \(\mathbb{R}^{m}\). An important part of this paper focuses on the study of volumes of polytopes living in the ambient space, \(V\), of a given root system \(\Phi\). In this setting, we embed \(V\) inside some \(\mathbb{R}^{m}\) by following the conventions outlined in [9, Plates I,\(\ldots\),VI]. In particular, we have an explicit description for \(V\) within a specific \(\mathbb{R}^{m}\), accompanied by explicit descriptions for roots, coroots, coweights, etc. In the following subsection, we give all the details in type A.

#### 2.3.1 Type A

Let \(\Phi\) be a root system of type \(A_{n}\). In this case \(V\) is the hyperplane of \(\mathbb{R}^{n+1}\) (with standard basis \(\varepsilon_{1},\ldots,\varepsilon_{n+1}\)) of vectors whose coordinate sum is zero and \(\Phi=\{\varepsilon_{i}-\varepsilon_{j}\mid 1\leq i,j\leq n+1,\,i\neq j\}\). The simple roots are given by \(\alpha_{i}=\varepsilon_{i}-\varepsilon_{i+1}\) with \(1\leq i\leq n\), and the positive roots are the vectors \(\varepsilon_{i}-\varepsilon_{j}\) with \(1\leq i<j\leq n+1\).
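Since this realization is completely explicit, the two lattice determinants of this subsection, \(\det(\mathbb{Z}\Phi)=\sqrt{n+1}\) and \(\det(\Lambda^{\vee})\) (which, by (2.4) and the alcove volume \(\sqrt{n+1}/(n+1)!\), must equal \(\sqrt{n+1}/(n+1)\)), can be double-checked numerically. The sketch below is our own verification, using plain floating-point Gaussian elimination; nothing here is taken from the paper beyond the coordinates above.

```python
from math import isclose, sqrt

def det(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]
    n, d = len(a), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[piv][i]) < 1e-12:
            return 0.0
        if piv != i:
            a[i], a[piv] = a[piv], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def simple_roots(n):
    """Rows alpha_i = e_i - e_{i+1} in R^{n+1}."""
    return [[1.0 if j == i else -1.0 if j == i + 1 else 0.0
             for j in range(n + 1)] for i in range(n)]

def coweights(n):
    """Rows varpi_i = e_1 + ... + e_i - i/(n+1) * (e_1 + ... + e_{n+1}), cf. (2.6)."""
    return [[(1.0 if j < i else 0.0) - i / (n + 1) for j in range(n + 1)]
            for i in range(1, n + 1)]

for n in range(1, 6):
    u = [1.0 / sqrt(n + 1)] * (n + 1)   # unit normal to V in R^{n+1}
    assert isclose(abs(det(simple_roots(n) + [u])), sqrt(n + 1))
    assert isclose(abs(det(coweights(n) + [u])), sqrt(n + 1) / (n + 1))
print("lattice determinants verified numerically for n <= 5")
```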
In this example the parallelepiped \(\Pi_{\Delta}\subseteq V\) spanned by the simple roots has \(n\)-Euclidean volume equal to \[\det(\mathbb{Z}\Phi)=\operatorname{Vol}_{n}(\Pi_{\Delta})=\det\left|\begin{array}{cccccc}1&-1&0&\cdots&0&0\\ 0&1&-1&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&\cdots&1&-1\\ \frac{1}{\sqrt{n+1}}&\frac{1}{\sqrt{n+1}}&\frac{1}{\sqrt{n+1}}&\cdots&\frac{1}{\sqrt{n+1}}&\frac{1}{\sqrt{n+1}}\end{array}\right|=\frac{n+1}{\sqrt{n+1}}=\sqrt{n+1}, \tag{2.5}\] as can be checked by using row operations to transform the last row into \[\left[0,\cdots,0,\frac{n+1}{\sqrt{n+1}}\,\right].\] Since \((\alpha,\alpha)=2\) for all \(\alpha\in\Phi\), we have \(\alpha=\alpha^{\vee}\). Therefore the fundamental weights and fundamental coweights coincide, and are given by \[\varpi_{i}=\varepsilon_{1}+\cdots+\varepsilon_{i}-\frac{i}{n+1}(\varepsilon_{1}+\cdots+\varepsilon_{n+1}). \tag{2.6}\] The volume of the fundamental alcove is \(\sqrt{n+1}/(n+1)!\). This follows from Example 2.6 and a computation of \(\det(\Lambda^{\vee})\) using the coweights in Equation (2.6).

#### 2.3.2 Orbit polytopes

A _polytope_ \(\mathsf{P}\subseteq V\) is the convex hull of finitely many points. A _supporting hyperplane_ \(H\) of a polytope \(\mathsf{P}\) is an affine hyperplane such that \(\mathsf{P}\cap H\neq\emptyset\) and \(\mathsf{P}\) is contained in one of the two closed halfspaces defined by \(H\). A _face_ \(\mathsf{F}\subseteq\mathsf{P}\) is the intersection of \(\mathsf{P}\) with a supporting hyperplane. We also consider the whole polytope and the empty set to be faces. Faces of dimension \(0\) and \(1\) are called _vertices_ and _edges_ respectively. Faces of codimension \(1\) are called _facets_.

**Definition 2.7**.: Let \(\Phi\) be an irreducible root system with simple roots \(\Delta=\{\alpha_{1},\ldots,\alpha_{n}\}\) and let \(\lambda\in V\).
The orbit polytope \(\mathsf{P}^{\Phi}(\lambda)\) of an element \(\lambda\in V\) is defined as the convex hull of the \(W_{f}\)-orbit of \(\lambda\), i.e. \(\operatorname{Conv}\{w\cdot\lambda\mid w\in W_{f}\}\). Without loss of generality we always assume that \(\lambda\) is in the dominant region \(C^{+}\). If \(\lambda=\mathbf{0}\) then \(\mathsf{P}^{\Phi}(\lambda)=\{\mathbf{0}\}\). Otherwise, \(\mathsf{P}^{\Phi}(\lambda)\) is full dimensional. Let \(\lambda=m_{1}\varpi_{1}^{\vee}+\cdots+m_{n}\varpi_{n}^{\vee}\in C^{+}\). We define the _vanishing set_ of \(\lambda\) as \[Z(\lambda):=\{j\in I_{n}\mid m_{j}=0\}.\] For \(j\in I_{n}\) the element \(s_{j}\in W_{f}\) fixes \(\lambda\) if and only if \(j\in Z(\lambda)\). Furthermore, the stabilizer of \(\lambda\) is the parabolic subgroup \(W_{Z(\lambda)}\). The face structure of the orbit polytopes depends on vanishing sets, as the following proposition (proved in [28, Corollary 1.3]) makes precise.

**Proposition 2.8**.: _Let \(\Phi\) be a root system with Dynkin diagram \(D.\) Let \(\lambda\) be an element with vanishing set \(Z\). There is a bijection between_

(i) \(W_{f}\)_-orbits of_ \(d\)_-dimensional faces of_ \(\mathsf{P}^{\Phi}(\lambda)\)__

(ii) _Subsets_ \(J\subseteq\Delta\) _with_ \(|J|=d\) _such that no connected component of_ \(D|_{J}\) _is contained in_ \(D|_{Z}\)_._

We can say more about this bijection.

**Definition 2.9**.: For \(J\) a subset of \(\Delta\) satisfying Proposition 2.8 (ii), let \(\mathsf{F}_{J}(\lambda)\) be the unique face in the corresponding \(W_{f}\)-orbit containing \(\lambda\). We can describe \(\mathsf{F}_{J}(\lambda)\) in two ways. First we have that \[\mathsf{F}_{J}(\lambda):=\operatorname{Conv}\{w\cdot\lambda\mid w\in W_{J}\}. \tag{2.7}\] We can also describe \(\mathsf{F}_{J}\) as an intersection of \(\mathsf{P}^{\Phi}(\lambda)\) with supporting hyperplanes.
For each index \(j\in I_{n}\) we have that the hyperplane \[H_{j}(\lambda)\coloneqq\{v\in V\mid\langle\varpi_{j}^{\vee},v\rangle=\langle\varpi_{j}^{\vee},\lambda\rangle\} \tag{2.8}\] is a supporting hyperplane of \(\mathsf{P}^{\Phi}(\lambda)\). For a set \(J\subset I_{n}\) we define the affine subspace \[H_{J}(\lambda):=\bigcap_{j\in J^{c}}H_{j}(\lambda)=\lambda+\operatorname{Span}\{\alpha_{j}\,:\,j\in J\}, \tag{2.9}\] where \(J^{c}\) is the complement of \(J\) in \(I_{n}\). Note that \(H_{j}(\lambda)=H_{I_{n}\setminus\{j\}}(\lambda)\). By the second equality we can see that the linear subspace parallel to \(H_{J}(\lambda)\) is \[L_{J}=\operatorname{Span}\{\alpha_{j}\,:\,j\in J\}. \tag{2.10}\] We have that \[\mathsf{F}_{J}(\lambda)\coloneqq H_{J}(\lambda)\cap\mathsf{P}^{\Phi}(\lambda). \tag{2.11}\]

**Remark 2.10**.: The definition of orbit polytopes makes sense even for non-irreducible root systems. In particular, Equation (2.7) shows that the face \(\mathsf{F}_{J}(\lambda)\) is itself an orbit polytope with Weyl group \(W_{J}\). Proposition 2.8 and Corollary 2.12 are valid not just for orbit polytopes but also for their faces.

**Remark 2.11**.: Every facet containing \(\lambda\) is an intersection of the form \(\mathsf{P}^{\Phi}(\lambda)\cap H_{i}(\lambda)\). However, when \(i\in Z(\lambda)\) the intersection is a face of codimension greater than \(1\). We call such faces degenerate facets. An immediate consequence of Proposition 2.8 is the following.

**Corollary 2.12**.: _Let \(\lambda\) be generic, i.e., with empty vanishing set. Then the \(W_{f}\)-orbits of faces of \(\mathsf{P}^{\Phi}(\lambda)\) are in bijection with subsets of \(\Delta\). In particular, the \(W_{f}\)-orbits of the facets are in bijection with \(\Delta\). More precisely, every face of \(\mathsf{P}^{\Phi}(\lambda)\) is in the \(W_{f}\)-orbit of \(\mathsf{F}_{J}(\lambda)\) for some \(J\subset I_{n}\).
Furthermore, the face \(\mathsf{F}_{J}(\lambda)\) has dimension \(|J|\) and its \(W_{f}\)-orbit consists precisely of \([W_{f}:W_{J}]\) faces._

## 3 Lattice Formula

For any \(x,y\in W_{a}\), let \([x,y]\) be the Bruhat interval consisting of the elements \(z\in W_{a}\) such that \(x\leq z\leq y\). We write \(\leq y\coloneqq[\mathrm{id},y]\). **Proposition 3.1**.: _For every dominant coweight \(\lambda\), we have_ \[A\bigg{(}\leq\theta(\lambda)\bigg{)}=\bigsqcup_{\mu\in W_{f}\cdot X_{\lambda}} A(W_{f})+\mu, \tag{3.1}\] _where \(X_{\lambda}=\{\mu\in(\Lambda^{\vee})^{+}\mid\mu\leq\lambda\}\)._ Proof.: Let \(\sigma\in\Omega\) be such that \(\lambda\in\sigma+\mathbb{Z}\Phi^{\vee}\). By the Lifting Property and (v) in Lemma 2.4, we can easily prove that the set \(\leq\theta(\lambda)\) is \(W_{f}\)-invariant on the left and \(W_{\sigma}\)-invariant on the right. On the other hand, for every \(\sigma^{\prime}\in\Omega\) the map \[\theta:\big{(}\sigma^{\prime}+\mathbb{Z}\Phi^{\vee}\big{)}\cap(\Lambda^{\vee })^{+}\xrightarrow{\sim}W_{f}\backslash W_{a}/W_{\sigma^{\prime}}, \tag{3.2}\] given by \(\lambda^{\prime}\mapsto W_{f}\theta(\lambda^{\prime})W_{\sigma^{\prime}}\) is a bijection that intertwines the dominance order on the left with the Bruhat order on the right (for more details on this bijection see [20, Section 2.1]). In particular, if \(\mu,\lambda\in(\Lambda^{\vee})^{+}\), then \(\mu\leq\lambda\) if and only if \(\mu\in\sigma+\mathbb{Z}\Phi^{\vee}\) and \(\theta(\mu)\leq\theta(\lambda)\). Let us prove the equality \[\leq\theta(\lambda)=\bigsqcup_{\mu\in X_{\lambda}}W_{f}\theta(\mu)W_{\sigma}. \tag{3.3}\] First, we prove the inclusion \(\supseteq\). If \(\mu\in X_{\lambda}\), then \(\theta(\mu)\leq\theta(\lambda)\) and thus the invariance of \(\leq\theta(\lambda)\) implies that \(W_{f}\,\theta(\mu)W_{\sigma}\) is contained in the set \(\leq\theta(\lambda)\). We now prove the inclusion \(\subseteq\,.\) Let \(u\leq\theta(\lambda)\).
Once again, the invariance of \(\leq\theta(\lambda)\) implies that \(W_{f}\,uW_{\sigma}\) is contained in the set \(\leq\theta(\lambda)\). By the bijection (3.2), the maximal element of the coset \(W_{f}\,uW_{\sigma}\) is \(\theta(\mu)\) for some \(\mu\in(\sigma+\mathbb{Z}\Phi^{\vee})\cap(\Lambda^{\vee})^{+}\). Then, \(W_{f}\,\theta(\mu)W_{\sigma}\) is contained in the set \(\leq\theta(\lambda)\). In particular, \(\theta(\mu)\leq\theta(\lambda)\) and thus \(\mu\in X_{\lambda}\). This concludes the proof of equality (3.3). From there we see \[\leq\theta(\lambda)=W_{f}\cdot\left(\bigsqcup_{\mu\in X_{\lambda}}\theta(\mu )W_{\sigma}\right).\] By looking at the corresponding alcoves and by using Lemma 2.4(ii), we get \[A(\leq\theta(\lambda)) =W_{f}\cdot\left(\bigsqcup_{\mu\in X_{\lambda}}A(\theta(\mu)W_{ \sigma})\right)\] \[=W_{f}\cdot\left(\bigsqcup_{\mu\in X_{\lambda}}A(W_{f})+\mu\right)\] \[=\bigsqcup_{\mu\in W_{f}\cdot X_{\lambda}}A(W_{f})+\mu.\] The closure of the set \(A(W_{f})\subset V\) defines a polytope, which we call the \(W_{f}\)-polytope. If \(\Phi\) has type \(A_{n}\), then \(W_{f}\) is isomorphic to the symmetric group \(S_{n+1}\). Figure 5 shows the \(S_{3}\)-polytope and the \(S_{4}\)-polytope. The black arrows are the fundamental (co)weights and the colored arrows are the simple (co)roots. Proposition 3.1 shows that the \(W_{f}\)-polytope tessellates the closure of the set \(A(\leq\theta(\lambda))\). **Theorem 3.2** (Lattice Formula).: _For every dominant coweight \(\lambda\), we have_ \[|\leq\theta(\lambda)|=|W_{f}|\ |\mathsf{P}^{\Phi}(\lambda)\cap(\lambda+\mathbb{Z} \Phi^{\vee})|. \tag{3.4}\] Proof.: Note that \(\mathsf{P}^{\Phi}(\lambda)\cap C^{+}\) consists precisely of the elements \(\lambda-(x_{1}\alpha_{1}^{\vee}+\cdots+x_{n}\alpha_{n}^{\vee})\in C^{+}\) with all the \(x_{i}\geq 0\). It follows that \(X_{\lambda}=\mathsf{P}^{\Phi}(\lambda)\cap(\lambda+\mathbb{Z}\Phi^{\vee})\cap C ^{+}\).
Since \(\mathsf{P}^{\Phi}(\lambda)\) and \(\lambda+\mathbb{Z}\Phi^{\vee}\) are \(W_{f}\)-invariant, and \(V=W_{f}\cdot C^{+}\), we obtain \[W_{f}\cdot X_{\lambda}=\mathsf{P}^{\Phi}(\lambda)\cap(\lambda+\mathbb{Z}\Phi ^{\vee}).\] Therefore, (3.4) follows by counting alcoves in (3.1).

## 4 On the volumes \(V_{J}^{\Phi}\)

In this section we study the volumes of orbit polytopes. We fix an irreducible root system \(\Phi\) of rank \(n\). Notice that faces of \(\mathsf{P}^{\Phi}(\lambda)\) in the same \(W_{f}\)-orbit have the same volume, so without loss of generality we can focus on the representatives \(\mathsf{F}_{J}(\lambda)\). **Definition 4.1**.: For \(\lambda\in V\) and \(J\subset I_{n}\), we define \(V_{J}^{\Phi}(\lambda)\) as the \(|J|\)-dimensional volume of \(\operatorname{Conv}(W_{J}\cdot\lambda)\). Let \(D\) be the Dynkin diagram corresponding to \(\Phi\). We denote by \(D_{J}\) the graph obtained from \(D\) by eliminating all the vertices \(i\) for \(i\not\in J\). We define \(\mathcal{C}_{J}\) as the collection of the index sets of the connected components of \(D_{J}\). For example in type \(A_{n}\) for \(n\geq 4\), \(\mathcal{C}_{\{1,2,4\}}=\{\{1,2\},\{4\}\}\). We say that \(J\) is connected if \(\mathcal{C}_{J}=\{J\}\). In the following lemma we record some basic facts about \(V_{J}^{\Phi}(\lambda)\). **Lemma 4.2**.: _Let \(J\subset I_{n}\) and let \(\lambda=m_{1}\varpi_{1}^{\vee}+\cdots+m_{n}\varpi_{n}^{\vee}\in V\)._ 1. _The function_ \(V_{J}^{\Phi}\) _only depends on the "_\(J\)_-coordinates". That is, if_ \(\lambda_{J}=\sum_{j\in J}m_{j}\varpi_{j}^{\vee}\) _then_ \[V_{J}^{\Phi}(\lambda)=V_{J}^{\Phi}(\lambda_{J}).\] 2. _The volume_ \(V_{J}^{\Phi}(\lambda)\) _can be computed as the product of the volumes corresponding to the connected components of_ \(J\)_. That is,_ \[V_{J}^{\Phi}(\lambda)=\prod_{K\in\mathcal{C}_{J}}V_{K}^{\Phi}(\lambda).\] We will give a recursive formula for \(V_{J}^{\Phi}\). Before doing so, we will need some previous results.
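Both parts of Lemma 4.2 can be checked numerically in a small case. The sketch below (plain Python; our own illustration, not part of the argument) works in type \(A_{3}\), realized in \(\mathbb{R}^{4}\) with \(W_{f}=S_{4}\) permuting coordinates, and takes \(J=\{1,3\}\): since \(\alpha_{1}^{\vee}\perp\alpha_{3}^{\vee}\), the polytope \(\operatorname{Conv}(W_{J}\cdot\lambda)\) is a rectangle with edges \(\lambda-s_{1}\lambda\) and \(\lambda-s_{3}\lambda\), so its volume is a Gram determinant.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def vol_J(lam, J):
    # |J|-dimensional volume of Conv(W_J . lam) in type A_3 for J contained in {1, 3}:
    # s_1 and s_3 commute and act in orthogonal directions, so the polytope
    # is a box spanned by the edges lam - s_j(lam), j in J
    edges = []
    for j in J:
        mu = list(lam)
        mu[j - 1], mu[j] = mu[j], mu[j - 1]          # s_j swaps coordinates j, j+1
        edges.append(tuple(a - b for a, b in zip(lam, mu)))
    g = [[dot(u, v) for v in edges] for u in edges]  # Gram matrix of the edges
    if len(g) == 1:
        return math.sqrt(g[0][0])
    return math.sqrt(g[0][0] * g[1][1] - g[0][1] * g[1][0])

lam = (4, 2, 1, 0)   # fundamental coweight coordinates m = (2, 1, 1)
# Lemma 4.2(2): {1, 3} has components {1} and {3}, so the volume factors
assert abs(vol_J(lam, [1, 3]) - vol_J(lam, [1]) * vol_J(lam, [3])) < 1e-9
# Lemma 4.2(1): only m_1 and m_3 matter; lam2 has m = (2, 2, 1)
lam2 = (5, 3, 1, 0)
assert abs(vol_J(lam, [1, 3]) - vol_J(lam2, [1, 3])) < 1e-9
```

Here both \((4,2,1,0)\) and \((5,3,1,0)\) share the coordinates \(m_{1}=2\), \(m_{3}=1\), so part (1) of the lemma predicts equal areas, and indeed both rectangles have area \(4\).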
**Definition 4.3**.: For \(J\subset I_{n}\) we define \(\mathcal{B}_{J}=\{b_{1},\ldots,b_{n}\}\) where \[b_{i}=\begin{cases}\alpha_{i}^{\vee}&\text{if }i\in J\\ \varpi_{i}^{\vee}&\text{if }i\notin J\end{cases}\] We will show that this set is in fact a basis. We call \(\mathcal{B}_{J}\) the \(J\)-mixed basis. For \(j\in I_{n}\) we define \(\nu_{j}\in V\) to be the unique element satisfying \((\nu_{j},b_{i})=\delta_{ij}\) for all \(i\in I_{n}\). **Lemma 4.4**.: _Let \(J\subset I_{n}\). Then,_ 1. \(\mathcal{B}_{J}=\{b_{1},\ldots,b_{n}\}\) _is a basis of_ \(V\)_._ 2. _For all_ \(j\in J\) _we have_ \((\varpi_{j}^{\vee},\nu_{j})>0\)_._ 3. _For any_ \(x\in L_{J}\) _we have_ \(\sum_{w\in W_{J}}wx=0\)_, where_ \(L_{J}\) _is given by (_2.10_)._ Proof.: 1. We proceed by induction on \(|J^{c}|\). If \(J^{c}=\emptyset\) there is nothing to prove. Let us now assume that \(|J^{c}|\geq 1\). Let \(i\in J^{c}\), let \(J_{i}=J\cup\{i\}\), and define \(J^{c}_{i}=J^{c}\setminus\{i\}\), the complement of \(J_{i}\). Of course, \(|J^{c}_{i}|<|J^{c}|\). Thus our inductive hypothesis implies that \(\mathcal{B}_{J_{i}}\) is a basis of \(V\). Thus we can write \[\varpi_{i}^{\vee}=\sum_{j\in J_{i}}\lambda_{j}\alpha_{j}^{\vee}+\sum_{k\in J^ {c}_{i}}\lambda_{k}\varpi_{k}^{\vee}.\] (4.1) If \(\lambda_{i}\neq 0\) then we are done, since in this case \(\alpha_{i}^{\vee}\) would live in the span of \(\mathcal{B}_{J}\). So, we assume that \(\lambda_{i}=0\). After pairing both sides of (4.1) with \(\alpha_{j}\) for \(j\in J\) we get a homogeneous linear system with \(|J|\) unknowns \(\{\lambda_{j}\}_{j\in J}\) whose matrix is obtained from the Cartan matrix of \(\Phi\) by eliminating all the rows and columns indexed by \(J^{c}\). We denote this matrix by \(M_{J}\). We notice that \(M_{J}\) is a block diagonal matrix with each block being the Cartan matrix of the root system associated to a connected component of \(J\). It follows that \(M_{J}\) is invertible. We conclude that \(\lambda_{j}=0\) for all \(j\in J\).
Therefore, (4.1) reduces to \[\varpi_{i}^{\vee}=\sum_{k\in J^{c}_{i}}\lambda_{k}\varpi_{k}^{\vee}.\] (4.2) This is a contradiction since \(\{\varpi_{k}^{\vee}\}_{k\in I_{n}}\) is a basis of \(V\). It follows that \(\lambda_{i}\neq 0\) and \(\mathcal{B}_{J}\) is a basis of \(V\). 2. Fix \(j\in J\). Note that \(\nu_{j}\) can equivalently be defined as the vector \(\nu_{j}\in L_{J}\) such that \((\nu_{j},\alpha_{i}^{\vee})=\delta_{ij}\) for all \(i\in J\). Write \[\nu_{j}=\sum_{k\in J}u_{k}\alpha_{k},\] (4.3) for some real numbers \(u_{k}\). Pairing each side of (4.3) with \(\alpha_{i}^{\vee}\) for \(i\in J\) gives a system of equations \(e_{j}=M_{J}\,u\), where \(e_{j}=(\delta_{ij})_{i\in J}\) and \(u=(u_{i})_{i\in J}\) are column vectors. It follows that \(u\) is the \(j^{\rm th}\) column of \(M_{J}^{-1}\). Furthermore, by [21, §5] all the entries of the inverse of an irreducible Cartan matrix are strictly positive. Since \(M_{J}^{-1}\) is block diagonal with such inverses as blocks, the entries of \(u\) indexed by the connected component of \(J\) containing \(j\) are strictly positive. In particular, \(0<u_{j}=(\varpi_{j}^{\vee},\nu_{j})\). 3. By linearity we can assume \(x=\alpha_{j}\) for some \(j\in J\). The group \(W_{J}\) can be partitioned as \[W_{J}=W_{J}^{s_{j}}\sqcup W_{J}^{s_{j}}\cdot s_{j},\] (4.4) where \(W_{J}^{s_{j}}=\{w\in W_{J}\mid\ell(w)<\ell(ws_{j})\}\). By recalling that \(s_{j}(\alpha_{j})=-\alpha_{j}\), we obtain \[\sum_{w\in W_{J}}wx=\sum_{w\in W_{J}^{s_{j}}}(wx+ws_{j}x)=\sum_{w\in W_{J}^{s_ {j}}}(wx-wx)=0.\qed\] (4.5) **Lemma 4.5**.: _Let \(J\subset I_{n}\). For any \(\lambda\in C^{+}\) we have_ \[V_{J}^{\Phi}(\lambda)=\frac{1}{|J|}\sum_{j\in J}\left[W_{J}:W_{J\setminus\{j \}}\right]\;\frac{(\lambda,\nu_{j})}{\|\nu_{j}\|}\;V_{J\setminus\{j\}}^{\Phi}( \lambda). \tag{4.6}\] Proof.: Let \(\lambda=(m_{1},\ldots,m_{n})\) in the fundamental coweight basis. Note that \(W_{J}\cdot\lambda\subset L_{J}+\lambda\). We will compute the \(|J|\)-dimensional volume of \(\mathcal{P}=\operatorname{Conv}(W_{J}\cdot\lambda)\) in the space \(U=L_{J}+\lambda\).
For \(w\in W_{J}\) and \(j\in J\), let \[F(w,j)=w\operatorname{Conv}(W_{J\setminus\{j\}}\cdot\lambda).\] Let us suppose that \(m_{i}>0\) for all \(i\in J\), so that, by Corollary 2.12, the facets of \(\mathcal{P}\) are given by \[\mathcal{F}=\{F(w,j)\mid j\in J\text{ and }w\in W_{J}/W_{J\setminus\{j\}}\}.\] For \(K\subset I_{n}\), let \(\xi_{K}\in V\) be the vector resulting from writing \(\lambda\) in the \(J\)-mixed basis and then removing all but its \(K\)-coordinates. By definition, it follows that \(\xi_{J^{c}}\) is stabilized by \(W_{J}\), where \(J^{c}\) is the complement of \(J\) in \(I_{n}\). Since \(\xi_{J}\in L_{J}\), Lemma 4.4(iii) gives \[\frac{1}{|W_{J}|}\sum_{w\in W_{J}}w\lambda=\frac{1}{|W_{J}|}\sum_{w\in W_{J}}w (\xi_{J^{c}}+\xi_{J})=\xi_{J^{c}}+\frac{1}{|W_{J}|}\sum_{w\in W_{J}}w\xi_{J}= \xi_{J^{c}}.\] Therefore, \(\xi_{J^{c}}\in\mathcal{P}\). Let \(h(w,j)\) be the pyramid in \(U\) having \(F(w,j)\) as its base and \(\xi_{J^{c}}\) as its apex. Note that \(\mathcal{P}\) is just the union of these pyramids. Let \(j\in J\). There are exactly \([W_{J}:W_{J\setminus\{j\}}]\) pyramids of the form \(h(w,j)\), with \(w\in W_{J}/W_{J\setminus\{j\}}\). All of these pyramids have equal \(|J|\)-dimensional volume, which is given by \[\operatorname{Vol}_{|J|}\left(h(\operatorname{id},j)\right)=\frac{1}{|J|}d_{j} \operatorname{Vol}_{|J|}\left(F(\operatorname{id},j)\right)=\frac{1}{|J|}d_{j} \,V_{J\setminus\{j\}}^{\Phi}(\lambda),\] where \(d_{j}\) denotes the distance from \(\xi_{J^{c}}\) to the hyperplane \(L_{J\setminus\{j\}}+\lambda\) of \(U\). By definition, \(\nu_{j}\in L_{J}\) is orthogonal to \(L_{J\setminus\{j\}}\). Therefore, \[d_{j}=\frac{(\lambda-\xi_{J^{c}},\nu_{j})}{\|\nu_{j}\|}=\frac{(\lambda,\nu_{j} )}{\|\nu_{j}\|},\] since \((\xi_{J^{c}},\nu_{j})=0\). This gives the desired result, assuming \(m_{i}>0\) for all \(i\in J\). Finally if some \(m_{i}\) are zero, we can still divide \(\mathcal{P}\) according to its facets. 
They are contained in \(\mathcal{F}\) but the containment may be proper, see Remark 2.11. Suppose \(F(w,j)\in\mathcal{F}\) is not a facet. Then \(F(w,j)\) is still a face of \(\mathcal{P}\), but has dimension \(<|J|-1\). Then the corresponding (degenerate) pyramid \(h(w,j)\) has zero \(|J|\)-dimensional volume, so these extra faces are not a problem. **Remark 4.6**.: Let \(\mathbf{m}=(m_{i})_{i\in I_{n}}\) be an \(n\)-tuple of non-negative real numbers and \(J\subset I_{n}\). Let \(V_{J}^{\Phi}(\mathbf{m})\coloneqq V_{J}^{\Phi}(m_{1}\varpi_{1}^{\vee}+\cdots+ m_{n}\varpi_{n}^{\vee})\). Lemma 4.5 implies that \(V_{J}^{\Phi}(\mathbf{m})\) is in fact a homogeneous polynomial of degree \(|J|\) in the variables \(\{m_{j}\mid j\in J\}\). Indeed, note that \[(\lambda,\nu_{j})=\sum_{i\in J}m_{i}(\varpi_{i}^{\vee},\nu_{j}),\] for every \(j\in J\), that is, \((\lambda,\nu_{j})\) is a homogeneous polynomial of degree \(1\) in the desired variables. Since \(V_{\emptyset}^{\Phi}=1\), the result follows by induction. From now on, \(V_{J}^{\Phi}\) will mean the corresponding polynomial in \(\mathbb{R}[m_{1},m_{2},\ldots,m_{n}]\). **Lemma 4.7**.: _Let \(J,K\subset I_{n}\) and define \(m_{J}\coloneqq\prod_{j\in J}m_{j}\in\mathbb{R}[m_{1},\ldots,m_{n}]\). Let \(c_{J,K}^{\Phi}\) be the coefficient of \(m_{J}\) in \(V_{K}^{\Phi}(\mathbf{m})\) and \(c_{J}^{\Phi}=c_{J,J}^{\Phi}\)._ 1. _If_ \(K\neq J\) _then_ \(c_{J,K}^{\Phi}=0\)_._ 2. \(c_{J}^{\Phi}>0\)_._ Proof.: If \(J=\emptyset\) both statements are trivial. Thus we can assume that \(J\neq\emptyset\). 1. We recall that \(V_{K}^{\Phi}(\mathbf{m})\) is a homogeneous polynomial in the variables \(\{m_{k}\}_{k\in K}\). Suppose that \(c_{J,K}^{\Phi}\neq 0\). It follows that \(J\subset K\). On the other hand, \(m_{J}\) has degree \(|J|\), so that \(|J|=|K|\). Therefore, \(J=K\), which contradicts our hypothesis. We conclude that \(c_{J,K}^{\Phi}=0\). 2.
The coefficient of \(m_{J}\) in \((\lambda,\nu_{j})\,V_{J\setminus\{j\}}^{\Phi}(\lambda)\) is \((\varpi_{j}^{\vee},\nu_{j})\,c_{J\setminus\{j\}}^{\Phi}\). Thus, (4.6) gives \[c_{J}^{\Phi}=\frac{1}{|J|}\sum_{j\in J}[W_{J}:W_{J\setminus\{j\}}]\,\frac{( \varpi_{j}^{\vee},\nu_{j})}{\|\nu_{j}\|}\,c_{J\setminus\{j\}}^{\Phi}.\] By induction, we can assume \(c_{J\setminus\{j\}}^{\Phi}>0\). Thus, the result follows by Lemma 4.4(ii). **Corollary 4.8**.: _The polynomials \(V_{J}^{\Phi}(\mathbf{m})\), with \(J\subset I_{n}\), are linearly independent._ Proof.: The result follows by a direct application of Lemma 4.7. **Remark 4.9**.: So far we have used the Euclidean volume on \(V\). To compare our results with the volume formulas of Postnikov in type \(A_{n}\), a scalar factor of \(\sqrt{n+1}\) must be taken into account. This is because his formulas are stated in terms of the volume relative to the root lattice; that is, the volume is scaled so that the fundamental parallelepiped spanned by the simple roots has volume \(1\), whereas its Euclidean volume is \(\sqrt{n+1}\) by Equation (2.5). Our variables \(m_{1},\ldots,m_{n}\) correspond to the variables \(u_{1},\ldots,u_{n}\) in [27, Section 16].

## 5 Geometric Formula

The purpose of this section is to prove Theorem B. For the reader's convenience, we state the theorem again. **Theorem 5.1**.: _For every root system \(\Phi\), there are unique \(\mu_{J}^{\Phi}\in\mathbb{R}\) such that for any dominant coweight \(\lambda\),_ \[|\leq\theta(\lambda)|=\sum_{J\subset I_{n}}\mu_{J}^{\Phi}V_{J}^{\Phi}(\lambda). \tag{5.1}\] This implies that if \(\Phi\) has rank \(n\) and \(\lambda=(m_{i})_{i\in I_{n}}\) in the fundamental coweight basis, then \(|\leq\theta(\lambda)|\) is a polynomial of degree \(n\) in \(m_{1},\ldots,m_{n}\). Taking the sum over all \(J\) of a fixed cardinality \(|J|=d\) gives the degree \(d\) part of the polynomial. We call the coefficients \(\mu_{J}^{\Phi}\) the _geometric coefficients_. First we review some more concepts from discrete geometry.
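The volume polynomials appearing in (5.1) can be computed explicitly in small rank. Applying the recursion of Lemma 4.5 by hand in type \(A_{2}\) with \(J=\{1,2\}\) yields the closed form \(V_{\{1,2\}}^{\Phi}(\mathbf{m})=\frac{\sqrt{3}}{2}(m_{1}^{2}+4m_{1}m_{2}+m_{2}^{2})\); this is our own computation, not stated above, but it can be checked against a direct shoelace computation of the Euclidean area of the hexagon \(\operatorname{Conv}(S_{3}\cdot\lambda)\):

```python
import math
from itertools import permutations

def hexagon_area(m1, m2):
    # lambda = m1*w1 + m2*w2, with the fundamental coweights of A_2 realized
    # in the sum-zero plane of R^3; W_f = S_3 permutes coordinates
    w1, w2 = (2/3, -1/3, -1/3), (1/3, 1/3, -2/3)
    lam = tuple(m1 * a + m2 * b for a, b in zip(w1, w2))
    verts = {tuple(p) for p in permutations(lam)}
    # orthonormal basis of the plane x + y + z = 0
    u = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
    v = (1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6))
    pts = [(sum(a*b for a, b in zip(p, u)), sum(a*b for a, b in zip(p, v)))
           for p in verts]
    pts.sort(key=lambda q: math.atan2(q[1], q[0]))  # cyclic order around the centroid 0
    s = sum(x1*y2 - x2*y1 for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(s) / 2

for m1, m2 in [(1, 1), (3, 3), (2, 5)]:
    predicted = math.sqrt(3)/2 * (m1**2 + 4*m1*m2 + m2**2)
    assert abs(hexagon_area(m1, m2) - predicted) < 1e-9
```

For \(m_{1}=m_{2}\) the hexagon is regular with circumradius \(\|\lambda\|\), and the formula reduces to the familiar \(\frac{3\sqrt{3}}{2}\|\lambda\|^{2}\).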
### Transverse cones

Let \(\Gamma\subset V\) be a lattice. A polytope \(\mathsf{P}\) in \(V\) is called a _lattice_ polytope if all its vertices are in \(\Gamma\); it is called _rational_ if an integer dilation of it is a lattice polytope. A pointed cone \(\mathsf{C}\) is rational if its vertex is a lattice point and every ray (\(1\)-dimensional face) contains a lattice point. Let \(L\subset V\) be a linear subspace. If \(L\) has a basis consisting of elements of \(\Gamma\), then \(\Gamma/L\) is also a lattice in the quotient \(V/L\) by [3, Corollary 10.3]. Since \(V\) has an inner product we can define \(L^{\perp}.\) There is a canonical isomorphism between \(V/L\) and \(L^{\perp}\) under which the map \(V\to V/L\) corresponds to the orthogonal projection to \(L^{\perp}\). Notice that in general the set \(\Gamma\cap L^{\perp}\) is different from the projection of \(\Gamma\) into \(L^{\perp}\), as can be seen in Figure 6. For each facet \(\mathsf{G}\) of \(\mathsf{P}\) we define an outer normal as an element \(u_{\mathsf{G}}\in V\) such that every point in the polytope satisfies the inequality \(\langle u_{\mathsf{G}},v\rangle\leq b_{\mathsf{G}}\), for some constant \(b_{\mathsf{G}}\), with equality iff \(v\in\mathsf{G}\). If \(\mathsf{P}\) is a lattice polytope, then \(u_{\mathsf{G}}\) can be chosen to be a lattice vector. Furthermore, if \(\mathsf{P}\) is full dimensional then \(u_{\mathsf{G}}\) is unique up to positive scalar. **Definition 5.2**.: Let \(\Gamma\subset V\) be a lattice and let \(\mathsf{P}\) be a full dimensional lattice polytope. For every face \(\mathsf{F}\subset\mathsf{P}\) let \(H\) be its affine span and \(L\) the corresponding linear subspace. We define the following cones: * The **feasible cone** \(\mathsf{f}(\mathsf{F},\mathsf{P})=\{v\in V\ :\ \exists\ \epsilon>0\text{ such that }x+ \epsilon v\in\mathsf{P}\}\), where \(x\) is a point in the relative interior of \(\mathsf{F}\), i.e., a point in \(\mathsf{F}\) that does not belong to any proper face of it.
The feasible cone is independent of the interior point \(x\). * The **supporting cone** \(\mathsf{s}(\mathsf{F},\mathsf{P}):=H+\mathsf{f}(\mathsf{F},\mathsf{P})\). By definition this is a translation of the feasible cone. * The **transverse cone** \(\mathsf{t}(\mathsf{F},\mathsf{P})=\mathsf{s}(\mathsf{F},\mathsf{P})/L\subset V/L=L^ {\perp}\subset V\). * The **normal cone** \(\mathsf{n}(\mathsf{F},\mathsf{P})=\mathrm{cone}\{u_{\mathsf{G}}\ :\ \mathsf{G}\text{ is a facet such that }\mathsf{F}\subset\mathsf{G}\}\). For any of these cones, we simply say "the cone of \(\mathsf{F}\)" when \(\mathsf{P}\) is clear from context. The first three cones are visibly related. For any (possibly non-pointed) cone \(\mathsf{C}\) that includes the origin, we define its _polar_ \[\mathsf{C}^{\circ}=\{v\in V\ :\ (v,w)\leq 0,\forall w\in\mathsf{C}\}.\] The normal cone is the polar of the feasible cone, see [30, Theorem 6.46] (in that source the feasible cone is called the tangent cone). This set of definitions begs for an example. **Example 5.3**.: Consider the lattice \(\mathbb{Z}^{2}\) in \(\mathbb{R}^{2}\) and the lattice polygon given by \[\mathsf{P}=\mathrm{Conv}\begin{bmatrix}-3&-4&-3&0\\ 1&3&4&2\end{bmatrix},\] where the columns list the vertices. Let's first analyze the vertex \(v=(-3,1)\). Figure 8 illustrates the four cones. The supporting cone is the cone with vertex \(v\) emanating towards the polytope. The feasible cone is the translation of this cone that places the vertex at the origin. The linear subspace \(L\) associated to the vertex is trivial, so \(V/L=V\) and thus the transverse cone agrees with the supporting cone. The vertex \(v\) is contained in two facets, which are edges in this case. The vectors \((-2,-1)\) and \((1,-3)\) are outer normals of the edges \(\{(-3,1),(-4,3)\}\) and \(\{(-3,1),(0,2)\}\), respectively. These two outer normals (or rather a dilation) are depicted in Figure 8 as dashed arrows perpendicular to the faces.
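The supporting inequalities behind these outer normals are easy to verify directly; the following check (plain Python, coordinates taken from the example) confirms that each claimed normal is constant on its edge and that the whole polygon lies on the correct side:

```python
# vertices of the polygon P from Example 5.3
P = [(-3, 1), (-4, 3), (-3, 4), (0, 2)]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def is_outer_normal(u, edge):
    # u is an outer normal of the edge iff the inequality <u, x> <= b holds
    # on all of P, with equality exactly on the edge
    b = dot(u, edge[0])
    return (dot(u, edge[1]) == b
            and all(dot(u, x) <= b for x in P)
            and all(dot(u, x) < b for x in P if x not in edge))

assert is_outer_normal((-2, -1), [(-3, 1), (-4, 3)])
assert is_outer_normal((1, -3), [(-3, 1), (0, 2)])
assert not is_outer_normal((2, 1), [(-3, 1), (-4, 3)])  # inner normal fails
```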
The normal cone is the cone with vertex at the origin spanned by these two vectors. Now let's analyze the edge \(\{(-3,4),(0,2)\}\). In Figure 8 the supporting cone and the transverse cone are shown. First note that the linear subspace \(L\) corresponding to this edge is generated by the vector \((3,-2)\) and it is shown as a dashed line through the origin. The feasible cone (not depicted) is everything below that line. The supporting cone is everything below the line spanned by the edge (note how it is not pointed). The transverse cone is the projection to the orthogonal complement of \(L\). The outer normal can be chosen to be \((2,3)\), and it is depicted as an arrow perpendicular to the edge. The normal cone is generated by \((2,3)\), which is an outer normal of the edge. Notice that the normal cone is a ray that is polar to the feasible cone. We can now explain a less trivial example, the transverse cones for orbit polytopes. We use the notation \(\overline{v}\) to denote the image of \(v\) under a quotient map. **Lemma 5.4**.: _Let \(\Phi\subset V\) be a root system and let \(\lambda\in C^{+}\) be generic. Let \(\mathsf{F}_{J}(\lambda)\subset\mathsf{P}^{\Phi}(\lambda)\) be as in Definition 2.9. Then_ \[\mathsf{t}(\mathsf{F}_{J}(\lambda),\mathsf{P}^{\Phi}(\lambda))=\overline{ \lambda}+\operatorname{cone}\{\overline{-\alpha_{i}^{\vee}}\,:\,i\notin J\} \subset V/L_{J}, \tag{5.2}\] _where \(L_{J}\) is defined in Equation (2.10)._ Proof.: We put \(\mathsf{F}_{J}=\mathsf{F}_{J}(\lambda)\) and \(\mathsf{P}=\mathsf{P}^{\Phi}(\lambda)\) for simplicity. The linear subspace parallel to \(\mathsf{F}_{J}\) is \(L_{J}\). Note that the facets of \(\mathsf{P}\) containing \(\mathsf{F}_{J}\) are precisely \(\mathsf{F}_{I_{n}\setminus\{i\}}\) with \(i\notin J\), since \(\lambda\) is generic. It follows that the normal cone of \(\mathsf{F}_{J}\) is generated by \(\{\varpi_{i}^{\vee}\,:\,i\notin J\}\).
As the feasible cone is polar to the normal cone, we have that \[\mathsf{f}(\mathsf{F}_{J},\mathsf{P}) =\{v\,:\,(v,\varpi_{i}^{\vee})\leq 0\text{ for all }i\notin J\}\] \[=\operatorname{span}\{\alpha_{j}^{\vee}\,:\,j\in J\}+ \operatorname{cone}\{-\alpha_{i}^{\vee}\,:\,i\notin J\}.\] By definition, \(\mathsf{s}(\mathsf{F}_{J},\mathsf{P})=\lambda+\mathsf{f}(\mathsf{F}_{J}, \mathsf{P})\) and \(\mathsf{t}(\mathsf{F}_{J},\mathsf{P})=\overline{\mathsf{s}(\mathsf{F}_{J}, \mathsf{P})}\). Equation (5.2) follows, since \(\operatorname{span}\{\alpha_{j}^{\vee}\,:\,j\in J\}=L_{J}\).

### Euler-Maclaurin formulas

The following is the _Euler-Maclaurin formula_ developed by Berline and Vergne [6] (see also [3, Chapters 19-20] for an exposition). There exists a function \(\nu\) on pointed rational cones such that the following is true for all lattice polytopes \(\mathsf{P}\): \[|\mathsf{P}\cap\Gamma|=\sum_{\mathsf{F}\subseteq\mathsf{P}}\nu\,(\mathsf{t} (\mathsf{F},\mathsf{P}))\operatorname{relVol}(\mathsf{F}), \tag{5.3}\] where the sum is indexed over all nonempty faces of \(\mathsf{P}\). The relative volume \(\operatorname{relVol}(\mathsf{F})\) of a face is the volume form on its affine span \(H\) normalized with respect to the lattice \(\Gamma\cap L\), where \(L\) is the linear subspace parallel to \(H\). More precisely, \[\operatorname{relVol}(\mathsf{F})=\frac{\operatorname{Vol}(\mathsf{F})}{ \det(\Gamma\cap L)}. \tag{5.4}\] **Remark 5.5**.: To be more precise, Berline and Vergne's main construction in [6] is a function \(\mu\) that maps pointed rational cones to meromorphic functions [6, Section 4]. In this paper we only use the function \(\nu\), which is \(\mu\) evaluated at zero [6, Definition 25]; then Equation (5.3) is equivalent to [6, Theorem 26] when the function \(h\) is the constant function equal to \(1\).
Alternatively, we are using the construction of Pommersheim and Thomas in [26] in the case where the complement map is given by an inner product, see [26, Corollary 1 (iv)]. Both constructions are involved, and the actual value of the \(\nu\) function is known in only a few cases. See [15] for a purely combinatorial construction of this function. We remark that for a single polytope \(\mathsf{P}\), it is obvious that there will be a formula of this sort. The interesting part of Berline-Vergne's theorem is that the \(\nu\) function satisfies Equation (5.3) for all lattice polytopes simultaneously and has certain local properties (see Lemma 5.7). **Example 5.6** (Pick's Theorem).: Let \(\mathbb{Z}^{2}\subset\mathbb{R}^{2}\) be the ambient lattice and space and let \(\mathsf{P}=\operatorname{Conv}\{(1,0),(3,0),(4,3),(4,4),(3,4),(0,1)\}\) be the lattice polygon depicted in Figure 9. Equation (5.3) states that the total of \(15\) lattice points in the polygon can be accounted for in the following way. **Dimension 2**: The contribution to the sum of the whole polygon is its relative volume, which in this case is simply its area: \(19/2\). This is because the \(\nu\) function applied to the transverse cone of the whole polygon is \(1\) (one can prove this using that the \(\nu\) function applied to a singleton gives \(1\)). **Dimension 1**: For each edge the \(\nu\) value of the corresponding transverse cone is always \(1/2\). Additionally, we compute the relative volume. For example, on the edge \((0,1)-(3,4)\) we normalize the volume on the spanning subspace so that the fundamental domain of the induced lattice, the segment from \((0,0)\) to \((1,1)\), has length one. This edge then has relative volume \(3\). The total contribution of the edges is equal to \((3+1+1+1+2+1)/2=9/2\). **Dimension 0**: For each vertex the relative volume is equal to one. So we are simply adding the \(\nu\) values.
In this case, they are not all equal, but there is a property of \(\nu\) called _valuation_ that says that they add up to \(1\) when summed over the transverse cones of the vertices. Extrapolating this example, we get that for any lattice polygon \(\mathsf{P}\) we have the formula \[|\mathsf{P}\cap\mathbb{Z}^{2}|=\operatorname{Area}(\mathsf{P})+\frac{1}{2} \text{Boundary Points}(\mathsf{P})+1, \tag{5.5}\] which is known as Pick's formula. Now we analyze these concepts in the case of generic orbit polytopes. Recall that we know their face structure by Corollary 2.12; in particular, their \(W_{f}\)-orbits of faces are in bijection with subsets \(J\subset I_{n}\). Also, we introduce a translation of orbit polytopes \[\mathsf{Q}^{\Phi}(\lambda)=\mathsf{P}^{\Phi}(\lambda)-\lambda. \tag{5.6}\] The polytope \(\mathsf{Q}^{\Phi}(\lambda)\) is always a \(\mathbb{Z}\Phi^{\vee}\)-lattice polytope, which simplifies some considerations below. Its faces are \(\mathsf{G}_{J}(\lambda,w):=w\mathsf{F}_{J}(\lambda)-\lambda\) for all pairs \(w\in W_{f}\), \(J\subset I_{n}\). We define \(\mathsf{G}_{J}(\lambda):=\mathsf{G}_{J}(\lambda,\operatorname{id})\). **Lemma 5.7**.: _Let \(\Phi\) be a root system of rank \(n\), \(J\subset I_{n}\), \(\lambda\) a **generic** element of \((\Lambda^{\vee})^{+}\) and \(\mathsf{Q}^{\Phi}(\lambda)\) defined as in Equation (5.6). Then_ 1. _The_ \(\nu\) _value of the transverse cone of_ \(\mathsf{G}_{J}(\lambda)\) _in_ \(\mathsf{Q}^{\Phi}(\lambda)\) _is independent of_ \(\lambda\)_._ 2. _The_ \(\nu\) _values of the transverse cones of_ \(\mathsf{G}_{J}(\lambda)\) _and_ \(\mathsf{G}_{J}(\lambda,w)\) _are equal for all_ \(w\in W_{f}\)_._ 3. _For_ \(w\in W_{f}\) _we have that_ \(\operatorname{Vol}(\mathsf{G}_{J}(\lambda))=\operatorname{Vol}(\mathsf{G}_{J} (\lambda,w))\)_. Furthermore,_ \(\operatorname{relVol}(\mathsf{G}_{J}(\lambda))=\operatorname{relVol}( \mathsf{G}_{J}(\lambda,w))\)_._ Proof.: We need to use two properties of \(\nu\).
The following operations do not change the \(\nu\) value of a transverse cone. * Applying a lattice-preserving orthogonal transformation. * Translating by a lattice element. These two facts are part of the content of [6, Proposition 14].

Figure 9: A lattice polygon with \(15\) lattice points.

1. Since \(\lambda\) is generic, by applying Lemma 5.4 we obtain \[\mathsf{t}(\mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))=\mathsf{t}( \mathsf{F}_{J}(\lambda),\mathsf{P}^{\Phi}(\lambda))-\overline{\lambda}=\mathrm{ cone}\{\overline{-\alpha_{j}^{\vee}}\ :\ j\notin J\}.\] Clearly, the right-hand side of the previous equality does not depend on \(\lambda\). Therefore, the \(\nu\) value does not depend on \(\lambda\) either. 2. We can rewrite \[\mathsf{G}_{J}(\lambda,w)=w\mathsf{G}_{J}(\lambda)-(\lambda-w\lambda).\] (5.7) The facets containing \(\mathsf{G}_{J}(\lambda)\) are \(\mathsf{G}_{I_{n}\setminus\{i\}}(\lambda)\) with \(i\notin J\). Then the facets containing \(\mathsf{G}_{J}(\lambda,w)\) are \(\mathsf{G}_{I_{n}\setminus\{i\}}(\lambda,w)=w\mathsf{G}_{I_{n}\setminus\{i\} }(\lambda)-(\lambda-w\lambda)\) for \(i\notin J\), the equality by Equation (5.7). Notice that \(w\mathsf{G}_{I_{n}\setminus\{i\}}(\lambda)-(\lambda-w\lambda)\) is simply a translation of \(w\mathsf{G}_{I_{n}\setminus\{i\}}(\lambda)\) and as such it has the same outer normal. Since \(w\), as a linear transformation, preserves the inner product, the set \(\{w\varpi_{i}^{\vee}\,:\,i\notin J\}=w\{\varpi_{i}^{\vee}\,:\,i\notin J\}\) consists of the outer normals of the facets containing \(\mathsf{G}_{J}(\lambda,w)\).
This shows that \[\mathsf{n}(\mathsf{G}_{J}(\lambda,w),\mathsf{Q}^{\Phi}(\lambda))=w\cdot \mathsf{n}(\mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda)).\] (5.8) This implies that \[\mathsf{s}(\mathsf{G}_{J}(\lambda,w),\mathsf{Q}^{\Phi}(\lambda))-(w\lambda- \lambda)=\mathsf{f}(\mathsf{G}_{J}(\lambda,w),\mathsf{Q}^{\Phi}(\lambda))=w \mathsf{f}(\mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))=w\mathsf{s}( \mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda)).\] In the first equality, it suffices to subtract the single vector \(w\lambda-\lambda\) of the supporting cone because the linear span of the face \(\mathsf{G}_{J}(\lambda,w)\) is contained in the feasible cone. The middle equality follows by taking the polar on both sides of Equation (5.8). Indeed, notice that \((v,u)\leq 0\) for all \(u\in\mathsf{C}\) if and only if \((wv,wu)\leq 0\) for all \(u\in\mathsf{C}\), so taking polars is compatible with multiplying by \(w\). The last equality follows from the first one. Rearranging, we obtain \[\mathsf{s}(\mathsf{G}_{J}(\lambda,w),\mathsf{Q}^{\Phi}(\lambda))=w\mathsf{s}( \mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))+(w\lambda-\lambda)\,.\] (5.9) Notice that the linear span of the face \(\mathsf{G}_{J}(\lambda,w)\) is \(wL_{J}\). Also, as \(w\) is an orthogonal transformation, \(wL_{J}^{\perp}=(wL_{J})^{\perp}\). So we can project both sides of Equation (5.9) to arrive at \[\mathsf{t}(\mathsf{G}_{J}(\lambda,w),\mathsf{Q}^{\Phi}(\lambda))=w\mathsf{t}( \mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))+\pi(w\lambda-\lambda) \subset wL_{J}^{\perp}.\] Here \(\pi(w\lambda-\lambda)\) is the image of \(w\lambda-\lambda\) under the orthogonal projection to \(wL_{J}^{\perp}\). This vector is an element of the projected lattice \(\mathbb{Z}\Phi^{\vee}\) in \(wL_{J}^{\perp}\), so \(w\mathsf{t}(\mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))+\pi(w\lambda -\lambda)\) has the same \(\nu\)-value as \(w\mathsf{t}(\mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))\).
The linear map \(w\) is a lattice-preserving orthogonal transformation, so by the properties mentioned above \(w\mathsf{t}(\mathsf{G}_{J}(\lambda))\) and \(\mathsf{t}(\mathsf{G}_{J}(\lambda))\) have the same \(\nu\)-value. Putting everything together, both cones \(\mathsf{t}(\mathsf{G}_{J}(\lambda))\) and \(\mathsf{t}(\mathsf{G}_{J}(\lambda,w))\) have the same \(\nu\)-value. 3. The two faces differ by applying \(w\) and translating. Since \(W_{f}\) is a subgroup of the orthogonal group of \(V\), the volume does not change under translations and under multiplication by elements of \(W_{f}\). This proves the first claim. For the second claim, we notice that the restrictions of the lattice to both faces are isomorphic via the orthogonal transformation \(w\), so both fundamental parallelepipeds have the same volume and hence the relative volumes of both faces agree. As a consequence of the last item of Lemma 5.7, the quotient between the relative volume and the volume does not depend on \(w\). It does not depend on \(\lambda\) either, since it only depends on the linear subspace \(L_{J}\) spanned by the face. We define \[v_{J}:=\frac{\operatorname{relVol}(\mathsf{G}_{J}(\lambda,w))}{\operatorname{ Vol}(\mathsf{G}_{J}(\lambda,w))}=\frac{1}{\det(\mathbb{Z}\Phi^{\vee}\cap L_{J})}, \tag{5.10}\] where the second equality follows from Equation (5.4).

### Proof of the geometric formula

We first prove the existence of such a formula for \(\lambda\) generic. **Proposition 5.8**.: _For every root system \(\Phi\), there exists \(\mu_{J}^{\Phi}\in\mathbb{R}\) such that for any **generic** dominant coweight \(\lambda\),_ \[|\leq\theta(\lambda)|=\sum_{J\subset I_{n}}\mu_{J}^{\Phi}V_{J}^{\Phi}(\lambda). \tag{5.11}\] Proof.: Using the Lattice Formula, Equation (3.4), we have \[|\leq\theta(\lambda)|=|W_{f}|\ |\mathsf{P}^{\Phi}(\lambda)\cap(\lambda+ \mathbb{Z}\Phi^{\vee})|=|W_{f}|\ |\big{(}\mathsf{P}^{\Phi}(\lambda)-\lambda\big{)}\cap \mathbb{Z}\Phi^{\vee}|.
\tag{5.12}\] By Proposition 2.8, the set of vertices of \(\mathsf{P}^{\Phi}(\lambda)\) is \(W_{f}\cdot\lambda.\) By definition of a coweight and that of the action of \(W_{f}\) on \(V\), \(\lambda-w(\lambda)\in\mathbb{Z}\Phi^{\vee},\) for \(w\in W_{f}.\) Thus the polytope \(\mathsf{Q}^{\Phi}(\lambda)=\mathsf{P}^{\Phi}(\lambda)-\lambda\) is a lattice polytope with respect to the lattice \(\mathbb{Z}\Phi^{\vee}\). We use Berline-Vergne formula, Equation (5.3), to obtain \[|\mathsf{Q}^{\Phi}(\lambda)\cap\mathbb{Z}\Phi^{\vee}|=\sum_{\mathsf{F}\subseteq \mathsf{Q}^{\Phi}(\lambda)}\nu\left(\mathsf{t}(\mathsf{F},\mathsf{Q}^{\Phi}( \lambda))\right)\operatorname{relVol}(\mathsf{F}). \tag{5.13}\] For \(J\subset I_{n}\), let \(\mathcal{F}_{J}=\{F_{J}^{\prime}(\lambda,w)\ :\ w\in W_{f}\}\). By part 2 and 3 of Lemma 5.7 and Corollary 2.12, we get \[|\mathsf{Q}^{\Phi}(\lambda)\cap\mathbb{Z}\Phi^{\vee}| =\sum_{J\subset I_{n}}\sum_{\mathsf{F}\in\mathcal{F}_{J}}\nu \left(\mathsf{t}(\mathsf{F},\mathsf{Q}^{\Phi}(\lambda))\right)\operatorname{ relVol}(\mathsf{F})\] \[=\sum_{J\subset I_{n}}[W_{f}:W_{J}]\,\nu\left(\mathsf{t}( \mathsf{G}_{J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))\right)\operatorname{ relVol}(\mathsf{G}_{J}(\lambda)).\] Since \(\mathsf{Q}^{\Phi}(\lambda)\) is just a translation of \(\mathsf{P}^{\Phi}(\lambda)\), it is clear that \(V_{J}^{\Phi}(\lambda)=\operatorname{Vol}(\mathsf{G}_{J}(\lambda))\). Therefore, substituting in (5.12) we get \[|\leq\theta(\lambda)|=\sum_{J\subseteq I_{n}}\mu_{J}^{\Phi}V_{J}^{\Phi}( \lambda), \tag{5.14}\] where \[\mu_{J}^{\Phi}:=|W_{f}|\frac{|W_{f}|}{|W_{J}|}\nu\left(\mathsf{t}(\mathsf{G}_{ J}(\lambda),\mathsf{Q}^{\Phi}(\lambda))\right)v_{J}. \tag{5.15}\] Finally, Lemma 5.7 part 1 implies that \(\mu_{J}^{\Phi}\) does not depend on the choice of \(\lambda\). A function \(g:\mathbb{Z}_{\geq 0}^{n}\to\mathbb{R}\) is called a _multivariate quasi-polynomial_ if there exists a finite-index lattice \(\mathcal{L}\subset\mathbb{Z}^{n}\) (i.e. 
\(\{\mathbf{u}_{i}\}_{i\in I}=\mathbb{Z}^{n}/\mathcal{L}\), with \(I\) finite), and polynomials \(p_{i}\in\mathbb{R}[x_{1},\ldots,x_{n}]\) such that if \(\mathbf{m}:=(m_{1},\ldots,m_{n})\in\mathbb{Z}^{n}\), for all \(i\in I\) we have \[g(\mathbf{m})=p_{i}(\mathbf{m}),\text{ if }\overline{\mathbf{m}}=\mathbf{u}_{i}.\] **Proposition 5.9**.: _For every dominant coweight \(\lambda=\sum_{i}m_{i}\varpi_{i}^{\vee}\), we have that \(|\leq\theta(\lambda)|\) is a quasi-polynomial in \(m_{1},\ldots,m_{n}\)._ Proof.: By the Lattice Formula (Theorem 3.2) it is enough to prove the quasi-polynomiality of \[|\mathsf{P}^{\Phi}(\lambda)\cap(\lambda+\mathbb{Z}\Phi^{\vee})|=|\left( \mathsf{P}^{\Phi}(\lambda)-\lambda\right)\cap\mathbb{Z}\Phi^{\vee}|. \tag{5.16}\] Recall that the Minkowski sum of two polytopes \(\mathsf{P}\) and \(\mathsf{Q}\) is the polytope \(\mathsf{P}+\mathsf{Q}=\mathrm{Conv}\{p+q\,:\,p\in\mathsf{P},\,q\in\mathsf{Q}\}\). Since \(\lambda=\sum_{i}m_{i}\varpi_{i}^{\vee}\) we have the following equality of polytopes (see [1, Proposition 6.4]): \[\mathsf{P}^{\Phi}(\lambda) =m_{1}\mathsf{P}^{\Phi}(\varpi_{1}^{\vee})+m_{2}\mathsf{P}^{\Phi }(\varpi_{2}^{\vee})+\cdots+m_{n}\mathsf{P}^{\Phi}(\varpi_{n}^{\vee}), \tag{5.17}\] \[\mathsf{Q}^{\Phi}(\lambda) =m_{1}\mathsf{Q}^{\Phi}(\varpi_{1}^{\vee})+m_{2}\mathsf{Q}^{\Phi }(\varpi_{2}^{\vee})+\cdots+m_{n}\mathsf{Q}^{\Phi}(\varpi_{n}^{\vee}), \tag{5.18}\] where \(\mathsf{Q}^{\Phi}\) is defined in (5.6). In Equation (5.18) every polytope on the right-hand side is \(\mathbb{Z}\Phi^{\vee}\)-rational since the index of connection is finite. Following McMullen [23, Theorem 7] we have that the number of lattice points in an integer Minkowski sum of rational polytopes is a quasi-polynomial in the dilation factors. This implies that the right-hand side of (5.16) is a quasi-polynomial in \(m_{1},\ldots,m_{n}\). We will now prove that \(|\leq\theta(\lambda)|\) is an honest-to-god polynomial. 
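As a toy illustration of the quasi-polynomial definition above (our own example, independent of the root-system setting), counting lattice points in dilations of the rational segment \([0,1/2]\) gives \(\lfloor m/2\rfloor+1\), a quasi-polynomial for the lattice \(\mathcal{L}=2\mathbb{Z}\subset\mathbb{Z}\) with one constituent polynomial per residue class:

```python
# Quasi-polynomial toy example (not from the paper): the number of integers
# in the dilated rational segment m * [0, 1/2] is floor(m/2) + 1.  It is a
# quasi-polynomial for the lattice L = 2Z in Z, with one constituent
# polynomial per residue class of m modulo 2.
def count_points(m: int) -> int:
    """|Z ∩ m·[0, 1/2]| = floor(m/2) + 1."""
    return m // 2 + 1

def quasi_poly(m: int) -> float:
    # p_0(m) = m/2 + 1 on the even class, p_1(m) = (m + 1)/2 on the odd class
    return m / 2 + 1 if m % 2 == 0 else (m + 1) / 2

assert all(count_points(m) == quasi_poly(m) for m in range(100))
print("quasi-polynomial check passed")  # → quasi-polynomial check passed
```

The lattice polytopes appearing in this paper behave analogously, except that (as proved next) the quasi-polynomial collapses to a single polynomial.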
**Lemma 5.10**.: _Let \(h\) be a quasipolynomial in \(n\geq 1\) variables such that \(h(\mathbf{m})=0\) whenever \(\mathbf{m}\in\mathbb{Z}_{>0}^{n}\). Then \(h\) is identically zero. Consequently, if two quasipolynomials agree on \(\mathbb{Z}_{>0}^{n}\), they agree everywhere._ Proof.: Let \(\mathcal{L}\subset\mathbb{Z}^{n}\) be a finite-index lattice. It is enough to show that for any \(\mathbf{u}\in\mathbb{Z}_{>0}^{n}\) and for any polynomial \(p\in\mathbb{R}[x_{1},\ldots,x_{n}]\) such that \(p(\mathbf{m})=0\), for all \(\mathbf{m}\in(\mathbf{u}+\mathcal{L})\cap\mathbb{Z}_{>0}^{n}\), we have that \(p\) is the zero polynomial. We will prove this in two steps. _Claim 5.11_.: If \(\mathbf{u}\in\mathbb{Z}_{\geq 0}^{n}\), \(p\in\mathbb{R}[x_{1},\ldots,x_{n}]\) are such that \(p(\mathbf{m})=0\) for all \(\mathbf{m}\in(\mathbf{u}+\mathcal{L})\cap\mathbb{Z}_{>0}^{n}\), then \(p\) is zero on \(\mathbb{Z}_{>0}^{n}\). Proof.: We first prove the case when \(\mathbf{u}=\mathbf{0}\). If \(p\) vanishes on \(\mathcal{L}\cap\mathbb{Z}_{>0}^{n}\), then for every \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{n})\in\mathcal{L}\cap\mathbb{Z}_{>0}^{n}\) the univariate polynomial \(p(tv_{1},tv_{2},\ldots,tv_{n})\in\mathbb{R}[t]\) vanishes for every \(t\in\mathbb{Z}_{>0}\), hence it is the zero polynomial. This means that \(p\) vanishes on all lines through the origin containing an element of \(\mathcal{L}\), i.e. \(\mathcal{L}\)_-rational lines_. On the other hand, any element of \(\mathbb{Z}_{>0}^{n}\) belongs to some \(\mathcal{L}\)-rational line since \(\mathcal{L}\) is a finite-index lattice. Summing up, \(p\) vanishes on \(\mathbb{Z}_{>0}^{n}\). When \(\mathbf{u}\neq\mathbf{0}\), we can conclude with a similar argument by doing a change of coordinates. This ends the proof of the claim. Let us return to the proof of the lemma. It remains to show that a polynomial that vanishes on \(\mathbb{Z}_{>0}^{n}\) must be the zero polynomial. We proceed by induction on \(n\). 
If \(n=1\), then \(p\) has infinitely many zeros so it is the zero polynomial. For \(n>1\) let us write \[p=\sum_{i\geq 0}p_{i}(x_{1},\ldots,x_{n-1})x_{n}^{i}, \tag{5.19}\] where \(p_{i}\in\mathbb{R}[x_{1},\ldots,x_{n-1}]\). For each fixed \((n-1)\)-tuple \((a_{1},\ldots,a_{n-1})\in\mathbb{Z}_{>0}^{n-1}\) the polynomial \(p(a_{1},\ldots,a_{n-1},x_{n})\in\mathbb{R}[x_{n}]\) vanishes for all \(x_{n}\in\mathbb{Z}_{>0}\), and therefore, it is the zero polynomial. It follows that \(p_{i}(a_{1},\ldots,a_{n-1})=0\) for all \((a_{1},\ldots,a_{n-1})\in\mathbb{Z}_{>0}^{n-1}\). By our inductive hypothesis we conclude that \(p_{i}(x_{1},\ldots,x_{n-1})\) is the zero polynomial for all \(i\geq 0\). By substituting in (5.19) we get \(p=0\). Finally, the second claim in the lemma follows by considering the difference of the two quasipolynomials. Proof of Theorem 5.1.: Proposition 5.8 together with the fact that the \(V_{J}^{\Phi}\) are polynomials (Remark 4.6) implies that \(|\leq\theta(\lambda)|\) is a polynomial of degree \(n\) in the \(m_{1},\ldots,m_{n}\) when they are positive integers. By Proposition 5.9 we know that \(|\leq\theta(\lambda)|\) is a quasi-polynomial in the \(m_{i}\)'s. If two quasipolynomials agree on the set of positive integers then they must agree everywhere, by Lemma 5.10. Hence formula (5.14) holds for every orbit polytope, with \(\lambda\) generic or not. Finally, by Corollary 4.8 the volume polynomials are linearly independent, hence the coefficients \(\mu_{J}^{\Phi}\) are unique. This implies that if \(\Phi\) has rank \(n\) and \(\lambda=(m_{i})_{i\in I_{n}}\) in the fundamental coweight basis, then \(|\leq\theta(\lambda)|\) is a polynomial of degree \(n\) in the \(m_{1},\ldots,m_{n}\). Taking the sum over a fixed rank \(|J|=d\) gives the degree \(d\) part of the polynomial. We call the coefficients \(\mu_{J}^{\Phi}\) the _geometric coefficients_. 
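As a first sanity check (our own, not in the paper), the geometric formula can be verified by hand in rank one: for \(\Phi\) of type \(A_{1}\) realized with \(|\alpha^{\vee}|=\sqrt{2}\), the Lattice Formula gives \(|\leq\theta(m\varpi^{\vee})|=2(m+1)\), while \(\mu_{\emptyset}=|W_{f}|=2\) and, by (5.15) and (5.10) with \(\nu=1\) on the full polytope, \(\mu_{I_{1}}=2/\sqrt{2}=\sqrt{2}\):

```python
# Rank-one sanity check (our own illustration) of the geometric formula (5.11)
# for Phi of type A_1, realized with |alpha^v| = sqrt(2).
import math

def size_lower_interval(m):
    # Lattice Formula: |<= theta(m varpi^v)| = |W_f| * |Q(lambda) ∩ Z Phi^v|.
    # Q(lambda) = [-m alpha^v, 0] contains exactly m + 1 coroot-lattice points.
    return 2 * (m + 1)

def geometric_formula(m):
    mu_empty = 2.0                            # |W_f|
    mu_full = 2.0 / math.sqrt(2.0)            # sqrt(2), cf. (5.15) and (5.10)
    vol = 2.0 * m * (math.sqrt(2.0) / 2.0)    # length of P(lambda) = [-m varpi^v, m varpi^v]
    return mu_empty + mu_full * vol

for m in range(30):
    assert abs(geometric_formula(m) - size_lower_interval(m)) < 1e-9
print("geometric formula verified in rank one")  # → geometric formula verified in rank one
```

Both sides equal \(2m+2\), as expected.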
## 6 On the geometric coefficients \(\mu_{J}^{\Phi}\) In this section we compute some geometric coefficients \(\mu_{J}^{\Phi}\). For the rest of this section, \(\Phi\) will be a root system of rank \(n\). We recall from Definition 2.5 that \(\det(\Gamma)\) denotes the volume of the fundamental parallelepiped spanned by any basis of \(\Gamma\). ### The extreme geometric coefficients \(\mu_{\emptyset}^{\Phi}\) and \(\mu_{I_{n}}^{\Phi}\) The geometric coefficient corresponding to the empty set is easily determined. Using the geometric formula (5.1), we get \[\mu_{\emptyset}^{\Phi}=\sum_{J\subseteq I_{n}}\mu_{J}^{\Phi}V_{J}^{\Phi}(\mathbf{0})=|\leq\theta(\mathbf{0})|=|\leq w_{0}|=|W_{f}|.\] **Lemma 6.1**.: _Let \(\text{Vol}(A_{\text{id}})\) be the \(n\)-dimensional volume of the fundamental alcove. Then_ \[\mu_{I_{n}}^{\Phi}=\frac{1}{\text{Vol}(A_{\text{id}})}.\] Proof.: The Berline-Vergne construction \(\nu\) has the property that \(\nu\) for the whole polytope as a face is equal to \(1\). Following equations (5.15) and (5.10), we have that \[\mu_{I_{n}}^{\Phi}=\frac{|W_{f}|}{\det(\mathbb{Z}\Phi^{\vee})}. \tag{6.1}\] On the other hand, by [3, Theorem 10.9], we know that \[[\Lambda^{\vee}:\mathbb{Z}\Phi^{\vee}]=\det(\mathbb{Z}\Phi^{\vee})/\det(\Lambda^{\vee}).\] Using (2.3) and substituting in (6.1), we get \[\mu_{I_{n}}^{\Phi}=\frac{n!\eta_{1}\cdots\eta_{n}}{\det(\Lambda^{\vee})}=\frac{1}{\text{Vol}(A_{\text{id}})}, \tag{6.2}\] where the last equality follows from Equation (2.4). By (6.2), in order to compute the value of \(\mu_{I_{n}}^{\Phi}\) we need to compute both the product \(\eta_{1}\cdots\eta_{n}\) and \(\det(\Lambda^{\vee})\). The values of \(\mu_{I_{n}}^{\Phi}\) are computed using [9, Plates I,\(\ldots\),VI] and they are displayed in Table 1. ### Type A In this section we fix a root system \(\Phi\) of type \(A_{n}\). We are going to compute all the geometric coefficients \(\mu_{J}^{A_{n}}\) for connected \(\emptyset\neq J\subseteq I_{n}\). 
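As a numerical aside (our own sketch, assuming the standard realization of \(A_{n}\) inside \(\mathbb{R}^{n+1}\)), the type \(A\) entries of Table 1 can be checked by computing the covolumes \(\det(\mathbb{Z}\Phi^{\vee})\) and \(\det(\Lambda^{\vee})\) from Gram matrices; their ratio recovers the index of connection \(n+1\):

```python
import numpy as np

def simple_coroots(n):
    # alpha_i^v = e_i - e_{i+1} in R^{n+1} (type A_n realization)
    return np.array([[1.0 if j == i else -1.0 if j == i + 1 else 0.0
                      for j in range(n + 1)] for i in range(n)])

def fundamental_coweights(n):
    # varpi_k^v = e_1 + ... + e_k - (k/(n+1)) * (e_1 + ... + e_{n+1})
    w = np.zeros((n, n + 1))
    for k in range(1, n + 1):
        w[k - 1, :k] = 1.0
        w[k - 1] -= k / (n + 1)
    return w

def covolume(basis):
    # det(Gamma) = sqrt of the Gram determinant of any lattice basis
    return np.sqrt(np.linalg.det(basis @ basis.T))

for n in range(1, 7):
    covol_roots = covolume(simple_coroots(n))            # det(Z Phi^v)
    covol_weights = covolume(fundamental_coweights(n))   # det(Lambda^v)
    assert np.isclose(covol_roots, np.sqrt(n + 1))
    assert np.isclose(covol_weights, np.sqrt(n + 1) / (n + 1))
    # index of connection: [Lambda^v : Z Phi^v] = det(Z Phi^v)/det(Lambda^v)
    assert np.isclose(covol_roots / covol_weights, n + 1)
print("type A determinants agree with Table 1")
```

In particular \(\mu_{I_{n}}^{A_{n}}=n!\cdot 1/\det(\Lambda^{\vee})=(n+1)!/\sqrt{n+1}\), matching the first column of Table 1.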
For two positive integers \(d,k\) with \(k\leq d\), let \(\Delta_{k,d}\) be the hypersimplex. In formulas, \[\Delta_{k,d}=\left\{x\in[0,1]^{d}\mid x_{1}+\cdots+x_{d}=k\right\}.\] The vertices of this convex polytope lie in \(\mathbb{Z}^{d}\). Indeed, \(\Delta_{k,d}\) is the convex hull of the vectors whose coordinates consist of \(k\) ones and \(d-k\) zeros. Equivalently, \(\Delta_{k,d}\) is the convex hull of the \(S_{d}\)-orbit of the vector \((1,\ldots,1,0,\ldots,0)\in\mathbb{R}^{d}\) with \(k\) ones, where \(S_{d}\) acts by permuting coordinates. The _Ehrhart polynomial_ of \(\Delta_{k,d}\) is the polynomial \(E_{k,d}(t)\in\mathbb{Q}[t]\) such that for every \(m\in\mathbb{Z}_{\geq 0}\), \[E_{k,d}(m)=\left|\mathbb{Z}^{d}\cap m\Delta_{k,d}\right|,\] where \(m\Delta_{k,d}\) is the dilation of \(\Delta_{k,d}\) with respect to the origin by the factor \(m\). **Lemma 6.2**.: _For all \(m\in\mathbb{Z}_{\geq 0}\), and for all \(1\leq k\leq n\),_ \[|\leq\theta(m\varpi_{k})|=(n+1)!\,E_{k,n+1}(m). \tag{6.3}\] Proof.: Recall the realization of the root system of type \(A_{n}\) inside the subspace \(V\) of \(\mathbb{R}^{n+1}\) of vectors whose coordinate sum is \(0\), explained in §2.3.1. By Theorem 3.2, we just need to prove the equality \[|\mathsf{P}^{\Phi}(m\varpi_{k})\cap(m\varpi_{k}+\mathbb{Z}\Phi)|=|\mathbb{Z}^{n+1}\cap m\Delta_{k,n+1}|. \tag{6.4}\] The dilated hypersimplex \(m\Delta_{k,n+1}\) does not belong to the linear subspace \(V\subset\mathbb{R}^{n+1}\) but to the parallel affine hyperplane \(V_{mk}\) of \(\mathbb{R}^{n+1}\) of vectors whose coordinate sum is \(mk\). We will prove that the map \(T:V_{mk}\to V\) given by \[T(u)=u-\left(\frac{mk}{n+1}\right)\sum_{j=1}^{n+1}\varepsilon_{j} \tag{6.5}\] is a bijection that restricted to \((m\Delta_{k,n+1}\cap\mathbb{Z}^{n+1})\) gives exactly the set \[\mathsf{P}^{\Phi}(m\varpi_{k})\cap(m\varpi_{k}+\mathbb{Z}\Phi),\] thus proving Equation (6.4). As \(T\) is a translation it is a bijection. 
\begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline Type & \(A_{n}\) & \(B_{n}\) & \(C_{n}\) & \(D_{n}\) & \(E_{6}\) & \(E_{7}\) & \(E_{8}\) & \(F_{4}\) & \(G_{2}\) \\ \hline \hline \(\eta_{1}\cdots\eta_{n}\) & 1 & \(2^{n-1}\) & \(2^{n-1}\) & \(2^{n-3}\) & 24 & 288 & 17280 & 48 & 6 \\ \hline \(\det(\Lambda^{\vee})\) & \(\frac{\sqrt{n+1}}{n+1}\) & 1 & \(\frac{1}{2}\) & 2 & \(\frac{1}{\sqrt{3}}\) & \(\frac{1}{\sqrt{2}}\) & 1 & 2 & \(\frac{1}{\sqrt{3}}\) \\ \hline \(\mu_{I_{n}}^{\Phi}\) & \(\frac{(n+1)!}{\sqrt{n+1}}\) & \(n!2^{n-1}\) & \(n!2^{n}\) & \(n!2^{n-4}\) & \(24\sqrt{3}\cdot 6!\) & \(288\sqrt{2}\cdot 7!\) & \(17280\cdot 8!\) & 576 & \(12\sqrt{3}\) \\ \hline \end{tabular} \end{table} Table 1: Values of the geometric coefficient \(\mu_{I_{n}}^{\Phi}\).

The set \(m\Delta_{k,n+1}\) is the convex hull of the \(W_{f}=S_{n+1}\) orbit of the vector \(v_{k}\coloneqq m\varepsilon_{1}+\ldots+m\varepsilon_{k}\). Using (2.6) one sees that \(T(v_{k})=m\varpi_{k}\). Since both \(m\Delta_{k,n+1}\) and \(\mathsf{P}^{\Phi}(m\varpi_{k})\) are defined as the convex hull of the \(W_{f}=S_{n+1}\) orbit of the vectors \(v_{k}\) and \(m\varpi_{k}\), respectively, and \(T\) clearly commutes with the action of \(S_{n+1}\), we conclude that \(T(m\Delta_{k,n+1})=\mathsf{P}^{\Phi}(m\varpi_{k})\). On the other hand, for \(u=(u_{1},\ldots,u_{n+1})\in m\Delta_{k,n+1}\cap\mathbb{Z}^{n+1}\) we have \[T(u)-m\varpi_{k}=u-v_{k}=\sum_{j=1}^{n}\left(\sum_{i=1}^{j}u_{i}-(\min\{j,k\})\,m\right)\alpha_{j}\in\mathbb{Z}\Phi.\] This shows that \[T(m\Delta_{k,n+1}\cap\mathbb{Z}^{n+1})\subseteq\mathsf{P}^{\Phi}(m\varpi_{k})\cap(m\varpi_{k}+\mathbb{Z}\Phi). \tag{6.6}\] Conversely, for \(u=(u_{1},\ldots,u_{n+1})\in m\varpi_{k}+\mathbb{Z}\Phi\) we consider \(u^{\prime}=u-m\varpi_{k}+v_{k}\). It is easy to see that \(T(u^{\prime})=u\) and that \(u^{\prime}\in v_{k}+\mathbb{Z}\Phi\subset\mathbb{Z}^{n+1}\). 
This shows that \[\mathsf{P}^{\Phi}(m\varpi_{k})\cap(m\varpi_{k}+\mathbb{Z}\Phi)\subseteq T(m\Delta_{k,n+1}\cap\mathbb{Z}^{n+1}). \tag{6.7}\] Finally, by combining (6.6) and (6.7) we get (6.4). In [14, Lemma 4.1] the author provided closed formulas for the coefficients of the polynomial \(E_{k,d}(t)\). For \(0\leq m<d\), the coefficient \(\widetilde{e}_{k,d,m}\) of \(t^{m}\) in \(E_{k,d}(t)\) is given by \[\widetilde{e}_{k,d,m}=\frac{1}{(d-1)!}\,\sum_{j=0}^{k-1}\sum_{i=0}^{d-m-1}(-1)^{i+j}\binom{d}{j}(k-j)^{m}\,\left[\begin{matrix}d-j\\ m+1+i-j\end{matrix}\right]\,\left[\begin{matrix}j\\ j-i\end{matrix}\right]\,, \tag{6.8}\] where the brackets denote the (unsigned) Stirling numbers of the first kind [24, A008275]. Therefore, by Lemma 6.2, we can write \[|\leq\theta(t\varpi_{i})|=e_{i,0}+e_{i,1}\,t+\cdots+e_{i,n}\,t^{n}, \tag{6.9}\] with \(e_{i,j}=(n+1)!\,\widetilde{e}_{i,n+1,j}\). On the other hand, for \(a\) a non-negative integer, we can obtain another expression for \(|\leq\theta(a\varpi_{i})|\) by applying the Geometric Formula (5.1). Namely, \[|\leq\theta(a\varpi_{i})|=\sum_{J\subseteq I_{n}}\mu_{J}^{A_{n}}V_{J}^{A_{n}}(a\varpi_{i}). \tag{6.10}\] We can compute \(V_{J}^{A_{n}}(a\varpi_{i})\) explicitly. **Lemma 6.3**.: _Let \(\emptyset\neq J\subseteq I_{n}\) and \(a\in\mathbb{Z}_{\geq 0}\). Then, \(V_{J}^{A_{n}}(a\varpi_{i})=c_{i,J}a^{|J|}\) where \(c_{i,J}\) is the coefficient of \(m_{i}^{|J|}\) in the polynomial \(V_{J}^{A_{n}}(\mathbf{m})\). Moreover, \(c_{i,J}=0\) unless \(i\in J\) and \(J\) is connected._ Proof.: The polynomial \(V_{J}^{A_{n}}(\mathbf{m})\) is homogeneous of degree \(|J|\) in the variables \((m_{j})_{j\in J}\). It follows that \(V_{J}^{A_{n}}(a\varpi_{i})=c_{i,J}\,a^{|J|}\). By Lemma 4.2(i) if \(i\notin J\), we have \(V_{J}^{A_{n}}(a\varpi_{i})=V_{J}^{A_{n}}(\mathbf{0})=0\). Finally, let us assume that \(J\) is not connected. Let \(J_{1}\) be a connected component of \(J\) such that \(i\not\in J_{1}\). We saw above that \(V_{J_{1}}^{A_{n}}(a\varpi_{i})=0\). 
On the other hand, Lemma 4.2(ii) shows that \(V_{J_{1}}^{A_{n}}\) is a factor of \(V_{J}^{A_{n}}\). We conclude that \(V_{J}^{A_{n}}(a\varpi_{i})=0\). We can give an explicit formula for the coefficients \(c_{i,J}\) occurring in Lemma 6.3. Note that every non-empty connected set in type \(A_{n}\) is given by \(I(l,u)\coloneqq I_{l}+u\) for some \(1\leq l\leq n\) and \(0\leq u\leq n-l\). **Lemma 6.4**.: _Let \(1\leq l\leq n\) and \(0\leq u\leq n-l\). If \(i\in I(l,u)\) then we have_ \[c_{i,I(l,u)}=\frac{\sqrt{l+1}}{l!}A(l,i-u),\] _where \(A(r,s)\) denotes the Eulerian number [24, A008292]._ Proof.: By Lemma 4.5 we have \(c_{i,I(l,u)}=c_{i-u,I_{l}}\). Therefore, it is enough to show that \[c_{j,I_{l}}=\frac{\sqrt{l+1}}{l!}A(l,j) \tag{6.11}\] for all \(1\leq j\leq l\). This last equality follows by [27, Theorem 16.3(3)] via Remark 4.9. For \(1\leq l,i\leq n\), let \(\mathcal{C}_{i}^{l}\) be the collection of connected subsets \(J\subseteq I_{n}\) such that \(i\in J\) and \(|J|=l\). Although the left-hand side of (6.10) is only defined for \(a\in\mathbb{Z}_{\geq 0}\), the right-hand side is a polynomial in the variable \(a\). We know that two polynomials that agree in an infinite number of points are equal, so by equating the coefficients in (6.9) and (6.10) we get \[\sum_{J\in\mathcal{C}_{i}^{l}}\mu_{J}^{A_{n}}\,c_{i,J}=e_{i,l}, \tag{6.12}\] for all \(1\leq l,i\leq n\). We can reformulate this problem as a family of systems of linear equations as follows. For a fixed \(1\leq l\leq n\), let \(M_{l}=(m_{ij})\in M_{n-l+1}(\mathbb{R})\) where \[m_{ij}=\left\{\begin{array}{ll}c_{i,I(l,j-1)},&\mbox{if }i\in I(l,j-1);\\ 0,&\mbox{otherwise}.\end{array}\right. \tag{6.13}\] Furthermore, let \(\mu_{l}=\left(\mu_{I(l,0)}^{A_{n}},\mu_{I(l,1)}^{A_{n}},\ldots,\mu_{I(l,n-l)} ^{A_{n}}\right)^{t}\) and \(F_{l}=(e_{1,l},e_{2,l},\ldots,e_{n-l+1,l})^{t}\), where \(v^{t}\) denotes the transpose of \(v\). 
Then, the system of linear equations \(M_{l}\mu_{l}=F_{l}\) is equivalent to the subset of equations in (6.12), obtained by considering \(1\leq i\leq n-l+1\). It is easy to see that \(M_{l}\) is a lower triangular matrix with determinant \[\det(M_{l})=\prod_{i=1}^{n-l+1}c_{i,I(l,i-1)}. \tag{6.14}\] By Lemma 6.4 we know that \(c_{i,I(l,i-1)}\neq 0\) for all \(1\leq i\leq n-l+1\). Therefore, \(M_{l}\) is invertible and we can obtain the geometric coefficients for \(J\) connected by solving the system \(M_{l}\mu_{l}=F_{l}\). We finish this paper by providing closed formulas for geometric coefficients in some specific cases. For instance, if \(l=1\) then \(M_{l}\) is a diagonal matrix. Thus, we get \[\mu_{\{i\}}^{A_{n}}=\frac{e_{i,1}}{c_{i,\{i\}}}. \tag{6.15}\] Similarly, as \(M_{l}\) is a lower triangular matrix we can solve the equality associated with the first row of \(M_{l}\) in the system \(M_{l}\mu_{l}=F_{l}\) for all \(1\leq l\leq n\). This yields \[\mu_{I_{l}}^{A_{n}}=\frac{e_{1,l}}{c_{1,I_{l}}}=\frac{l!}{\sqrt{l+1}}(n+1)\left[\begin{matrix}n+1\\ l+1\end{matrix}\right], \tag{6.16}\] which corresponds to (1.2). **Remark 6.5**.: In [11] the authors provided a combinatorial formula for the \(\nu\) function (it is called \(\alpha\) in the reference) from Berline and Vergne in the type A case. By Equation (5.15), the positivity of the geometric coefficients \(\mu_{J}^{\Phi}\) is directly tied to the positivity of \(\nu\). By [11, Example 6.3] there exist faces for which the \(\nu\) function is negative. Hence there are also negative \(\mu\) coefficients.
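The whole pipeline can be run end to end in small rank. The sketch below (our own illustration for \(n=2\); the brute-force counting routine and all names are ours, not from the paper) recovers \(\mu_{J}^{A_{2}}\) by solving the systems \(M_{l}\mu_{l}=F_{l}\), obtaining the Ehrhart data by direct lattice-point counts rather than from the closed formula (6.8):

```python
import itertools
import math
import numpy as np

def hypersimplex_count(k, d, m):
    """|Z^d ∩ m·Delta_{k,d}|: integer vectors in [0, m]^d with coordinate sum m*k."""
    return sum(1 for x in itertools.product(range(m + 1), repeat=d)
               if sum(x) == m * k)

n = 2
d = n + 1
# e_{k,l} of (6.9): fit the degree-(d-1) polynomial E_{k,d}(t) through
# t = 0, ..., d-1 and rescale by |W_f| = (n+1)!.
e = {}
for k in (1, 2):
    ts = np.arange(d)
    counts = [hypersimplex_count(k, d, int(t)) for t in ts]
    coeffs = np.polyfit(ts, counts, d - 1)[::-1]   # ascending: [t^0, t^1, t^2]
    for l in range(d):
        e[(k, l)] = math.factorial(d) * coeffs[l]

# c_{i,J} from Lemma 6.4, c_{i,I(l,u)} = sqrt(l+1)/l! * A(l, i-u), using the
# Eulerian numbers A(1,1) = A(2,1) = A(2,2) = 1.
c = {(1, (1,)): math.sqrt(2), (2, (2,)): math.sqrt(2),
     (1, (1, 2)): math.sqrt(3) / 2, (2, (1, 2)): math.sqrt(3) / 2}

# l = 1: M_1 is diagonal, giving (6.15); l = 2: a single equation for I_2 = {1, 2}.
mu_1 = e[(1, 1)] / c[(1, (1,))]
mu_2 = e[(2, 1)] / c[(2, (2,))]
mu_12 = e[(1, 2)] / c[(1, (1, 2))]

assert np.isclose(mu_1, 9 / math.sqrt(2)) and np.isclose(mu_2, 9 / math.sqrt(2))
assert np.isclose(mu_12, 2 * math.sqrt(3))   # (6.16): 2!/sqrt(3) * 3 * [3 over 3]
print(mu_1, mu_2, mu_12)
```

The values \(\mu_{\{1\}}^{A_{2}}=\mu_{\{2\}}^{A_{2}}=9/\sqrt{2}\) and \(\mu_{I_{2}}^{A_{2}}=2\sqrt{3}\) agree with (6.15) and (6.16).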
2309.14110
On the thermodynamic theory of curvature-dependent surface tension
An exact equation for determining the Tolman length (TL) as a function of radius is obtained and a computational procedure for solving it is proposed. As a result of implementing this procedure, the dependences of the TL and surface tension on radius are obtained for the drop and bubble cases and various equations of state. As one of the results of the thermodynamic study, a new equation for the dependence of surface tension on radius (curvature effect), alternative to the corresponding Tolman equation and associated with the spinodal point, is obtained. The Kelvin type equation for the equimolecular radius is shown to be exact over the entire metastability region and serves as the basis for the TL equation. The expansions of surface tension near the spinodal and binodal points show that the correction to Rusanov s linear asymptotics in the first case is a series in cubes of the radius, whereas a series in curvature holds in the second case. As a result of the analysis of these expansions, the fundamental impossibility to determine the curvature effect analytically from the binodal point is established; the computational procedure determines it from the spinodal point. It is shown that just the characteristics of the system on the spinodal, mainly the TL value at zero radius, determine the curvature effect. In general, the theory reveals a close connection between surface and bulk properties.
Nikolay V. Alekseechkin
2023-09-25T13:08:08Z
http://arxiv.org/abs/2309.14110v1
# On the thermodynamic theory of curvature-dependent surface tension ###### Abstract An exact equation for determining the Tolman length (TL) as a function of radius is obtained and a computational procedure for solving it is proposed. As a result of implementing this procedure, the dependences of the TL and surface tension on radius are obtained for the drop and bubble cases and various equations of state. As one of the results of the thermodynamic study, a new equation for the dependence of surface tension on radius (curvature effect), alternative to the corresponding Tolman equation and associated with the spinodal point, is obtained. The Kelvin type equation for the equimolecular radius is shown to be exact over the entire metastability region and serves as the basis for the TL equation. The expansions of surface tension near the spinodal and binodal points show that the correction to Rusanov's linear asymptotics in the first case is a series in cubes of the radius, whereas a series in curvature holds in the second case. As a result of the analysis of these expansions, the fundamental impossibility to determine the curvature effect analytically from the binodal point is established; the computational procedure determines it from the spinodal point. It is shown that just the characteristics of the system on the spinodal, mainly the TL value at zero radius, determine the curvature effect. In general, the theory reveals a close connection between surface and bulk properties. ## I Introduction The dependence of surface tension on the interface radius, \(\sigma(R)\), is of great importance for the nucleation theory and other phenomena related to capillarity. Ignoring the curvature effect by the classical nucleation theory [1] (CNT) leads to a discrepancy between its predictions and experimental data in the region of high supersaturations corresponding to small sizes of nuclei, where this effect just manifests itself. 
Tolman, in his pioneering work [2], derived a general equation for \(\sigma(R)\), expressing this dependence in terms of a certain characteristic length \(\delta\), now called the TL. Assuming that it is constant, \(\delta=\delta_{{}_{\infty}}\), over most of the range of sizes, he obtained the famous asymptotics \(\sigma_{{}_{T}}(R)=\sigma_{{}_{\infty}}/(1+2\delta_{{}_{\infty}}/R)\), where \(\sigma_{{}_{\infty}}=\sigma(\infty)\). At the same time, Tolman assumed that this length may depend on \(R\) in the region of small sizes. However, it turns out that precisely this region, of the order of 1 nm and corresponding to nuclei containing from a few tens to a few hundred molecules, where both \(\delta\) and \(\sigma\) change sharply with \(R\), is the nucleation region. Simple estimates show that outside this region the nucleation work is so great that nucleation does not occur. In the nucleation region, the nucleus is so small that it cannot be considered homogeneous, as assumed by CNT, and the surface effects [3] are significant; the curvature effect is a consequence of this inhomogeneity. The development of density functional theories [4, 5, 6, 7, 8] was aimed at taking this inhomogeneity into account and thereby bringing the theory predictions into agreement with experimental data. As additional methods, the scaling relations [9] and their subsequent development [10, 11], as well as the diffuse interface theory [12, 13], should be mentioned. The accuracy of the above Tolman asymptotics for \(\sigma(R)\) was estimated [14, 15], and it was shown to be valid only for large drops consisting of more than \(10^{6}\) molecules, which is far beyond the nucleation region. Thus, although it has general scientific interest, it is useless for the nucleation theory. 
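As a quick numerical illustration (our own, with an arbitrary value of \(\delta_{{}_{\infty}}\) that is not taken from the paper), the Tolman asymptotics can be evaluated to see how strongly it deviates from \(\sigma_{{}_{\infty}}\) in the ~1 nm nucleation region:

```python
# Illustrative sketch (parameter values are our assumptions, not the paper's):
# the Tolman asymptotics sigma_T(R) = sigma_inf / (1 + 2*delta_inf/R) and its
# deviation from the flat-interface value in the ~1 nm nucleation region.
def sigma_tolman(R, sigma_inf=1.0, delta_inf=0.1):
    """Tolman asymptotic surface tension; R and delta_inf in the same units (nm)."""
    return sigma_inf / (1.0 + 2.0 * delta_inf / R)

for R in (0.5, 1.0, 10.0, 1000.0):
    dev = 1.0 - sigma_tolman(R)   # relative deviation from sigma_inf
    print(f"R = {R:7.1f} nm  sigma/sigma_inf = {sigma_tolman(R):.4f}  deviation = {dev:.4f}")
# The deviation is ~29% at R = 0.5 nm but only ~0.02% at R = 1000 nm,
# consistent with the asymptotics being reliable only for large drops.
```

The curvature correction is thus large exactly where the asymptotics itself (with constant \(\delta\)) is least justified.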
Nevertheless, the quantity \(\delta_{{}_{\infty}}\), providing the first step towards studying the curvature effect, has been the subject of numerous studies and debates in recent decades, including the density gradient [16, 17, 18, 19] and density functional [20, 21, 22] studies. Both negative and positive signs (depending on the research method) for this quantity have been reported in the literature for the drop case, starting already with Tolman's estimates giving a positive value for it [2]. At the same time, far fewer works are dedicated to determining the dependences \(\delta(R)\) [23, 24, 15] and \(\sigma(R)\) [24, 25, 26, 27]. Among them, the most successful is the method for determining both of these dependences proposed by Kashchiev [24], based on a simple approximation. Classical thermodynamics studies the properties of macroscopic systems (such a system is formed by a large parent phase containing a nucleus), giving various relationships between their parameters. In this way, the behavior of the thermodynamic quantity \(\sigma(R)\) and, consequently, \(\delta(R)\), should be determined by the macroscopic parameters of the system and its equation of state (EOS). The present study provides exact equations for these dependences within the framework of classical thermodynamics and thereby confirms this expectation. This theory also validates Kashchiev's approximation. ## II Thermodynamics of nucleation ### A. Nucleation work We consider the formation of a spherical nucleus of a new phase within the macroscopic one-component parent phase; the latter therefore plays the role of a thermostat with temperature \(T_{0}\) and pressure \(P_{0}\) (in what follows, subscript 0 refers to thermostat parameters). Specifically, the following two systems are considered: a drop in a vapor and a bubble in a liquid, i.e. transitions between fluid phases of the same substance. 
The general equation for the work of formation of a nucleus of arbitrary size [28, 29] \[W=\Delta E-T_{0}\Delta S+P_{0}\Delta V \tag{1}\] where \(\Delta E\), \(\Delta S\), and \(\Delta V\) are the changes in the energy, entropy, and volume of the system, respectively, when one nucleus is formed. Being a consequence of only the first and second laws of thermodynamics, this equation at the same time reflects the natural condition of nucleation - constancy of temperature and pressure of the macroscopic parent phase. This equation underlies the fluctuation theory [28] and the multivariable theory of nucleation [30, 31, 32, 33, 34, 35]; a nucleus is also a [heterophase] fluctuation. Assigning subscripts 1 and 2 to the extensive parameters of the system before and after the formation of the nucleus, respectively, as well as denoting by \(\varepsilon\), \(s\), and \(\upsilon\) the energy, entropy and volume per one particle, respectively, we have for the energy: \[E_{1}=N_{tot}\varepsilon_{0}\,,\ \ E_{2}=E+E_{0}+E_{\Sigma}=N\varepsilon+(N_{tot} -N)\varepsilon_{0}+E_{\Sigma}\,,\ \ N+N_{0}+N_{\Sigma}=N_{tot}\] (2a) where the parameters without index refer to the nucleus, \(N\) and \(N_{0}\) are the numbers of particles in the nucleus and the old phase (thermostat), respectively; subscript \[\Sigma\] denotes the superficial quantities for the chosen dividing surface (DS). The superficial quantities arise in the Gibbs approach [36], when a real system with a diffuse interface is compared to the reference thermodynamic system with a sharp interface, where the homogeneous macroscopic properties of both coexisting phases continue up to the DS. Thus, \[\Delta E=E_{2}-E_{1}=N\Delta\varepsilon+E_{\Sigma}\,,\ \ \ \Delta \varepsilon=\varepsilon-\varepsilon_{0}\] (2b) Similarly, \[\Delta S=N\Delta s+S_{\Sigma}\,.\] As is known, two main DS are used in the thermodynamic theory of a surface: the equimolecular (EM) DS and the surface of tension (ST) [36, 37, 38, 39]. 
By definition, \(N_{\Sigma}=0\) for the EM DS, so that \(N+N_{0}=N_{tot}\) in this case and \(\Delta V=(V+V_{0})-V_{1}=(N\upsilon+N_{0}\upsilon_{0})-N_{tot}\upsilon_{0}=N\Delta\upsilon\). The following equations hold for the EM DS:[37] \[\frac{\partial\sigma_{e}}{\partial T_{0}}=-\frac{S_{\Sigma}^{(e)}}{A_{e}},\quad\frac{E_{\Sigma}^{(e)}}{A_{e}}=\sigma_{e}-T_{0}\frac{\partial\sigma_{e}}{\partial T_{0}}\,,\quad A_{e}=4\pi R_{e}^{2}\] (3a) from where \[E_{\Sigma}^{(e)}-T_{0}S_{\Sigma}^{(e)}=\sigma_{e}A_{e}\] (3b) where \(R_{e}\) is the EM radius of the nucleus and \(\sigma_{e}\) is the surface tension for the EM DS. Hereafter, the quantities for the EM DS will be denoted by subscript \(e\), whereas the quantities for the ST will be written without index (e.g., \(R\) and \(\sigma\) are the ST radius and surface tension, respectively). Introducing the chemical potentials of a particle in the old phase, \(\mu_{0}(T_{0},P_{0})=\varepsilon_{0}-T_{0}s_{0}+P_{0}\upsilon_{0}\), and in the nucleus, \(\mu(T_{0},P)=\varepsilon-T_{0}s+P\upsilon\), we get \[\Delta\varepsilon-T_{0}\Delta s+P_{0}\Delta\upsilon=\Delta\mu_{\upsilon}-\Delta p\,\upsilon\,,\quad\Delta\mu_{\upsilon}=\mu(T_{0},P)-\mu_{0}(T_{0},P_{0})\,,\quad\Delta p=P-P_{0} \tag{4}\] where \(\Delta p\) is the Laplace pressure produced by the curved DS. For the DS of arbitrary radius \(r\), it has the following form:[37] \[\Delta p=\frac{2\sigma_{r}}{r}+\left[\frac{\partial\sigma_{r}}{\partial r}\right] \tag{5}\] where the brackets mean that the derivative is taken with respect to the mathematical displacement of the DS. Ono and Kondo[37] obtained the following equations: \[\sigma_{r}=\frac{\sigma}{3}\Bigg{[}\frac{R^{2}}{r^{2}}+2\frac{r}{R}\Bigg{]} \tag{6a}\] \[\frac{\partial\sigma_{e}}{\partial R_{e}}=\left[\frac{\partial\sigma_{r}}{\partial r}\right]_{r=R_{e}} \tag{6b}\] The function \(\sigma_{r}(r)\) has a minimum at the ST, i.e. \(\sigma_{r}^{(\min)}=\sigma\) at \(r=R\). From Eqs. 
(5) and (6a), one obtains \[\left[\frac{\partial\Delta p}{\partial r}\right]=0 \tag{7a}\] \[\Delta p=\frac{2\sigma}{R} \tag{7b}\] i.e. the Laplace pressure as a physical quantity does not depend on the DS location \(r\); it is determined by the ST, emphasizing the unique role of this DS. As a result, Eq. (1) acquires the following form for the EM DS: \[W_{e}=\frac{V_{e}}{\upsilon}\left(\Delta\mu_{\upsilon}-\Delta p\,\upsilon\right)+\sigma_{e}A_{e} \tag{8}\] where \(V_{e}=(4\pi/3)R_{e}^{3}\) and \(V_{e}/\upsilon=N\). The extremum condition \(\partial W_{e}/\partial R_{e}=0\) determines the critical radius \(R_{e}^{*}\) - the radius of the nucleus in unstable equilibrium with the mother phase. Computing this derivative, we differentiate the quantities \(V_{e}\), \(\Delta p\), \(\sigma_{e}\), \(A_{e}\) and utilize Eq. (5) with \(r=R_{e}\) together with Eq. (6b); as a result, \[\frac{\Delta\mu_{\upsilon}}{\upsilon}=\frac{R_{e}}{3}\frac{\partial\Delta p}{\partial R_{e}}=0 \tag{9}\] where the _identical_ equality of the derivative to zero is obtained from Eqs. (5) and (6a) with \(r=R_{e}\), together with Eq. (6b); it is similar to Eq. (7a) and can be viewed as its particular case: \[\frac{\partial\Delta p}{\partial R_{e}}=\left[\frac{\partial\Delta p}{\partial r}\right]_{r=R_{e}}=0 \tag{10}\] Eq. (9) yields the familiar condition of phase equilibrium \[\Delta\mu_{\upsilon}=0\,,\quad\mu(T_{0},P)=\mu_{0}(T_{0},P_{0}) \tag{11}\] from which the desired critical radius \(R_{e}^{*}\) must be determined; the condition \(\partial W_{e}/\partial R_{e}=0\) itself does not give this radius, as might be expected at first glance. So, an important result of the above analysis is that the equilibrium condition should give the _EM critical radius_ \(R_{e}^{*}\), which determines the nucleation barrier \(W_{e}^{*}\); this fact will be essentially used later. ### B. Thermodynamic relations for a critical nucleus Combining Eqs. 
(8) and (11), we get the work of critical nucleus formation \(W_{e}^{*}\) (which is the nucleation work proper): \[W_{e}=-\Delta p\,V_{e}+\sigma_{e}A_{e} \tag{12}\] In what follows, we will deal only with critical nuclei, so critical quantities will not be marked with an asterisk. The corresponding Gibbs equation for the work [36] \[W=-\Delta p\,V+\sigma A=\frac{1}{3}\sigma A \tag{13}\] refers to the ST; the second equality is obtained with the use of Eq. (7b). As shown above, \(\left(\Delta p\right)_{e}=\Delta p\), so Eqs. (12) and (13) have the same form. From Eq. (6a) with \(r=R_{e}\), one obtains \[\sigma_{e}A_{e}=\frac{1}{3}\sigma A+\Delta p\,V_{e}\] and Eq. (12) gives \(W_{e}=W\). Thus, the nucleation work has an invariant form and the same value for these DS, as it must for a physical quantity. Differentiating Eq. (12) with respect to \(\Delta p\), we use the equality \(\partial/\partial\Delta p=(\partial R_{e}/\partial\Delta p)(\partial/\partial R_{e})\) and Eq. (5) with \(r=R_{e}\). As a result, \[-\Delta p\frac{\partial V_{e}}{\partial\Delta p}+\sigma_{e}\frac{\partial A_{e}}{\partial\Delta p}+A_{e}\frac{\partial\sigma_{e}}{\partial\Delta p}=0\] and[24, 40] \[\frac{\partial W}{\partial\Delta p}=-V_{e} \tag{14}\] The temperature \(T_{0}\) is assumed to be constant in the present theory, so this condition is not highlighted in this and other equations, and partial derivatives will sometimes be replaced by ordinary ones. In a similar way or from Eq. (14) directly, the "conjugate" relation is derived: \[\frac{\partial W}{\partial V_{e}}=-\frac{R_{e}}{3}\frac{\partial\Delta p}{\partial R_{e}} \tag{15}\] In contrast to Eq. (9), here the derivative \(\partial\Delta p/\partial R_{e}\) is not equal to zero; it reflects the real physical dependence \(\Delta p(R_{e})\) for a _critical_ nucleus, like the dependence \(\Delta p(R)\) considered later.
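The invariance of the Laplace pressure expressed by Eqs. (7a, b), which underlies the equality \(W_e = W\) above, can be checked numerically from Eqs. (5) and (6a). The following is a minimal sketch, assuming dimensionless values \(R=\sigma=1\) purely for illustration:

```python
def sigma_r(r, R, sigma):
    # Eq. (6a): surface tension referred to a dividing surface of radius r
    return (sigma / 3.0) * (R**2 / r**2 + 2.0 * r / R)

def laplace_pressure(r, R, sigma, h=1e-6):
    # Eq. (5): Delta p = 2*sigma_r/r + [d sigma_r/d r], derivative taken numerically
    dsdr = (sigma_r(r + h, R, sigma) - sigma_r(r - h, R, sigma)) / (2.0 * h)
    return 2.0 * sigma_r(r, R, sigma) / r + dsdr

R, sigma = 1.0, 1.0          # illustrative dimensionless ST radius and tension
# sigma_r is minimal (= sigma) at the ST, r = R ...
assert abs(sigma_r(R, R, sigma) - sigma) < 1e-12
# ... and Delta p does not depend on the DS location r (Eq. (7a)): it equals 2*sigma/R
for r in (0.5 * R, R, 2.0 * R):
    assert abs(laplace_pressure(r, R, sigma) - 2.0 * sigma / R) < 1e-6
```

Evaluating the same expression at \(r=R_{e}\) reproduces the identical equality of Eq. (10).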
Introducing the TL[2], which is the spacing between the EM DS and the ST, \(\delta(R)=R_{e}(R)-R\), we have \[V_{e}=\frac{4\pi}{3}(R+\delta(R))^{3}=V+A\widetilde{\delta}(R)\,,\quad\widetilde{\delta}(R)\equiv\delta(R)\left[1+\frac{\delta(R)}{R}+\frac{1}{3}\left(\frac{\delta(R)}{R}\right)^{2}\right]\] (16a) from where \[\delta(R)=R\left[1+3\frac{\widetilde{\delta}(R)}{R}\right]^{1/3}-R\] (16b) Another important relation involves the function \(\widetilde{\delta}(R)\). Differentiating Eq. (13) with respect to \(\Delta p\) and taking into account the equality \[-\Delta p\frac{\partial V}{\partial\Delta p}+\sigma\frac{\partial A}{\partial\Delta p}=0\] we get \[\frac{\partial W}{\partial\Delta p}=-V+A\frac{\partial\sigma}{\partial\Delta p}\] Comparing this equation to Eq. (14) and employing Eq. (16a), we obtain the desired relation: \[\frac{\partial\sigma}{\partial\Delta p}=-\widetilde{\delta}(R) \tag{17}\] Eqs. (14) and (17) will be used later for surface tension expansions. From the Gibbs adsorption equation \(d\sigma=-\Gamma d\mu\), the equation \(d\mu=dP/\rho=dP_{0}/\rho_{0}\), and Eq. (7b), one obtains [2, 36] \[\frac{\partial\sigma}{\partial\Delta p}=-\frac{\Gamma}{\rho-\rho_{0}} \tag{18}\] where \(\Gamma=N_{\Sigma}/A\) is the adsorption for the ST; \(\rho=\upsilon^{-1}\) and \(\rho_{0}=\upsilon_{0}^{-1}\) are the macroscopic densities. From Eqs. (17) and (18), \[\widetilde{\delta}(R)=\frac{\Gamma}{\rho-\rho_{0}} \tag{19}\] This equation shows that the function \(\widetilde{\delta}(R)\) is not only a certain characteristic length, i.e. a geometrical function (as can be seen from Eq. (16a)), but also an important physical function. Both functions \(\widetilde{\delta}(R)\) and \(\delta(R)\) coincide in the limit \(R\rightarrow\infty\): \(\widetilde{\delta}(R)=\delta(R)=\delta_{\infty}\). Using the definition of the adsorption \(\Gamma\), Tolman [2] derived Eq.
(19) by integrating the difference of densities over a transitional surface layer between coexisting phases. In the case of a drop, the lower integration limit must lie in a liquid with the properties of a macroscopic phase. However, macroscopic properties are not reached inside very small droplets, so Tolman was unsure about the correctness of Eq. (19) in this case. The present derivation avoids this problem, so Eq. (19) is true for droplets of arbitrary size; the reference thermodynamic system exists and therefore the determination of \(\Gamma\) is possible for them.

### C. One more derivation of Tolman's equation and new equation for radius-dependent surface tension

Tolman [2] derived the equation for the dependence \(\sigma(R)\) from Eqs. (18) and (19). On the other hand, Eq. (14) with \(W=\Delta pV/2\) and Eq. (16a), or Eq. (17) alone, can also serve this purpose; the latter gives the shortest way: \[-\frac{d\sigma}{d\Delta p}=-\frac{d\sigma}{dR}\left(\frac{d\Delta p}{dR}\right)^{-1}=\frac{R}{2}\frac{\sigma^{\prime}/\sigma}{R^{-1}-\sigma^{\prime}/\sigma}=\widetilde{\delta}(R)\] where Eq. (7b) was used and the prime denotes the derivative with respect to \(R\). From here, \[\frac{\sigma^{\prime}}{\sigma}=\frac{2\widetilde{\delta}(R)/R^{2}}{1+2\widetilde{\delta}(R)/R}\equiv\varphi_{B}(R) \tag{20a}\] \[\sigma(R)=\sigma_{\infty}\exp\left[-\int\limits_{R}^{\infty}\varphi_{B}(r)dr\right] \tag{20b}\] which are the Tolman equations [2]. In this way, Tolman Eq. (20a) is derived _without using the adsorption_ \(\Gamma\), differently from its original derivation; the function \(\widetilde{\delta}(R)\) is initially defined by Eq. (16a) as merely some length, similarly to the TL \(\delta(R)\). An alternative equation for \(\sigma(R)\) is derived as follows. From Eq.
(7b), one obtains \[\frac{d\ln\Delta p}{dR}=\frac{d\ln\sigma}{dR}-\frac{1}{R}=\varphi_{B}(R)-\frac{1}{R}=-\frac{1}{R+2\widetilde{\delta}(R)}\equiv-\varphi_{S}(R) \tag{21}\] where Eq. (20a) was utilized. Integration of this equation under the physical condition that \(\Delta p\) is finite on the spinodal (\(R=0\)), \(\Delta p(0)\equiv\Delta p_{s}=2K(T_{0})\), gives the desired equation: \[\Delta p(R)=\Delta p_{s}\exp\left[-\int\limits_{0}^{R}\varphi_{S}(r)dr\right] \tag{22a}\] \[\sigma(R)=KR\exp\left[-\int\limits_{0}^{R}\varphi_{S}(r)dr\right] \tag{22b}\] The linear asymptotics \[\sigma(R)=KR \tag{23}\] in the limit \(R\to 0\) is trivially obtained from Eq. (7b) and the \(\Delta p\) finiteness condition in this limit. It was obtained by Rusanov [38, 39] from the analysis of the possible behavior of the quantity \(\Gamma/(\rho-\rho_{0})\) at \(R\to 0\), which also uses this condition. The difference between the two equations for \(\sigma(R)\) is obvious. While Tolman's dependence \(\sigma(R)\) from Eq. (20b) is "tied" to the binodal (\(R\rightarrow\infty\)), the dependence \(\sigma(R)\) from Eq. (22b) is "tied" to the spinodal. However, this difference is deeper from the physical point of view. As mentioned above, Eq. (20b) was obtained without using adsorption; so it contains no physical characteristics of the bulk phases coexisting on the binodal. Apparently, this property relates to the fact, established later as a result of a more detailed analysis, that it is impossible to determine the dependence \(\sigma(R)\) analytically from the binodal point. On the contrary, Eqs. (22a, b) contain an important physical quantity \(\Delta p_{s}\) which is determined by the bulk properties of the metastable phase (by the EOS). Just these equations will play a key role in the subsequent analysis.

### A. Equilibrium condition

Writing the condition of phase equilibrium, Eq.
(11), in the differential form, \(d\mu_{v}=\upsilon_{v}dP_{v}=d\mu_{l}=\upsilon_{l}dP_{l}\), one obtains after integration from the binodal state \[\Delta\mu_{v}=\int\limits_{P_{\infty}}^{P_{v}}\upsilon_{v}dP_{v}=\Delta\mu_{l}=\int\limits_{P_{\infty}}^{P_{l}}\upsilon_{l}dP_{l}\equiv\Delta\mu\] (24a) where the subscripts \(v\) and \(l\) relate to the vapor and liquid phases, respectively, and \(\Delta\mu\) is the difference in chemical potentials between the current (supersaturated) and saturated (binodal) states. We have \(P_{l}=P_{v}\pm\Delta p\), where the plus and minus signs refer to the drop and bubble nuclei, respectively; hence, \[\Delta\mu=\int\limits_{P_{\infty}}^{P_{v}}\upsilon_{v}dP_{v}=\int\limits_{P_{\infty}}^{P_{v}\pm\Delta p}\upsilon_{l}dP_{l} \tag{24b}\] These integrals are performed using the EOS \(\upsilon(P)\) for a given substance. To get the expansion of the function \(P_{v}(\Delta p)-P_{\infty}\) in \(\Delta p\) near the binodal for the drop case, we denote \(\Delta p=x\), \(P_{v}-P_{\infty}=y\), and find the expansion of the function \(y(x)\) from the equation \[\int\limits_{0}^{y}\upsilon_{v}(z+P_{\infty})dz=\int\limits_{0}^{y+x}\upsilon_{l}(u+P_{\infty})du \tag{25}\] under the obvious condition \(y(0)=0\). The derivatives are as follows: \[y^{\prime}(0)=\frac{c}{1-c}\approx c\,,\quad y^{\prime\prime}(0)=\frac{c^{2}}{(1-c)^{3}}\left[\chi_{v}-\frac{\chi_{l}}{c}\right]\approx c^{2}\left[\chi_{v}-\frac{\chi_{l}}{c}\right],\quad c=\frac{\upsilon_{\infty}^{l}}{\upsilon_{\infty}^{v}}\,,\] \[\upsilon_{\infty}^{l}=\upsilon_{l}(P_{\infty})\,,\ \upsilon_{\infty}^{v}=\upsilon_{v}(P_{\infty})\,,\quad\chi_{v}=-\frac{1}{\upsilon_{\infty}^{v}}\frac{\partial\upsilon_{v}}{\partial P_{v}}(P_{\infty})\,,\quad\chi_{l}=-\frac{1}{\upsilon_{\infty}^{l}}\frac{\partial\upsilon_{l}}{\partial P_{l}}(P_{\infty})\] where \(\chi_{v}\) and \(\chi_{l}\) are the isothermal compressibility coefficients of the vapor and liquid, respectively, on the binodal. The condition \(c\ll 1\) is assumed; e.g.,
\(c\approx 2\times 10^{-5}\) for water at room temperature and \(c\approx 3\times 10^{-3}\) for argon at \(T_{0}=85\) K. Thus, up to second order terms, one obtains \[P_{v}(\Delta p)-P_{\infty}=c\Delta p+\frac{c^{2}}{2}\left[\chi_{v}-\frac{\chi_{l}}{c}\right](\Delta p)^{2}\] (26a) Further, expanding the "vapor integral" \(\Delta\mu_{v}\) in a series in \((P_{v}-P_{\infty})\) up to second order terms, we have \[\int\limits_{0}^{P_{v}-P_{\infty}}\upsilon_{v}(z+P_{\infty})dz=\upsilon_{\infty}^{v}\left[\left(P_{v}-P_{\infty}\right)-\frac{\chi_{v}}{2}\left(P_{v}-P_{\infty}\right)^{2}\right]=\upsilon_{\infty}^{l}\left[\Delta p-\frac{\chi_{l}}{2}\left(\Delta p\right)^{2}\right] \tag{26b}\] where Eq. (26a) was utilized. On the other hand, expanding the "liquid integral" \(\Delta\mu_{l}\) in a series in \(\Delta p\), we can neglect \(\left(P_{v}-P_{\infty}\right)\) in comparison with \(\Delta p\), in view of the condition \(c\ll 1\): \[\int\limits_{0}^{\left(P_{v}-P_{\infty}\right)+\Delta p}\upsilon_{l}(u+P_{\infty})du=\upsilon_{\infty}^{l}\left[\Delta p-\frac{\chi_{l}}{2}\left(\Delta p\right)^{2}+\frac{1}{6\upsilon_{\infty}^{l}}\frac{d^{2}\upsilon_{l}}{dP_{l}^{2}}\left(P_{\infty}\right)\left(\Delta p\right)^{3}+\cdots\right] \tag{27}\] Comparing this equation to Eq. (26b), we see that both expansions coincide, as they must. An important conclusion is that these expansions are expressed only in terms of the parameters of the _liquid_, rather than the vapor. The same result is obtained for the bubble case; the only difference is that the series in \(\Delta p\) is alternating, where the odd terms have a minus sign. Apparently, the physical reason is that the Laplace pressure \(\Delta p\) "breaks the symmetry" between the vapor and liquid phases: it acts on the _liquid_ both in the case of a drop and in the case of a bubble; it compresses a liquid drop and stretches the bulk liquid phase in the bubble case.
The latter can be easily seen by the example of a large negative pressure \(\left|P_{l}\right|\gg P_{v}\) in the liquid. The Laplace pressure \(\Delta p=-P_{l}\) in this case ensures the mechanical equilibrium of a bubble of critical size and, consequently, stretches the liquid and tends to collapse the bubble; the coefficient \(\chi_{l}\) is the "stretch factor" here. The vapor spinodal pressure \(P_{s}^{v}\) is determined by the EOS; e.g., this is the maximum point on the van der Waals (vdW) curve \(P(\upsilon)\). The corresponding \(\Delta p_{s}^{(d)}\) value is found as a root of Eq. (24b): \[\int\limits_{P_{\infty}}^{P_{s}^{v}}\upsilon_{v}dP_{v}=\int\limits_{P_{\infty}}^{P_{s}^{v}+\Delta p_{s}^{(d)}}\upsilon_{l}dP_{l}\] (28a) where each of the integrals represents the limiting difference of chemical potentials \(\Delta\mu_{s}^{(d)}\) for the drop case. The liquid spinodal pressure \(P_{s}^{l}\) (the minimum point on the vdW curve) is the theoretical ultimate tensile strength of the liquid at a given temperature; the corresponding \(\Delta p_{s}^{(b)}\) value is given by the equation \[\int\limits_{P_{\infty}}^{P_{s}^{l}+\Delta p_{s}^{(b)}}\upsilon_{v}dP_{v}=\int\limits_{P_{\infty}}^{P_{s}^{l}}\upsilon_{l}dP_{l}\] (28b) where each of the integrals gives the limiting difference of chemical potentials \(\Delta\mu_{s}^{(b)}\) for the bubble case. At a sufficiently low temperature, when \(P_{s}^{l}\) is negative and large in absolute value, the approximation \(\Delta p_{s}^{(b)}=-P_{s}^{l}\) will be used with good accuracy, as \(P_{v}\ll\left|P_{s}^{l}\right|\) in this case.

### B. Basic equation for determining the Tolman length

As stated above, the condition of phase equilibrium, i.e. Eq. (24b), should give the EM critical radius \(R_{e}\). On the other hand, a similar derivation of the nucleation work with the use of the ST[31] yields Gibbs Eq.
(13) and the same equilibrium condition, from which the ST critical radius \(R\) should be determined. Indeed, Eq. (24b) includes just the radius \(R\) through \(\Delta p(R)=2\sigma(R)/R\); however, it cannot be determined from this equation until the function \(\sigma(R)\) is known. So, Eq. (24b) purporting to give both radii \(R_{e}\) and \(R\), in its present form gives neither of them. To get \(R_{e}\) from the equilibrium condition, Eq. (24b) must be transformed to contain \(R_{e}\) instead of \(R\). Preliminarily, it should be noted that the macroscopic liquid phase in the reference thermodynamic system is under the pressure \(P=P_{0}+\Delta p\) in the drop case, i.e., is compressed. Thus, both the radii \(R\) and \(R_{e}\) defined using this system implicitly take into account both the compressibility and the dependence \(\sigma(R)\). The ST radius \(R\) in the condition of equilibrium takes into account both these phenomena by means of the liquid integral as a whole in Eq. (24b), whereas the EM radius \(R_{e}\) in addition to \(R\) contains the function \(\delta(R)\) which _itself can do this_ under constant values of \(\upsilon_{l}\) and \(\sigma\) in this integral. Thus, we can assume that the use of both the constant volume \(\upsilon_{l}=\upsilon_{\infty}^{l}\) and the constant surface tension \(\sigma=\sigma_{\infty}\) together with \(R_{e}\) instead of \(R\) in Eq. (24b) _does not change it_. Under these conditions, the liquid integral is easily performed, which yields \[\Delta\mu=\pm\upsilon_{\infty}^{l}\frac{2\sigma_{\infty}}{R_{e}}+\upsilon_{ \infty}^{l}(P_{v}-P_{\infty})=\pm\upsilon_{\infty}^{l}\frac{2\sigma_{\infty}} {R+\delta(R)}+\upsilon_{\infty}^{l}(P_{v}-P_{\infty}) \tag{29}\] The mathematical meaning of such a transformation of Eq. 
(24b) is as follows: the function \(\delta(R)\) takes into account both the compressibility and the dependence \(\sigma(R)\) through the entire metastability region, thereby giving the true dependence of \(\Delta\mu\) on the degree of supersaturation; this fact gives another physical meaning to the TL. In other words, Eq. (24b) with the variable quantities \(\upsilon_{l}\), \(\sigma\) and \(R\) is _identical_ to the same equation with the constant (binodal) quantities \(\upsilon_{l}\) and \(\sigma\) when \(R_{e}\) is used instead of \(R\). Eq. (29) gives the desired EM critical radius for a given supersaturation ratio \(\xi=P_{v}/P_{\infty}\), \[R_{e}(\xi)=\frac{\pm 2\upsilon_{\infty}^{l}\sigma_{\infty}}{\Delta\mu(\xi)-\upsilon_{\infty}^{l}(P_{v}-P_{\infty})} \tag{30}\] and resembles the familiar Kelvin equation, but the latter contains \(R\), not \(R_{e}\), which is an approximation. As a supporting physical argument in favor of the fact that the quantity \(R_{e}\) in Eq. (29) is indeed the EM radius, we note that it has been shown both experimentally[41, 42] and in density functional calculations[8, 43] that Eq. (29) accurately estimates the radius \(R_{e}\) even for very small nuclei; this fact was used in Refs. [23] and [43]. Other arguments in favor of this equation will be given later, after calculations using it. The basic equation for determining the function \(\delta(R)\) is obtained from Eq. (30) as follows: \[\delta(R)=\frac{\pm 2\upsilon_{\infty}^{l}\sigma_{\infty}}{\Delta\mu(\Delta p(R))-\upsilon_{\infty}^{l}[P_{v}(\Delta p(R))-P_{\infty}]}-R \tag{31}\] where either \(\Delta\mu_{v}\) or \(\Delta\mu_{l}\) can be taken as \(\Delta\mu\). The dependence \(\Delta\mu(\Delta p(R))\) is highlighted here for the procedure for determining \(\delta(R)\) described later; it is given by Eq. (24b), which also defines the dependence \(P_{v}(\Delta p)\). Interestingly, the form of this equation is consistent with the form of general Eq. (16b). Eq.
(31) confirms the expectation for the function \(\delta(R)\) to be determined by the macroscopic parameters of the system and its EOS. So, both radii \(R_{e}\) and \(R\) are indeed determined by the equilibrium condition, as expected, but in quite different ways. The radius \(R_{e}\) is determined directly and explicitly from the equilibrium condition, Eq. (30), and underlies the determination of the function \(\delta(R)\). The radius \(R\) is calculated only after obtaining the function \(\delta(R)\) from Eq. (31) (and, consequently, the dependences \(\sigma(R)\) and \(\Delta p(R)\)) as the root of Eq. (24b); this root \(R(\xi)\) gives the desired dependence on the supersaturation. Thus, the way to get the radius \(R\) is much more complicated. These facts, together with others, show the role of the EM DS in surface thermodynamics. The spinodal value \(\delta(0)\equiv\delta_{s}\) of the TL is \[\delta_{s}^{(d)}=\frac{2\upsilon_{\infty}^{l}\sigma_{\infty}}{\Delta\mu_{s}^{(d)}-\upsilon_{\infty}^{l}(P_{s}^{v}-P_{\infty})}\,,\qquad\delta_{s}^{(b)}=\frac{-2\upsilon_{\infty}^{l}\sigma_{\infty}}{\Delta\mu_{s}^{(b)}-\upsilon_{\infty}^{l}(P_{v}(\Delta p_{s}^{(b)})-P_{\infty})} \tag{32}\] for drops and bubbles, respectively (\(\Delta\mu_{s}^{(b)}<0\)).

### C. Equations of state

Quite accurate EOS for various substances are available in the literature; however, as a rule, they are quite complex and contain dozens of coefficients. To get only qualitative results illustrating the theory, some simple cubic EOS are used here. First, it is necessary to establish criteria for evaluating these EOS in relation to this theory.
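Since the spinodal pressures \(P_{s}^{v}\) and \(P_{s}^{l}\) entering Eqs. (28a, b) and (32) are extracted from a given EOS as the extrema of the isotherm \(P(\upsilon)\), the extraction procedure can be sketched numerically. The snippet below uses the vdW EOS in reduced units at \(T/T_{c}=0.85\); the reduced form and the temperature are assumptions made only for this illustration:

```python
# Reduced vdW isotherm (illustrative units): P = 8T/(3v - 1) - 3/v**2
def P(v, T):
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def dPdv(v, T):
    return -24.0 * T / (3.0 * v - 1.0)**2 + 6.0 / v**3

def bisect(f, a, b, n=100):
    # simple bisection; f must change sign on [a, b]
    for _ in range(n):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

T = 0.85
v_sl = bisect(lambda v: dPdv(v, T), 0.40, 1.00)  # liquid spinodal: minimum of P(v)
v_sv = bisect(lambda v: dPdv(v, T), 1.00, 3.00)  # vapor spinodal: maximum of P(v)
assert abs(dPdv(v_sl, T)) < 1e-8 and abs(dPdv(v_sv, T)) < 1e-8
assert P(v_sl, T) < P(v_sv, T)   # P_s^l lies below P_s^v on the isotherm
```

The corresponding \(\Delta p_{s}^{(d)}\) and \(\Delta p_{s}^{(b)}\) then follow from Eqs. (28a, b) by integrating the same EOS between the binodal and the spinodal points.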
The chemical potential difference for the liquid phase can be represented as follows: \[\Delta\mu_{l}=\int_{\rho_{\infty}^{l}}^{\rho_{l}}\frac{d\rho_{l}}{\rho_{l}^{2}\widetilde{\chi}_{l}(\rho_{l})} \tag{33}\] where \(\widetilde{\chi}_{l}(\rho_{l})\) is the liquid compressibility coefficient as a function of its density; \(\chi_{l}=\widetilde{\chi}_{l}(\rho_{\infty}^{l})\). Thus, the compressibility coefficient \(\chi_{l}\) and mass density \(\rho_{m}^{l}\) of the liquid on the binodal obtained from the EOS will be compared with their experimental values. On the other hand, the chemical potential difference \(\Delta\mu_{v}\) for the vapor phase depends on the \(P_{\infty}\) value. The latter for a given EOS is determined by Maxwell's rule as a root of the equation \[\int_{\upsilon_{\infty}^{l}(P_{\infty})}^{\upsilon_{\infty}^{v}(P_{\infty})}\left[P(\upsilon)-P_{\infty}\right]d\upsilon=0 \tag{34}\] The vdW EOS has the functional form \[P_{vdW}(\upsilon)=\frac{kT_{0}}{\upsilon-b}-\frac{a}{\upsilon^{2}} \tag{35}\] from which the binodal liquid density and compressibility coefficient are calculated as \[\rho_{m}^{l}=\frac{m}{\upsilon_{\infty}^{l}} \tag{36}\] \[\chi_{l}=-\left[\upsilon_{\infty}^{l}\frac{\partial P_{vdW}}{\partial\upsilon}(\upsilon_{\infty}^{l})\right]^{-1} \tag{37}\] where \(m\) is the particle mass. For argon, the parameters \(a\) and \(b\) are determined as the first step; the
desired values of \(P_{\infty}\) and \(\chi_{l}\) are obtained from Eqs. (34) and (37), respectively, as the second step. The results presented in the Table show that this procedure gives \(P_{\infty}\) and \(\rho_{m}^{l}\) values close to the experimental ones and a somewhat underestimated \(\chi_{l}\) value. It also gives overestimated values of the critical point parameters (\(T_{c}=199\) K, \(P_{c}=8.8\times 10^{7}\) dyn/cm\({}^{2}\)), but in the present theory this fact is considered insignificant. The PR and G-R EOS have the following functional forms, respectively: \[P_{PR}(\upsilon)=\frac{kT_{0}}{\upsilon-b}-\frac{a\alpha(T_{0})}{\upsilon^{2}+2b\upsilon-b^{2}} \tag{38}\] \[P_{G-R}(\upsilon)=\frac{kT_{0}}{\upsilon-b}-\frac{a(T_{0})}{(\upsilon-c)(\upsilon-d)} \tag{39}\] The parameters of these equations are given in the original Refs. [44] and [45] in combination with the methodology for their determination; the parameters of interest calculated for these EOS are given in the Table. While the liquid density \(\rho_{m}^{l}\) sometimes falls directly within the scope of the mentioned methodology (this applies to the G-R EOS), the coefficient \(\chi_{l}\) is usually not taken into account by it. Nevertheless, the PR EOS surprisingly yields the experimental value of \(\chi_{l}\), whereas the G-R EOS is unsatisfactory with respect to this quantity. All three EOS under consideration are shown in Fig. 1. The vdW and G-R curves are close to each other in the liquid region, including almost the same spinodal point; the G-R curve here has the steepest slope among the three curves due to the smallest \(\chi_{l}\) value. On the other hand, the PR and G-R curves coincide in the vapor region.

## IV. Surface tension expansions and computational procedure for determining the Tolman length

### A. Surface tension expansion near the spinodal point and the possibility of its determination from this point

Eq.
(14) is used to obtain the surface tension expansion near the spinodal point. Expanding the work \(W\) into a series in \(\Delta p\), we have \[W(\Delta p)=-\kappa_{1}(\Delta p-\Delta p_{s})-\kappa_{2}(\Delta p-\Delta p_{s})^{2}-\cdots\] (40a) Eq. (14) gives the expansion coefficients: \[\kappa_{1}=V_{e}(\Delta p_{s})=V_{e}\Big{|}_{R=0}\equiv V_{s}^{e}=\frac{4\pi}{3}\delta_{s}^{3}\,,\quad\kappa_{k}=\frac{1}{k!}\frac{d^{k}V_{e}}{d(\Delta p)^{k}}\Bigg{|}_{\Delta p=\Delta p_{s}}\] (40b) Since \(x=x_{s}\) corresponds to \(y=0\), we calculate the derivatives \(y_{x^{k}}^{(k)}(x_{s})\) and then find \(x_{y^{k}}^{(k)}(0)\) using Eq. (44). In this way, the expansions of \(\Delta p(R)\), Eq. (43b), and hence \(\sigma(R)\) acquire the following form: \[\Delta p(R)=\Delta p_{s}\left[1+a_{1}R^{3}+a_{2}R^{6}+\cdots+a_{k}R^{3k}+\cdots\right] \tag{45a}\] \[\sigma(R)=KR\left[1+a_{1}R^{3}+a_{2}R^{6}+\cdots+a_{k}R^{3k}+\cdots\right] \tag{45b}\] \[a_{1}=\frac{1}{\alpha\kappa_{1}}=-\frac{1}{2\delta_{s}^{3}}\,,\quad a_{2}=\frac{\kappa_{1}-2K\kappa_{2}}{\alpha^{2}\kappa_{1}^{3}}\,,\quad a_{3}=\frac{\kappa_{1}^{2}-6K\kappa_{1}\kappa_{2}+4K^{2}(2\kappa_{2}^{2}-\kappa_{1}\kappa_{3})}{\alpha^{3}\kappa_{1}^{5}}\,,\quad etc. \tag{45c}\] Thus, the expansion of both the reduced Laplace pressure \(\Delta p(R)/\Delta p_{s}\) and the function \(\sigma(R)/KR\) near the spinodal point is a series in powers of \(R^{3}\). Eq. (45b) can be viewed as an expansion for Eq. (22b). From Eq. (20a), \[\frac{\widetilde{\delta}(R)}{R}=\frac{R\varphi_{B}(R)}{2(1-R\varphi_{B}(R))}\,,\quad\varphi_{B}(R)=\frac{d\sigma/dR}{\sigma(R)} \tag{46}\] Substituting the \(\sigma(R)\) expansion, Eq. (45b), into this equation and using Eq.
(16b), we get the TL expansion as follows: \[\widetilde{\delta}(R)=\delta_{s}\left[1+c_{1}R^{3}+c_{2}R^{6}+\cdots+c_{k}R^{3k}+\cdots\right]-R \tag{47a}\] \[c_{1}=\frac{2}{3}\frac{a_{1}^{2}-a_{2}}{a_{1}}\,,\quad c_{2}=-\frac{4a_{1}^{4}-5a_{1}^{2}a_{2}+9a_{3}a_{1}-8a_{2}^{2}}{9a_{1}^{2}}\,,\quad etc. \tag{47b}\] From Eq. (47a), the linear asymptotic behavior \[\widetilde{\delta}(R)=\delta_{s}-R\,,\quad\widetilde{\delta}^{\prime}(0)=-1 \tag{48}\] as the _universal_ property of the TL at \(R\to 0\) is obtained. It takes place for any substance at any temperature (\(\delta_{s}\) is a function of the temperature \(T_{0}\)). The expansion of \(\Delta\mu\) in \(\Delta p\) near the binodal was shown above. In a similar way, the expansion of \(\Delta\mu\) in \((\Delta p-\Delta p_{s})\) near the spinodal can be obtained. With the use of Eq. (45a), the general structure of the resulting expression for drops will be as follows: \[\Delta\mu-\upsilon_{\infty}^{l}(P_{v}-P_{\infty})=\left[\Delta\mu_{s}-\upsilon_{\infty}^{l}(P_{s}^{v}-P_{\infty})\right]\left[1+d_{1}R^{3}+d_{2}R^{6}+\cdots\right]=\frac{2\upsilon_{\infty}^{l}\sigma_{\infty}}{\delta_{s}^{(d)}}\left[1+d_{1}R^{3}+d_{2}R^{6}+\cdots\right] \tag{49}\] where Eq. (32) was employed. Here the coefficients \(d_{k}\) are expressed in terms of the coefficients \(a_{k}\) and the derivatives \(d^{k}\upsilon_{l}/dP_{l}^{k}\) taken at the vapor spinodal point, i.e. at \(P_{l}=P_{s}^{v}+\Delta p_{s}\). Substituting Eq. (49) into Eq. (31) and comparing the result to Eq. (47a) gives \[\widetilde{\delta}(R)=\frac{\delta_{s}^{(d)}}{1+d_{1}R^{3}+d_{2}R^{6}+\cdots}-R=\delta_{s}^{(d)}\left[1+c_{1}R^{3}+c_{2}R^{6}+\cdots\right]-R \tag{50}\] Hence we conclude that (i) Eq. (31) is consistent with the expansion of \(\delta(R)\) obtained from an independent thermodynamic study and (ii) the coefficients \(c_{k}\) and, consequently, the coefficients \(a_{k}\) (cf. Eq.
(47b)) can be expressed in terms of the coefficients \(d_{k}\), i.e. in terms of the mentioned derivatives \(d^{k}\upsilon_{l}/dP_{l}^{k}\), by equating the coefficients at the same powers in the series of the LHS and RHS of Eq. (50). In this way, in principle, it is possible to determine the functions \(\delta(R)\) and \(\sigma(R)\) with the required accuracy. However, in practice it turns out that the series in Eqs. (45b) and (47a) converge very slowly, whereas the complexity of the coefficients \(a_{k}\), \(c_{k}\) and \(d_{k}\) increases very quickly with increasing order of approximation. Therefore, this method does not allow any noticeable advance in \(R\).

### B. Surface tension expansion near the binodal point and the impossibility of its determination from this point

Eq. (17) is used to obtain the surface tension expansion near the binodal point: \[\sigma(\Delta p)=\sigma_{\infty}+\lambda_{1}\Delta p+\lambda_{2}(\Delta p)^{2}+\cdots \tag{51a}\] \[\lambda_{1}=\frac{d\sigma}{d\Delta p}\Bigg{|}_{\Delta p=0}=-\widetilde{\delta}(\infty)=-\delta_{\infty}\,,\quad\lambda_{k}=\frac{1}{k!}\frac{d^{k}\sigma}{d(\Delta p)^{k}}\Bigg{|}_{\Delta p=0}\ \mathrm{for}\ k\geq 2 \tag{51b}\] The coefficients \(\lambda_{k}\) relate to the function \(\widetilde{\delta}(R)\) as follows: \[\lambda_{2}=\frac{1}{2}\frac{d^{2}\sigma}{d(\Delta p)^{2}}\Bigg{|}_{\Delta p=0}=-\frac{1}{2}\frac{d\widetilde{\delta}(R)}{dR}\frac{dR}{d(\Delta p)}\Bigg{|}_{R\rightarrow\infty}=\frac{1}{2}\frac{d\widetilde{\delta}(R)}{dR}\frac{1}{\Delta p(R)\varphi_{S}(R)}\Bigg{|}_{R\rightarrow\infty} \tag{51c}\] where Eq. (21) was utilized. The derivatives for \(k>2\) can be calculated by induction. The derivatives in Eq. (40b) can be transformed in a similar way.
Keeping only the linear term in this expansion, \[\sigma=\sigma_{\infty}-\delta_{\infty}\frac{2\sigma}{R}\] we get the Tolman asymptotics \[\sigma_{T}(R)=\frac{\sigma_{\infty}}{1+2\delta_{\infty}/R}=\sigma_{\infty}\left[1-\frac{2\delta_{\infty}}{R}+\frac{4\delta_{\infty}^{2}}{R^{2}}-\frac{8\delta_{\infty}^{3}}{R^{3}}+\cdots\right] \tag{52}\] To obtain the general form of the expansion, the method described above is again applied. In the notations \[z=\frac{1}{R}\,,\quad\Delta p=x\,,\quad\beta=\frac{1}{2}\] Eq. (51a) takes the form \[z(x)=\beta\frac{x}{\sigma_{\infty}+\lambda_{1}x+\lambda_{2}x^{2}+\cdots} \tag{53}\] We need the expansion \[x(z)=x(0)+x_{z}^{\prime}(0)z+\frac{1}{2}x_{zz}^{\prime\prime}(0)z^{2}+\cdots \tag{54}\] where \(x(0)=0\). This is the expansion of \(\Delta p\) and \(\sigma\) in the _curvature_ \(z\): \[\Delta p=2\sigma_{\infty}z\left[1+b_{1}z+b_{2}z^{2}+\cdots\right]=\frac{2\sigma_{\infty}}{R}\left[1-\frac{2\delta_{\infty}}{R}+\frac{b_{2}}{R^{2}}+\cdots\right] \tag{55a}\] \[\sigma=\sigma_{\infty}\left[1+b_{1}z+b_{2}z^{2}+\cdots\right]=\sigma_{\infty}\left[1-\frac{2\delta_{\infty}}{R}+\frac{b_{2}}{R^{2}}+\cdots\right] \tag{55b}\] Eq. (55b) can be viewed as an expansion for Eq. (20b), which has the following form in terms of the curvature: \[\sigma(z)=\sigma_{\infty}\exp\left[-\int\limits_{0}^{z}\frac{2\widetilde{\delta}(z^{\prime})dz^{\prime}}{1+2z^{\prime}\widetilde{\delta}(z^{\prime})}\right] \tag{56}\] The coefficients \(b_{k}\) are determined with the aid of Eq. (44): \[b_{1}=2\lambda_{1}=-2\delta_{\infty}\,,\quad b_{2}=4\left[\lambda_{1}^{2}+\sigma_{\infty}\lambda_{2}\right]\,,\quad b_{3}=8\left[\lambda_{1}^{3}+3\sigma_{\infty}\lambda_{1}\lambda_{2}+\sigma_{\infty}^{2}\lambda_{3}\right],\quad etc. \tag{57}\] Using these coefficients and comparing Eq. (55b) to Eq.
(52), we obtain the difference between the true function \(\sigma(R)\) and Tolman's approximation: \[\sigma(R)-\sigma_{T}(R)=\sigma_{\infty}\left[\frac{4\sigma_{\infty}\lambda_{2}}{R^{2}}+\frac{8\left(\sigma_{\infty}^{2}\lambda_{3}-3\sigma_{\infty}\delta_{\infty}\lambda_{2}\right)}{R^{3}}+\cdots\right] \tag{58}\] Noting additionally that \(b_{4}=16\lambda_{1}^{4}+\cdots=16\delta_{\infty}^{4}+\cdots\), we can assume by induction that the expansion for \(\sigma(R)\) includes Tolman's asymptotics _as a component_, making a correction to it; this correction starts with a quadratic term. The coefficient \(b_{2}\) is known in the literature as the rigidity constant;[17, 18, 22] it can be evaluated with the aid of Eq. (51c) after obtaining the function \(\delta(R)\). Further, the TL expansion is obtained, as before; preliminarily, all the necessary equations, including Eq. (16b), are converted from \(R\)- to \(z\)-dependence: \[\delta(z)=c_{1}+c_{2}z+c_{3}z^{2}+\cdots=\delta_{\infty}+\frac{c_{2}}{R}+\frac{c_{3}}{R^{2}}+\cdots \tag{59a}\] \[c_{1}=-\frac{b_{1}}{2}=\delta_{\infty}\,,\quad c_{2}=\frac{3}{4}b_{1}^{2}-b_{2}\,,\quad c_{3}=-\frac{29}{24}b_{1}^{3}+\frac{5}{2}b_{1}b_{2}-\frac{3}{2}b_{3}\,,\quad etc. \tag{59b}\] The rigidity constant is determined by the derivative of the function \(\delta(R)\), i.e. by the coefficient \(c_{2}\) (\(\varphi_{S}(R)\to 1/R\) and \(\Delta p(R)=2\sigma_{\infty}/R+O(1/R^{2})\) at \(R\to\infty\)). Eq. (30) converted from \(R\) to \(z\) reads: \[1+z\delta(z)=\pm\frac{2\upsilon_{\infty}^{l}\sigma_{\infty}z}{\Delta\mu-\upsilon_{\infty}^{l}(P_{v}-P_{\infty})} \tag{60}\] Expanding the denominator according to Eq. (27) and utilizing Eq.
(55a), we have \[1+z\delta(z)=\frac{\pm 2\upsilon_{\infty}^{l}\sigma_{\infty}z}{\pm 2\upsilon_{\infty}^{l}\sigma_{\infty}z\left[1+b_{1}z+\cdots\right]\left[1\mp(\chi_{l}/2)(2\sigma_{\infty}z+\cdots)\right]}\] from where \[1+z\delta(z)=1-(b_{1}\mp\chi_{l}\sigma_{\infty})z+O(z^{2})\,,\quad\delta(z)=-(b_{1}\mp\chi_{l}\sigma_{\infty})+O(z) \tag{61}\] Thus, \(\delta_{\infty}=-(b_{1}\mp\chi_{l}\sigma_{\infty})\). Substituting \(b_{1}=-2\delta_{\infty}\), we get \(\delta_{\infty}\) for drops[23] and bubbles: \[\delta_{\infty}^{(d)}=-\chi_{l}\sigma_{\infty}\,,\quad\delta_{\infty}^{(b)}=\chi_{l}\sigma_{\infty}=-\delta_{\infty}^{(d)} \tag{62a}\] If the correction \(P_{v}(\Delta p)-P_{\infty}=\pm c\Delta p\) is taken into account in Eq. (27), the \(\delta_{\infty}\) values receive the corresponding correction: \[\delta_{\infty}^{(b,d)}=\pm\chi_{l}\sigma_{\infty}(1+2c) \tag{62b}\] The quantity \(\chi_{l}\sigma_{\infty}\) was called "the fundamental length characteristic of liquids" in Ref. [46]; it varies in the narrow range \(0.017\div 0.047\) nm, i.e. by a factor less than 3, for 30 liquids of very different nature considered therein near the triple point, while \(\chi_{l}\) and \(\sigma_{\infty}\) separately vary by a factor of 150. In this way, the TL limiting value \(\delta_{\infty}\) turns out to be equal in absolute value to this important characteristic of liquids. The reason why \(\delta_{\infty}\) contains the compressibility of a liquid rather than a vapor was explained above. Eq. (62a) reflects the property of the function \(\delta(R)\), mentioned above, to take into account both the compressibility of the liquid and the dependence \(\sigma(R)\) in Eq. (29); we can say that this property manifests itself on the binodal through Eq. (62a). Returning to Eq.
(60) for drops, \[\Delta\mu-\upsilon_{\infty}^{l}(P_{\nu}-P_{\infty})=\frac{2\upsilon_{\infty}^{l}\sigma_{\infty}z}{1+z\delta(z)} \tag{63}\] and expanding both sides in \(z\) (Eq. (27) together with Eq. (55a) is employed for the LHS), we obtain \(f_{LHS}(z)=f_{RHS}(z)\) after reducing both sides by the factor \((2\upsilon_{\infty}^{l}\sigma_{\infty})\), where \[f_{LHS}(z)=z\bigg{\{}1+(b_{1}+2\sigma_{\infty}d_{1})z+(b_{2}+4b_{1}d_{1}\sigma_{\infty}+4d_{2}\sigma_{\infty}^{2})z^{2}\] \[+[b_{3}+4b_{2}d_{1}\sigma_{\infty}+b_{1}(2b_{1}d_{1}\sigma_{\infty}+12d_{2}\sigma_{\infty}^{2})+8d_{3}\sigma_{\infty}^{3}]z^{3}+\cdots\bigg{\}},\ \ d_{k}=\frac{1}{\upsilon_{\infty}^{l}}\frac{d^{\,k}\upsilon_{l}}{(k+1)!\,dP_{l}^{k}}(P_{\infty}) \tag{64a}\] \[f_{RHS}(z)=z\Bigg{\{}1+\frac{b_{1}}{2}\,z+\Bigg{(}b_{2}-\frac{b_{1}^{\,2}}{2}\Bigg{)}z^{2}+\Bigg{(}\frac{3b_{3}}{2}-\frac{3b_{1}b_{2}}{2}+\frac{7b_{1}^{\,3}}{12}\Bigg{)}z^{3}+\cdots\Bigg{\}} \tag{64b}\] In particular, \(d_{1}=-\chi_{l}/2\). In view of Eq. (62a), we see that the linear terms in braces coincide for these functions. However, an attempt to equate the coefficients at other powers of \(z\) fails: when the coefficients at \(z^{2}\) are equated, the coefficient \(b_{2}\) drops out of this equation and therefore is not determined; the coefficients at other powers of \(z\) contain \(b_{2}\). Thus, the coefficients at \(z^{k}\) in the functions \(f_{LHS}(z)\) and \(f_{RHS}(z)\) are not equal to each other for \(k\geq 2\).
This means that here the coefficients \(b_{k}\) cannot be determined in terms of the coefficients \(d_{k}\) (in contrast to the case of expansion near the spinodal) and therefore the functions \(\sigma(z)\) and \(\delta(z)\) are _not determined from the binodal point_ analytically (of course, these coefficients can be determined in another way, e.g., by fitting to the solution of exact Eq. (31) or to the results of density functional calculations). The same conclusion holds for the bubble case. Thus, the functions \(f_{LHS}(z)\) and \(f_{RHS}(z)\) should be equal to each other _as a whole_, without the equality of the corresponding terms of their series. In other words, the series in Eqs. (64a, b) should coincide _asymptotically_: for a given \(z\), the difference between them should become arbitrarily small by taking a sufficient number of their terms; the higher the order of approximation, the closer these series are to each other. Applying the expansion in Eq. (27) to the denominator of Eq. (30), we have, up to a quadratic term, \[R_{e}(\Delta p)=\frac{\pm\,2\upsilon_{\infty}^{l}\sigma_{\infty}}{\pm\,\upsilon_{\infty}^{l}\Delta p[1\mp\left(\chi_{l}/2\right)\Delta p]}=\frac{2\sigma_{\infty}}{\Delta p}\bigg{[}1\pm\frac{\chi_{l}}{2}\Delta p\bigg{]}\] which gives, in view of Eq. (62a), \[R_{e}^{(K)}(\Delta p)=\frac{2\sigma_{\infty}}{\Delta p}-\delta_{\infty}^{(d,b)} \tag{65}\] This simple approximation for \(R_{e}(\Delta p)\), obtained in a different way, was used by Kashchiev [24] in his approach to calculating the dependences \(\delta(R)\), \(\sigma(R)\), and other functions of interest. The accuracy of this approximation will be verified later, after the function \(\delta(R)\) has been calculated from exact Eq. (31). Since this equation is derived from an expansion near the binodal, the maximum deviation is expected to be near the spinodal.
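As a quick symbolic cross-check of the statement that the linear terms in the braces of Eqs. (64a, b) coincide, one can equate them for the drop case and recover the binodal limit of the TL (a minimal sketch using only the coefficients quoted above, with sympy assumed to be available):

```python
import sympy as sp

b1, chi_l, sigma_inf = sp.symbols('b1 chi_l sigma_inf')
d1 = -chi_l / 2                       # d_1 = -chi_l/2, as stated in the text
lhs_lin = b1 + 2 * sigma_inf * d1     # linear coefficient in f_LHS, Eq. (64a), drops
rhs_lin = b1 / 2                      # linear coefficient in f_RHS, Eq. (64b)

b1_val = sp.solve(sp.Eq(lhs_lin, rhs_lin), b1)[0]   # b1 = 2*chi_l*sigma_inf
delta_inf = -b1_val / 2               # delta_inf = -b1/2, Eq. (59b)
print(delta_inf)                      # prints -chi_l*sigma_inf, i.e. Eq. (62a) for drops
```

Equating the \(z^{2}\) coefficients in the same way makes \(b_{2}\) cancel from both sides, in line with the discussion above, so no higher coefficients are fixed; the linear-order information recovered here is exactly what is retained by the approximation in Eq. (65).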
This approximation corresponds to the linear term in the braces of Eqs. (64a, b), and \(f_{LHS}(z)=f_{RHS}(z)=z(1-\delta_{\infty}z)\) in this case. From Eq. (65), the function \(\delta(R)\) is also determined as \[\delta^{(K)}(R)=\frac{2\sigma_{\infty}}{\Delta p(R)}-\delta_{\infty}-R \tag{66}\] and the following simple relation between the main parameters of the theory is obtained: \[\delta^{(K)}_{s}+\delta_{\infty}=\frac{\sigma_{\infty}}{K} \tag{67}\] where two parameters relate to the spinodal and two to the binodal. This equation explicitly shows that the \(\delta_{s}\) value is determined mainly by the limiting Laplace pressure \(\Delta p_{s}\) (\(\delta_{s}\) is noticeably larger than \(\delta_{\infty}\)). Combining Eqs. (22a) and (66), we get a more explicit integral equation for \(\delta^{(K)}(R)\): \[\delta^{(K)}(R)=\frac{\sigma_{\infty}}{K}\exp\!\left[\int_{0}^{R}\frac{dr}{r+2\bar{\delta}^{(K)}(r)}\right]-\delta_{\infty}-R \tag{68}\] where \[\bar{\delta}^{(K)}(R)\equiv\delta^{(K)}(R)\Biggl{[}1+\frac{\delta^{(K)}(R)}{R}+\frac{1}{3}\Biggl{(}\frac{\delta^{(K)}(R)}{R}\Biggr{)}^{2}\Biggr{]}\] ## Appendix C. Computational procedure for determining the dependences \(\delta(R)\) and \(\sigma(R)\) As shown above, the functions \(\delta(R)\) and \(\sigma(R)\) cannot be determined by their expansions near the spinodal and binodal points due to technical complexity and fundamental impossibility, respectively. Therefore, these functions will be determined here _as a whole_ in some range in \(R\) with the aid of Eqs. (22a), (24b) and (31). The function \(\Delta p(R)\) in Eq. (22a) contains the function \(\delta(R)\) in the integrand; the function \(\Delta\mu(\Delta p)\) in Eq. (24b) contains this \(\Delta p\) in the integration limit. Thus, Eq. (31) is a "super-integral" equation for \(\delta(R)\). It can be solved by the method of successive approximations. The computational procedure includes the following steps. (i) The linear function \(\delta_{0}(R)=\delta_{s}-R\), Eq.
(48), is taken as the initial (zero) approximation for \(\delta(R)\). Eq. (22a) in this case gives \[\Delta p_{0}(R)=\Delta p_{s}\,\frac{2\delta^{3}_{s}}{R^{3}+2\delta^{3}_{s}} \tag{69}\] where any value of \(\delta_{s}\) can be taken; its true value, Eq. (32), will be determined in step (v). (ii) The selected range in \(R\), \([0,R_{\max}]\), is divided by \(N_{p}\) points \(R_{i}\). (iii) The function \(\Delta p_{0}(R_{i})\) is calculated at these points. (iv) The function \(\Delta\mu(\Delta p_{0}(R_{i}))\) is calculated using the dependence \(P_{\nu}(\Delta p)\) determined preliminarily, Eq. (24b). (v) The function \(\delta_{1}(R_{i})\) is calculated at the points \(R_{i}\) according to Eq. (31). (vi) Cubic spline interpolation is applied to obtain the first approximation function \(\delta_{1}(R)\) in the selected interval \([0,\,R_{\rm max}\,]\). (vii) The first approximation function \(\Delta p_{{}_{1}}(R)\) is determined by Eq. (22a) with \(\delta(R)=\delta_{{}_{1}}(R)\); hence the corresponding approximation for the surface tension \(\sigma_{1}(R)=\Delta p_{1}(R)R/2\) is obtained, Eq. (22b). This procedure is then repeated from step (iii): the functions \(\Delta p_{{}_{1}}(R_{{}_{i}})\) and \(\Delta\mu(\Delta p_{{}_{1}}(R_{{}_{i}}))\) are calculated to obtain the second approximation function \(\delta_{{}_{2}}(R)\); it determines the second approximation functions \(\Delta p_{{}_{2}}(R)\) and \(\sigma_{{}_{2}}(R)\). In this way, repeating this procedure the necessary number of times, we determine the desired functions \(\delta(R)\) and \(\sigma(R)\) with the required accuracy in the given interval of \(R\) values. A similar procedure can be applied to Eq. (68); it is much simpler here, as the functions \(P_{{}_{\rm v}}(\Delta p)\) and \(\Delta\mu(\Delta p)\) are not needed.
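The successive-approximation scheme applied to the simplified Eq. (68) can be sketched numerically as follows (a minimal sketch with illustrative parameter values in arbitrary units, not the argon values of the Table; the integrand of Eq. (68) is multiplied through by \(r^{2}\) so that it stays regular at \(r=0\)):

```python
import numpy as np

# Illustrative parameters in arbitrary units (NOT the argon values of the Table):
sigma_inf = 1.0      # planar surface tension
K = 2.0              # limiting slope of sigma(R) at R -> 0, so Delta p_s = 2K
delta_inf = -0.05    # TL value at the binodal, Eq. (62a), drop case
delta_s = sigma_inf / K - delta_inf   # spinodal value of the TL, Eq. (67)

R = np.linspace(0.0, 2.0, 401)

def integrand(delta, r):
    # 1/(r + 2*dbar) with dbar = delta*(1 + delta/r + (delta/r)**2 / 3);
    # numerator and denominator are multiplied by r**2 to stay regular at r = 0:
    return r**2 / (r**3 + 2*delta*r**2 + 2*delta**2*r + (2.0/3.0)*delta**3)

delta = delta_s - R                   # zero approximation, Eq. (48)
for _ in range(20):                   # successive approximations for Eq. (68)
    f = integrand(delta, R)
    # cumulative trapezoidal integral from 0 to each grid point R_i:
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(R))))
    delta = (sigma_inf / K) * np.exp(integral) - delta_inf - R
```

At \(R=0\) every iterate returns \(\delta_{s}=\sigma_{\infty}/K-\delta_{\infty}\) by Eq. (67), while the outer end of the interval typically converges last, the iterates oscillating there before settling.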
Instead, this equation uses only three parameters, \(\Delta p_{{}_{s}}=2K\), \(\sigma_{{}_{\infty}}\), and \(\chi_{{}_{l}}\), taken from the experiment (\(\sigma_{{}_{\infty}},\chi_{{}_{l}}\)) and the EOS (\(\Delta p_{{}_{s}},\chi_{{}_{l}}\)). The drop and bubble cases differ only in the value of \(\Delta p_{{}_{s}}\) and the sign of \(\delta_{{}_{\infty}}\). ## V. Results and Discussion Fig. 2 shows the result of the computational procedure. It is clear that the approximations \(\delta_{{}_{k}}(R)\) converge to a solution; as they oscillate, the average \((\delta_{{}_{k}}+\delta_{{}_{k+1}})/2\) of the odd and even approximations can be taken as an approximation for the TL in the selected interval of radii. It can be seen that for \(\sigma(R)\) the convergence to a solution occurs faster than for \(\delta(R)\). It should also be noted that the convergence "accelerates" as the number \(k\) increases: while thirty iterations for \(\delta(R)\) are enough to cover a range of about 10 nm in \(R\), fifty iterations already cover a range of 200 nm. A similar picture is obtained when this computational procedure is applied to Eq. (68). It is worth noting that the function \(\delta(R)\) determined from the spinodal point tends to its asymptotic value \(\delta_{{}_{\infty}}\) determined from expansions near the binodal. Since the PR EOS gives practically the experimental value of \(\chi_{{}_{l}}\), the corresponding value \(\delta_{{}_{\infty}}=-0.023\) nm can be assumed to be the true value for argon at this temperature. So, the function \(\delta(R)\) is completely determined from the spinodal point. It turns out that the attempt to perform this computational procedure from the binodal point fails for Eqs. (31) and (66) converted to the curvature \(z\) and using Tolman's Eq. (56) for \(\Delta p\).
This result reflects the fundamental impossibility, established above, of determining the function \(\delta(R)\) analytically from this point; it can be explained by the example of Eq. (66). Being converted to \(z\) and using Eq. (56), this equation _does not contain_ \(\Delta p_{s}\), whereas precisely this quantity determines both the slope \(K\) of the dependence \(\sigma(R)\) at \(R\to 0\) and the \(\delta_{s}\) value, Eq. (67); i.e., this equation contains no information about the behavior of the functions \(\delta(R)\) and \(\sigma(R)\) near the spinodal. Thus, a somewhat paradoxical situation takes place for Eq. (66): although it was obtained from expansions near the binodal, it is nevertheless solved from the spinodal point; therefore, its accuracy near this point is important. An additional reason is as follows. The line \(P=P_{\infty}\) in Fig. 1 separates two metastable states: \(P>P_{\infty}\) (metastable vapor, the drop case) and \(P<P_{\infty}\) (metastable liquid, the bubble case), each with _its own_ dependence \(\sigma(R)\). Therefore, the binodal state, as a borderline state between these two, "contains no information" about this dependence; the function \(\delta(R)\) "cannot make a choice" between its two possibilities \(\pm\left|\delta_{\infty}\right|\). On the other hand, each of these metastable states has its own spinodal point, which gives rise to determining the mentioned dependences. The fact that the dependence \(\sigma(R)\) asymptotically tends to \(\sigma_{\infty}\) is a strong argument in favor of the correctness of Eq. (31). In this regard, the importance of the \(\upsilon_{\infty}^{l}(P_{\nu}-P_{\infty})\) term in the denominator of this equation should be noted; without this term, slightly underestimated values of \(\sigma_{\infty}\) are obtained in the drop case (i.e., the \(\sigma(R)\) curve crosses the line \(\sigma=\sigma_{\infty}\)). The same computational procedure applied to Eq.
(68) gives the dependence \(\sigma(R)\) asymptotically tending to \(\sigma_{\infty}\) with the same accuracy, so Eq. (68) retains this important property. The dependences \(\delta(R)\) and \(\sigma(R)\) for drops and various EOS are shown in Fig. 3. It is interesting that these dependences are close for the PR and vdW EOS, whereas other pairs of the EOS curves, (G-R, vdW) and (G-R, PR), are close in the liquid and vapor regions, respectively (Fig. 1). This fact is explained by the closeness of the \(\Delta p_{s}\) values for these EOS, which leads to the closeness of the \(\delta_{s}\) values given in the Table. In turn, close \(\delta_{s}\) values lead to close dependences \(\sigma(R)\).[47] The dependences \(\delta(R)\) and \(\sigma(R)\) for bubbles are presented in Fig. 4. Here, the curves for the vdW and G-R EOS are close to each other, in accordance with the fact that the corresponding EOS curves in Fig. 1 have practically the same spinodal point \((\upsilon_{s}^{l},P_{s}^{l})\) in the liquid region. Therefore, the values of \(\Delta p_{s}\) and \(\delta_{s}\) given in the Table are close for these EOS; as mentioned above, \(\Delta p_{s}^{(b)}=-P_{s}^{l}\). Fig. 5 gives a comparison of the exact, \(\delta(R)\), and approximate, \(\delta^{(K)}(R)\), functions, as well as the corresponding \(\sigma(R)\) dependences. As can be seen, these functions are close, but cross. Their maximum relative difference (at the spinodal point) is about 1.3 and 2.5 % for drops and bubbles, respectively, in the case of the PR EOS. For the other two EOS, this difference is smaller; the reason for their "higher accuracy" for Eq. (68) is explained below. Fig. 6a gives a comparison of the exact and approximate, \(R_{e}^{(K)}(\Delta p)\), dependences \(R_{e}(\Delta p)\). As expected, the maximum error is at the spinodal point; the relative errors \(\varepsilon_{R}=[R_{e}^{(K)}-R_{e}]/R_{e}\) here are equal to 1.8 and 2.6 % for drops and bubbles, respectively, and the PR EOS. Since Eq. 
(65) is based on the replacement of the denominator \(\Delta\mu_{-}(\Delta p)=\Delta\mu(\Delta p)-\upsilon_{\infty}^{l}[P_{\nu}(\Delta p)-P_{\infty}]\) with the parabolic approximation \(\Delta\mu_{\rm ap}(\Delta p)=\pm\upsilon_{\infty}^{l}\Delta p[1\mp(\chi_{l}/2)\Delta p]\), Fig. 6b shows both these functions together with the dependence \(\Delta\mu(\Delta p)\), all in \(kT_{0}\) units. The relative error \(\varepsilon_{\mu}=(\Delta\mu_{-}-\Delta\mu_{\rm ap})/\Delta\mu_{-}\) is consistent with \(\varepsilon_{R}\) in Fig. 6a. Summarizing, we can conclude that the approximations given by Eqs. (65) and (66) are sufficiently accurate, so they can be used in practice due to their simplicity. The reason is the _smallness of the compressibility factor_ \(\chi_{l}\) for liquids, which is the curvature of the approximating parabola. For this reason, the vdW and especially the G-R EOS, having smaller values of \(\chi_{l}\), give higher accuracy to these approximations (for \(\delta_{s}\), this fact can be seen from the Table data). The derivatives of compressibility with respect to pressure are also small, so the liquid branch of the EOS is close to a straight line; this fact justifies the sufficiency of the mentioned parabolic approximation. In the bubble case, some deviation from the straight line occurs in the vicinity of the spinodal point, which gives somewhat higher errors for the dependences of interest. As can be seen from Fig. 6b, the dependence \(\Delta\mu(\Delta p)\) is close to the straight line \(\Delta\mu_{\rm lin}(\Delta p)=(\Delta\mu_{s}/\Delta p_{s})\Delta p\); the maximum relative deviations are equal to 3.3 and 6.3 % for drops and bubbles, respectively, for the PR EOS. This means that the curves \(\Delta\mu(R)\) and \(\Delta p(R)\) are geometrically similar, so the corresponding reduced dependences \(\Delta\mu(R)/\Delta\mu_{s}\) and \(\Delta p(R)/\Delta p_{s}\) almost coincide. Fig.
7 confirms this conclusion and shows that both \(\Delta\mu\) and \(\Delta p\) can be equally used as a measure of metastability. For bubbles the coincidence is somewhat worse than for drops due to the fact that the dependence \(\Delta\mu(\Delta p)\) for bubbles in Fig. 6b is "more parabolic" than for drops. Fig. 8 shows the function \(\varphi_{\rm S}(R)\) together with its asymptotics \[\varphi_{s}(R)=\left\{\begin{array}{ll}\varphi_{s}^{(0)}(R)=3R^{2}\,/(R^{3}+2\delta_{s}^{3}),&R\to 0\\ \varphi_{s}^{(\infty)}(R)=1/\,R,&R\to\infty\end{array}\right. \tag{70}\] where the equation for \(\varphi_{s}^{(0)}(R)\) is obtained using the linear asymptotics from Eq. (48). The asymptotics \(1/\,R\) cancels the factor \(R\) in Eq. (22b), providing the condition \(\sigma(R)\to\sigma_{\infty}\). Thus, the sharp dependence \(\sigma(R)\) is determined by a "bell" near the spinodal point. From Eqs. (22b), (67), and the condition \(\sigma(R)/\,\sigma_{\infty}\to 1\), one obtains \[\lim_{R\to\infty}R\exp\!\!\left[-\int\limits_{0}^{R}\!\frac{dr}{r+2\bar{\delta}^{(K)}(r)}\right]=\delta_{s}^{(K)}+\delta_{\infty} \tag{71}\] regardless of the _shape_ of the \(\delta^{(K)}(R)\) curve; only its _endpoints_ matter. In view of the fact that \(\delta_{s}\) is noticeably larger than \(\delta_{\infty}\) (cf. the Table), these results together with Figs. 3 and 4 show that the curvature effect is determined mainly by the properties of the system at the spinodal point, which confirms the earlier conjecture [47]. This equation explains why the dependences \(\sigma(R)\) for functions \(\delta(R)\) with the same value of \(\delta_{s}\) are close to each other; the slight difference between them is associated only with the small quantity \(\delta_{\infty}\). ## VI. Conclusions The main result of the theory is Eqs. (31), (22a, b) and the corresponding computational procedure for determining \(\delta(R)\) in coupling with \(\sigma(R)\).
The underlying thermodynamic consideration includes the following results. (i) The work of formation of a nucleus of arbitrary size derived for the EM DS, Eq. (8), and then the extremum condition applied to it lead to the conclusion that the equilibrium condition should give the EM critical radius. (ii) This conclusion, together with the nucleation work for the EM DS, Eq. (12), as well as Eqs. (14) and (17) employed for surface tension expansions, shows that the EM DS is not merely some kind of auxiliary DS, but is inherent in thermodynamics alongside the ST; one can say that it plays the role of a kind of "zero point" in surface thermodynamics in a broad sense, not only in relation to adsorption. (iii) New Eqs. (22a, b) associated with the spinodal point are key to determining both the \(\delta(R)\) and \(\sigma(R)\) dependences. Analysis of the equilibrium condition expressed by Eq. (24b) results in the conclusion that Kelvin-type Eq. (29) for the EM radius is exact, which implies the physical meaning of the TL as a function that takes into account both the compressibility of the liquid and the dependence \(\sigma(R)\). This equation yields basic Eq. (31) for determining the TL. The expansion of surface tension into a series near the spinodal and binodal points shows the fundamental possibility and impossibility, respectively, of determining it from these points. The physical reason for this impossibility can be explained by the fact that Tolman's Eq. (20b) associated with the binodal lacks information about the properties of the system. These expansions also provide some fundamental properties of the TL and useful equations: (i) the linear asymptotic behavior of the TL at the spinodal point, Eq. (48), as its universal property; (ii) Eq. (62a) for the asymptotic value \(\delta_{\infty}\) of the TL at the binodal point, and (iii) approximate Eq. (65) for \(R_{e}(\Delta p)\) and the simplified Eq. (68) for determining the TL. The solution of Eq.
(31) obtained by the method of successive approximations shows that the TL strongly depends on the radius in the region of small sizes, in accordance with Tolman's original assumption and more recent studies. This fact, together with Eq. (71) and Fig. 8, leads to the conclusion that the curvature effect is determined by the properties of the system at the spinodal point, mainly by the value of \(\delta_{s}\). An assessment of the accuracy of approximate Eq. (68) for \(\delta(R)\) shows its suitability for practical use; the reason for its good accuracy is established to be the smallness of the liquid compressibility factor \(\chi_{l}\). ## Acknowledgment I am very grateful to Dr. R. I. Kholodov from the Sumy Institute of Applied Physics, who supported me from the first tragic days of the war and kindly provided me with the conditions to carry out this work. ## Author Declarations ### Conflict of Interest There are no conflicts of interest to declare. ### Data Availability The data that support the findings of this study are available within the article. ## References * [1] F. F. Abraham, _Homogeneous Nucleation Theory_ (Academic, New York, 1974). * [2] R. C. Tolman, J. Chem. Phys. **17**, 333 (1949). * [3] N. V. Alekseechkin, J. Aerosol Sci. **116**, 1 (2018). * [4] D. Oxtoby and R. Evans, J. Chem. Phys. **89**, 7521 (1988). * [5] X. C. Zeng and D. W. Oxtoby, J. Chem. Phys. **94**, 4472 (1991). * [6] R. M. Nyquist, V. Talanquer, and D. W. Oxtoby, J. Chem. Phys. **103**, 1175 (1995). * [7] V. Talanquer and D. W. Oxtoby, J. Chem. Phys. **100**, 5190 (1994). * [8] V. Talanquer and D. W. Oxtoby, J. Chem. Phys. **99**, 2865 (1995). * [9] R. McGraw and A. Laaksonen, Phys. Rev. Lett. **76**, 2754 (1996). * [10] V. Talanquer, J. Chem. Phys. **106**, 9957 (1997). * [11] K. Koga and X. C. Zeng, J. Chem. Phys. **110**, 3466 (1999). * [12] L. Granasy, J. Chem. Phys. **104**, 5188 (1996). * [13] L. Granasy, J. Phys. Chem. **100**, 10768 (1996). * [14] K. Koga, X. C. Zeng, and A. K.
Shchekin, J. Chem. Phys. **109**, 4063 (1998). * [15] L. Granasy, J. Phys. Chem. **109**, 9660 (1998). * [16] O. Wilhelmsen, D. Bedeaux, and D. Reguera, J. Chem. Phys. **142**, 171103 (2015). * [17] O. Wilhelmsen, D. Bedeaux, and D. Reguera, J. Chem. Phys. **142**, 064706 (2015). * [18] A. Aasen, E. M. Blokhuis, and O. Wilhelmsen, J. Chem. Phys. **148**, 204702 (2018). * [19] A. Aasen, D. Reguera, and O. Wilhelmsen, Phys. Rev. Lett. **124**, 045701 (2020). * [20] E. M. Blokhuis and A. E. van Giessen, J. Phys.: Condens. Matter **25**, 225003 (2013). * [21] P. Rehner and J. Gross, J. Chem. Phys. **148**, 164703 (2018). * [22] P. Rehner, A. Aasen, and O. Wilhelmsen, J. Chem. Phys. **151**, 244710 (2019). * [23] L. S. Bartell, J. Phys. Chem. B **105**, 11615 (2001). * [24] D. Kashchiev, J. Chem. Phys. **153**, 124509 (2020). * [25] T. V. Bykov and X. C. Zeng, J. Chem. Phys. **111**, 10602 (1999). * [26] D. Kashchiev, J. Chem. Phys. **118**, 9081 (2003). * [27] J. Julin, I. Napari, J. Merikanto, and H. Vehkamaki, J. Chem. Phys. **133**, 044704 (2010). * [28] L. D. Landau and E. M. Lifshits, _Statistical Physics, Pt. 1_ (Nauka, Moscow, 1976). * [29] R. Kubo, _Thermodynamics_ (North-Holland Publishing Company, Amsterdam, 1968). * [30] N. V. Alekseechkin, J. Chem. Phys. **124**, 124512 (2006). * [31] N. V. Alekseechkin, J. Phys. Chem. B **116**, 9445 (2012). * [32] N. V. Alekseechkin, Eur. Phys. J. B **86**, 401 (2013). * [33] N. V. Alekseechkin, Physica A **412**, 186 (2014). * [34] N. V. Alekseechkin, J. Chem. Phys. **143**, 054502 (2015). * [35] N. V. Alekseechkin, Chem. Phys. **517**, 138 (2019). * [36] J. W. Gibbs, _The Collected Works_, _Vol. I. Thermodynamics_ (Yale University Press, New Haven, 1957). * [37] S. Ono and S. Kondo, in _Handbuch der Physik_, edited by S. Flugge (Springer, Berlin, 1960), Vol. 10, p. 134. * [38] A. I. Rusanov, Fazovie Ravnovesiya i Poverkhnostnye Yavleniya (_Phase Equilibria and Surface Phenomena_) (Khimiya, Leningrad, 1967). * [39] A. I.
Rusanov, _Phasengleichgewichte und Grenzflachenerscheinungen_ (Akademie-Verlag, Berlin, 1978). * [40] D. Kashchiev, J. Chem. Phys. **125**, 014502 (2006). * [41] Y. Viisanen, R. Strey, and H. Reiss, J. Chem. Phys. **99**, 4680 (1993). * [42] R. Strey, P. E. Wagner, and Y. Viisanen, J. Phys. Chem. **98**, 7748 (1994). * [43] A. Laaksonen and R. McGraw, Europhys. Lett. **35**, 367 (1996). * [44] D-Y. Peng and D. B. Robinson, Ind. Eng. Chem. Fundam. **15**, 59 (1976). * [45] F. de J. Guevara-Rodriguez, Fluid Phase Equilibria **307**, 190 (2011). * [46] P. A. Egelstaff and B. Widom, J. Chem. Phys. **53**, 2667 (1970). * [47] N. V. Alekseechkin, Chem. Phys. **500**, 19 (2018).

| EOS | \(\chi_{l}\times 10^{10}\), cm\({}^{2}\)/dyn | \(P_{\infty}\times 10^{-5}\), dyn/cm\({}^{2}\) | \(\rho_{\infty}^{l}\), g/cm\({}^{3}\) | \(P_{s}^{\nu}\times 10^{-6}\), dyn/cm\({}^{2}\) | \(\Delta p_{s}^{(d)}\times 10^{-8}\), dyn/cm\({}^{2}\) | \(\delta_{s}^{(d)}/\delta_{s}^{(K,d)}\), nm | \(\delta_{\infty}^{(d)}\), nm | \(\Delta p_{s}^{(b)}\times 10^{-8}\), dyn/cm\({}^{2}\) | \(\delta_{s}^{(b)}/\delta_{s}^{(K,b)}\), nm |
|---|---|---|---|---|---|---|---|---|---|
| vdW | 1.23 | 7.8 | 1.45 | 11.06 | 6.13 | 0.3794 / 0.383 | -0.014 | 4.63 | 0.464 / 0.474 |
| PR | 2.05 | 8.1 | 1.65 | 8.78 | 6.23 | 0.3791 / 0.386 | -0.023 | 3.46 | 0.615 / 0.631 |
| G-R | 0.76 | 7.78 | 1.36 | 8.77 | 5.07 | 0.452 / 0.454 | -0.0086 | 4.67 | 0.469 / 0.476 |
| Experimental | 2.02 | 7.9 | 1.41 | | | | | | |

Table.
Main parameters of the theory for argon at \(T_{0}\) = 85 \(K\) for the EOS under consideration. Figure 1: EOS under consideration for argon at \(T_{0}\) = 85 \(K\). \((B_{v},B_{l})\) and \((S_{v},S_{l})\) are the binodal and spinodal points for vapor and liquid, respectively. Figure 2: Approximations \(\delta_{k}(R)\) for the TL (a) and \(\sigma_{k}(R)\) for the surface tension (b) in the case of drops and the PR EOS. The number \(k\) of the iteration is shown at the corresponding curve. Figure 3: Tolman lengths (a) and the corresponding dependences \(\sigma(R)\) (b) in the drop case for the EOS under consideration. Figure 4: Tolman lengths (a) and the corresponding dependences \(\sigma(R)\) (b) for bubbles. Figure 5: (a) Exact (solid) and approximate, Eq. (68), (heavy dashed) Tolman lengths, \(\delta(R)\) and \(\delta^{(K)}(R)\), respectively, for the drop and bubble cases and the PR EOS; the difference \(\delta^{(K)}(R)-\delta(R)\) is shown in the inset. (b) Corresponding dependences \(\sigma(R)\).
2309.03478
Spin-Statistics for Black Hole Microstates
The gravitational path integral can be used to compute the number of black hole states for a given energy window, or the free energy in a thermal ensemble. In this article we explain how to use the gravitational path integral to compute the separate number of bosonic and fermionic black hole microstates. We do this by comparing the partition function with and without the insertion of $(-1)^{\sf F}$. In particular we introduce a universal rotating black hole that contributes to the partition function in the presence of $(-1)^{\sf F}$. We study this problem for black holes in asymptotically flat space and in AdS, putting constraints on the high energy spectrum of holographic CFTs (not necessarily supersymmetric). Finally, we analyze wormhole contributions to related quantities.
Yiming Chen, Gustavo J. Turiaci
2023-09-07T04:46:12Z
http://arxiv.org/abs/2309.03478v3
# Spin-Statistics for Black Hole Microstates ###### Abstract The gravitational path integral can be used to compute the number of black hole states for a given energy window, or the free energy in a thermal ensemble. In this article we explain how to use the gravitational path integral to compute the separate number of bosonic and fermionic black hole microstates. We do this by comparing the partition function with and without the insertion of \((-1)^{\sf F}\). In particular we introduce a universal rotating black hole that contributes to the partition function in the presence of \((-1)^{\sf F}\). We study this problem for black holes in asymptotically flat space and in AdS, putting constraints on the high energy spectrum of holographic CFTs (not necessarily supersymmetric). Finally, we analyze wormhole contributions to related quantities. † Department of Physics, University of Wisconsin, Madison, WI 53706, USA ## 1 Introduction The gravitational path integral [1] is normally used to compute the total number of black hole microstates with some constraints such as a fixed energy window or temperature. The result is usually phrased in terms of the free energy \(F(\beta)\) as a function of temperature or the microcanonical entropy \(S(E)\) as a function of the energy \[Z(\beta)=\text{Tr}\ e^{-\beta H}=e^{-\beta F(\beta)},\ \ \ \ \text{Tr}_{E}\ 1=e^{S(E)}. \tag{1}\] The purpose of this article is to apply this formalism to derive statistical properties of how these black hole microstates are distributed according to whether they are bosonic or fermionic. In principle, for Lorentz invariant theories the quantum statistics of a state is correlated with the angular momentum: states with half-integer angular momentum are fermions while states with integer angular momentum are bosons. This would correctly suggest we can estimate the quantum statistics of a black hole microstate by looking at its angular momentum.
The problem with this approach is that in the limit of small \(G_{N}\), where we can rely on semiclassical gravity, charges including the angular momentum are large and of order \(1/G_{N}\). The distinction of whether the angular momentum is integer or half-integer is therefore beyond the classical approximation. The determination of the quantum statistics from the gravitational path integral in the way described in this paragraph therefore requires incorporating quantum effects. To extract the distribution of fermionic and bosonic black hole microstates in a simpler way, we focus on the following quantities which generalize (1): \[Z_{\rm spin}(\beta)={\rm Tr}\left(-1\right)^{\sf F}e^{-\beta H}=e^{-\beta F_{\rm spin}(\beta)},\ \ \ \ \ {\rm Tr}_{E}\left(-1\right)^{\sf F}=e^{S_{\rm spin}(E)}. \tag{2}\] These are the free energy \(F_{\rm spin}(\beta)\) as a function of temperature, and the entropy \(S_{\rm spin}(E)\) as a function of energy, both computed with an insertion of \(\left(-1\right)^{\sf F}\), the operator that assigns a value of \(+1\) to bosonic states and \(-1\) to fermionic states. The quantities in (2) together with (1) are enough to extract the number of bosonic versus fermionic black hole microstates. The reason to study (2) is that, as we explain in this paper, these quantities can be computed using the gravitational path integral in the classical approximation. The idea to evaluate (2) is the following. For concreteness we frame the discussion in the context of AdS\({}_{d+1}\)/CFT\({}_{d}\) and use the gravitational path integral to compute the partition function of the boundary CFT on \(S^{1}\times S^{d-1}\) both with and without an insertion of \((-1)^{\sf F}\). Let us consider the fixed temperature ensemble since similar considerations apply when fixing energy.
When evaluating the gravitational path integral, since the thermal partition function \(Z(\beta)\) is a Euclidean observable, it gets a contribution from the Euclidean section of a non-rotating black hole as well as from thermal AdS. In principle the only difference between the partition function with or without an insertion of \((-1)^{\sf F}\) is whether fermionic fields are periodic or anti-periodic around the time cycle, so the same black hole and thermal AdS saddles should contribute to both quantities. Nevertheless, in the presence of the \((-1)^{\sf F}\) insertion, fluctuations of fermionic fields around the black hole saddle are singular since the choice of spin structure is incompatible with the time circle contracting at the horizon. So what saddles do contribute to \(F_{\rm spin}\) or \(S_{\rm spin}\)? The first obvious option is thermal AdS. Since the time circle does not contract anywhere inside the thermal AdS geometry, we can evaluate \(F_{\rm spin}\) or \(S_{\rm spin}\) by counting bosonic vs fermionic low-energy excitations around vacuum AdS. To leading order in the small \(G_{N}\) limit this predicts a small value for \(F_{\rm spin},S_{\rm spin}\sim{\cal O}(1)\). This is much smaller than the results without the \((-1)^{\sf F}\) insertion, since \(F,S\sim{\cal O}(1/G_{N})\), dominated by the black hole saddle. If this were true, it would imply that most states are evenly distributed between bosons and fermions and that there is a large cancellation in \(F_{\rm spin}\) or \(S_{\rm spin}\). Some evidence for a huge cancellation between bosonic and fermionic states in a theory without supersymmetry was presented from the boundary side in [2] (and references therein). The purpose of this article is to introduce another universal saddle that always contributes to both \(F_{\rm spin}\) and \(S_{\rm spin}\), analogous to the black hole contribution to \(F\) or \(S\). For concreteness we focus first on the free energy.
Choose an arbitrary direction and define \(J\) as the angular momentum generating rotations around this direction. One can generalize (1) and compute \({\rm Tr}\,\left(e^{-\beta H}e^{\beta\Omega J}\right)\), where \(\Omega\) is the angular velocity of the ensemble. On the boundary side this is implemented by imposing a twist in the boundary conditions: as fields go around the time cycle they get multiplied by \(+e^{\beta\Omega J}\) (bosons) or \(-e^{\beta\Omega J}\) (fermions). The black hole that contributes to this quantity in the gravitational path integral is now rotating. This implies that the contractible cycle is no longer the time cycle, but a combination of time and the angle around the \(J\) direction. This partition function becomes \(F_{\rm spin}\) when \(\beta\Omega=2\pi{\rm i}\), since \((-1)^{\sf F}=e^{2\pi{\rm i}J}\). Therefore there is a universal rotating black hole saddle contributing to \(F_{\rm spin}\) for which periodic fermions around the time cycle produce a smooth spin structure. The construction described in the previous paragraph for generating black hole solutions with periodic fermions was considered exclusively for supersymmetric solutions in [3; 4; 5]. The point of the present paper is to apply this same construction to general theories that are either not supersymmetric, or supersymmetric but with a partition function that is not protected when a \((-1)^{\sf F}\) insertion is included. In the next sections we study the rotating black hole geometries that contribute to \(F_{\rm spin}\) and \(S_{\rm spin}\) and analyze their consequences. For the spacetime dimensions we analyzed, we always find that the rotating black hole saddle contributing to \(F_{\rm spin}\) has a higher free energy than thermal AdS with periodic fermions, making it always subleading. 
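The twisted boundary condition just described can be verified in one line: with the twist \(\pm e^{\beta\Omega J}\), setting \(\beta\Omega=2\pi{\rm i}\) makes every field periodic, because bosons carry integer \(J\) and fermions half-integer \(J\). A small numerical sketch (the function below is purely illustrative):

```python
import cmath

def holonomy(J, fermion, beta_Omega):
    """Phase a field of angular momentum J picks up around the time cycle:
    the twisted boundary condition multiplies it by (+/-) exp(beta*Omega*J)."""
    sign = -1 if fermion else 1
    return sign * cmath.exp(beta_Omega * J)

# at beta*Omega = 2*pi*i all fields become periodic: the extra (-1) for
# fermions is compensated by exp(2*pi*i*J) = -1 at half-integer J
bo = 2j * cmath.pi
for J in [0, 1, 2]:
    assert abs(holonomy(J, False, bo) - 1) < 1e-12   # bosons: periodic
for J in [0.5, 1.5, 2.5]:
    assert abs(holonomy(J, True, bo) - 1) < 1e-12    # fermions: also periodic
```

At \(\Omega=0\), by contrast, fermions remain anti-periodic, which is the usual thermal boundary condition.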
The free energy \(F\) has a phase transition as a function of temperature when thermal AdS and the black hole exchange dominance, which on the boundary side is interpreted as a confinement-deconfinement phase transition [6]. Instead, even after including the rotating black hole saddle, the free energy \(F_{\rm spin}\) in the presence of \((-1)^{\sf F}\) has no phase transition and depends smoothly on the temperature, being always controlled by thermal AdS. In the presence of bulk gauge fields, it is easy to remedy this and make our rotating black hole dominant: one can consider an ensemble where some charge is fixed such that thermal AdS is discarded. A different way to make the rotating black hole dominant is to work in the fixed energy ensemble and compute the entropy \(S_{\rm spin}\). The rotating black hole does contribute a positive and large amount to \(S_{\rm spin}\). For illustration take AdS\({}_{4}\). The entropies without and with the insertion of \((-1)^{\sf F}\) are given, at large energies, by (see Section 2.2.1 for conventions) \[S(E)\sim E^{2/3},\ \ \ \ \ S_{\rm spin}(E)\sim E^{1/2}. \tag{3}\] The black hole result \(S(E)\sim E^{2/3}\) is consistent with a local 3d theory on the boundary. The rotating black hole contributing to \(S_{\rm spin}\) gives a positive contribution that grows with energy, albeit slower than the total number of states. The results for AdS\({}_{3}\) are qualitatively similar. Instead, in AdS\({}_{5}\) we find that both \(F_{\rm spin}\) and \(S_{\rm spin}\) are dominated by thermal AdS. We also study the contribution from wormholes to the partition function in the presence of a \((-1)^{\sf F}\) insertion. More precisely, we study a generalization of the spectral form factor [7; 8] where \((-1)^{\sf F}\) is inserted. We discuss a construction similar to the double cone of [8] and evaluate its contribution to the gravitational path integral. 
We argue that at late times the spectral form factor is not sensitive to the \((-1)^{\sf F}\) insertion. We finish this introduction with some general comments and relations to other work. A universal formula was derived in [9; 10] for the density of states of theories with a finite-group symmetry. If the finite group is \(\mathbb{Z}_{2}\), the formula predicts that high energy states are equally distributed between even and odd states. In our context the finite group is generated by \((-1)^{\sf F}\), and their result would apply if no saddle contributed an amount to \(F_{\rm spin}\) (or \(S_{\rm spin}\)) comparable to \(F\) (or \(S\)), such that to leading order there are as many bosonic as fermionic states. In this context, our rotating black hole provides a universal geometric configuration that gives a subleading correction to the result of [9; 10]. Of course there can be other contributions, such as defects, which provide other corrections to [9; 10] (see the discussion section for further comments), but such a contribution would not be universal and would depend on the details of the defect and the theory in which it is embedded. Another interesting feature of our rotating black hole solution is that it does not respect ensemble equivalence: the saddle that dominates at fixed temperature (thermal AdS) is completely different from the saddle that dominates the microcanonical ensemble (the rotating black hole). In Section 2.2 we give an interpretation of why this is so. In the context of AdS\({}_{3}\)/CFT\({}_{2}\), Tauberian theorems that incorporate rotation [11; 12; 13] might be powerful enough to prove when different ensembles are equivalent or not, and therefore studying \(Z_{\rm spin}\) in that context could be interesting. The rest of the paper is organized as follows. In Section 2 we find new black hole geometries that contribute to the partition function with an insertion of \((-1)^{\sf F}\). 
In Section 2.1 we study this problem for black holes in flat space, with and without charge, and in Section 2.2 we generalize this to AdS\({}_{4}\), AdS\({}_{5}\) and finally AdS\({}_{3}\). In Section 3 we study the contribution from wormholes to the partition function in the presence of a \((-1)^{\sf F}\) insertion. We conclude in Section 4 with a discussion of our results and future directions, leaving technical details for appendices.

## 2 Black hole solutions in the presence of \((-1)^{\sf F}\)

### 2.1 Flat Space Solutions

In this section we explain the general principle behind computing the distribution of fermionic and bosonic black hole microstates using the gravitational path integral. We begin by studying black holes in asymptotically flat four dimensional spacetime. As we will see, the calculation in flat space is a bit singular. However, we include it since it is a good starting point to illustrate the main ideas. The action, in Euclidean signature, is given by \[I=-\frac{1}{16\pi G_{N}}\int\sqrt{g}R-\frac{1}{8\pi G_{N}}\oint\sqrt{h}K+(\text{matter}). \tag{1}\] The details of the matter sector will not be important, other than the assumption that part of it is fermionic. This assumption does not show up explicitly in the following discussion, while its importance will be discussed in Section 4. This theory has classical black hole solutions described by the Kerr metric, which in Boyer-Lindquist coordinates is given by \[\mathrm{d}s^{2}=-\frac{\rho^{2}\Delta}{\Xi}\mathrm{d}t^{2}+\frac{\rho^{2}}{\Delta}\mathrm{d}r^{2}+\rho^{2}\mathrm{d}\theta^{2}+\frac{\Xi}{\rho^{2}}\sin^{2}\theta\left(\mathrm{d}\varphi-\frac{2Ear}{\Xi}\mathrm{d}t\right)^{2} \tag{2}\] with the functions \[\begin{split}\rho^{2}&=r^{2}+a^{2}\cos^{2}\theta,\\ \Delta&=r^{2}-2Er+a^{2},\\ \Xi&=(r^{2}+a^{2})^{2}-a^{2}\Delta\,\sin^{2}\theta. \end{split} \tag{3}\] \(E\) and \(a\) are parameters of the solution, and their interpretation depends on the ensemble we choose to work with. 
In the microcanonical ensemble, \(E\) is the ADM mass while \(J=aE\) is the angular momentum. In the classical limit both of these quantities are large, of order \(1/G_{N}\), and one is _a priori_ not sensitive to questions such as whether \(J\) is integer (bosonic) or half-integer (fermionic). Below we work in units such that \(G_{N}=1\). In the grandcanonical ensemble, we want instead to fix the inverse temperature \(\beta\), the length of the thermal circle, and the angular velocity \(\Omega\) at infinity. For the metric given in equation (2) this is achieved by imposing the following identification \[(t_{\rm E},\varphi)\sim(t_{\rm E}+\beta,\varphi+{\rm i}\beta\Omega)\sim(t_{\rm E},\varphi+2\pi), \tag{4}\] where \(t_{\rm E}=-{\rm i}t\) is the Euclidean time. In this ensemble, the parameters \(a\) and \(E\) should be seen as being fixed by the temperature and angular velocity in the following way. The Kerr metric has an outer event horizon at \(r_{+}=E+\sqrt{E^{2}-a^{2}}\). It will sometimes be useful below to trade the parameter \(E\) for \(r_{+}\), since \(E=\frac{r_{+}^{2}+a^{2}}{2r_{+}}\). Demanding that the solution (2) is smooth at the horizon, given the identification (4), determines the parameters \(a\) and \(E\) (or \(r_{+}\)) as functions of \(\beta\) and \(\Omega\) by \[\beta=\frac{4\pi r_{+}(a^{2}+r_{+}^{2})}{(r_{+}^{2}-a^{2})},\ \ \ \ \Omega=\frac{a}{r_{+}^{2}+a^{2}}. \tag{5}\] It is for this choice of ensemble that the action quoted in equation (1), with the Gibbons-Hawking-York boundary term, poses a well-defined variational problem. Let us begin by computing the partition function in the grandcanonical ensemble with \(\Omega=0\), where we sum over states with arbitrary mass and angular momentum. This is of course a very well known result [1]. Classically, \(\Omega=0\) implies that \(a=0\), and then the metric (2) reduces to the Schwarzschild black hole. 
For any value of \(\beta\) and \(\Omega\), the action (1) evaluated on this solution is equal to \(I=\beta E-A/4-\beta\Omega J\), where \(A=4\pi(r_{+}^{2}+a^{2})\) is the area of the event horizon. For \(\Omega=0\) the free energy then is \[-\beta F=\log Z=-\frac{\beta^{2}}{16\pi},\ \ \ \ F=\frac{\beta}{16\pi}. \tag{6}\] In a quantum mechanical description of the black hole microstates, this quantity corresponds to \(Z(\beta)={\rm Tr}\,e^{-\beta H}\), where the trace is taken over states of any mass or angular momentum. We can also work in an ensemble of fixed mass, and again restrict to \(\Omega=0\). We refer to this partition function as \(e^{S(E)}\), which can be interpreted as the number of states of energy \(E\) regardless of the spin.1 On the black hole side the energy corresponds to the mass \(E\). This quantity can be obtained either by a Legendre transform of the result at finite temperature, or by finding the appropriate boundary terms in the action. Either way the answer is Footnote 1: In the semiclassical approximation it makes more sense to specify the energy \(E\) to be in a small window \((E-\delta E,E+\delta E)\) with small enough \(\delta E\). Whenever we specify the energy from now on we always mean up to \(\delta E\). \[S(E)=4\pi E^{2}. \tag{7}\] This is the standard Bekenstein-Hawking area law for a Schwarzschild black hole. The result above gives a prediction for the total number of black hole microstates of a given energy, \(\exp\{S(E)\}\). The question we want to address here is what fraction of these states are fermionic and what fraction are bosonic. On general grounds, we expect that both bosonic and fermionic microstates constitute about half of the total number of states. This is because we can turn a bosonic black hole into a fermionic one by throwing in a fermion, which barely changes its energy. However, we do not expect the ratio to be precisely one half. 
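Before turning to that question, the relation between (6) and (7) can be checked with a short numerical sketch (conventions as above, \(G_{N}=1\)): the microcanonical entropy is the Legendre transform of the canonical result, extremized at \(\beta=8\pi E\).

```python
import math

def log_Z(beta):
    # Schwarzschild saddle: log Z = -beta^2/(16*pi), eq. (6)
    return -beta**2 / (16 * math.pi)

def entropy(E):
    # S(E) = max over beta of [beta*E + log Z(beta)], scanned numerically
    betas = [i * 1e-3 for i in range(1, 200000)]
    return max(b * E + log_Z(b) for b in betas)

E = 1.7
S_exact = 4 * math.pi * E**2       # Bekenstein-Hawking area law, eq. (7)
beta_saddle = 8 * math.pi * E      # extremum of beta*E + log Z(beta)
assert abs(entropy(E) - S_exact) < 1e-3
assert abs(beta_saddle * E + log_Z(beta_saddle) - S_exact) < 1e-9
```

The numerical scan and the analytic saddle point agree, confirming that (6) and (7) describe the same saddle in two ensembles.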
We might have slightly more bosonic states or fermionic states depending on the energy window we are looking at. It is this subtle difference that we are after. To approach this question, we analyze a quantity whose evaluation is easy to formulate using the gravitational path integral. In terms of the quantum system describing the black hole microstates we would like to compute the partition function with a fermion parity operator, an insertion of \((-1)^{\mathsf{F}}\).2 This is Footnote 2: As we mentioned in the introduction, the quantity \(\mathrm{Tr}\left(-1\right)^{\mathsf{F}}e^{-\beta H}\) is commonly studied for supersymmetric theories, and is called the Witten index [14]. Here we are _not_ considering supersymmetric theories. \[\mathrm{Tr}\left(-1\right)^{\mathsf{F}}e^{-\beta H}=e^{-\beta F_{\mathrm{spin}}},\qquad\mathrm{Tr}_{E}\left(-1\right)^{\mathsf{F}}=e^{S_{\mathrm{spin}}(E)} \tag{8}\] in an ensemble with fixed temperature or energy, respectively. We will use the gravitational path integral to compute the free energy \(F_{\mathrm{spin}}\) and entropy \(S_{\mathrm{spin}}\) in the presence of a \((-1)^{\mathsf{F}}\) insertion, to leading order in the classical limit. In a naive approach to this question, one would conclude that with the insertion of \((-1)^{\mathsf{F}}\) we should study the same Schwarzschild black hole geometry, but impose that fermions should be periodic around the thermal circle. Since this circle contracts at the horizon, this is not a smooth choice of spin structure.

Figure 1: \((t_{\mathrm{E}},\varphi)\)-plane at fixed \(r\) and \(\theta\neq 0,\pi\). Working in the grandcanonical ensemble implies the two identifications given in equation (4); the fundamental domain is bounded by the dashed lines. We show the thermal cycle \((t_{E},\varphi)\sim(t_{E}+\beta,\varphi)\) (red arrow) and the cycle that contracts at the horizon (blue arrow). _Left:_ General value of \(\Omega\). Fermions are always antiperiodic around the blue cycle and around \(\varphi\sim\varphi+2\pi\), since they are both contractible in the black hole geometry. Fields get multiplied by \(e^{\beta\Omega J}\) (bosons) or \(-e^{\beta\Omega J}\) (fermions) as they go around the red cycle. _Right:_ Choice of angular velocity that computes the partition function with a \((-1)^{\mathsf{F}}\) insertion. \(\Omega\) is such that fermions are antiperiodic along the blue cycle but now become periodic around the red one, since \(e^{2\pi\mathrm{i}J}=1\) (bosons) or \(-1\) (fermions).

This approach is naive since we can generate periodic boundary conditions for fermions starting from (4) and setting instead \(\beta\Omega=\pm 2\pi\mathrm{i}\). Now the solution is the Kerr black hole, and the rotation implies that the time cycle no longer contracts at the horizon. Instead it is precisely the cycle along the first identification in equation (4) that contracts, and fermions are antiperiodic around this cycle [5]. We illustrate this argument in figure 1. Another way to think about this solution is to write \[(-1)^{\mathsf{F}}=e^{\pm 2\pi\mathrm{i}J}. \tag{9}\] Using this relation, the insertion of \((-1)^{\mathsf{F}}\) corresponds to a specific choice of the angular velocity. Let us make a few clarifications. The boundary conditions corresponding to the grandcanonical ensemble given in (4) break the full rotational isometries of flat space down to the one generated by \(\partial_{\varphi}\). This is true for generic values of \(\Omega\). An exception is when \(\Omega=0\): the full \(SU(2)\) rotational symmetry is restored in the boundary and the bulk geometry, the Schwarzschild metric, preserves these isometries everywhere in the interior of the spacetime. 
Another exception is when \(\beta\Omega=\pm 2\pi\mathrm{i}\): the relation (9) is independent of the choice of axis for \(J\) on the right hand side, and therefore the boundary conditions preserve the full rotational invariance. Nevertheless, the new feature is that now the bulk geometry, the Kerr metric, does break the rotational isometries down to \(\partial_{\varphi}\), since it requires singling out an axis of rotation. This implies the presence of a moduli space of Kerr black holes with arbitrary axis of rotation, and the rotational invariance arises from integrating over this moduli space. The presence of this continuum of bosonic zero-modes is not problematic since the moduli space is compact3, and in any case this issue would only arise when evaluating quantum corrections, which we will not attempt here. Given the choice of axis we made here, the two values \(\beta\Omega=+2\pi\mathrm{i}\) or \(-2\pi\mathrm{i}\) are connected inside the moduli space, and therefore we can restrict from now on to \(\beta\Omega=+2\pi\mathrm{i}\) for concreteness. Footnote 3: This was pointed out in [5] and analyzed in detail in [15] in the context of concrete supergravity examples. After these generalities, let us compute \(F_{\mathrm{spin}}\) using the rotating black hole solution. We first need to impose \(\beta\Omega=2\pi\mathrm{i}\) and solve for \(r_{+}\), or equivalently \(E\), and \(a\). More generally, we have the following relation between \(r_{+}\) and \(a=\mathrm{i}\mathsf{a}\) \[\frac{2\mathsf{a}r_{+}}{\mathsf{a}^{2}+r_{+}^{2}}=\frac{\beta\Omega}{2\pi\mathrm{i}}\equiv\omega. \tag{10}\] We want to set \(\omega=1\). Assuming \(r_{+}\) is finite, this leads to \(\mathsf{a}=\pm r_{+}\), which in turn implies \(\beta=0\). This is an issue since in principle we should be able to compute the free energy at any temperature using the gravitational path integral. A way out is to consider the fixed energy ensemble instead, which we do below. 
Another option is to work with \(\omega=1-\varepsilon\) and take \(\varepsilon\to 0\) at the end of the calculation. After eliminating \(a\), the inverse temperature is given by \[\beta=4\pi r_{+}\sqrt{\varepsilon(2-\varepsilon)},\quad\Rightarrow\quad r_{+}=\frac{\beta}{4\pi\sqrt{\varepsilon(2-\varepsilon)}}. \tag{11}\] This expression is valid for any \(\varepsilon\), but in the \(\varepsilon\to 0\) limit we get \(r_{+}\sim 1/\sqrt{\varepsilon}\to\infty\). It might be worrisome that computing \(F_{\rm spin}\) involves a geometry with \(r_{+}\to\infty\). Nevertheless, one can check that in this limit the energy, the regularized action, and the curvature tensor all remain finite, as explained in Appendix B. (This issue will not arise for black holes in AdS.) Thus we will accept this solution and continue our analysis. We can estimate the contribution of this saddle to the difference between fermionic and bosonic states by computing the on-shell action. We obtain \[I=\beta E-\frac{A}{4}-\beta\Omega J=\frac{\beta^{2}}{8\pi(1+\sqrt{\varepsilon(2-\varepsilon)})}, \tag{12}\] for an arbitrary value of \(\varepsilon\). Taking the limit \(\varepsilon\to 0\) gives \(I=\beta^{2}/8\pi\) and therefore \[-\beta F_{\rm spin}=-\frac{\beta^{2}}{8\pi},\ \ \ \ F_{\rm spin}=\frac{\beta}{8\pi}. \tag{13}\] To interpret this result, we repeat this analysis in the fixed energy ensemble. In this case, instead of determining \(r_{+}\) from \(\beta\), \(r_{+}\) is determined by the fixed mass \(E\). We obtain \(r_{+}\sim E/\sqrt{2\varepsilon}\) in the small \(\varepsilon\) limit. In the fixed energy ensemble the classical action has an extra boundary term \(-\beta E\) and the total answer for the action is \[I_{\rm fixed\ E}=-\frac{A}{4}-\beta\Omega J\to 2\pi E^{2}. 
\tag{14}\] Therefore the entropy in the presence of \((-1)^{\sf F}\), counting the difference between fermionic and bosonic black hole microstates of mass \(E\) in the semiclassical limit, is given by \[S_{\rm spin}(E)=2\pi E^{2}=\frac{1}{2}S(E). \tag{15}\] This implies that if the total number of black hole microstates of energy \(E\) grows as \(N_{\rm tot}\), then the absolute value of the difference between the number of bosonic and fermionic states grows as \(\sqrt{N_{\rm tot}}\). Notice that at the classical level we cannot determine which statistics has more states. To determine the overall sign would require evaluating the one-loop determinant. A possible interpretation of the singular behavior of the flat space calculation could be the following. The flat space black hole is coupled to radiation extending to infinity. In flat space, we have the possibility of introducing very light particles far away from the black hole that carry large angular momentum. This suggests that in this case it might be subtle to define a closed system involving the black hole and its environment for which the trace \({\rm Tr}\left(-1\right)^{\sf F}\) can be defined. Such a problem does not exist in AdS space, and as we will see in Section 2.2 the calculation is indeed non-singular. There is a final point we want to address before moving on. Since we assume that the theory includes fermionic and bosonic fields, a background can be probed by operators of half-integer or integer angular momentum. Therefore the boundary condition in equation (4) only depends on \({\rm i}\beta\Omega\ {\rm mod}\ 4\pi\mathbb{Z}\), while the black hole solution depends on its real-valued lift \({\rm i}\beta\Omega\). This is resolved by summing over saddles obtained by integer shifts \({\rm i}\beta\Omega\to{\rm i}\beta\Omega+4\pi\mathbb{Z}\). What subset of these saddles should be included is an open question, and it affects the entropy both with and without the insertion of \((-1)^{\sf F}\). 
We leave this for future work.

#### Charged black hole

We can extend the previous calculation to black holes in the presence of a Maxwell field, in a background of fixed electric charge \(Q\), which we take to be positive. The classical solution of Einstein-Maxwell theory we need in this case is the Kerr-Newman black hole. We normalize the charge \(Q\) such that the thermodynamic potentials derived from the Kerr-Newman solution are \[\beta=\frac{4\pi r_{+}(r_{+}^{2}+a^{2})}{r_{+}^{2}-a^{2}-Q^{2}},\quad\Omega=\frac{a}{r_{+}^{2}+a^{2}} \tag{16}\] while the mass is given in terms of \(r_{+}\) as \(E=\frac{r_{+}^{2}+a^{2}+Q^{2}}{2r_{+}}\) and \(J=aE\). The saddle contributing to the grandcanonical partition function, with \(\Omega=0\) and therefore no \((-1)^{\mathsf{F}}\) insertion, has \(a=0\). This leads to the Euclidean section of the Reissner-Nordstrom black hole. The on-shell action has an explicit but complicated expression at fixed temperature. The result simplifies when written in the fixed energy ensemble and gives an entropy \[S(E,Q)=\pi\big{(}E+\sqrt{E^{2}-Q^{2}}\big{)}^{2}, \tag{17}\] the standard Bekenstein-Hawking area law for the Reissner-Nordstrom black hole. We only consider the black hole solutions with \(E\geq Q\). We can use the gravitational path integral to compute the difference between the number of bosonic and fermionic black hole microstates. To do this, for the same reasons as outlined before, we need to set \(\beta\Omega=2\pi\mathrm{i}\). This implies, using equation (16) and writing \(a=\mathrm{i}\mathsf{a}\), that \(2\mathsf{a}r_{+}=r_{+}^{2}+\mathsf{a}^{2}-Q^{2}\) and therefore \(\mathsf{a}=r_{+}-Q\). 
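One can verify this continuation numerically. The following sketch (assuming the conventions of equation (16) with \(G_{N}=1\), and evaluating \(J=aE\) at \(a={\rm i}\mathsf{a}\)) checks that \(\mathsf{a}=r_{+}-Q\) indeed gives \(\beta\Omega=2\pi{\rm i}\), and evaluates the resulting thermodynamics; it reproduces the relations quoted in the next paragraph, \(r_{+}=Q(\beta-2\pi Q)/(\beta-4\pi Q)\), \(E=Q\), and \(I=-\pi Q^{2}+\beta Q\).

```python
import math

def kn_spin_saddle(Q, beta):
    """Kerr-Newman with a = i*aa and beta*Omega = 2*pi*i, G_N = 1,
    conventions of eq. (16). Returns (r_plus, E, on-shell action)."""
    r_plus = Q * (beta - 2 * math.pi * Q) / (beta - 4 * math.pi * Q)
    aa = r_plus - Q                  # solves 2*aa*r+ = r+^2 + aa^2 - Q^2
    assert abs(2 * aa * r_plus - (r_plus**2 + aa**2 - Q**2)) < 1e-10
    # eq. (16) continued to a = i*aa reproduces the input temperature
    beta_check = 4 * math.pi * r_plus * (r_plus**2 - aa**2) / (r_plus**2 + aa**2 - Q**2)
    assert abs(beta_check - beta) < 1e-8
    E = (r_plus**2 - aa**2 + Q**2) / (2 * r_plus)
    area = 4 * math.pi * (r_plus**2 - aa**2)
    # -beta*Omega*J = +2*pi*aa*E for beta*Omega = 2*pi*i and J = i*aa*E
    action = beta * E - area / 4 + 2 * math.pi * aa * E
    return r_plus, E, action

Q, beta = 0.9, 20.0
r_plus, E, action = kn_spin_saddle(Q, beta)
assert abs(E - Q) < 1e-10                                # energy locked to E = Q
assert abs(action - (beta * Q - math.pi * Q**2)) < 1e-8  # I = -pi*Q^2 + beta*Q
```

The check confirms that for this branch the energy is pinned to \(E=Q\) for any temperature, which is one of the issues discussed below.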
The horizon radius is fixed in terms of the inverse temperature as \(r_{+}=\frac{Q(\beta-2\pi Q)}{\beta-4\pi Q}\).4 In this case the on-shell action appropriate to the fixed temperature ensemble has a very simple form, \(I=-\pi Q^{2}+\beta Q\), implying that Footnote 4: The radius of the horizon is negative for \(2\pi Q<\beta<4\pi Q\). One option to deal with temperatures in this range is to define the geometry along a complex contour in the \(r\)-plane that approaches \(r_{+}\) from infinity without going close to the singularity at \(r=0\). This type of complex geometry would violate the criterion put forth in [16]. For \(\beta<2\pi Q\), \(r_{+}\) is positive again but a new horizon appears with a larger value of \(r\), which again has to be avoided along the complex plane. If we consider the classical limit of \(\mathcal{N}=2\) supergravity in 4d, these complex geometries seem necessary to ensure that the index is temperature independent. \[-\beta F_{\mathrm{spin}}=\pi Q^{2}-\beta Q, \tag{18}\] which is the result found in [5]. This solution has two issues. The first is that it does not have a smooth \(Q\to 0\) limit, since \(r_{+}\to 0\). The second is that for this solution the energy is \(E=Q\) independently of \(\beta\). Instead, we should be able to compute the difference between bosonic and fermionic microstates in a fixed energy ensemble with any \(E\neq Q\). This requires finding a different saddle in the fixed temperature ensemble. To do this, set \(\beta\Omega=2\pi\mathrm{i}(1-\varepsilon)\) and take the limit \(\varepsilon\to 0\) more carefully. This way we discover a new solution such that the horizon radius is \[r_{+}\sim\frac{\sqrt{\beta^{2}-(4\pi Q)^{2}}}{4\pi\sqrt{2\varepsilon}},\quad\varepsilon\ll 1. \tag{19}\] This solution is similar to the one we found in the uncharged case. Now the average ADM energy is given by \(E=\beta/4\pi\) and becomes a free parameter set by the temperature (and happens to be independent of the charge). 
The action in the fixed temperature ensemble is now \(I=\pi Q^{2}+\beta^{2}/8\pi\), or equivalently \[-\beta F^{\prime}_{\rm spin}=-\pi Q^{2}-\frac{\beta^{2}}{8\pi}. \tag{20}\] Notice the sign change of the temperature-independent term between \(F_{\rm spin}\) and \(F^{\prime}_{\rm spin}\). The true free energy should be the minimum of \(F_{\rm spin}\) and \(F^{\prime}_{\rm spin}\), which happens to be \(F_{\rm spin}\) for any temperature. The goal of [5] was to use this type of solution to compute the index of the black hole in \({\cal N}=2\) supergravity. In this case we expect the solution with \(E=Q\) to be the only contribution. The boundary conditions in this context are supersymmetric, and any solution that breaks supersymmetry (if \(E\neq Q\) there is no Killing spinor) will have a fermionic zero-mode that makes its contribution vanish.5 The second solution above, with \(r_{+}\to\infty\), therefore makes a vanishing contribution to the index in supergravity due to these fermionic zero-modes. If we are working with a non-supersymmetric theory of gravity, then both solutions should in principle be included. For any temperature, the black hole with \(r_{+}\) finite always has a lower free energy and is dominant. Nevertheless, when working at fixed energy \(E>Q\) the solution with finite \(r_{+}\) disappears and only the solution with \(r_{+}\to\infty\) remains. Footnote 5: Even if \(E=Q\) some supersymmetries are broken, since the Reissner-Nordstrom solution preserves only four supercharges, and the contribution to the index also vanishes. One can fix this by considering the helicity supertrace instead of the index, which can also be computed with the gravitational path integral. Let us then consider the entropy in the fixed energy ensemble with the insertion of \((-1)^{\sf F}\), using the relation between energy and temperature derived above, \(\beta=4\pi E\). 
The action, including the boundary terms necessary to change ensemble, is given by \(\pi(2E^{2}-Q^{2})\), and therefore the entropy counting the difference between bosonic and fermionic microstates is \[S_{\rm spin}(E,Q)=\pi(2E^{2}-Q^{2}). \tag{21}\]

Figure 2: Plot of \(S_{\rm spin}/S\) for a charged black hole of energy \(E\) as a function of \(q=Q/E\). It is always smaller than one and, for a given energy \(E\), it interpolates between \(1/2\) for \(Q=0\) and \(1\) for \(Q=E\).

We do not expect any black hole microstate with \(E<Q\) regardless of its statistics, so this result should only be considered for \(E>Q\). The ultimate reason why a complex saddle with \(E<Q\) is not allowed has to be traced back to the integration contour of the gravitational path integral. In the uncharged case we found \(S_{\rm spin}(E,Q=0)=\frac{1}{2}\cdot S(E,Q=0)\). We can now evaluate the ratio between the entropy with and without a \((-1)^{\sf F}\) insertion, and we find the following charge dependent result \[\frac{S_{\rm spin}(E,Q)}{S(E,Q)}=\frac{2-q^{2}}{(1+\sqrt{1-q^{2}})^{2}}\leq 1,\quad q\equiv\frac{Q}{E}, \tag{22}\] where \(0\leq q\leq 1\). We plot this function in figure 2. When \(q=0\) we recover the \(1/2\) we found for the uncharged black hole. When \(q=1\) we reproduce the expectation that the number of states counted with a \((-1)^{\sf F}\) insertion is the same as the total number of states, and therefore to leading order all states have the same statistics. (This interpretation is not correct after including quantum effects, which imply that the zero-temperature entropy at \(E=Q\) does not correspond to a real degeneracy [17; 18; 19; 20].) As indicated above, this ratio is always smaller than one. This is consistent with unitarity, since the total number of states cannot be smaller than the difference between the number of bosonic and fermionic states.

### 2.2 AdS Solutions

#### 2.2.1 AdS\({}_{4}\)

In this section we analyze the quantum statistics of black hole microstates in asymptotically AdS\({}_{4}\) spaces. 
Through AdS/CFT this puts a constraint on the quantum statistics of high energy states of holographic CFT\({}_{3}\)'s. As explained for example in [21], it is possible to construct backgrounds in string theory where the low energy theory is described by Einstein-Maxwell theory in asymptotically AdS\({}_{4}\) coupled to matter \[I=-\frac{1}{16\pi}\int{\rm d}^{4}x\sqrt{g}[R-2\Lambda-F^{2}]+I_{\rm bdy}. \tag{23}\] We work in units with \(G_{N}=1\). We parametrize the cosmological constant by \(\Lambda=-3/\ell^{2}\), and \(I_{\rm bdy}\) denotes the boundary terms necessary for the chosen ensemble. The black hole solutions are given by the following generalization of the Kerr-Newman black hole \[{\rm d}s^{2}=-\frac{\Delta_{r}}{\rho^{2}}\Big{[}{\rm d}t-\frac{a\sin^{2}\theta}{\Xi}{\rm d}\phi\Big{]}^{2}+\frac{\rho^{2}}{\Delta_{r}}{\rm d}r^{2}+\frac{\rho^{2}}{\Delta_{\theta}}{\rm d}\theta^{2}+\frac{\Delta_{\theta}\sin^{2}\theta}{\rho^{2}}\left[a{\rm d}t-\frac{r^{2}+a^{2}}{\Xi}{\rm d}\phi\right]^{2} \tag{24}\] where \[\rho^{2}=r^{2}+a^{2}\cos^{2}\theta,\quad\Xi=1-a^{2}/\ell^{2}, \tag{25}\] \[\Delta_{r}=(r^{2}+a^{2})(1+\frac{r^{2}}{\ell^{2}})-2mr+q^{2},\quad \Delta_{\theta}=1-\frac{a^{2}}{\ell^{2}}\cos^{2}\theta. \tag{26}\] We take the charge to be electric, and the gauge field is \(A=-\frac{qr}{\rho^{2}}({\rm d}t-\frac{a\sin^{2}\theta}{\Xi}{\rm d}\phi)+\alpha{\rm d}t\), where \(\alpha\) is a constant chosen to make the gauge connection smooth at the horizon. The solution is parametrized by \(m\), \(a\), and \(q\), which roughly correspond to the mass/energy (or temperature), the angular momentum (or angular velocity), and the charge (or chemical potential). 
In the Euclidean section, one can find the inverse temperature and angular velocity by demanding smoothness at the horizon, located at \(\Delta_{r}(r_{+})=0\), which gives \[\beta=\frac{4\pi(r_{+}^{2}+a^{2})}{r_{+}(1+\frac{a^{2}}{\ell^{2}}+3\frac{r_{+}^{2}}{\ell^{2}}-\frac{a^{2}+q^{2}}{r_{+}^{2}})},\quad\Omega=\frac{a(1+r_{+}^{2}/\ell^{2})}{r_{+}^{2}+a^{2}}. \tag{27}\] We can trade the parameter \(m\) for \(r_{+}\) using the relation \(m=\frac{(r_{+}^{2}+a^{2})(1+\frac{r_{+}^{2}}{\ell^{2}})+q^{2}}{2r_{+}}\). The ADM charges, the energy \(E\), angular momentum \(J\), and U(1) charge \(Q\), are given by \[E=\frac{m}{\Xi^{2}},\ \ \ \ J=aE,\ \ \ \ Q=\frac{q}{\Xi}. \tag{28}\] Finally, the area of the event horizon is given by \(A=4\pi(r_{+}^{2}+a^{2})/\Xi\). Below we consider in detail the uncharged case \(q=0\) (the generalization to \(q\neq 0\) is straightforward). The analysis is very similar to the one presented in the previous section, so we will be brief, emphasizing mainly the differences. As before, we begin by reminding the reader of the results for the free energy and entropy without an insertion of the fermion parity operator. In the fixed temperature ensemble we can sum over all states by taking \(\Omega=0\), and therefore the black hole solution has \(a=0\). The size of the horizon is determined through the (inverse) temperature by \[\beta=\frac{4\pi r_{+}}{1+\frac{3r_{+}^{2}}{\ell^{2}}},\quad r_{+}=\frac{2\pi\ell^{2}}{3\beta}\left(1+\sqrt{1-\frac{3\beta^{2}}{4\pi^{2}\ell^{2}}}\right). \tag{29}\] This solution only exists for \(\beta\leq 2\pi\ell/\sqrt{3}\). The free energy, computed from the on-shell action \(I=\beta E-A/4-\beta\Omega J\), is given by \[-\beta F = \frac{(2\pi\ell^{2}+\sqrt{4\pi^{2}\ell^{4}-3\beta^{2}\ell^{2}})(-3\beta^{2}+\pi(2\pi\ell^{2}+\sqrt{4\pi^{2}\ell^{4}-3\beta^{2}\ell^{2}}))}{27\beta^{2}} \tag{30}\] \[\sim \frac{16\pi^{3}\ell^{4}}{27\beta^{2}},\ \ \ \ \beta\to 0. 
\tag{31}\] From the first line we can deduce that the black hole dominates over thermal AdS at high enough temperatures, namely for \(\beta<\pi\ell\). The second line shows the result in the high temperature limit. There is a second solution for \(r_{+}\) which has higher free energy and therefore never dominates the ensemble. In the fixed energy ensemble, the black hole radius is constrained by \(E=\frac{r_{+}(r_{+}^{2}+\ell^{2})}{2\ell^{2}}\). There are three solutions of this equation, one real and two complex conjugate ones. The two complex solutions have negative real part of the entropy. The real solution has an entropy given by \[S(E)=\frac{\pi(3^{2/3}\ell^{2}-3^{1/3}\ell^{4/3}(\sqrt{3\ell^{2}+81E^{2}}-9E)^{2/3})^{2}}{9\ell^{4/3}(\sqrt{3\ell^{2}+81E^{2}}-9E)^{2/3}}\sim 2^{2/3}\ell^{4/3}\pi E^{2/3},\ \ \ \ E\rightarrow\infty, \tag{32}\] where we also indicated the behavior at large energies. In this limit the radius of the black hole is \(r_{+}\sim 2^{1/3}\ell^{2/3}E^{1/3}\). We are now ready to consider the distribution between fermionic and bosonic black hole microstates. We begin by computing the free energy \(F_{\rm spin}\) in the presence of a fermion parity operator insertion \((-1)^{\sf F}\). Again, we implement this by imposing \(\beta\Omega=2\pi{\rm i}\). The first observation we can make is that, as opposed to the flat space analysis, nothing singular happens that requires regulating the limit \(\beta\Omega\to 2\pi{\rm i}\). We find two solutions for \(a={\rm i}\mathfrak{a}\), given by \[\mathfrak{a}=r_{+},\ \ \ \ \mathfrak{a}=\frac{r_{+}(\ell^{2}+3r_{+}^{2})}{\ell^{2}-r_{+}^{2}}. \tag{33}\] The first solution has \(\beta=0\) and \(E=0\), so it is the same as \(\mathfrak{a}=r_{+}\) in flat space. 
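These statements, and the Schwarzschild-AdS relations quoted above, are easy to check numerically. The following sketch (with \(G_{N}=1\) and the illustrative choice \(\ell=1\)) verifies the inversion in (29), the vanishing of the on-shell action at \(\beta=\pi\ell\) where \(r_{+}=\ell\), the high-temperature limit (31), and that both branches of (33) give \(\omega=\beta\Omega/2\pi{\rm i}=1\) after continuing the potentials (27) to imaginary \(a\); for the second branch it reproduces the temperature quoted in equation (34) below.

```python
import math

ell = 1.0  # AdS radius (illustrative choice)

def beta_schw(rp):
    # eq. (29): inverse temperature of AdS4-Schwarzschild
    return 4 * math.pi * rp / (1 + 3 * rp**2 / ell**2)

def rp_schw(beta):
    # larger root quoted in eq. (29)
    return (2 * math.pi * ell**2 / (3 * beta)) * (
        1 + math.sqrt(1 - 3 * beta**2 / (4 * math.pi**2 * ell**2)))

def action_schw(rp):
    # I = beta*E - A/4 with E = rp*(rp^2 + ell^2)/(2*ell^2) and A = 4*pi*rp^2
    E = rp * (rp**2 + ell**2) / (2 * ell**2)
    return beta_schw(rp) * E - math.pi * rp**2

# the quoted root inverts beta(r+); the action vanishes at beta = pi*ell (r+ = ell)
assert abs(beta_schw(rp_schw(2.0)) - 2.0) < 1e-10
assert abs(rp_schw(math.pi * ell) - ell) < 1e-10
assert abs(action_schw(ell)) < 1e-10
# high-temperature limit of -log Z, eq. (31)
b = 1e-3
assert abs(-action_schw(rp_schw(b)) * 27 * b**2 / (16 * math.pi**3 * ell**4) - 1) < 1e-2

def omega_beta_kerr(rp, aa):
    """Continue the potentials (27) (with q = 0) to a = i*aa; return
    (omega, beta) where omega = beta*Omega/(2*pi*i)."""
    a2 = -aa**2
    D = 1 + a2 / ell**2 + 3 * rp**2 / ell**2 - a2 / rp**2
    return (2 * aa * (1 + rp**2 / ell**2) / (rp * D),
            4 * math.pi * (rp**2 + a2) / (rp * D))

rp = 2.0
# first branch of eq. (33): omega = 1 with beta = 0
o1, b1 = omega_beta_kerr(rp, rp)
assert abs(o1 - 1) < 1e-12 and abs(b1) < 1e-12
# second branch of eq. (33): omega = 1 with the temperature of eq. (34)
o2, b2 = omega_beta_kerr(rp, rp * (ell**2 + 3 * rp**2) / (ell**2 - rp**2))
assert abs(o2 - 1) < 1e-12
assert abs(b2 - 16 * math.pi * ell**2 * rp**3 /
           (3 * rp**4 - 2 * ell**2 * rp**2 - ell**4)) < 1e-10
```

All checks pass for generic parameter values, not just the ones shown here.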
The second solution has temperature and energy \[\beta=\frac{16\pi\ell^{2}r_{+}^{3}}{3r_{+}^{4}-2\ell^{2}r_{+}^{2}-\ell^{4}},\ \ \ E=-\frac{4r_{+}^{3}\ell^{2}(\ell^{2}-r_{+}^{2})^{2}}{(9r_{+}^{4}-2\ell^{2}r_{+}^{2}+\ell^{4})^{2}}. \tag{34}\] To compute \(F_{\rm spin}\) we focus on the first relation and solve for \(r_{+}(\beta)\). For general \(\beta\) and \(\ell\), we find four solutions for \(r_{+}\). Two are real and two form a complex conjugate pair, and they are all finite. In the flat space limit, one of the solutions for the black hole radius becomes the one found in the previous section with \(r_{+}\to\infty\). We see therefore that the background cosmological constant regulates the large \(r_{+}\) limit of the flat space black hole, without the need to go away from \(\beta\Omega=2\pi{\rm i}\). We find that all four solutions satisfy, for any \(\beta\) and \(\ell\), \[{\rm Re}\left[-\beta F_{\rm spin}\right]<0, \tag{35}\] and therefore thermal AdS always dominates the ensemble computing \(F_{\rm spin}\). The possible presence of other saddles which are neither thermal AdS nor black holes and support periodic fermion boundary conditions is not ruled out. Assuming this is not the case and the free energy is indeed of order one, controlled by thermal AdS, it suggests a large cancellation in \(F_{\rm spin}\) compared with \(F\), since there are actually a large number of black hole states in a given energy window. Next we consider the entropy \(S_{\rm spin}\), counting black hole states in the presence of a \((-1)^{\sf F}\) insertion. Using equation (34) we solve for \(r_{+}(E)\). Looking at the equation for \(E\), it is clear that it can be rewritten as a polynomial equation of eighth order in \(r_{+}\). There are therefore eight solutions (that cannot be written analytically). We find for positive values of the energy that the solutions always come in four pairs of complex conjugated values of \(r_{+}\).
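This counting can be reproduced numerically. The sketch below (our own rearrangement of the energy relation in (34), with the arbitrary choice \(\ell=E=1\)) builds the eighth-order polynomial and checks that its roots are all complex and organized in conjugate pairs:

```python
import numpy as np

ell, E = 1.0, 1.0

# eq. (34) rearranged: E*(9 r^4 - 2 l^2 r^2 + l^4)^2 + 4 l^2 r^3 (l^2 - r^2)^2 = 0
quartic = [9.0, 0.0, -2*ell**2, 0.0, ell**4]     # 9 r^4 - 2 l^2 r^2 + l^4
square = [1.0, 0.0, -2*ell**2, 0.0, ell**4]      # (l^2 - r^2)^2 expanded in r
poly = np.polyadd(E*np.polymul(quartic, quartic),
                  4*ell**2*np.polymul(square, [1.0, 0.0, 0.0, 0.0]))  # extra factor r^3
roots = np.roots(poly)

upper = [r for r in roots if r.imag > 1e-8]
lower = [r for r in roots if r.imag < -1e-8]
```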
Out of these four pairs, two pairs have negative real part of the action while two pairs have positive real part of the action. We denote the action of these solutions by \(I_{1}\) and \(I_{2}\), such that \({\rm Re}(I_{1,2})<0\) and \({\rm Im}(I_{1,2})>0\), and their complex conjugates. Out of the two pairs with negative real part of the action, one pair has the smallest real part, which we choose to be \(I_{2}\). In general, in the presence of two competing saddles with the same real part of the action we should add their contributions in the total partition function. The one-loop determinants arising from each saddle are important in writing the total contribution. In our case the two saddles are related by \(\mathrm{i}\rightarrow-\mathrm{i}\) and therefore the one-loop determinants are related by complex conjugation. The total answer is \[Z\approx Z_{\mathrm{1-loop}}\,e^{-I}+\overline{Z_{\mathrm{1-loop}}}\,e^{-\overline{I}}\approx|Z_{\mathrm{1-loop}}|e^{-\mathrm{Re}(I)}\,2\cos\left(\mathrm{Im}(I)+\mathrm{Arg}\,Z_{\mathrm{1-loop}}\right). \tag{36}\] This applies whether \(Z\) computes the partition function in the fixed energy or temperature ensemble. As a function of either, the real part of the action and the absolute value of the one-loop determinant determine the envelope of the partition function while the imaginary part of the action leads to rapid oscillations around the envelope. The overall phase of these oscillations requires knowledge of the one-loop determinants which we do not evaluate here, but the frequency to leading order is determined by \(\mathrm{Im}(I)\). A similar feature takes place in the context of the WKB approximation in quantum mechanics. This conclusion applies to both \(I_{1}\) and \(I_{2}\). The pair of solutions with minimal \(\mathrm{Re}(I)\), with an action \(I_{2}\), would naively be the one that dominates the ensemble. Nevertheless, we find it to have strange properties.
For example, it satisfies \(\mathrm{Re}(r_{+})<0\)6 and moreover if we continue this solution smoothly from \(\beta\Omega=2\pi\mathrm{i}\) down to \(\beta\Omega=0\) it does not become the familiar black hole that computes the partition function and instead remains complex. On the other hand the other solution with action \(I_{1}\) is less dominant but has a more reasonable behavior. For example, \(\mathrm{Re}(r_{+})>0\) and it becomes the black hole computing the partition function when continued to \(\beta\Omega=0\). Moreover, only this solution becomes the one studied in Section 2.1 in the flat space limit. For these reasons we conjecture that the integration contour in the gravitational path integral is such that the solutions \(I_{2}\) do not contribute and instead the pair of complex conjugate solutions with action \(I_{1}\) and \(\mathrm{Re}(r_{+})>0\) dominates the ensemble. Footnote 6: One can check that they do not satisfy the Kontsevich-Segal-Witten criterion [16; 22], see Appendix A for more discussion. To get some insight on these results we can find the black hole solution analytically in the large energy limit. Looking at equation (34) the solution for large \(E\) is such that the denominator vanishes \(9r_{\star}^{4}-2\ell^{2}r_{\star}^{2}+\ell^{4}=0\). (As opposed to the partition function without a \((-1)^{\sf F}\) insertion, large \(r_{+}\) leads to small energies.)

Figure 3: _Left:_ Plot of the envelope of \(S_{\mathrm{spin}}/S\) for a black hole in AdS\({}_{4}\) with \(\ell=10\) as a function of energy \(E\), in blue. It starts at \(1/2\) (flat space result), grows slightly at small energy and then decays to zero at large energy. _Right:_ Sketch - not to scale - of the rapid fluctuations as a function of energy in \(\mathrm{Tr}_{E}\left(-1\right)^{\sf F}\) in blue and the envelope (black) growing exponentially with \(\sqrt{E}\). Even though we can trust the frequency of oscillations, the overall phase is sensitive to quantum corrections.
This equation has four solutions \(r_{\star}=\pm\frac{\ell}{3}\sqrt{1\pm{\rm i}2^{3/2}}\), one for each choice of signs. Then we expand \(r_{+}=r_{\star}+\delta r\) and solve for \(\delta r\) as a function of \(E\). For large \(E\), \(\delta r\) will be small, so we can solve for \(\delta r\) perturbatively. The solution we propose to keep is the one that arises from a small fluctuation around \(r_{\star}=\frac{\ell}{3}\sqrt{1\pm{\rm i}2^{3/2}}\approx 0.47\ell\pm{\rm i}0.33\ell\). The other solutions have negative real part for \(r_{\star}\). In this limit the action is given by \[I_{1}=c\sqrt{\ell^{3}E}+2\pi{\rm i}\ell E+\ldots,\ \ \ \ c\approx 3.81, \tag{37}\] where the dots denote terms subleading in the large energy limit. The other saddle in the pair has action \(\overline{I}_{1}=c\sqrt{\ell^{3}E}-2\pi{\rm i}\ell E\). Since these two solutions have the same real part of the action they both contribute and should be added up in the path integral. The result for the entropy in the presence of \((-1)^{\sf F}\) is therefore \[S_{\rm spin}(E)\approx c\sqrt{\ell^{3}E}+\log\cos(2\pi\ell E) \tag{38}\] in the large energy limit. As explained above, knowing the actual phase of the oscillatory term requires including quantum effects, a problem we leave for future work. This result explains why the black hole states do not contribute to the canonical ensemble with the insertion of \((-1)^{\sf F}\). We find that within each energy window the difference between bosonic and fermionic states is large in magnitude, of order \(e^{c\sqrt{\ell^{3}E}}\).7 Nevertheless, whether most states are bosonic or fermionic depends also on the energy due to the oscillatory factor coming from the imaginary part of the action, making the sum over energies (involved in the fixed temperature ensemble) smaller than each individual term.
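The closed-form roots quoted above are easy to verify directly (a minimal check; \(\ell=1\) is arbitrary):

```python
import cmath

ell = 1.0
for s1 in (+1, -1):          # sign inside the square root
    for s2 in (+1, -1):      # overall sign
        r = s2*(ell/3)*cmath.sqrt(1 + s1*1j*2**1.5)
        # each root solves the quartic 9 r^4 - 2 l^2 r^2 + l^4 = 0
        assert abs(9*r**4 - 2*ell**2*r**2 + ell**4) < 1e-12

r_star = (ell/3)*cmath.sqrt(1 + 1j*2**1.5)
# numerically r_star is approximately 0.4714 + 0.3333i (in units of ell)
```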
Footnote 7: This expression looks like the Cardy formula in 2d and it would be interesting if there were a two dimensional interpretation of this result from the CFT side. It might very well be a coincidence. The contribution to the difference between fermionic and bosonic states, coming from the black hole studied here, is qualitatively consistent with [2]. The authors of [2] propose that without supersymmetry, large \(N\) adjoint QCD in four dimensions has a large cancellation between bosonic and fermionic states. The partition function with an insertion of \((-1)^{\sf F}\) is found to grow with energy at a rate slower than the partition function itself. This is similar to the prediction we find here for three dimensional conformal field theories.

#### 2.2.2 AdS\({}_{5}\)

In this section we make some comments regarding the generalization to five dimensional AdS, relevant to four dimensional conformal field theories. We consider a low energy theory with an action given by the Einstein-Hilbert term with cosmological constant \(\Lambda=-6/\ell^{2}\), with \(\ell\) being the five-dimensional AdS radius. For simplicity we consider the case without charge.
The black hole metric with mass and angular momentum is given by \[{\rm d}s^{2} = -\frac{\Delta_{\theta}(1+\frac{r^{2}}{\ell^{2}})}{\Xi_{a}\Xi_{b}}{\rm d}t^{2}+\frac{2m}{\rho^{2}}\left(\frac{\Delta_{\theta}{\rm d}t}{\Xi_{a}\Xi_{b}}-\omega\right)^{2}+\frac{\rho^{2}{\rm d}r^{2}}{\Delta_{r}}+\frac{\rho^{2}{\rm d}\theta^{2}}{\Delta_{\theta}} \tag{39}\] \[+\frac{r^{2}+a^{2}}{\Xi_{a}}\sin^{2}\theta{\rm d}\phi^{2}+\frac{r^{2}+b^{2}}{\Xi_{b}}\cos^{2}\theta{\rm d}\psi^{2},\] where we define \(\rho^{2}=r^{2}+a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta\) and the functions \[\Xi_{a}=1-\frac{a^{2}}{\ell^{2}},\ \ \ \ \Xi_{b}=1-\frac{b^{2}}{\ell^{2}},\ \ \ \ \ \omega=a\sin^{2}\theta\frac{\mathrm{d}\phi}{\Xi_{a}}+b\cos^{2}\theta\frac{\mathrm{d}\psi}{\Xi_{b}},\] \[\Delta_{r}=\frac{(r^{2}+a^{2})(r^{2}+b^{2})(1+\frac{r^{2}}{\ell^{2}})}{r^{2}}-2m,\ \ \Delta_{\theta}=1-\frac{a^{2}}{\ell^{2}}\cos^{2}\theta-\frac{b^{2}}{\ell^{2}}\sin^{2}\theta. \tag{40}\] The boundary has the topology of \(S^{1}\times S^{3}\) with \(S^{1}\) being thermal time. The spatial three-sphere is parametrized by \(\phi\) and \(\psi\), which are \(2\pi\) periodic, and \(\theta\in[0,\pi/2]\). When considering a solution with a Lorentzian interpretation one has to restrict \(a^{2}<\ell^{2}\) and \(b^{2}<\ell^{2}\). In the Euclidean path integral involved in computing the free energy or entropy, we allow these parameters to be complex. The two angular momenta \(J_{1}\) and \(J_{2}\) generate rotations around \(\phi\) and \(\psi\) respectively. The energy and angular momenta for this black hole are given by \[E=\frac{\pi m(2\Xi_{a}+2\Xi_{b}-\Xi_{a}\Xi_{b})}{4\Xi_{a}^{2}\Xi_{b}^{2}},\ \ J_{1}=\frac{\pi 2am}{4\Xi_{a}^{2}\Xi_{b}},\ \ J_{2}=\frac{\pi 2bm}{4\Xi_{a}\Xi_{b}^{2}}.
\tag{41}\] The thermodynamic potentials, the temperature and the two angular velocities conjugate to the two angular momenta \(J_{1}\) and \(J_{2}\), are derived from imposing smoothness at the Euclidean horizon and are given by \[\beta=\frac{2\pi r_{+}(r_{+}^{2}+a^{2})(r_{+}^{2}+b^{2})}{r_{+}^{4}[1+\ell^{-2}(2r_{+}^{2}+a^{2}+b^{2})]},\ \ \Omega_{1}=\frac{a(1+\frac{r_{+}^{2}}{\ell^{2}})}{(r_{+}^{2}+a^{2})},\ \ \ \Omega_{2}=\frac{b(1+\frac{r_{+}^{2}}{\ell^{2}})}{(r_{+}^{2}+b^{2})}. \tag{42}\] The area of the event horizon that appears in the Bekenstein-Hawking entropy is given by \(A=\frac{2\pi^{2}(r_{+}^{2}+a^{2})(r_{+}^{2}+b^{2})}{r_{+}\Xi_{a}\Xi_{b}}\). The on-shell action in the fixed temperature and angular velocity ensemble is \(I=\beta E-A/4-\beta\Omega_{1}J_{1}-\beta\Omega_{2}J_{2}\). In the fixed energy and angular velocity ensemble, the on-shell action is instead \(I=-A/4-\beta\Omega_{1}J_{1}-\beta\Omega_{2}J_{2}\). In the fixed temperature ensemble we can compute the free energy with no insertion of \((-1)^{\mathsf{F}}\). The saddle point contributing to the partition function in this case is a black hole with \(a=b=0\), and \(r_{+}\) is computed as a function of the inverse temperature \(\beta\). The solution is only real for \(\beta<\pi\ell/\sqrt{2}\). Moreover the black hole solution dominates the ensemble only for even higher temperatures \(\beta<2\pi\ell/3\). In the range \(2\pi\ell/3<\beta\) thermal AdS dominates the ensemble, and therefore \(\beta_{\rm HP}=2\pi\ell/3\) marks the Hawking-Page transition corresponding to the confinement-deconfinement transition of the dual gauge theory [6]. At very high temperatures, the size of the black hole is \(r_{+}\approx\ell^{2}\pi/\beta\) and the free energy is large and negative, \[-\beta F\approx\frac{\ell^{6}\pi^{5}}{8\beta^{3}} \tag{43}\] and in a fixed energy ensemble the entropy grows as \(S(E)\sim E^{3/4}\). We now move on to the computation of \(F_{\rm spin}\) and \(S_{\rm spin}\).
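Before doing so, the \(a=b=0\) statements above can be cross-checked with a short numerical sketch (assuming \(G_{N}=1\) units with the standard Schwarzschild-AdS\({}_{5}\) entropy \(S=\pi^{2}r_{+}^{3}/2\)); it verifies the Hawking-Page temperature \(2\pi\ell/3\) and the high-temperature free energy (43):

```python
import math

ell = 1.0

def on_shell(rp, ell):
    # a = b = 0: m from Delta_r(r_+) = 0, E from (41), beta from (42)
    m = rp**2*(1 + rp**2/ell**2)/2
    E = 3*math.pi*m/4
    beta = 2*math.pi*ell**2*rp/(ell**2 + 2*rp**2)
    S = math.pi**2*rp**3/2                 # Bekenstein-Hawking entropy
    return S - beta*E, beta                # (-I, beta) with I = beta*E - S

# the action changes sign at r_+ = ell, i.e. at beta_HP = 2*pi*ell/3
mI_hp, beta_hp = on_shell(ell, ell)

# high-temperature limit, eq. (43): -beta*F -> pi^5 l^6/(8 beta^3)
mI_hot, beta_hot = on_shell(50.0*ell, ell)
ratio = mI_hot/(math.pi**5*ell**6/(8*beta_hot**3))
```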
In this case (and really for black holes in any number of dimensions bigger than four) we have more freedom to decide how to implement the \((-1)^{\mathsf{F}}\) since there is more than one angular momentum. In our definition \(J_{1}\) and \(J_{2}\) are two generators that each rotate along an \(\mathbb{R}^{2}\subset\mathbb{R}^{4}\). From the point of view of the field theory living in the four dimensional boundary of AdS\({}_{5}\), Lorentz invariance on \(\mathbb{R}^{4}\) implies that \(J_{1}\) and \(J_{2}\) are either both half-integer (corresponding to fermionic states) or both integer (corresponding to bosonic states). In other words \(2J_{1}=2J_{2}\) mod \(2\mathbb{Z}\). Therefore we could implement the fermion parity operator either by \((-1)^{\sf F}=e^{2\pi{\rm i}J_{1}}\) or \((-1)^{\sf F}=e^{2\pi{\rm i}J_{2}}\). For concreteness, we choose to use \(J_{1}\) and therefore we can set \(\Omega_{2}=J_{2}=0\) on our black hole. There are other notions of fermion parity operator in five dimensions. The group of rotations of \(S^{3}\) can be written as \({\rm SO}(4)\approx{\rm SU}(2)_{L}\times{\rm SU}(2)_{R}\) and the two \({\rm SU}(2)\) factors are generated by \(J_{L}=\frac{J_{1}+J_{2}}{2}\) and \(J_{R}=\frac{J_{1}-J_{2}}{2}\). Therefore one could also define other parity operators such as \((-1)^{\sf F_{L}}=e^{2\pi{\rm i}J_{L}}\) and \((-1)^{\sf F_{R}}=e^{2\pi{\rm i}J_{R}}\). The free energy and entropy in the presence of such insertions can be computed in a similar way, but they would not be probing the quantum statistics of the black hole microstates, which requires the \((-1)^{\sf F}\) insertion of the previous paragraph. After these clarifications we implement the insertion of \((-1)^{\sf F}\) by setting \(\beta\Omega_{1}=2\pi{\rm i}\) and \(\beta\Omega_{2}=0\) for concreteness.
The second condition implies \(b=0\) and the solution for \(a={\rm i}{\sf a}\) is \[\frac{\beta\Omega_{1}}{2\pi{\rm i}}=\frac{{\sf a}(r_{+}^{2}+\ell^{2})}{2r_{+}^{3}-{\sf a}^{2}r_{+}+\ell^{2}r_{+}}=1,\ \ \Rightarrow\ \ {\sf a}=-\frac{\ell^{2}}{r_{+}}-2r_{+}. \tag{44}\] (There is also another solution with \({\sf a}=r_{+}\) and with \(\beta=E=0\) which we ignore.) With (44), the temperature and energy are given by \[\beta=\frac{2\pi\ell^{2}(3r_{+}^{2}+\ell^{2})}{2r_{+}^{3}+\ell^{2}r_{+}},\ \ \ \ E=-\frac{\pi(3r_{+}^{2}+\ell^{2})(4r_{+}^{4}+7\ell^{2}r_{+}^{2}+\ell^{4})}{8(4r_{+}^{2}+\ell^{2})^{2}}. \tag{45}\] In the fixed temperature ensemble we can solve \(r_{+}(\beta)\) and compute the free energy \(-\beta F_{\rm spin}=-\beta E+A/4+\beta\Omega_{1}J_{1}\). In a fixed energy ensemble we solve instead \(r_{+}(E)\) and \(S_{\rm spin}=A/4+\beta\Omega_{1}J_{1}\). In the fixed temperature ensemble, we find numerically that all solutions for \(r_{+}\) lead to a free energy \({\rm Re}[-\beta F_{\rm spin}]<0\) and therefore thermal AdS\({}_{5}\) dominates the ensemble for \(F_{\rm spin}\) at all temperatures. There is no Hawking-Page transition. This is similar to the results for AdS\({}_{4}\), and it is also qualitatively consistent with [2]. From the boundary side the Hawking-Page transition present in the free energy is interpreted as a confinement-deconfinement phase transition. On the other hand the free energy in the presence of \((-1)^{\sf F}\) has no phase transition and is always in the confined phase regardless of temperature. In the fixed energy ensemble we find numerically that \({\rm Re}[S_{\rm spin}]=0\) for all the solutions for \(r_{+}\). Therefore, as opposed to the result in four dimensions, \(S_{\rm spin}\) is controlled by the difference between bosonic and fermionic low energy excitations around thermal AdS, and not by a black hole solution.
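The algebra behind (44) and (45) can be cross-checked numerically (a sketch; the values \(\ell=1\), \(r_{+}=0.7\) are arbitrary):

```python
import math

ell, rp = 1.0, 0.7

fa = -ell**2/rp - 2*rp                       # eq. (44)

# the left-hand side of eq. (44) evaluates to 1, i.e. beta*Omega_1 = 2*pi*i
lhs = fa*(rp**2 + ell**2)/(2*rp**3 - fa**2*rp + ell**2*rp)

# temperature from eq. (42) with a = i*fa, b = 0, against the closed form in (45)
a = 1j*fa
beta_42 = 2*math.pi*rp*(rp**2 + a**2)*rp**2/(rp**4*(1 + (2*rp**2 + a**2)/ell**2))
beta_45 = 2*math.pi*ell**2*(3*rp**2 + ell**2)/(2*rp**3 + ell**2*rp)
```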
In this case we expect any type of bulk spin defect that supports periodic fermions to dominate as long as it has large enough entropy. We will come back to this later in the discussion section.

#### 2.2.3 AdS\({}_{3}\)

We now consider the case of black holes in asymptotically AdS\({}_{3}\) spaces. We focus on theories with a gravity sector described by the Einstein-Hilbert action with a negative cosmological constant, with the AdS\({}_{3}\) length given by \(\ell\). The black hole solution in this theory is given by the BTZ metric \[\mathrm{d}s^{2}=-f\mathrm{d}t^{2}+\ell^{2}\frac{\mathrm{d}r^{2}}{f}+r^{2}(\mathrm{d}\phi-\frac{r_{-}r_{+}}{r^{2}}\mathrm{d}t)^{2},\quad f=\frac{(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})}{r^{2}}. \tag{46}\] This black hole carries energy and angular momentum, given by \[E=\frac{\ell}{8}+\frac{r_{+}^{2}+r_{-}^{2}}{8\ell},\quad J=\frac{r_{+}r_{-}}{4\ell}. \tag{47}\] To be consistent with the other sections, we define the energy such that vacuum AdS has \(E=0\), which is an unconventional choice in the context of 2d CFTs. The corresponding inverse temperature and angular velocity are given by \[\beta=\frac{2\pi\ell r_{+}}{r_{+}^{2}-r_{-}^{2}},\quad\quad\Omega=\frac{r_{-}}{r_{+}}. \tag{48}\] The area of the horizon is \(A=2\pi r_{+}\). It is convenient to work with the left- and right-moving temperatures, which are \(\beta_{L}=\beta+\beta\Omega\) and \(\beta_{R}=\beta-\beta\Omega\). The on-shell action of the BTZ black hole is given, in an ensemble of fixed left- and right-moving temperatures, by \(I=\beta E-A/4-\beta\Omega J\), while the action of vacuum AdS is just zero. The contribution of the BTZ black hole in a fixed temperature and angular velocity ensemble to the free energy is \[-\beta F=-\frac{\beta_{L}c}{24}+\frac{c}{24}\frac{4\pi^{2}}{\beta_{L}}-\frac{\beta_{R}c}{24}+\frac{c}{24}\frac{4\pi^{2}}{\beta_{R}}, \tag{49}\] where \(c=3\ell/2\).
We are working with units where \(G_{N}=1\) and therefore the semiclassical limit corresponds to large AdS radius in Planck units, or \(c\gg 1\). We shall begin by considering the free energy without and with the \((-1)^{\mathsf{F}}\) insertion. We always work in the NS sector, meaning that fermions are antiperiodic along the spatial circle.8 If \(\beta\Omega=0\) or \(2\pi\mathrm{i}\) we obtain respectively Footnote 8: Notice that otherwise the trick explained in figure 1 to produce black hole solutions with periodic fermions in the thermal circle would not work. See the end of the section for comments on the case of the elliptic genus. \[-\beta F=-\frac{\beta c}{12}+\frac{c}{12}\frac{4\pi^{2}}{\beta},\quad\ -\beta F_{\mathrm{spin}}=-\frac{\beta c}{12}+\frac{c}{12}\frac{4\pi^{2}\beta}{\beta^{2}+4\pi^{2}}. \tag{50}\] From the expression for \(F\) we see that the black hole only contributes for \(\beta<2\pi\), when compared with thermal AdS\({}_{3}\). Instead, \(-\beta F_{\mathrm{spin}}<0\) for any value of the temperature. This means that thermal AdS with periodic fermions always dominates over the rotating black hole introduced here, in the canonical ensemble. This situation is similar to AdS in other dimensions. When considering the gravitational path integral in asymptotically AdS\({}_{3}\) spaces one needs to sum over \(\mathrm{SL}(2,\mathbb{Z})\) images where different cycles are contractible at the horizon, but which satisfy the same boundary conditions [23]. Since we are working with theories with fermions we need to keep track of the spin structure and therefore the sum is only over the subgroup of \(\mathrm{SL}(2,\mathbb{Z})\) that does not modify the spin structure.9 For the thermal trace with \(\Omega=0\) it is known that the BTZ black hole dominates the ensemble (or, at low enough temperatures, thermal AdS).
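The continuation from (49) to (50) is elementary but worth making explicit; the sketch below checks both expressions and that \(-\beta F_{\mathrm{spin}}\) comes out real and negative (the values \(c=3/2\), \(\beta=2\) are arbitrary):

```python
import math

def minus_beta_F(bL, bR, c):
    # eq. (49) for general, possibly complex, left/right-moving temperatures
    return -bL*c/24 + (c/24)*4*math.pi**2/bL - bR*c/24 + (c/24)*4*math.pi**2/bR

c, beta = 1.5, 2.0

# Omega = 0: beta_L = beta_R = beta reproduces the first expression in (50)
f = minus_beta_F(beta, beta, c)
f_expected = -beta*c/12 + (c/12)*4*math.pi**2/beta

# beta*Omega = 2*pi*i: beta_L = beta + 2*pi*i, beta_R = beta - 2*pi*i gives the second
fs = minus_beta_F(beta + 2j*math.pi, beta - 2j*math.pi, c)
fs_expected = -beta*c/12 + (c/12)*4*math.pi**2*beta/(beta**2 + 4*math.pi**2)
```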
For \(\Omega=2\pi\mathrm{i}\) we have checked that there is no other \(\mathrm{SL}(2,\mathbb{Z})\) image (other than thermal AdS) that respects the spin structure and produces a more dominant contribution than the rotating black hole. This is not obvious since, for example, at \(\Omega=\pi\mathrm{i}\) there is an \(\mathrm{SL}(2,\mathbb{Z})\) black hole whose contribution is bigger than either thermal AdS or the analog of our rotating black hole, and actually dominates the ensemble, see discussion in [25].10 This does not happen for our observable involving \(\Omega=2\pi\mathrm{i}\). Footnote 9: In the NS sector this involves a sum over the subgroup \(\Gamma_{\sigma}/\mathbb{Z}\subset\mathrm{SL}(2,\mathbb{Z})\). The group \(\Gamma_{\sigma}\) is of the form \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) with both \(a+b\) and \(c+d\) odd [24], and we mod out by the group generated by \(\begin{pmatrix}1&2\\ 0&1\end{pmatrix}\). Footnote 10: The authors of [25] describe an orbifold of the BTZ black hole that makes the dominant contribution. We checked that the action of this orbifold is precisely one of the \(\mathrm{SL}(2,\mathbb{Z})\) images of the rotating black hole with \(\Omega=\mathrm{i}\pi\). We now analyze the difference between bosonic and fermionic black holes in the fixed energy ensemble. The result for the total black hole entropy is the well-known Cardy formula \(S(E)=2\pi\sqrt{\frac{c}{3}(E-\frac{c}{12})}\) for \(E\geq\frac{c}{12}\). (Only for \(E\geq c/6\) does the BTZ black hole also dominate the fixed temperature ensemble [26].) What is the gravity prediction for \(S_{\mathrm{spin}}\)? The procedure is very similar to the one implemented already so we will be brief. Setting \(\beta\Omega=2\pi\mathrm{i}\), we find eight solutions for \(r_{+}\) given \(E\): four pairs of complex conjugated ones. Two pairs have \(\mathrm{Re}[S_{\mathrm{spin}}]\leq 0\), and one pair has \(\mathrm{Re}[S_{\mathrm{spin}}]>0\).
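In this low-dimensional case the solutions can be written down almost explicitly. Imposing \(\beta\Omega=2\pi\mathrm{i}\) on (47) and (48) gives \(\ell r_{-}=\mathrm{i}(r_{+}^{2}-r_{-}^{2})\), so at fixed \(E\) the quantity \(r_{-}\) obeys the quadratic \(2r_{-}^{2}-\mathrm{i}\ell r_{-}-(8\ell E-\ell^{2})=0\). The sketch below (our own rearrangement, with \(\ell=1\) and a large energy \(E=1000\)) solves it and evaluates \(S_{\mathrm{spin}}=A/4+\beta\Omega J\) for the pair with positive real part:

```python
import cmath, math

ell, E = 1.0, 1000.0

disc = cmath.sqrt(64*ell*E - 9*ell**2)
S_vals = []
for sign in (+1, -1):
    rm = (1j*ell + sign*disc)/4            # roots of 2 r_-^2 - i l r_- - (8 l E - l^2) = 0
    rp = cmath.sqrt(rm**2 - 1j*ell*rm)     # from l r_- = i (r_+^2 - r_-^2); rp -> -rp flips S
    S = math.pi*rp/2 + 2j*math.pi*rp*rm/(4*ell)   # A/4 + beta*Omega*J with A = 2 pi r_+
    S_vals.append(S)
    # consistency: the pair (rp, rm) reproduces the input energy via eq. (47)
    assert abs(ell/8 + (rp**2 + rm**2)/(8*ell) - E) < 1e-6*E
```

At large \(E\) the real part approaches \(\pi\sqrt{\ell E}=\sqrt{2}\pi\sqrt{cE/3}\) (with \(c=3\ell/2\)) and the imaginary part \(\pm 2\pi E\).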
The pair with positive entropy gives the leading contribution, but its two members come with imaginary parts of opposite sign, leading to an oscillatory term similar to the one in AdS\({}_{4}\). The result is \[\mathrm{Re}\left[S_{\mathrm{spin}}(E)\right]=\sqrt{2}\pi\sqrt{\frac{c}{3}E}=\frac{1}{\sqrt{2}}S(E),\hskip 14.226378ptE\gg c, \tag{51}\] and \(\mathrm{Im}\left[S_{\mathrm{spin}}\right]=\pm 2\pi E\). Therefore even though we expect the entropy in the presence of \((-1)^{\mathsf{F}}\) to be large, the cancellation found in the fixed temperature ensemble is due to the rapid oscillations as a function of energy. This situation is completely analogous to AdS\({}_{4}\). We would like to emphasize that \((-1)^{\mathsf{F}}\) is defined as \(e^{2\pi\mathrm{i}J}\) and therefore counts the difference between black hole microstates with integer or half-integer scaling dimensions. This is different than the elliptic genus usually computed in two dimensional supersymmetric theories. The elliptic genus counts the difference between states with even vs. odd charges with respect to a U(1) gauge field (which is normally the Cartan of a bigger non-Abelian group such as SU(2)). When AdS\({}_{3}\) arises from string theory, this charge comes from angular momentum on a compact direction and therefore can also be interpreted as a different version of a fermion parity operator, similar to the discussion about AdS\({}_{5}\). Since in this case the smoothness of the choice of spin structure is due to the presence of a background gauge field (and not 3d rotation), the procedure of implementing the fermion parity operator through a complex chemical potential works also in the R sector.

## 3 Contribution from wormholes

One might wonder whether wormholes contribute to the quantity \(\rho_{\rm spin}(E)\equiv{\rm Tr}_{E}(-1)^{\sf F}\).
To be more precise, what we mean is that the semiclassical geometries discussed in the previous sections only give a coarse-grained estimate of the quantity \(\rho_{\rm spin}(E)\), which varies smoothly with the energy \(E\) apart from the oscillations on the scale of \(1/\ell\). However, in a microscopic theory, one expects that on top of the smoothly varying average, \(\rho_{\rm spin}(E)\) also contains a fluctuating piece that depends sensitively on the precise energy window one chooses. We can quantify the typical size of the fluctuating piece by studying the connected correlator \[\langle\rho_{\rm spin}(E)\rho_{\rm spin}(E^{\prime})\rangle_{\rm c}\equiv\langle\rho_{\rm spin}(E)\rho_{\rm spin}(E^{\prime})\rangle-\langle\rho_{\rm spin}(E)\rangle\langle\rho_{\rm spin}(E^{\prime})\rangle. \tag{3.1}\] Here the bracket \(\langle\cdot\rangle\), in a fixed microscopic theory, can be simply viewed as averaging over a small range of energy \(E\) while keeping \(E-E^{\prime}\) fixed. In a theory with only bosons, the correlator (3.1) is nothing but the correlator of the full density of states \(\rho(E)\). Here we are interested in theories that have fermions, where the bosonic and fermionic states contribute to \(\rho_{\rm spin}(E)\) with different signs. Following the insight in [8; 27; 28], the connected contribution to the average (3.1) can be quantified by looking at the contribution from wormholes in the gravitational path integral. In supersymmetric theories it has been argued in [5] that wormholes do not contribute to the square of the index. However, there is no analogous argument in the non-supersymmetric theories we focus on here.
In this section, we discuss a universal wormhole contribution to a closely related quantity \[\langle|Y_{E,\rm spin}(T)|^{2}\rangle_{\rm c}\equiv\langle|{\rm Tr}_{E}\,(-1)^{\sf F}e^{-{\rm i}HT}|^{2}\rangle_{\rm c} \tag{3.2}\] where we introduced a factor of unitary time evolution \(e^{-{\rm i}HT}\) into the trace in addition to the \((-1)^{\sf F}\) factor. Evidently, this is the analogue of the spectral form factor [7; 8], which is the same quantity when we do not have the \((-1)^{\sf F}\) insertion, and we will therefore call it the spin spectral form factor. (A similar quantity was introduced for supersymmetric theories in [29].)

Figure 4: An illustration of the double cone wormhole geometry, where we are only drawing the radial and time directions.

The wormhole contribution to the ordinary spectral form factor \[\langle|Y_{E}(T)|^{2}\rangle_{\rm c}\equiv\langle|{\rm Tr}_{E}\,e^{-{\rm i}HT}|^{2}\rangle_{\rm c} \tag{3.3}\] was discussed in [8]. In particular, in the microcanonical ensemble where one focuses on states around energy \(E\), the gravity solution is the double cone wormhole, which exists universally in any theory containing black holes. To construct it, one starts with a two-sided black hole geometry with energy \(E\) and performs a quotient in time by period \(T\), resulting in a geometry of the form in figure 4. The resulting geometry has a zero mode corresponding to the relative time shift between the two sides, which leads to a linear \(T\) dependence in the spectral form factor. The geometry is naively singular since the time circle shrinks to zero size at the bifurcation surface, but [8] proposed a simple prescription to regulate the geometry by deforming the radial coordinate slightly into the complex plane. After this deformation, the time circle is non-shrinking everywhere in the geometry. Another feature of the double cone is that its classical action is zero due to the cancellation between the left and right sides.
Since the action vanishes, black holes carrying different angular momentum and possible gauge charges contribute equally (the coefficient multiplying \(T\) in the one-loop determinant is charge independent as well), and one should sum over them. In the case without any symmetry, where the statistics of eigenvalues follow Gaussian unitary ensemble (GUE) universality, we have11 Footnote 11: Any unitary Lorentz invariant theory comes with an additional symmetry that modifies the statement here. See footnote 13 for more discussion. \[\langle|Y_{E}(T)|^{2}\rangle_{\rm c}\approx\int{\rm d}E\,\frac{T}{2\pi}, \tag{3.4}\] where the integral over \(E\) runs over the energy window involved in defining the spectral form factor. To understand the computation of (3.2), we first discuss an analogous situation where we have a U(1) gauge symmetry, which is also interesting in its own right.

### Analogue in the case with a U(1) gauge symmetry

In a gravity theory with a U(1) gauge symmetry, we can consider an analogue of the quantity (3.2) as \[\langle|Y_{E,\mu}(T)|^{2}\rangle_{\rm c}\equiv\langle|{\rm Tr}_{E}\,e^{-{\rm i}(H-\mu Q)T}|^{2}\rangle_{\rm c},\quad\mu\in\mathbb{R} \tag{3.5}\] namely we weight the states carrying charge \(Q\) by a pure phase factor \(e^{{\rm i}\mu QT}\). In gravity, the way to implement this phase factor is to impose a non-zero boundary condition for the gauge field at infinity, such that the holonomy of the gauge field satisfies \[e^{{\rm i}\int{\rm d}t\,A_{t}}|_{\rm bdry}=e^{{\rm i}\mu T}. \tag{3.6}\] In the gauge that \(\partial_{t}A_{t}=0\), (3.6) determines the boundary value of the gauge field up to constant shifts \[A_{t}|_{\rm bdry}=\mu+\frac{2\pi n}{T},\quad n\in\mathbb{Z}. \tag{3.7}\] Let us focus on the \(n=0\) case in the following. The sum over the shifts by \(n\) is important for ensuring the quantization condition for the U(1) charge, but it is not the main focus here (see Appendix C for an explicit example of how it works).
We have \(A_{t}|_{\rm bdry}=\mu\) and we look for the analogous double cone geometry satisfying the boundary condition. The immediate guess would be to take the Lorentzian two-sided charged black hole solution, with energy given by \(E\) and charge \(q\) set by the chemical potential \(\mu\) through the usual relation in black hole thermodynamics. For example, in four-dimensional flat spacetime, we would be tempted to take \[q=4\pi\mu r_{+} \tag{3.8}\] where \(r_{+}\) is the radius of the outer horizon. However, even though what is described above will be a valid double cone geometry that contributes to (3.5), it is only one out of an entire family of solutions. In fact, for the double cone geometry, even after we fixed the boundary value for the gauge field, the charge \(q\) can still freely vary instead of being fixed by (3.8) or analogous formulae in other dimensions. To understand this point, we can first ask how (3.8) could be derived if one were to consider the standard black hole thermodynamics. There the geometry in question is a Euclidean black hole, with the classical solution for the gauge field being \[A_{t_{\rm E}}=-\mu+\frac{q}{4\pi r} \tag{3.9}\] and the condition (3.8) comes from requiring that \(A_{t_{\rm E}}|_{r=r_{+}}=0\), namely the gauge field configuration is smooth at the horizon, where the time circle shrinks to a point. However, for the double cone geometry, the crucial difference is that the time circle never shrinks in the geometry, so we no longer have the constraint \(A_{t}|_{r=r_{+}}=0\). Said differently, the coefficient of the \(1/r\) piece in the gauge field - the physical charge \(q\) - is not determined by \(\mu\) and can freely vary. Physically, the fact that we no longer have a map between \(q\) and \(\mu\) for the double cone geometry is reasonable based on the expectation that no particular value of charge \(q\) would dominate (3.5).
The same phenomenon takes place in the original double cone with respect to the mass of the black hole: the time cycle is not contractible when periodically identifying time on the Lorentzian geometry. Therefore the mass is not fixed by the asymptotic boundary length and the double cone contributes equally at all energies. Indeed, the double cone geometry has zero classical action regardless of the value of \(q\), so black holes with all possible charges \(q\) contribute equally to (3.5) at the classical level. In other words, we have \[\langle|Y_{E,\mu}(T)|^{2}\rangle_{\rm c}\approx\int{\rm d}E\int{\rm d}q\,\frac{T}{2\pi}. \tag{3.10}\] For the specific case of JT gravity coupled to a U(1) gauge field, the wormhole contribution has been studied in [30]. In Appendix C we review the calculation in JT gravity. Above we are simply restating the main features they found in a general setting. See also [31] for discussion on closely related wormhole solutions with a U(1) gauge field.

### Wormhole contribution to the spin spectral form factor

The lesson in the U(1) symmetry case can be generalized straightforwardly to the spin spectral form factor (3.2). To implement the \((-1)^{\sf F}\), similar to what we discussed in Section 2, we turn on an angular potential \(\Omega=2\pi/T\) at infinity. In the discussion of Section 2, where we were studying black hole solutions with a single boundary, it was important that the black hole had particular values of angular momentum \(J\) determined by \(\Omega\) such that the spin structure is smooth at the horizon. However, in the case of a wormhole geometry, since the time circle is never shrinking, we do not have any constraints on the angular momentum \(J\) of the black hole. This is the analogue of the phenomenon that the charge is not fixed by the chemical potential in the previous section. In conclusion, the wormhole contributions to the spin spectral form factor are simple.
One simply considers all possible black hole geometries with energy \(E\), including those with different angular momenta \(J\). The only difference from the ordinary double cone is that we have a periodic boundary condition in time for fermionic fields. This change in boundary condition is invisible at the classical level, but will affect the one-loop fluctuations around the geometry. However, given that the one-loop determinant of the matter fields around the double cone geometries goes to one at sufficiently late time [8; 32],12 we expect that at large enough \(T\) we have Footnote 12: On the flip side, the matter fluctuations are important at early time, so what we said in this section does not generalize easily to \(\langle|\text{Tr}_{E}(-1)^{\mathsf{F}}|^{2}\rangle_{c}\). \[\langle|\text{Tr}_{E}\,(-1)^{\mathsf{F}}e^{-\text{i}HT}|^{2}\rangle_{\text{c}}\approx\langle|\text{Tr}_{E}\,e^{-\text{i}HT}|^{2}\rangle_{\text{c}}. \tag{3.11}\] This is the main result of this section. 

### Interpretation in terms of random matrix universality 

In this section we give an interpretation of the fact that at late times the spectral form factor is the same with or without the \((-1)^{\mathsf{F}}\) insertion, see equation (3.11), in terms of late time quantum chaos and random matrix universality. To simplify the discussion, let us assume that the only symmetry of the random matrix ensemble is a \(\mathbb{Z}_{2}\) symmetry generated by \((-1)^{\mathsf{F}}\). This implies that the Hilbert space can be decomposed into a bosonic and a fermionic sector \(\mathcal{H}=\mathcal{H}_{b}\oplus\mathcal{H}_{f}\) such that both \((-1)^{\mathsf{F}}\) and the Hamiltonian can be written as \(2\times 2\) blocks: \[(-1)^{\mathsf{F}}=\left(\begin{array}{c|c}1&0\\ \hline 0&-1\end{array}\right),\qquad H=\left(\begin{array}{c|c}H_{b}&0\\ \hline 0&H_{f}\end{array}\right).
\tag{3.12}\] The appropriate random matrix ensemble is therefore to take \(H_{b}\) and \(H_{f}\) to be statistically independent random matrices, assuming the absence of time reversal symmetry. This situation was studied in the context of JT gravity, and in particular the connection with wormholes, in [30; 33]. The fact that the two sectors are statistically independent implies in particular that \[\left\langle\text{Tr}_{\mathcal{H}_{b}}e^{-\beta_{1}H}\;\text{Tr}_{\mathcal{H}_{f}}e^{-\beta_{2}H}\right\rangle_{\text{c}}=0, \tag{3.13}\] which we will use below. Finally, we have not yet specified which ensembles \(H_{b}\) and \(H_{f}\) are drawn from. As explained in section 4 of [34], the CPT theorem implies that the bosonic sector is in the orthogonal ensemble (GOE) while the fermionic sector is in the symplectic ensemble (GSE).13 This will not significantly affect the results below. Footnote 13: Following the notation of [35], the CPT theorem says that the operator RT is a symmetry of any unitary Lorentz invariant theory. R generates a transformation that reverses the sign of one (any) coordinate and T is the time reversal operator, defined such that it anticommutes with conserved charges. In Euclidean signature RT generates a rotation of \(180^{\circ}\) in a plane determined by R and the time direction. The fact that for a full rotation \((\textsf{RT})^{2}=(-1)^{\textsf{F}}\) then implies the appropriate ensemble for the bosonic and fermionic sectors. Let us first evaluate the partition function. Decomposing the Hilbert space into a bosonic and a fermionic sector implies that for each realization of the Hamiltonian we can write \[\operatorname{Tr}_{\mathcal{H}}e^{-\beta H}=\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta H}+\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta H}.
\tag{3.14}\] The connected contribution to the product of two partition functions is therefore given by \[\left\langle\operatorname{Tr}_{\mathcal{H}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}}=\left\langle\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}}+\left\langle\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}} \tag{3.15}\] where we used that the mixed terms vanish due to the statistical independence of the sectors. The right hand side is a universal quantity in the large rank limit of random matrices, but we will not need its details below. Next we evaluate the partition function with a \((-1)^{\textsf{F}}\) insertion. In terms of the decomposition into bosonic and fermionic states, the answer for each realization is now \[\operatorname{Tr}_{\mathcal{H}}\left(-1\right)^{\textsf{F}}e^{-\beta H}=\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta H}-\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta H}.
\tag{3.16}\] When computing the connected contribution to the product of two partition functions, the minus sign that appears above is irrelevant, since the sectors are statistically independent: \[\left\langle\operatorname{Tr}_{\mathcal{H}}(-1)^{\textsf{F}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}}(-1)^{\textsf{F}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}} = \left\langle\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta_{1}H}\operatorname{Tr}_{\mathcal{H}_{b}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}} \tag{3.17}\] \[+\left\langle\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta_{1}H}\operatorname{Tr}_{\mathcal{H}_{f}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}}.\] Therefore, regardless of the details of the right hand side, we find that the answer is identical with or without the \((-1)^{\textsf{F}}\) insertion: \[\left\langle\operatorname{Tr}_{\mathcal{H}}(-1)^{\textsf{F}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}}(-1)^{\textsf{F}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}}=\left\langle\operatorname{Tr}_{\mathcal{H}}e^{-\beta_{1}H}\,\operatorname{Tr}_{\mathcal{H}}e^{-\beta_{2}H}\right\rangle_{\mathrm{c}} \tag{3.18}\] After analytically continuing in \(\beta\) and going to the microcanonical ensemble, this result is equivalent to equation (3.11), derived as a consequence of the evaluation of the double cone contribution to the gravitational path integral. Either term on the right hand side of (3.15) has the same late time behavior as the double cone. First, it is a quantity that does not grow with the rank of the matrix, and correspondingly the double cone on-shell action vanishes. Second, it has a late time ramp with a linear growth in \(T\). So far we incorporated the \((-1)^{\textsf{F}}\) as the generator of a \(\mathbb{Z}_{2}\) symmetry, but did not take into account the fact that it is embedded in a bigger group of rotations in higher dimensions.
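The block structure (3.12) and the per-realization identities (3.14) and (3.16) can be illustrated numerically. A minimal sketch (we use complex Hermitian blocks rather than the GOE/GSE blocks argued for above; the independence argument is unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # dimension of each sector (illustrative)

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

# Statistically independent bosonic and fermionic blocks, as in (3.12)
Hb, Hf = random_hermitian(n), random_hermitian(n)
H = np.block([[Hb, np.zeros((n, n))], [np.zeros((n, n)), Hf]])
F = np.diag([1.0] * n + [-1.0] * n)  # (-1)^F

T = 5.0
Zb = np.exp(-1j * np.linalg.eigvalsh(Hb) * T).sum()
Zf = np.exp(-1j * np.linalg.eigvalsh(Hf) * T).sum()

# Evolution operator built from the full block-diagonal Hamiltonian
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * T)) @ V.conj().T

# Per-realization identities (3.14) and (3.16), analytically continued
assert np.isclose(np.trace(U), Zb + Zf, atol=1e-8)
assert np.isclose(np.trace(F @ U), Zb - Zf, atol=1e-8)
```

The sign only enters through the cross term, \(|Z_{b}-Z_{f}|^{2}-|Z_{b}+Z_{f}|^{2}=-4\,\mathrm{Re}(Z_{b}\bar{Z}_{f})\), whose ensemble average vanishes by (3.13); this is the numerical content of the step from (3.16) to (3.17).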
At late times when the double cone becomes reliable, this approximation for the random matrix ensemble was good enough to reproduce the relation (3.11). Nevertheless, the model so far does not reproduce the correct prefactor of the linear-in-\(T\) term on either side of equation (3.11). Equation (3.15) implies that the coefficient of the linear-in-\(T\) term for a theory with a \(\mathbb{Z}_{2}\) symmetry is twice the result for a theory without such a symmetry. According to the discussion in the previous section, the spectral form factor gets multiplied not by \(2\) but by a divergent factor given by the sum over the possible angular momenta of the black hole used to generate the double cone. This factor is divergent but can be regulated by working in an ensemble where the angular momentum is specified within a finite window (which would be the natural way to generalize fixing an energy window that regulates the integral over energies in (3.4)). A more accurate ensemble incorporates the rotation group \(G\) (for example \(G=\mathrm{SU}(2)\) in four dimensions or \(G=\mathrm{SU}(2)\times\mathrm{SU}(2)\) in five dimensions). The Hilbert space is decomposed into irreducible representations of \(G\). For example the case relevant to four dimensions is \(\mathcal{H}=\oplus_{J=0,\frac{1}{2},1,\ldots}\mathcal{H}_{J}\), where each \(\mathcal{H}_{J}\) represents an irreducible \((2J+1)\)-dimensional representation of \(G=\mathrm{SU}(2)\).14 Since the Hamiltonian commutes with the generators of \(G\), it is block diagonal according to the decomposition of the Hilbert space into sectors transforming in fixed representations, and consists of statistically independent random matrices in each sector. We can embed the \(\mathbb{Z}_{2}\) group generated by \((-1)^{\mathsf{F}}\) inside \(G\).
Moreover, now the \(\mathsf{CPT}\) theorem implies the existence of an \(\mathsf{RT}\) symmetry that squares to \((-1)^{\mathsf{F}}\) and in even dimensions (after possibly combining with a rotation) anticommutes with the generator of rotations \(\vec{J}\), so even-spin sectors are GOE and odd-spin are GSE.15 Footnote 14: Here \(J\) labels the eigenvalue of \(\vec{J}^{2}\) which is fixed within each representation. In the previous sections we used \(J\) to denote the eigenvalue of \(\vec{J}\cdot\vec{n}\) instead, the eigenvalue of the angular momentum itself along a given direction. Footnote 15: This is true for \(\mathrm{SU}(2)\) since all representations are real. If the group \(G\) has complex representations then such sectors would remain GUE. An insertion of \((-1)^{\mathsf{F}}\) introduces a minus sign in the partition function for all representations of \(G\) that are odd under \((-1)^{\mathsf{F}}\) (for example half-integer \(J\)'s for \(G=\mathrm{SU}(2)\)). Since different representations are statistically independent, this sign disappears when evaluating the spectral form factor for essentially the same reason as the \(\mathbb{Z}_{2}\) case studied earlier in this section. The main difference is that now the right hand side of (3.15) becomes not a sum over an even and odd sector, but a sum over a contribution from all angular momenta appearing in all representations of \(G\). Working this out in more detail (following e.g. [30]) gives an extra factor of \[2\sum_{J=0,\frac{1}{2},1,\ldots}(2J+1)^{2},\] multiplying the value of the ramp for a theory without any symmetry. The factor of \(2\) comes from the ensemble being GOE/GSE while the \((2J+1)^{2}\) factor comes from the \(\mathrm{SU}(2)\) structure. 
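The regulated enhancement factor is easy to tabulate; a minimal sketch with an illustrative cutoff \(J_{\rm max}\) (our regulator, standing in for the finite angular momentum window discussed in the text):

```python
from fractions import Fraction

def ramp_enhancement(j_max):
    """2 * sum over J = 0, 1/2, 1, ..., j_max of (2J+1)^2: the factor
    multiplying the ramp relative to a theory without any symmetry.
    The 2 is the GOE/GSE factor and (2J+1)^2 is the SU(2) degeneracy
    of each statistically independent sector."""
    j, total = Fraction(0), 0
    while j <= j_max:
        total += int(2 * j + 1) ** 2
        j += Fraction(1, 2)
    return 2 * total

assert ramp_enhancement(Fraction(1)) == 28    # 2 * (1^2 + 2^2 + 3^2)
assert ramp_enhancement(Fraction(5)) == 1012  # grows without bound as j_max -> infinity
```

The rapid growth with \(J_{\rm max}\) makes the divergence of the unregulated sum explicit.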
This sum over spins gives a divergent factor (which can be regulated by working in an ensemble where the angular momentum is restricted to a window \(\delta J\) around an average value \(J\)) that is now at least qualitatively the same as the one we identified in the gravity calculations of Sections 3.1 and 3.2. We leave a more precise match of this normalization (involving a more careful analysis of quantum effects around the double cone in the presence of rotation) for future work. 

## Discussion 

In this article we explained how to use the gravitational path integral to estimate the difference between the number of bosonic and fermionic black hole microstates. In particular, we focused on a contribution coming from the complex rotating black hole saddle point. This is a universal contribution that does not depend on the specific matter content of the theories, similar to the black hole that contributes to the ordinary partition function. We discussed its contribution for black holes in various dimensions, as well as the case of charged black holes in four dimensional flat space. We also described wormhole contributions to the quantities we are computing, similar to the double cone of [8]. In this section we will make some further comments and point out some interesting directions. 

**Purely bosonic theories.** We stated at the beginning of Section 2 that we are considering theories that contain fermions. However, one might be puzzled about where this assumption entered our discussion. Indeed, we never used the explicit details of the matter sector and only used the black hole geometries. On the other hand, our calculation must fail in a purely bosonic theory of quantum gravity (if one exists), since we are finding that the entropy with the \((-1)^{\mathsf{F}}\) insertion is different from the entropy without it, meaning that there exist at least some fermionic states. In fact, the resolution to this puzzle is simple and instructive.
In a theory with only bosons, turning on an angular potential \(\Omega=2\pi\mathrm{i}/\beta\) should be viewed as trivial, since all the states are invariant under a rotation by angle \(2\pi\). However, this invariance is not explicit in terms of individual bulk saddle points. Instead, it is enforced by summing over saddles with \(\Omega\) shifted by \(2\pi\mathrm{i}n/\beta\), \(n\in\mathbb{Z}\). As a consequence, in both the calculation of \(Z\) and \(Z_{\mathrm{spin}}\) we are summing over saddles with \(\Omega\in 2\pi\mathrm{i}\mathbb{Z}/\beta\), so we trivially have \(Z=Z_{\mathrm{spin}}\), consistent with the fact that the theory does not have fermions. In such theories, the rotating black holes analyzed in this paper are simply subleading contributions to the partition function. So we see that our calculation relies on the assumption of the existence of fermions. In particular, it cannot be used to argue that a quantum gravity theory that contains black holes must have fermions. It would be nice to know whether this is true, and if so whether one can find some other argument for this statement. 

**Other contributions to \(\operatorname{Tr}\left(-1\right)^{\mathsf{F}}\).** In this paper, we have only considered universal contributions to the quantity \(\operatorname{Tr}\left(-1\right)^{\mathsf{F}}\), either in the canonical or microcanonical ensemble. Our result should really be thought of as giving a _lower bound_ on the size of this quantity, since in general there could be other, more dominant contributions that depend on the details of the theory. (An upper bound, automatic from the boundary but not obvious from the bulk, is the partition function without the \((-1)^{\mathsf{F}}\).) One particular possibility is that a theory might contain codimension two defects which implement \((-1)^{\mathsf{F}}\) when one brings an operator around them.
Such a defect can be placed at the tip of a Euclidean black hole such that it implements \((-1)^{\mathsf{F}}\) when going around the thermal circle, and it will give rise to a solution that computes \(\operatorname{\mathrm{Tr}}\left(-1\right)^{\mathsf{F}}\).16 This is a special case of the general story of black holes carrying discrete gauge charges [36]. The detailed properties of such a defect are a theory-dependent question. However, there is an interesting construction of such a defect that only requires knowledge of the low energy spectrum [37].17 One imagines that instead of having a Euclidean time circle which shrinks to zero at the horizon, it is stabilized at a finite radius by the Casimir energy coming from the light fields, in a similar way as described in [37]. Given that the circle remains of finite size, we are then free to choose the periodic boundary condition for the fermions, without needing to worry about the smoothness of the spin structure. Of course, this choice of boundary condition itself affects the Casimir energy one uses to find the solution. Footnote 17: We thank Juan Maldacena for this suggestion. The intriguing aspect of such a solution is that it connects properties of the low energy spectrum to some properties of the very high energy spectrum. We hope to return to the details of these solutions in the future. 

**Expectation from field theories.** We studied a universal contribution from gravity to the difference between bosonic and fermionic states, using AdS/CFT. This raises the question of what expectations we have from field theory for this quantity. The first obvious one is that the difference between bosonic and fermionic states cannot be larger than their sum. This is obvious from field theory but becomes a non-trivial constraint from gravity. In particular, all the universal rotating black hole solutions we found satisfy this property.
We found that \(Z_{\rm spin}\), the partition function with a \((-1)^{\sf F}\) insertion, is exponentially subleading in the large \(N\) limit compared to \(Z\), the partition function without the insertion. This phenomenon was discussed in [2] (see also references therein) for large \(N\) adjoint QCD, where the authors find a large cancellation between bosonic and fermionic states. Our results were compared with theirs in the last paragraph of Section 2.2.1. A more interesting constraint comes from the thermal effective field theory put forth in [38]. This predicts a specific dependence of the free energy on temperature and angular velocity. Their result was derived without the insertion of \((-1)^{\sf F}\). In the presence of such an insertion one would expect their thermal effective theory to still apply, although the specific values of the effective theory parameters can change. [38] also uses this free energy to extract the microcanonical density of states. Using the rotating black hole, we find that \(F_{\rm spin}\) is controlled by thermal AdS, but in some cases like AdS\({}_{4}\) the microcanonical density of states \(S_{\rm spin}\) is controlled by the black hole. Furthermore, \(S_{\rm spin}\) grows only as \(\sqrt{E}\) and is subextensive in the volume (\(S_{\rm spin}\propto V^{3\over 4}\)). It would be interesting to explore to what extent this is a violation of the thermal effective field theory. _Acknowledgements_ We thank Tom Hartman, Shota Komatsu, Henry Lin, Juan Maldacena, Miguel Montero, Sridip Pal, Douglas Stanford, and Edward Witten for discussions. We especially thank Juan Maldacena for initial collaboration. YC is supported by a Procter Fellowship from Princeton University. GJT's work was supported by the Institute for Advanced Study and the NSF under Grant No. PHY-2207584, and by the Sivian Fund, and currently by the University of Washington and the DOE award DE-SC0024363.
## Appendix A The Kontsevich-Segal-Witten criterion for complex Kerr saddles 

In this appendix, we apply the Kontsevich-Segal-Witten (KSW) criterion for complex metrics [16; 22] to the complex Kerr black hole solutions that are relevant for the discussion in this paper. The KSW criterion selects the reasonable gravitational saddle points to be included in the gravitational path integral, and is derived by demanding that the fluctuations of various quantum fields on the background are suppressed. To what extent it is a strict rule that one should follow is still an open question.18 Footnote 18: For example, in the derivation it was assumed that the fluctuations of the quantum fields are integrated along real contours. If one allows for deformations of the integration contours for the matter fields, then the criterion is weakened. See also [39; 40] for some recently discussed geometries that violate the criterion but nonetheless lead to physically sensible results. Concretely, the criterion states that for a complex metric \(g\) on a \(D\) dimensional manifold, if one picks a real basis such that the metric is diagonal, \[g_{ij}=\lambda_{i}\delta_{ij},\quad i,j=1,...,D, \tag{104}\] then the KSW criterion demands that at every point of the manifold we have \[\sum_{i=1}^{D}|\text{Arg}(\lambda_{i})|<\pi. \tag{105}\] In this appendix we discuss how the analysis can be done for the Kerr metric in either 4d flat space or Anti-de Sitter space. The discussion can be generalized to other dimensions straightforwardly. The 4d Kerr black hole metric in flat space is \[\text{d}s^{2}=\frac{\rho^{2}\Delta}{\Xi}\text{d}t_{E}^{2}+\frac{\rho^{2}}{\Delta}\text{d}r^{2}+\rho^{2}\text{d}\theta^{2}+\frac{\Xi}{\rho^{2}}\sin^{2}\theta\left(\text{d}\widetilde{\varphi}+\text{i}\left(\frac{2Ear}{\Xi}-\Omega\right)\text{d}t_{E}\right)^{2} \tag{106}\] where the definitions of the various functions involved are given in (3).
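The bound (105) itself is a one-line check once the metric has been diagonalized; a minimal sketch (the helper name and the test eigenvalues are illustrative, not from the paper):

```python
import cmath, math

def ksw_allowed(eigenvalues, tol=1e-12):
    """KSW criterion (105): the total phase of the diagonalized metric
    must be strictly less than pi at every point of the manifold."""
    return sum(abs(cmath.phase(lam)) for lam in eigenvalues) < math.pi - tol

# Total phase pi/2 < pi: allowed
assert ksw_allowed([cmath.exp(1j * math.pi / 4), cmath.exp(-1j * math.pi / 4), 1, 1])

# Total phase 3*pi/2 > pi: disallowed
assert not ksw_allowed([cmath.exp(3j * math.pi / 4), cmath.exp(-3j * math.pi / 4), 1, 1])
```

The nontrivial work in what follows is thus not in evaluating the bound, but in finding a real basis that diagonalizes the complex metric at each point.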
Note that we have introduced a coordinate \(\widetilde{\varphi}\) such that the coordinates are identified as \((t_{E},\widetilde{\varphi})\sim(t_{E}+\beta,\widetilde{\varphi})\sim(t_{E},\widetilde{\varphi}+2\pi)\). The metric is characterized by two parameters \((a,r_{+})\). The case of \(r_{+},a\in\mathbb{R}\) was already analyzed in [16], with the conclusion being that the metric is disallowed under the criterion. The problem comes from the imaginary piece in (106), which leads to a negative \(g_{t_{E}t_{E}}\) component of the metric at large radius. The physical interpretation is simple - in flat space the fluctuations corresponding to adding particles far away from the black hole carrying large angular momenta are not suppressed. The situation becomes more complicated when we allow both \(r_{+}\) and \(a\) to be complex. In this case, it is not only the far away region we need to worry about; the near horizon region can also become dangerous. Even though the KSW criterion demands that (105) be satisfied everywhere on the manifold, the full analysis appears complicated, and here we will only focus on the near horizon region as well as the asymptotic region, where the analysis simplifies. We will see that from these two limits alone we can already get interesting constraints on \(r_{+},a\). We first focus on the near horizon part of the geometry. The analysis here is similar to the one for complexified non-rotating black holes in [41]. By introducing new coordinates \[\mathrm{d}\varrho=\frac{\mathrm{d}r}{\sqrt{\Delta}},\quad\mathrm{d}u=\frac{2\pi\mathrm{d}t_{E}}{\beta},\quad u\sim u+2\pi, \tag{100}\] the near horizon metric can be put into the following form \[\mathrm{d}s^{2}=(r_{+}^{2}+a^{2}\cos^{2}\theta)\left(\mathrm{d}\varrho^{2}+\varrho^{2}\mathrm{d}u^{2}\right)+(r_{+}^{2}+a^{2}\cos^{2}\theta)\mathrm{d}\theta^{2}+\frac{r_{+}^{2}+a^{2}}{r_{+}^{2}+a^{2}\cos^{2}\theta}\sin^{2}\theta\,\mathrm{d}\widetilde{\varphi}^{2}.
\tag{101}\] Note that the off-diagonal term \(\mathrm{d}u\mathrm{d}\widetilde{\varphi}\) is of order \(\varrho^{2}\) and so can be ignored in the near horizon limit. We can absorb the factor \((r_{+}^{2}+a^{2}\cos^{2}\theta)\) into \(\mathrm{d}\varrho^{2}\), and therefore the radial plus time part of the metric can take a form that is completely real. As explained in [41], this is always achievable due to the smoothness of the geometry at the horizon. Therefore, the only nontrivial phases that enter (100) come from the \(\mathrm{d}\theta^{2}\) and \(\mathrm{d}\widetilde{\varphi}^{2}\) parts of the metric. Concretely, (100) requires \[\left|\mathrm{Arg}\left(r_{+}^{2}+a^{2}\cos^{2}\theta\right)\right|+\left|\mathrm{Arg}\left(\frac{r_{+}^{2}+a^{2}}{r_{+}^{2}+a^{2}\cos^{2}\theta}\right)\right|<\pi \tag{102}\] for any \(\theta\in[0,\pi)\). One can check that this is satisfied by the solutions we discussed in Section 2.1, since there we have \(a=\mathrm{i}\zeta r_{+}\), with \(r_{+}\in\mathbb{R}\) and \(\zeta\) taken to one from below. We can also verify that our metric is allowable in the asymptotic region. The leading behavior of the metric at \(r\to\infty\) takes the following form \[\mathrm{d}s^{2}=-\frac{\beta^{2}}{4\pi^{2}}\Omega^{2}r^{2}\sin^{2}\theta\mathrm{d}u^{2}-2\mathrm{i}r^{2}\frac{\beta}{2\pi}\Omega\sin^{2}\theta\mathrm{d}u\mathrm{d}\widetilde{\varphi}+\mathrm{d}r^{2}+r^{2}\mathrm{d}\theta^{2}+r^{2}\sin^{2}\theta\mathrm{d}\widetilde{\varphi}^{2}. \tag{103}\] For the solution we considered in Section 2.1, we have \(\beta\Omega=2\pi\mathrm{i}\), which leads to a real metric in (103) with positive coefficients in front of \(\mathrm{d}u^{2}\) and \(\mathrm{d}\widetilde{\varphi}^{2}\). From the general discussion in [16] we know that such a metric is allowable. For the AdS\({}_{4}\) case discussed in Section 2.2.1, the analysis is similar to the flat space case.
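Before moving on, the flat space condition (102) can be checked numerically for the Section 2.1 values \(a=\mathrm{i}\zeta r_{+}\) (the value \(\zeta=0.999\) below is illustrative): both arguments in (102) are then real and positive, so every phase vanishes identically.

```python
import cmath, math

r_plus = 1.0
zeta = 0.999  # illustrative; the text takes zeta -> 1 from below
a = 1j * zeta * r_plus

worst = 0.0
for k in range(1000):
    c2 = math.cos(math.pi * k / 1000) ** 2
    lam1 = r_plus**2 + a**2 * c2        # = r_+^2 (1 - zeta^2 cos^2 theta) > 0
    lam2 = (r_plus**2 + a**2) / lam1    # = r_+^2 (1 - zeta^2) / lam1 > 0
    worst = max(worst, abs(cmath.phase(lam1)) + abs(cmath.phase(lam2)))

# Both arguments are real and positive, so the total phase is zero, well below pi
assert worst < 1e-12
```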
The near horizon region imposes a family of constraints: for any \(\theta\) we should have \[\left|\mathrm{Arg}\left(\frac{r_{+}^{2}+a^{2}\cos^{2}\theta}{\ell^{2}-a^{2}\cos^{2}\theta}\right)\right|+\left|\mathrm{Arg}\left(\frac{(r_{+}^{2}+a^{2})(\ell^{2}-a^{2}\cos^{2}\theta)}{(r_{+}^{2}+a^{2}\cos^{2}\theta)(\ell^{2}-a^{2})^{2}}\right)\right|<\pi. \tag{104}\] We have checked that the complex solution we considered in Section 2.2.1 satisfies (104), while the solutions we dropped do not. Asymptotically, the metric (24) behaves as \[\mathrm{d}s^{2}\approx\frac{\ell^{2}}{r^{2}}\mathrm{d}r^{2}+\frac{r^{2}}{\ell^{2}}\frac{\beta^{2}}{4\pi^{2}}\mathrm{d}u^{2}+\mathrm{i}\frac{\beta}{2\pi}\frac{r^{2}}{\ell^{2}}\frac{2a\sin^{2}\theta}{1-a^{2}/\ell^{2}}\mathrm{d}u\mathrm{d}\phi+\frac{r^{2}\sin^{2}\theta}{1-a^{2}/\ell^{2}}\mathrm{d}\phi^{2}+\frac{r^{2}}{1-\frac{a^{2}}{\ell^{2}}\cos^{2}\theta}\mathrm{d}\theta^{2} \tag{105}\] Since the metric contains off-diagonal terms, one has to first find a real basis in which the metric is diagonal and then apply the criterion (100) [16]. We performed this exercise and found that the solution we considered in Section 2.2.1 is also allowable in the asymptotic region. 

## Appendix B Smoothness of the Kerr black hole 

In this section we explain how the Kerr black hole found in Section 2.1 is smooth even though it requires taking a limit \(r_{+}\to\infty\). The curvature squared of the Kerr metric is given by \[R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}=\frac{48E^{2}(r^{2}-a^{2}\cos^{2}\theta)[(r^{2}+a^{2}\cos^{2}\theta)^{2}-16r^{2}a^{2}\cos^{2}\theta]}{(r^{2}+a^{2}\cos^{2}\theta)^{6}}. \tag{115}\] Since after setting \(\beta\Omega\) to be pure imaginary the metric is real in Euclidean signature, we use as a smoothness criterion that this quantity is finite, \(|R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}|<\infty\). The calculation is straightforward so we will simply quote the result.
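As a numerical cross-check, (115) can be evaluated directly at the potentially dangerous point \(\theta=0\), \(r=r_{+}\). The sketch below assumes the flat-space horizon relation \(r_{+}^{2}+a^{2}=2Er_{+}\) (our reading of the definitions in (3), which are not reproduced here) with \(a=\mathrm{i}\zeta r_{+}\), and mimics the \(\varepsilon\to 0\) limit by taking \(r_{+}\) large:

```python
E = 1.0
r_p = 1.0e6                     # large r_+ mimics the epsilon -> 0 limit
a2 = -(r_p**2 - 2.0 * E * r_p)  # a^2 = -zeta^2 r_+^2, assuming r_+^2 + a^2 = 2 E r_+

# Kretschmann scalar (115) at the pole theta = 0 on the horizon r = r_+
c2 = 1.0                        # cos^2(theta)
s = r_p**2 + a2 * c2
K = 48.0 * E**2 * (r_p**2 - a2 * c2) * (s**2 - 16.0 * r_p**2 * a2 * c2) / s**6

assert abs(K - 24.0 / E**4) < 1e-3  # approaches 24/E^4 as r_+ -> infinity
```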
As we take the \(\varepsilon\to 0\) limit of the solution in Section 2.1, we find that at generic spacetime points the curvature squared vanishes, \(|R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}|\sim\mathcal{O}(\varepsilon^{6})\). The only exception is at the north and south poles \(\theta=0,\pi\) at the horizon \(r=r_{+}\), where we find \(|R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}|=24/E^{4}+\mathcal{O}(\varepsilon)\), which is still finite. 

## Appendix C Review of spectral form factor in JT gravity with a \(\mathbf{U}(1)\) gauge field 

Here we review the calculation of the spectral form factor in a random matrix model with \(\mathrm{U}(1)\) global symmetry, dual to JT gravity coupled to a \(\mathrm{U}(1)\) Maxwell theory. The partition functions on general Riemann surfaces for this theory were discussed in [30]. Here we review their story for the two boundary wormhole, and in particular the computation where we focus on a microcanonical window around energy \(E\), with width \(\Delta\). We will work in units where the length scale \(\phi_{r}\) governing the Schwarzian fluctuations in JT gravity is set to one.
We start with the expression for the double trumpet in this theory [30]: \[Z(\beta_{L},\mu_{L};\beta_{R},\mu_{R})=2\pi\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\infty}\mathrm{d}b\,b\,Z_{tr}(\beta_{L},\mu_{L};b,\phi)Z_{tr}(\beta_{R},\mu_{R};b,-\phi), \tag{116}\] where \(Z_{tr}(\beta,\mu;b,\phi)=Z_{tr}^{gauge}Z_{tr}^{grav}\), with \(Z_{tr}^{grav}\) being the ordinary JT trumpet partition function \[Z_{tr}^{grav}(\beta)=\frac{1}{\sqrt{2\pi\beta}}\exp\left[-\frac{b^{2}}{2\beta}\right] \tag{117}\] and \[\begin{split} Z_{tr}^{gauge}(\beta,\mu,\phi)&=\frac{1}{\sqrt{4\pi\beta}}\sum_{n\in\mathbb{Z}}\exp\left[-\frac{1}{2\beta}(2\pi n-\mathrm{i}\mu\beta-\phi)^{2}\right]\\ &=\frac{1}{\sqrt{8\pi^{2}}}\sum_{n\in\mathbb{Z}}\int\mathrm{d}q\,\exp\left[q(\beta\mu+2\pi\mathrm{i}n-\mathrm{i}\phi)\right]\exp\left[-\beta\frac{q^{2}}{2}\right].\end{split} \tag{118}\] The variable \(\phi\) in (116) and (118) is the holonomy of the gauge field at the throat of the wormhole. Note that in the second line of (118) we could further sum over \(n\), which sets \(q\) to integer values, but we defer this step so that the procedure is more analogous to the general story described in Section 3.1.
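The second line of (118) follows from completing the square in the Gaussian \(q\)-integral; the term-by-term equality with the first line can be verified with sympy (we use the standard result \(\int\mathrm{d}q\,e^{cq-\beta q^{2}/2}=\sqrt{2\pi/\beta}\,e^{c^{2}/(2\beta)}\) rather than evaluating the integral symbolically):

```python
import sympy as sp

beta = sp.symbols("beta", positive=True)
mu, phi = sp.symbols("mu phi", real=True)
n = sp.symbols("n", integer=True)

# Exponent of the n-th term on the first line of (118)
e1 = -(2 * sp.pi * n - sp.I * mu * beta - phi)**2 / (2 * beta)

# Exponent produced by the Gaussian q-integral on the second line,
# with linear coefficient c = beta*mu + 2*pi*i*n - i*phi
c = beta * mu + 2 * sp.pi * sp.I * n - sp.I * phi
e2 = c**2 / (2 * beta)

assert sp.expand(e1 - e2) == 0  # the exponents agree term by term

# The prefactors agree too: 1/sqrt(4 pi beta) = sqrt(2 pi/beta) / sqrt(8 pi^2)
pref = sp.sqrt(8 * sp.pi**2) / (sp.sqrt(4 * sp.pi * beta) * sp.sqrt(2 * sp.pi / beta))
assert sp.simplify(pref) == 1
```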
Substituting the trumpet factors into the double-trumpet expression, we get \[\begin{split} Z(\beta_{L},\mu_{L};\beta_{R},\mu_{R})=&\frac{1}{8\pi^{2}}\sum_{n_{L},n_{R}}\int\mathrm{d}q_{L}\int\mathrm{d}q_{R}\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\infty}\mathrm{d}b\,b\frac{1}{\sqrt{\beta_{L}\beta_{R}}}\exp\left[-\frac{b^{2}}{2\beta_{L}}-\frac{b^{2}}{2\beta_{R}}\right]\\ &\exp\left[\beta_{L}\mu_{L}q_{L}+\beta_{R}\mu_{R}q_{R}\right]\exp\left[-\beta_{L}\frac{q_{L}^{2}}{2}-\beta_{R}\frac{q_{R}^{2}}{2}\right]\\ &\exp\left[q_{L}(2\pi\mathrm{i}n_{L}-\mathrm{i}\phi)+q_{R}(2\pi\mathrm{i}n_{R}+\mathrm{i}\phi)\right]\end{split} \tag{116}\] Redefining \(n=n_{R}\), \(m=n_{L}+n_{R}\), we can rewrite the exponent on the last line of (116) as \[q_{L}\left(2\pi\mathrm{i}(m-n)-\mathrm{i}\phi\right)+q_{R}\left(2\pi\mathrm{i}n+\mathrm{i}\phi\right)=2\pi\mathrm{i}mq_{L}+\mathrm{i}(2\pi n+\phi)(q_{R}-q_{L}). \tag{117}\] We can combine the sum over \(n\) and the integral over \(\phi\) into a single integral of \(2\pi n+\phi\) over the real axis, which imposes \(q_{L}=q_{R}\). Then we have \[\begin{split} Z(\beta_{L},\mu_{L};\beta_{R},\mu_{R})=&\frac{1}{8\pi^{2}}\sum_{m}\int\mathrm{d}q\int_{0}^{\infty}\mathrm{d}b\,b\frac{1}{\sqrt{\beta_{L}\beta_{R}}}\exp\left[-\frac{b^{2}}{2\beta_{L}}-\frac{b^{2}}{2\beta_{R}}\right]\\ &\exp\left[q(\beta_{L}\mu_{L}+\beta_{R}\mu_{R}+2\pi\mathrm{i}m)\right]\exp\left[-\beta_{L}\frac{q^{2}}{2}-\beta_{R}\frac{q^{2}}{2}\right]\end{split} \tag{118}\] Again, we defer the sum over \(m\), which sets the charge to integer values. We are interested in the quantity \(\langle|Y_{E,\mu}(T)|^{2}\rangle_{c}\) defined in (10).
It can be computed by applying a suitable transform to the expression above: \[\begin{split}\langle|Y_{E,\mu}(T)|^{2}\rangle_{c}&\propto\int\mathrm{d}\beta_{L}\mathrm{d}\beta_{R}\,e^{\beta_{L}E+\beta_{R}E+\frac{1}{2}(\beta_{L}^{2}+\beta_{R}^{2})\Delta^{2}}Z(\beta_{L}-\mathrm{i}T,\mu;\beta_{R}+\mathrm{i}T,\mu)\\ &\propto\int\mathrm{d}\beta_{L}\mathrm{d}\beta_{R}\,e^{\beta_{L}E+\beta_{R}E+\frac{1}{2}(\beta_{L}^{2}+\beta_{R}^{2})\Delta^{2}}\times\\ &\sum_{m}\int\mathrm{d}q\int_{0}^{\infty}\mathrm{d}b\,b\frac{1}{\sqrt{(\beta_{L}-\mathrm{i}T)(\beta_{R}+\mathrm{i}T)}}\exp\left[-\frac{b^{2}}{2(\beta_{L}-\mathrm{i}T)}-\frac{b^{2}}{2(\beta_{R}+\mathrm{i}T)}\right]\\ &\exp\left[q((\beta_{L}+\beta_{R})\mu+2\pi\mathrm{i}m)\right]\exp\left[-(\beta_{L}-\mathrm{i}T)\frac{q^{2}}{2}-(\beta_{R}+\mathrm{i}T)\frac{q^{2}}{2}\right]\end{split} \tag{119}\] To understand this seemingly complicated expression, we can look for the saddle points of the integrals over \(\beta_{L},\beta_{R}\) and \(b\). Similar to the ordinary double cone [8], the saddle point values of \(\beta_{L}\) and \(\beta_{R}\) are located at zero, while the saddle point for \(b\) is located at \[b_{*}=\sqrt{2\left(E+\mu q-\frac{q^{2}}{2}\right)}T. \tag{120}\] We can then expand (119) around the saddle points and evaluate the one loop determinant. The final result of the calculation is very simple: \[\begin{split}\langle|Y_{E,\mu}(T)|^{2}\rangle_{c}&\propto\sum_{m}\int\mathrm{d}q\,e^{2\pi\mathrm{i}mq}\,T\\ &\propto\sum_{q}T.\end{split} \tag{121}\] We kept the sum over \(m\) in the first line because in the discussion of Section 3.1 we did not discuss the effect of summing over shifts (we had \(m=0\) there), so we simply have a continuous integral over the U(1) charge. Of course, the effect of summing over \(m\) is to enforce charge quantization. The final result (C.9) is easy to interpret. We simply have an independent random matrix in each of the charge sectors, each of which contributes one copy of the linear \(T\) growth.
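The saddle point value \(b_{*}\) quoted above can be verified symbolically. A sketch of the saddle-point algebra (our reconstruction of the relevant exponent; the Gaussian energy-window terms and the \(2\pi\mathrm{i}mq\) phase are dropped since their \(\beta\)- and \(b\)-derivatives vanish at \(\beta_{L}=\beta_{R}=0\)):

```python
import sympy as sp

E, q, mu, T = sp.symbols("E q mu T", positive=True)
b = sp.symbols("b", positive=True)
bL, bR = sp.symbols("beta_L beta_R")
I = sp.I

# Exponent of the integrand: boundary terms, trumpet factors, and the
# charge-dependent pieces, continued to beta_L - iT and beta_R + iT
S = (bL * E + bR * E
     - b**2 / (2 * (bL - I * T)) - b**2 / (2 * (bR + I * T))
     + q * mu * (bL + bR)
     - (bL - I * T) * q**2 / 2 - (bR + I * T) * q**2 / 2)

b_star = sp.sqrt(2 * (E + mu * q - q**2 / 2)) * T

# dS/db vanishes identically once beta_L = beta_R = 0 ...
assert sp.simplify(sp.diff(S, b).subs({bL: 0, bR: 0})) == 0

# ... and dS/dbeta_L = 0 at beta_L = beta_R = 0 then fixes b = b_*
assert sp.simplify(sp.diff(S, bL).subs({bL: 0, bR: 0, b: b_star})) == 0
```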
The final result is the sum over all the charge sectors. In particular, the final result is independent of the chemical potential \(\mu\).
2310.00012
Operator-free Equilibrium on the Sphere
We propose a generalized minimum discrepancy, which derives from Legendre's ODE and spherical harmonic theories, to provide a new criterion for equidistributed pointsets on the sphere. A continuous and differentiable kernel in terms of elementary functions is established to simplify the computation of the generalized minimum discrepancy. We consider the deterministic points generated from Pycke's statistics to integrate a Franke function for the sphere and investigate the discrepancies of point systems embedded with different kernels. Quantitative experiments are conducted and the results are analyzed. Our deduced model can explore latent point systems that have the minimum discrepancy, without the involvement of pseudodifferential operators and Beltrami operators, by the use of derivatives. Compared to the random points generated from the Monte Carlo method, only a few points generated by our method are required to approximate the target in arbitrary dimensions.
Xiongming Dai, Gerald Baumgartner
2023-09-10T16:16:06Z
http://arxiv.org/abs/2310.00012v1
# Operator-free Equilibrium on the Sphere ###### Abstract We propose a generalized minimum discrepancy, which derives from Legendre's ODE and spherical harmonic theory to provide a new criterion of equidistributed pointsets on the sphere. A continuous and differentiable kernel in terms of elementary functions is established to simplify the computation of the generalized minimum discrepancy. We consider the deterministic points generated from Pycke's statistics to integrate a Franke function for the sphere and investigate the discrepancies of point systems embedded with different kernels. Quantitative experiments are conducted and the results are analyzed. Our deduced model can explore latent point systems that have the minimum discrepancy, without the involvement of pseudodifferential operators and Beltrami operators, by the use of derivatives. Compared to random points generated from the Monte Carlo method, only a few points generated by our method are required to approximate the target in arbitrary dimensions. Generalized minimum discrepancy Legendre's ODE Beltrami operators ## 1 Introduction Quantifying the criterion of equidistributed pointsets on a sphere is of practical importance in numerical analysis [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], geophysics [17, 18, 19], geodetic sciences [20, 21, 22] and statistics [23, 24, 25, 26, 27, 28]. The advantage of equidistributed point systems is that they are well separated and sufficiently covering, such that only a few points are required to approximate the integral. The uniqueness of these points, compared to random points generated from the Monte Carlo method, makes them extensively used in downsampling methods for machine learning. Among earlier researchers, Freeden obtained explicit identities for the error terms in cubature formulas from the deduction of Green's functions with respect to the Laplace-Beltrami operator on the sphere [29]. 
Cui and Freeden extended this further and proposed a generalized discrepancy associated with pseudodifferential operators in \(\mathbb{R}^{3}\)[20]. This approach is limited in that the point system is generated from the kernel itself and cannot, under mild assumptions, further explore latent point systems derived from its derivatives. The purpose of this paper is to study a set of formulas that combines the advantages of Legendre's ODE and further explores latent point systems within error bounds. We use the continuity and differentiability of the kernel, Legendre's ODE and spherical harmonic theory to find a new criterion of equidistributed pointsets under which the discrepancy becomes smaller, and propose a generalized minimum discrepancy. Our kernel-derivative model can explore latent point systems that attain the minimum discrepancy without operators. Our auxiliary intermediaries are spherical harmonic methods and potential theory. The paper is organized as follows. In Section 2, we first give a brief summary of spherical harmonics [30] and the kernel representation for pseudodifferential operators in \(\mathbb{R}^{3}\)[31]. For the error estimation of the pointsets, we obtain the upper bound for different orders of derivatives of the Legendre polynomial and further develop the concept of the generalized minimum discrepancy in Section 3. Our investigation exhibits that, to obtain small discrepancies, point systems on the sphere can be generated by using derivatives of kernels without the involvement of the pseudodifferential operators. For different kernels, if they are differentially associated, we can create a mapping \(f\in\mathbb{L}^{2}(\mathbb{S}^{d})\) for the pseudodifferential operators. Thus, the generalized minimum discrepancy can be used to reversely deduce the associated pseudodifferential operators. 
For certain pseudodifferential operators, we find closed-form expressions in terms of elementary functions and group them into different families. It is shown that the two measures of design quality, from the point system generated by the generalized minimum discrepancy and from the one obtained by minimizing the energy, are equivalent. We use the kernel to develop the relation between the point system generated by the minimum energy model and the generalized discrepancies in Section 4. In Section 5, we use the associated kernel to integrate a Franke function for the sphere such that the minimum discrepancies can be obtained under different orders of derivatives; we statistically analyze the discrepancies for different numbers of nodes and the smoothing parameter estimates for different kernels. We further conduct experiments with point systems for different kernels on the sphere and compute the discrepancy from the minimum energy perspective. In all tests, the discrepancy of the pointsets generated by our methods becomes smaller. A summary of our contributions is given in Section 6. ## 2 Prerequisites **Theory of spherical harmonics.** We use \((x,y,z)\) to represent an element of the three-dimensional Euclidean space \(\mathbb{R}^{3}\) and the Greek letters \(\xi\) and \(\eta\) to represent vectors of the unit sphere \(\mathbb{S}^{d}\) in \(\mathbb{R}^{3}\). **x**=\(\{x_{1},...,x_{N}\}\) represents the point system. \(\Delta^{*}\) represents the Beltrami operator on the unit sphere. A function \(f:\mathbb{S}^{d}\mapsto\mathbb{R}\) possessing \(k\) continuous derivatives on \(\mathbb{S}^{d}\) is said to be of class \(C^{k}(\mathbb{S}^{d})\). \(C(\mathbb{S}^{d})=C^{0}(\mathbb{S}^{d})\) is the class of real continuous scalar-valued functions on \(\mathbb{S}^{d}\). By \(\mathbb{L}_{2}(\mathbb{S}^{d})\) we denote the space of Lebesgue square-integrable scalar functions on \(\mathbb{S}^{d}\). 
Let \(Y_{i,j}:i=0,...,n;j=1,...,Z(d,n)\) be an orthonormal basis of \(\mathbb{L}_{2}(\mathbb{S}^{d})\), where \(i\) is called the degree and \(j\) the order of the spherical harmonics. The dimension of the space \(V_{i}\) of spherical harmonics of degree \(i\) on \(\mathbb{S}^{d}\) will be denoted by \[Z(d,i)=(2i+d-1)\frac{\Gamma(i+d-1)}{\Gamma(d)\Gamma(i+1)},\qquad Z(d,n)\sim\frac{2}{\Gamma(d)}n^{d-1}\ \ (n\gg d). \tag{1}\] The space \(V_{i}\) is the eigenspace of the Laplace-Beltrami operator on \(\mathbb{S}^{d}\) for the eigenvalue \(\lambda_{i}=-i(i+d-1)\). The well-known Legendre addition theorem states [30] \[\sum_{j=1}^{Z(d,i)}Y_{i,j}(\xi)Y_{i,j}(\eta)=\frac{Z(d,i)}{c_{d}}P_{i}(\xi\cdot\eta),\ \ \xi,\eta\in\mathbb{S}^{d}, \tag{2}\] where \(P_{i}(x)\) is the Legendre polynomial, an infinitely differentiable eigenfunction of the Legendre operator, orthogonal on \([-1,1]\) with respect to the weight \((1-x^{2})^{d/2-1}\), and it satisfies \(P_{n}(1)=1\), \(|P_{n}(x)|\leq 1\) and \(|P_{n}^{{}^{\prime}}(x)|\leq\frac{n(n+1)}{2}\). The constant \(c_{d}\) denotes the surface area of \(\mathbb{S}^{d}\). **Functional and distributional spaces.** We consider the space [20] \[H^{s}(\mathbb{S}^{d})=\left\{f\in C^{\infty}(\mathbb{S}^{d})|\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}f_{i,j}^{2}\cdot\hat{i}^{2s}<\infty\right\}, \tag{3}\] where \[\hat{i}=\begin{cases}1,&\text{if}\ \ i=0;\\ i,&\text{otherwise}.\end{cases}\] Then the union of the normalized \(Y_{i,j}\) over all degrees \(i\) forms a complete orthonormal system in \(\mathbb{L}^{2}(\mathbb{S}^{d})\). 
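Both the dimension formula (1) and the addition theorem (2) are easy to spot-check numerically. The sketch below (our own, pure-Python check) evaluates \(Z(d,i)\) through the Gamma function and verifies (2) for \(d=2\), \(i=2\), using the five real orthonormal degree-2 harmonics, with \(c_{2}=4\pi\) and \(P_{2}(t)=(3t^{2}-1)/2\):

```python
import math, random

def Z(d, i):
    """Dimension of the space of degree-i spherical harmonics on S^d, eq. (1)."""
    return round((2 * i + d - 1) * math.gamma(i + d - 1)
                 / (math.gamma(d) * math.gamma(i + 1)))

print(Z(2, 5), Z(3, 2))   # 11 9: reduces to 2i+1 on S^2 and (i+1)^2 on S^3

def Y2(v):
    """Real orthonormal degree-2 harmonics on S^2 (the five 'd orbitals')."""
    x, y, z = v
    c = math.sqrt(15 / (4 * math.pi))
    return [c * x * y, c * y * z, c * x * z,
            math.sqrt(5 / (16 * math.pi)) * (3 * z * z - 1),
            math.sqrt(15 / (16 * math.pi)) * (x * x - y * y)]

def rand_unit(rng):
    while True:
        v = [rng.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(t * t for t in v))
        if n > 1e-9:
            return [t / n for t in v]

rng = random.Random(1)
xi, eta = rand_unit(rng), rand_unit(rng)
lhs = sum(a * b for a, b in zip(Y2(xi), Y2(eta)))
t = sum(a * b for a, b in zip(xi, eta))
rhs = Z(2, 2) / (4 * math.pi) * (3 * t * t - 1) / 2   # Z(d,i)/c_d * P_i(xi.eta)
print(abs(lhs - rhs))   # ~0
```

The same check works at any degree once the corresponding orthonormal basis is coded.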
Thus for \(f\in\mathbb{L}^{2}(\mathbb{S}^{d})\), it can be expanded as a Fourier series \[f=\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}\hat{f}_{i,j}Y_{i,j}(\xi), \tag{4}\] where the Fourier coefficients \(\hat{f}_{i,j}\) are given by \[\hat{f}_{i,j}=(f,Y_{i,j})_{\mathbb{L}_{2}(\mathbb{S}^{d})}=\int_{\mathbb{S}^{d}}f(\xi)Y_{i,j}(\xi)d\sigma_{d}(\xi), \tag{5}\] satisfying \[\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}(1-\lambda_{i})^{s}\left|\hat{f}_{i,j}\right|<\infty, \tag{6}\] where \(\sigma_{d}(\xi)\) denotes the normalized Hausdorff surface measure on the unit sphere \(\mathbb{S}^{d}\) in \(\mathbb{R}^{d+1}\). The corresponding inner product in \(H^{s}(\mathbb{S}^{d})\) is \[\left\langle f,g\right\rangle_{H^{s}(\mathbb{S}^{d})}=\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}f_{i,j}g_{i,j}\hat{i}^{2s},\ \text{and}\ \ \left\|f\right\|_{H^{s}(\mathbb{S}^{d})}=\sqrt{\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}f_{i,j}^{2}\hat{i}^{2s}}<\infty. \tag{7}\] From the Cauchy-Schwarz inequality, we obtain \[\left(\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}\left|\hat{f}_{i,j}Y_{i,j}(\xi)\right|\right)^{2}\leq\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}\left|\hat{f}_{i,j}^{2}\hat{i}^{2s}\right|\cdot\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}\left|Y_{i,j}^{2}\hat{i}^{-2s}\right|=\left\|f\right\|_{H^{s}(\mathbb{S}^{d})}^{2}\sum_{i=0}^{\infty}\frac{Z(d,i)}{c_{d}}\hat{i}^{-2s}.\] As \(Z(d,i)=\mathcal{O}(i^{d-1})\), the series converges uniformly for \(d-1-2s<-1\Rightarrow s>\frac{d}{2}\). Thus, the spherical harmonic expansion of any function \(f\) in \(H^{s}(\mathbb{S}^{d})\) converges uniformly for \(s>\frac{d}{2}\). This is significant since there are functions in \(C^{k}(\mathbb{S}^{d})\) which do not admit a uniformly convergent spherical harmonic series [20, 32]. For our experiment in Section 5, we use \(s>2\). **Pseudodifferential operator.** \(H^{s}(\mathbb{S}^{d})\subset C^{k}(\mathbb{S}^{d})\) for \(s>\frac{d}{2}\). 
Let \(\left\{A_{i}\right\}_{i\in\mathbb{Z}_{+}}\) be a sequence of real numbers \(A_{i}\) satisfying \[\lim_{i\to\infty}\frac{\left|A_{i}\right|}{(i+\frac{d-1}{2})^{\alpha}}=\text{const}\neq 0\] for a certain \(\alpha\in\mathbb{R}^{+}\). Then a pseudodifferential operator \(\mathbf{A}\) of order \(\alpha\), mapping \(H^{s}(\mathbb{S}^{d})\) to \(H^{s-\alpha}(\mathbb{S}^{d})\), is defined by \[\mathbf{A}f=\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}A_{i}\hat{f}_{i,j}Y_{i,j}(\xi),\quad f\in H^{s}(\mathbb{S}^{d}). \tag{8}\] The sequence \(\left\{A_{i}\right\}_{i\in\mathbb{Z}_{+}}\) is called the spherical symbol of \(\mathbf{A}\). It is obvious that, for a pseudodifferential operator \(\mathbf{A}\) of order \(s\), the space \(H^{s}(\mathbb{S}^{d})\) in (3) can be equivalently expressed as \[H^{s}(\mathbb{S}^{d})=\left\{f:\mathbb{S}^{d}\rightarrow\mathbb{R}|\mathbf{A}f\in\mathbb{L}_{2}(\mathbb{S}^{d})\right\}.\] The relation between the pseudodifferential operator \(\mathbf{A}\) on the sphere and the Beltrami operator \(\Delta^{*}\), for certain elementary functional representations, is provided by [20]. Considering equation (7), the kernel \(K\) associated with the space \(H^{s}(\mathbb{S}^{d})\) and the inner product \(\left\langle f,g\right\rangle_{H^{s}(\mathbb{S}^{d})}\) is \[K(\xi\cdot\eta)=\sum_{i=0}^{\infty}\sum_{j=1}^{Z(d,i)}\frac{1}{\hat{i}^{2s}}\cdot Y_{i,j}(\xi)\cdot Y_{i,j}(\eta)=\sum_{i=0}^{\infty}\frac{Z(d,i)}{\hat{i}^{2s}\cdot c_{d}}\cdot P_{i}(\xi\cdot\eta), \tag{9}\] and for an invariant pseudodifferential operator \(\mathbf{A}\) on the sphere (with \(d=2\)), it takes the form \[K_{\mathbf{A}}(\xi\cdot\eta)=\sum_{n=0}^{\infty}\sum_{j=1}^{2n+1}A_{n}\cdot Y_{n,j}(\xi)\cdot Y_{n,j}(\eta)=\sum_{n=0}^{\infty}\frac{2n+1}{4\pi}\cdot A_{n}\cdot P_{n}(\xi\cdot\eta). 
\tag{10}\] Equation (10) can be further simplified by convolution into \[\mathbf{A}f=K_{\mathbf{A}}\ast f=\int_{\mathbb{S}^{d}}K_{\mathbf{A}}(\xi\cdot\eta)f(\xi)d\sigma_{d}(\xi).\] The kernel \(K_{\mathbf{A}}(\xi\cdot\eta)\in H^{-(\alpha+\zeta)}(\mathbb{S}^{d})\) for all \(\zeta>0\)[20]. ## 3 Operator-free Equilibrium by Derivatives In this section, we focus on the discrepancies of equilibria from different self-adjoint kernels. The problem can be stated as follows: there exist coefficients \(a_{i}\) such that \(\sum_{i=1}^{N}a_{i}f(x_{i})\) is a good approximation to \(\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(x)d\omega(x)\) within a certain upper bound for any \(f\in\mathbb{L}^{2}(\mathbb{S}^{d})\), as \(N\to\infty\). **Theorem 1** Let \(\mathbf{A}\) be a pseudodifferential operator of order \(s\), \(s>1\), with symbol \(A_{n}\) satisfying \(A_{n}>0,n\geq 1\). Let \(m\) denote the order of the highest derivative of the Legendre polynomial \(P_{n}(t)\); for any function with \(\mathbf{A}f(x)\in\mathbb{L}^{2}(\mathbb{S}^{d})\) and \(m\leq N\), \(m\in\mathbb{Z}_{+}\), we have the estimate \[\left|\sum_{i=1}^{N}a_{i}f(x_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(x)d\omega(x)\right|\leq\frac{1}{N}\sqrt{\left[\sum_{t=1}^{N}\sum_{i=1}^{N}\sum_{n=1}^{\infty}\frac{Z(d,n)}{A_{n}^{2}}\frac{\partial^{m}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m}}(\eta_{i}\cdot\eta_{t})\right]}\left\|\mathbf{A}f(x)\right\|_{L^{2}}. \tag{11}\] **Proof** From (3), we can deduce \(f(\xi)\in C^{\infty}(\mathbb{S}^{d})\). As \(s>1\) and \(d\geq 2\), the spherical harmonic expansion of any function \(f(\xi)\in H^{s}(\mathbb{S}^{d})\) converges uniformly, so we have \[f(\xi)=\sum_{n=0}^{\infty}\sum_{j=1}^{2n+1}f_{n,j}Y_{n,j}(\xi),\xi\in\mathbb{S}^{d}. \tag{12}\] We discretize the surface with the measure \(d\omega(\eta)\) on the sphere. 
From [29], we get \[f(\xi)=\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(\eta)d\omega(\eta)-\int_{\mathbb{S}^{d}}\sum_{k=0}^{\infty}\frac{1}{k(k+1)-\lambda_{k}}\sum_{j=1}^{2k+1}Y_{k,j}(\xi)Y_{k,j}(\eta)\Delta_{\xi}^{*}f(\eta)d\omega(\eta). \tag{13}\] Given \(\xi=\eta_{i}\), \(i\in[1,N]\) and \(Y_{k,j}(\xi)=\frac{1}{N}\sum_{i=1}^{N}Y_{k,j}(\eta_{i})\), this leads to \[\sum_{i=1}^{N}a_{i}f(\eta_{i})=\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(\eta)d\omega(\eta)-\frac{1}{N}\int_{\mathbb{S}^{d}}\sum_{k=0}^{\infty}\frac{1}{k(k+1)-\lambda_{k}}\sum_{j=1}^{2k+1}\sum_{i=1}^{N}Y_{k,j}(\eta_{i})Y_{k,j}(\eta)\Delta_{\xi}^{*}f(\eta)d\omega(\eta). \tag{14}\] From the Cauchy-Schwarz inequality and the Legendre addition theorem, we get [20] \[\left|\sum_{i=1}^{N}a_{i}f(\eta_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(\eta)d\omega(\eta)\right| \tag{15}\] \[\leq\frac{1}{N}\sum_{k=0}^{\infty}\sum_{j=1}^{2k+1}\sum_{i=1}^{N}\frac{1}{k(k+1)-\lambda_{k}}\int_{\mathbb{S}^{d}}Y_{k,j}(\eta_{i})Y_{k,j}(\eta)\Delta_{\xi}^{*}f(\eta)d\omega(\eta)\] \[=\frac{1}{N}\int_{\mathbb{S}^{d}}f(\eta)\sum_{k=0}^{\infty}\sum_{j=1}^{2k+1}\sum_{i=1}^{N}\frac{\Delta_{\xi}^{*}}{k(k+1)-\lambda_{k}}Y_{k,j}(\eta_{i})Y_{k,j}(\eta)d\omega(\eta)\] \[=\frac{1}{N}\sqrt{\int_{\mathbb{S}^{d}}f^{2}(\eta)d\omega(\eta)}\cdot\sqrt{\int_{\mathbb{S}^{d}}\left(\sum_{k=0}^{\infty}\sum_{j=1}^{2k+1}\sum_{i=1}^{N}\frac{Y_{k,j}(\eta_{i})Y_{k,j}(\eta)}{A_{k}}\right)^{2}d\omega(\eta)}\] \[=\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{j=1}^{2n+1}\left(\frac{\sum_{i=1}^{N}Y_{n,j}(\eta_{i})}{A_{n}}\right)^{2}}\] \[=\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{j=1}^{2n+1}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{Y_{n,j}(\eta_{i})Y_{n,j}(\eta_{t})}{A_{n}^{2}}}\] \[=\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{2n+1}{4\pi A_{n}^{2}}P_{n}(\eta_{i}\cdot\eta_{t})}.\] Here, we first focus on the derivation of the Legendre polynomial 
\(P_{n}(\eta_{i}\cdot\eta_{t})\) recurrence relations. Differentiating the generating function [33] \[g(x,t)=(1-2xt+t^{2})^{-\frac{1}{2}}=\sum_{n=0}^{\infty}P_{n}(x)t^{n},|t|<1, \tag{16}\] with respect to \(x\), we get \[\frac{\partial g(x,t)}{\partial x}=\frac{t}{(1-2xt+t^{2})^{\frac{3}{2}}}=\sum_{n=0}^{\infty}P_{n}^{{}^{\prime}}(x)t^{n}. \tag{17}\] Substituting (16) into (17), we get \[(1-2xt+t^{2})\sum_{n=0}^{\infty}P_{n}^{{}^{\prime}}(x)t^{n}-t\sum_{n=0}^{\infty}P_{n}(x)t^{n}=0, \tag{18}\] which leads to \[P_{n+1}^{{}^{\prime}}(x)+P_{n-1}^{{}^{\prime}}(x)=2xP_{n}^{{}^{\prime}}(x)+P_{n}(x). \tag{19}\] Differentiating the following Bonnet's recursion formula \[(2n+1)xP_{n}(x)=(n+1)P_{n+1}(x)+nP_{n-1}(x), \tag{20}\] with respect to \(x\), and adding 2 times \(\frac{d}{dx}\) (20) to \((2n+1)\) times (19), we get \[(2n+1)P_{n}(x)=P_{n+1}^{{}^{\prime}}(x)-P_{n-1}^{{}^{\prime}}(x). \tag{21}\] From the above, we can also find that \[P_{n+1}^{{}^{\prime}}(x)=(2n+1)P_{n}(x)+(2(n-2)+1)P_{n-2}(x)+(2(n-4)+1)P_{n-4}(x)+\cdots, \tag{22}\] or equivalently \[P_{n+1}^{{}^{\prime}}(x)=\frac{2}{\left\|P_{n}\right\|^{2}}P_{n}(x)+\frac{2}{\left\|P_{n-2}\right\|^{2}}P_{n-2}(x)+\frac{2}{\left\|P_{n-4}\right\|^{2}}P_{n-4}(x)+\cdots, \tag{23}\] where \(\left\|P_{n}(x)\right\|\) is the norm over the interval \(x\in[-1,1]\), \[\left\|P_{n}\right\|=\sqrt{\int_{-1}^{1}(P_{n}(x))^{2}dx}=\sqrt{\frac{2}{2n+1}}, \tag{24}\] and the polynomials satisfy Rodrigues' formula \[P_{n}(x)=\frac{1}{2^{n}n!}\frac{d^{n}}{dx^{n}}(x^{2}-1)^{n}. 
\tag{25}\] The standardization \(P_{n}(1)=1\) fixes the normalization of the Legendre polynomials, since they are also orthogonal with respect to the same norm. Recursively nesting Equations (23) and (25) up to the order \(m\) of the highest derivative, we find that there exists an \(m\), \(m\leq N\), satisfying \(P_{n}^{(m)}(x)=\sum_{n=1}^{N}\beta_{n}P_{n}(x)\) for certain \(\beta_{n}\in\mathbb{R}\); thus (15) can be rewritten as \[\begin{split}&\left|\sum_{i=1}^{N}a_{i}f(\eta_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(\eta)d\omega(\eta)\right|\\ &\leq\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{2n+1}{4\pi A_{n}^{2}}P_{n}(\eta_{i}\cdot\eta_{t})}\\ &=\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{2n+1}{4\pi A_{n}^{2}}\frac{\partial^{m}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m}}(\eta_{i}\cdot\eta_{t})}.\end{split} \tag{26}\] This completes the proof. Theorem 1 shows that the error depends strongly on the pointset. This gives rise to the definition of the generalized minimum discrepancy. **Generalized minimum discrepancy.** Let \(\mathbf{A}\) be a pseudodifferential operator of order \(s\), \(s>1\), with symbol \(A_{n}\), \(A_{n}\neq 0\) for \(n\geq 1\). Then the generalized minimum discrepancy associated with the pseudodifferential operator \(\mathbf{A}\) is defined by \[D_{\min}(\mathbf{x};\mathbf{A})=\min(\frac{1}{N}\sqrt{\sum_{t=1}^{N}\sum_{i=1}^{N}\sum_{n=1}^{\infty}\frac{Z(d,n)}{A_{n}^{2}}\frac{\partial^{m}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m}}(\eta_{i}\cdot\eta_{t})}), \tag{27}\] where \(m\in[0,N]\) denotes the order of the highest derivative of the Legendre polynomial \(P_{n}(\cdot)\). This shows that, for each \(m\in[0,N]\), there exists a different group of point sets whose minimum discrepancy is attained at the \(m\)-th order derivative. 
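The recurrences just derived can be checked mechanically. The sketch below (our code) generates \(P_{n}\) by Bonnet's recursion (20) and the derivatives by the standard recurrence \(P^{\prime}_{n+1}(x)=xP^{\prime}_{n}(x)+(n+1)P_{n}(x)\) (classical, though not stated above), then tests identity (21) and the \(n=4\) instance of the expansion (22), \(P^{\prime}_{5}=9P_{4}+5P_{2}+P_{0}\):

```python
import random

def legendre_table(nmax, x):
    """P_0..P_nmax and their derivatives at x, via Bonnet's recursion (20)
    and the derivative recurrence P'_{n+1} = x P'_n + (n+1) P_n."""
    P, dP = [1.0, x], [0.0, 1.0]
    for n in range(1, nmax):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
        dP.append(x * dP[n] + (n + 1) * P[n])
    return P, dP

rng = random.Random(0)
err21 = err22 = 0.0
for _ in range(200):
    x = rng.uniform(-1, 1)
    P, dP = legendre_table(12, x)
    for n in range(1, 11):
        # identity (21): (2n+1) P_n = P'_{n+1} - P'_{n-1}
        err21 = max(err21, abs((2 * n + 1) * P[n] - (dP[n + 1] - dP[n - 1])))
    # identity (22) with n = 4: P'_5 = 9 P_4 + 5 P_2 + P_0
    err22 = max(err22, abs(dP[5] - (9 * P[4] + 5 * P[2] + P[0])))
print(err21, err22)   # both at machine-precision level
```

Since each coefficient in (22) is \(2k+1=2/\left\|P_{k}\right\|^{2}\), the same check also covers (23).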
This can be interpreted intuitively as follows. Given a point set \(\mathbf{x}\) on the sphere \(\mathbb{S}^{d}\), a measure for the quality of the distribution is the spherical cap discrepancy \[D(\mathbf{x})=\sup_{C\subseteq\mathbb{S}^{d}}\left|\frac{1}{N}\sum_{i=1}^{N}\delta_{C}(x_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f_{C}(\xi)d\omega(\xi)\right|, \tag{28}\] where the supremum ranges over all spherical caps \(C\subseteq\mathbb{S}^{d}\) (intersections of balls with \(\mathbb{S}^{d}\)) and \(\delta_{C}\) represents the Dirac delta measure associated with \(C\). The discrepancy simply measures the maximal deviation between the discrete distribution \(\mathbf{x}\) and the normalized surface measure. Let \(f(\xi)\in H^{s}(\mathbb{S}^{d}),s>1\); we have \[\begin{split}&D(\mathbf{x})=\sup_{C\subseteq\mathbb{S}^{d}}\left|\frac{1}{N}\sum_{i=1}^{N}\delta_{C}(x_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f_{C}(\xi)d\omega(\xi)\right|\\ &\approx\left|\frac{1}{N}\sum_{i=1}^{N}\delta_{C}(x_{i})-\frac{1}{4\pi}\int_{\mathbb{S}^{d}}f(\xi)d\omega(\xi)\right|\\ &=\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{2n+1}{4\pi A_{n}^{2}}\frac{\partial^{m}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m}}(\eta_{i}\cdot\eta_{t})}\\ &\leq\frac{1}{N}\left\|\mathbf{A}f(\xi)\right\|_{L^{2}}\sqrt{\sum_{n=1}^{\infty}\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{2n+1}{4\pi A_{n}^{2}}P_{n}(\eta_{i}\cdot\eta_{t})}.\end{split} \tag{29}\] Compared to [20], we consider different directions of the point \(x_{i}\) on the sphere via derivatives of order \(m\) (\(P_{n}^{(m)}(\eta_{i}\cdot\eta_{t})\)), not limited to the single direction \(m=0\) with \(P_{n}(\eta_{i}\cdot\eta_{t})\). Thus, the generalized minimum discrepancy explores a wider range of candidates for asymptotically covering the spherical cap \(C\). 
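As a numerical illustration of how the bound in (29) separates structured from random point sets, the sketch below (our toy experiment, not the paper's) compares the double-sum discrepancy proxy \(\frac{1}{N}\sqrt{\sum_{i,t}K(x_{i}\cdot x_{t})}\) for i.i.d. uniform points against a Fibonacci lattice, using the closed-form Cui-Freeden kernel given later in this section:

```python
import math, random

def cf_kernel(t):
    """Cui-Freeden kernel K(t) = 1 - 2 ln(1 + sqrt((1-t)/2)); K(1) = 1, zero mean."""
    t = max(-1.0, min(1.0, t))                      # guard against rounding
    return 1.0 - 2.0 * math.log(1.0 + math.sqrt((1.0 - t) / 2.0))

def discrepancy(pts):
    s = sum(cf_kernel(sum(a * b for a, b in zip(p, q)))
            for p in pts for q in pts)
    return math.sqrt(max(s, 0.0)) / len(pts)

def fibonacci_sphere(n):
    g = math.pi * (3.0 - math.sqrt(5.0))            # golden angle
    pts = []
    for k in range(n):
        z = 1.0 - 2.0 * (k + 0.5) / n
        r = math.sqrt(1.0 - z * z)
        pts.append((r * math.cos(g * k), r * math.sin(g * k), z))
    return pts

def random_sphere(n, seed=0):
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        v = [rng.gauss(0, 1) for _ in range(3)]
        nv = math.sqrt(sum(t * t for t in v))
        if nv > 1e-9:
            pts.append(tuple(t / nv for t in v))
    return pts

d_fib = discrepancy(fibonacci_sphere(300))
d_rnd = discrepancy(random_sphere(300))
print(d_fib, d_rnd)   # the lattice value is markedly smaller
```

The lattice choice is ours; any well-separated design would do for this comparison.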
**Lemma 1** Let \(\mathbf{A}\), \(\mathbf{B}\) be two pseudodifferential operators of orders \(s_{1},s_{2}\) (\(s_{1}>1,s_{2}>1\)), with symbols \(\{A_{n}\}\), \(\{B_{n}\}\) satisfying \(A_{n}>0,B_{n}>0\) for \(n\geq 1\), respectively, and let \(K_{A}(\xi\cdot\eta)\) and \(K_{B}(\xi\cdot\eta)\) satisfy (10). If \[(-1)^{n}c_{n}\frac{\partial^{n}K_{A}}{(\partial(\xi\cdot\eta))^{n}}(\xi\cdot\eta)=K_{B}(\xi\cdot\eta),n\in\mathbb{R}^{+}, \tag{30}\] with the factors \(c_{0}=1,c_{n}=\frac{1}{(n-1)!}\), then there exists an \(f\in\mathbb{L}^{2}(\mathbb{S}^{d})\) such that \(B_{n}=f(A_{n})\) and \(D_{\min}(\mathbf{x};\mathbf{A})=D_{\min}(\mathbf{x};\mathbf{B})\). We say the discrepancies from both \(\{A_{n}\}\) and \(\{B_{n}\}\) belong to the same family of discrepancies, and the associated kernels \(K_{A}\) and \(K_{B}\) belong to the same kernel family. **Proof** From (10), \(K_{A}(\eta\cdot\xi)\propto\sum_{n=0}^{\infty}P_{n}(\eta\cdot\xi)\); since the \(P_{n}(\xi\cdot\eta)\) form a normalized orthogonal basis and, by Rodrigues' formula (25), each is an \(n\)-th order derivative on \([-1,1]\), we obtain \[\frac{\partial^{m_{a}}K_{A}}{(\partial(\xi\cdot\eta))^{m_{a}}}(\xi\cdot\eta)=\sum_{n=0}^{\infty}Z(d,n)\cdot A_{n}\cdot\frac{\partial^{m_{a}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{a}}}(\eta_{i}\cdot\eta_{t}). \tag{31}\] Substituting (30) into (31) yields \[(-1)^{n}c_{n}\frac{\partial^{n+m_{b}}K_{A}}{(\partial(\xi\cdot\eta))^{n+m_{b}}}(\xi\cdot\eta)=\sum_{n=0}^{\infty}Z(d,n)\cdot B_{n}\cdot\frac{\partial^{m_{b}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{b}}}(\eta_{i}\cdot\eta_{t}). 
\tag{32}\] From (23), each derivative term on the right-hand side \(\frac{\partial^{m_{b}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{b}}}(\eta_{i}\cdot\eta_{t})\) can be represented in the normalized basis \(P_{i}(\eta_{i}\cdot\eta_{t})\). By orthogonality and completeness, there exists a piecewise continuous function \(f(\cdot)\in\mathbb{L}^{2}(\mathbb{S}^{d})\) with finitely many discontinuities in \([-1,1]\) such that the sequence of sums \[f_{n}(x,A_{n})=\sum_{i=0}^{n}a_{i}\cdot B_{i}\cdot P_{i}(x), \tag{33}\] converges in the mean to \(f(x,\mathbf{A})\) as \(n\to\infty\), provided we take \[a_{i}=\frac{2i+1}{2}\int_{-1}^{1}f(x,\mathbf{A})P_{i}(x)dx. \tag{34}\] For the pseudodifferential operator **A**, we obtain \[D_{\min}(\textbf{x};\textbf{A})=\min(\frac{1}{N}\left[\sum_{t=1}^{N}\sum_{i=1}^{N}\sum_{n=1}^{\infty}\frac{Z(d,n)}{A_{n}^{2}}\frac{\partial^{m_{a}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{a}}}(\eta_{i}\cdot\eta_{t})\right]^{\frac{1}{2}}),m_{a}\in[0,N], \tag{35}\] and, comparing to **B**, \[D_{\min}(\textbf{x};\textbf{B})=\min(\frac{1}{N}\left[\sum_{t=1}^{N}\sum_{i=1}^{N}\sum_{n=1}^{\infty}\frac{Z(d,n)}{B_{n}^{2}}\frac{\partial^{m_{b}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{b}}}(\eta_{i}\cdot\eta_{t})\right]^{\frac{1}{2}}),m_{b}\in[0,N]. \tag{36}\] Combining with (32), it is obvious that \(\frac{\partial^{m_{b}}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{b}}}(\eta_{i}\cdot\eta_{t})\propto\frac{\partial^{m_{b}+n}P_{n}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m_{b}+n}}(\eta_{i}\cdot\eta_{t})\). Thus, \(D_{\min}(\textbf{x};\textbf{A})=D_{\min}(\textbf{x};\textbf{B})\). This completes the proof. Lemma 1 shows that the generalized minimum discrepancy can be used to reversely deduce the associated pseudodifferential operators, and that for different kernels that are differentially associated we can create a mapping \(f\in\mathbb{L}^{2}(\mathbb{S}^{d})\) for the pseudodifferential operators. 
Using this property, we can extend the potential theory for the logarithmic energy kernel and the Riesz kernel. **Equidistribution in \(H^{s}(\mathbb{S}^{d})\).** A point system **x** is called **A**-equidistributed in \(H^{s}(\mathbb{S}^{d})\), \(s>1\), if the generalized discrepancy associated with a pseudodifferential operator **A** of order \(s\), \(s>1\), satisfies \[\lim_{N\rightarrow\infty}D_{\min}(\textbf{x};\textbf{A})=0. \tag{37}\] If **x** is well equidistributed in \(H^{s}(\mathbb{S}^{d})\), \(s>1\), then for \(s^{\prime}>s\) we generally need more points so that the point system also equidistributes uniformly in \(H^{s^{\prime}}(\mathbb{S}^{d})\). Thus, we try to use \(s\) as small as possible [20]. However, computing (27) requires the series expansion in terms of Legendre polynomial derivatives of order \(m\); from (25), the complexity is \(\mathcal{O}(2^{n})\), so it is impractical to use (27) directly to solve for the generalized minimum discrepancy. For certain pseudodifferential operators, we can find a closed-form expression for (27), which has been verified statistically. Combining with (10), we get \[D_{\min}(\textbf{x};\textbf{A})\propto\min(\frac{1}{N}\left[\sum_{t=1}^{N}\sum_{i=1}^{N}\frac{\partial^{m}K_{A}}{(\partial(\eta_{i}\cdot\eta_{t}))^{m}}(\eta_{i}\cdot\eta_{t})\right]^{\frac{1}{2}}),m\in[0,N]. \tag{38}\] Certain kernels with the corresponding statistics are provided in [34]; we list some cases as follows. (1) Giné's statistic: \(K_{A}(\eta_{i},\eta_{j})=\frac{1}{2}-\frac{2}{\pi}\sin\cos^{-1}(\eta_{i}\cdot\eta_{j})\). Where \(A_{n}^{2}=+\infty\) for \(n\) odd and \(A_{n}^{2}=\frac{n-1}{n+2}\cdot(\frac{\Gamma(\frac{n}{2})}{\Gamma(\frac{n+1}{2})})^{2}\) for \(n\) even [25, 35]. (2) Beran's form of Ajne's statistic: \(K_{A}(\eta_{i},\eta_{j})=\frac{1}{4}-\frac{1}{2\pi}\cos^{-1}(\eta_{i}\cdot\eta_{j})\). 
Where \(A_{n}^{2}=+\infty\) for \(n\) even and \(A_{n}^{2}=n^{2}\cdot(\frac{\Gamma(\frac{n+3}{2})}{\Gamma(\frac{n+2}{2})})^{2}\) for \(n\) odd [25, 28]. (3) Pycke's statistic: \(K_{A}(\eta_{i},\eta_{j})=-\frac{1}{4\pi}\ln\frac{e}{2}(1-\eta_{i}\cdot\eta_{j})\). Where \(A_{n}^{2}=n(n+1)\)[26, 27]. (4) Cui-Freeden discrepancy: \(K_{A}(\eta_{i},\eta_{j})=1-2\ln(1+\sqrt{\frac{1-\eta_{i}\cdot\eta_{j}}{2}})\). Where \(A_{n}^{2}=n(n+1)(2n+1)\)[20]. (5) Riesz kernels [36]: For \(x_{1},x_{2}\in C\) we define \[K(x_{1},x_{2})=\left\{\begin{array}{rl}&\text{sign}(s)\cdot\left\|x_{1}-x_{2}\right\|_{2}^{-s},s\neq 0,\\ &-\ln\left\|x_{1}-x_{2}\right\|_{2}^{2},s=0,\end{array}\right. \tag{39}\] where \(\left\|\cdot\right\|_{2}\) is the Euclidean distance. The logarithmic potential corresponds to the case \(s=0\) and the Coulombic potential to the case \(s=1\), respectively. For the unit sphere, we transform it into the vector format as follows [34]: \[K_{A}(\eta_{i}\cdot\eta_{j})=\left\{\begin{array}{rl}&\text{sign}(s)\cdot\left|2(1-\eta_{i}\cdot\eta_{j})\right|^{-\frac{s}{2}},s\neq 0,\\ &-\ln 2(1-\eta_{i}\cdot\eta_{j}),s=0,\end{array}\right. \tag{40}\] for \(\eta_{i}\cdot\eta_{j}\in[-1,1)\). When \(s\neq 0\), \(A_{n}^{2}=\frac{2^{-s}\Gamma(\frac{s}{2})\Gamma(-\frac{s}{2}+n+2)}{\pi\Gamma(\frac{s}{2}+n)\Gamma(-\frac{s}{2})}\). For \(s<2\), \(K_{A}(\eta_{i}\cdot\eta_{j})=\left|2(1-\eta_{i}\cdot\eta_{j})\right|^{-\frac{s}{2}}-\frac{2^{-s}}{1-\frac{s}{2}}\). When \(s=0\), \(A_{n}^{2}=\frac{n(n+1)}{4\pi}\), \(K_{A}(\eta_{i}\cdot\eta_{j})=-\ln 2(1-\eta_{i}\cdot\eta_{j})-\ln\frac{e}{4}\). This is a version of Pycke's statistic. Thus, from Lemma 1, the logarithmic potential, the Coulombic potential, Pycke's statistic, and the Riesz kernel belong to the same kernel family. In general, then, for kernels within the same family we can bypass the pseudodifferential operators by differentiating the kernel, so as to obtain the minimum discrepancy. 
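The symbol-kernel correspondence above can be spot-checked. For the Cui-Freeden case, \(A_{n}^{2}=n(n+1)(2n+1)\) and \(Z(2,n)=2n+1\) give the zonal series \(\sum_{n\geq 1}P_{n}(t)/(n(n+1))\), which should reproduce the closed form \(1-2\ln(1+\sqrt{(1-t)/2})\); a truncated check (our code):

```python
import math

def legendre_series(t, nmax):
    """Partial sum of sum_{n>=1} P_n(t)/(n(n+1)) via Bonnet's recursion."""
    p_prev, p = 1.0, t        # P_0, P_1
    s = t / 2.0               # n = 1 term
    for n in range(1, nmax):
        p_prev, p = p, ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
        s += p / ((n + 1) * (n + 2))
    return s

def closed_form(t):
    return 1.0 - 2.0 * math.log(1.0 + math.sqrt((1.0 - t) / 2.0))

errs = [abs(legendre_series(t, 4000) - closed_form(t))
        for t in (-0.9, -0.3, 0.2, 0.7)]
print(max(errs))   # small; the truncation tail is O(1/nmax) at worst
```

At \(t=1\) the series sums to \(\sum_{n\geq 1}1/(n(n+1))=1\), matching \(K(1)=1\), and at \(t=-1\) both sides equal \(1-2\ln 2\).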
We say that the minimum-discrepancy kernel exhibits more global behavior. ## 4 Discrepancy Inequalities via Energy Methods In physics experiments, the principle of mutual repulsion of charges is used to investigate how to distribute \(N\) point charges over a surface \(M\), usually by minimizing the sum of all potential energies to obtain the optimal configuration of these charges. The study of the accurate distribution of the charges is the subject of classical potential theory, which shows that the energy integral can be solved or approximated amongst all Borel probability measures supported on the space. This optimal measure depends strongly on the curvature at the position on the surface and on the values of \(s\) and \(d\). **Kernels, energy and measures.** Let \(\Omega\) denote a compact and measurable subset of Euclidean space \(\mathbb{R}^{d}\) whose \(d\)-dimensional Borel measure (_charge distribution_) \(\mu\subset(\Omega,\mathbb{R}^{d})\) is finite, and, in the context of energy, let \(K\) denote a bi-Lipschitz mapping from \(\Omega\times\Omega\) to \(\mathbb{R}^{d}\). For a collection of \(N(\geq 2)\) distinct points of a configuration in \(\Omega\), let \(X_{1:N}=\{x_{1},...,x_{N}\}\); we define the energy of \(X_{1:N}\) to be \[E(X_{1:N}):=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1,j\neq i}^{N}K(x_{i},x_{j})=\frac{1}{N^{2}}\sum_{i\neq j}K(x_{i},x_{j}), \tag{41}\] and let \[\mathcal{E}(\Omega,N):=\inf\{E(X_{1:N}):X_{1:N}\subset\Omega,|X_{1:N}|=N\} \tag{42}\] be the minimal discrete \(N\)-point energy of the configuration in \(\Omega\), where \(|X_{1:N}|\) represents the cardinality of the set \(X_{1:N}\). The measure of the total charge distributed on \(\Omega\) can be expressed as \(Q(\mu):=\mu(\Omega)=\int_{\Omega}d\mu(x)\). 
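The minimal-energy problem (41)-(42) can be illustrated in its smallest nontrivial instance: four unit charges on \(\mathbb{S}^{2}\) with the Coulombic (Riesz \(s=1\)) kernel, whose minimizer is the regular tetrahedron with pairwise distance \(\sqrt{8/3}\approx 1.633\). A projected-gradient sketch (our code, not the paper's solver):

```python
import math, random

def normalize(v):
    n = math.sqrt(sum(t * t for t in v))
    return [t / n for t in v]

def minimize_coulomb(n_pts=4, steps=8000, lr=0.02, seed=3):
    """Projected gradient descent on E = sum_{i<j} 1/|x_i - x_j| over (S^2)^n:
    move each point along the net repulsive force, then project to the sphere."""
    rng = random.Random(seed)
    pts = [normalize([rng.gauss(0, 1) for _ in range(3)]) for _ in range(n_pts)]
    for _ in range(steps):
        new_pts = []
        for i in range(n_pts):
            force = [0.0, 0.0, 0.0]
            for j in range(n_pts):
                if i == j:
                    continue
                d = [a - b for a, b in zip(pts[i], pts[j])]
                r = math.sqrt(sum(t * t for t in d))
                for k in range(3):
                    force[k] += d[k] / r ** 3       # repulsion = -grad E
            new_pts.append(normalize([p + lr * f
                                      for p, f in zip(pts[i], force)]))
        pts = new_pts
    return pts

pts4 = minimize_coulomb()
dists = [math.dist(pts4[i], pts4[j]) for i in range(4) for j in range(i + 1, 4)]
print(min(dists), max(dists))   # both close to sqrt(8/3)
```

The step size and iteration count are ad hoc choices; any configuration with all six pairwise distances equal must be the regular tetrahedron, so equality of the printed values is itself a convergence check.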
For all signed Borel measures (continuous charge distributions) \(\mu\) on \(\mathbb{S}^{d}\), the energy integral \[E(\mu)=\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K(\xi\cdot\eta)d\mu(\xi)d\mu(\eta)\geq 0,\ \text{for all}\ \mu\neq 0. \tag{43}\] A measure is a countably additive, non-negative, extended real-valued function defined on a \(\sigma\)-algebra \(\mathcal{T}\) (a nonempty collection of subsets of \(X\) closed under complement, countable unions, and countable intersections). A measure \(\mu\) on a measurable space \((X,\mathcal{T})\) is a mapping \[\mu:\mathcal{T}\rightarrow[0,\infty]\] such that (1) \(\mu(\emptyset)=0\); (2) if \(\{T_{i}\in\mathcal{T}:i\in\mathbb{N}\}\) is a countable collection of pairwise disjoint sets in \(\mathcal{T}\), then \[\mu(\cup_{i=1}^{\infty}T_{i})=\sum_{i=1}^{\infty}\mu(T_{i}).\] Let \(\delta_{x}\in(X,\mathcal{T})\) represent the Dirac delta measure that associates a unit charge to the point \(x\in X\), satisfying \(\int_{X^{\prime}}d\delta_{x}(\xi)=1\) for all measurable sets \(X^{\prime}\subseteq X\) with \(x\in X^{\prime}\). For the empirical distribution of the set \(X^{\prime}\), defined as \[\mu_{X^{\prime}}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}}, \tag{44}\] we have \(\mathcal{E}(X^{\prime})=\mathcal{E}(\mu_{X^{\prime}})\). The quadratic form in (43) can be used to define an inner product for charge distributions \[\langle\mu,\rho\rangle_{(X,\mathcal{T})}=\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K(\xi\cdot\eta)d\mu(\xi)d\rho(\eta), \tag{45}\] and the energy is then the squared norm of the measure \[\mathcal{E}(\mu)=\|\mu\|_{(X,\mathcal{T})}^{2}\,. \tag{46}\] The discrepancy of the measure \(\rho\) with respect to the measure \(\mu\) is defined as in [37] as \[D(\rho;\mu):=\|\rho-\mu\|_{(X,\mathcal{T})}\,. 
\tag{47}\] Both the energy and the discrepancy depend strongly on the choice of the kernel and the charge distribution. For every signed measure \(\mu\in(X,\mathcal{T})\), the potential field induced by the charge distribution is \[f_{\mu}(x)=\int_{\Omega}K(x,y)d\mu(y).\] Let \(\mathcal{U}(K)\) represent the domain of measures of potential fields, with the inner product on \(\mathcal{U}(K)\) \[\langle f_{\mu},f_{\rho}\rangle_{\mathcal{U}(K)}=\langle\mu,\rho\rangle_{(X,\mathcal{T})}\ \ \forall f_{\mu},f_{\rho}\in\mathcal{U}(K). \tag{48}\] The energy can be rewritten with respect to the potential fields as \[E(\mu)=\int\limits_{\mathbb{S}^{d}}\int\limits_{\mathbb{S}^{d}}K(\xi\cdot\eta)d\mu(\xi)d\mu(\eta)=\int\limits_{\mathbb{S}^{d}}f_{\mu}(\eta)d\mu(\eta).\] ## 5 Experiments We approximate the target \(f(\textbf{x})\) from observations \(\textbf{y}=\left[y_{1},\cdots,y_{N}\right]^{T}\) at centers \(\textbf{x}_{1}^{\prime},\cdots,\textbf{x}_{N}^{\prime}\) by an interpolant of the form \[\hat{f}(\textbf{x})=\sum_{i=1}^{N}w_{i}K(\textbf{x},\textbf{x}_{i}^{\prime})+\sum_{j=1}^{M}b_{j}p_{j}(\textbf{x}),\] 
where the kernel \(K(\cdot)\) acts on the geodesic distance between the center \(\textbf{x}_{i}\) and the query direction **x**, and \(p_{1}(x),\cdots,p_{M}(x)\) forms a basis for the \(M=C_{s+m^{\prime}-1}^{m^{\prime}-1}\)-dimensional linear space \(\mathbb{R}_{m^{\prime}-1}^{s}\) of polynomials of total degree at most \(m^{\prime}-1\) in \(s\) variables. The coefficients \(\textbf{w}=\left[w_{1},\cdots,w_{N}\right]^{T}\) and \(\textbf{b}=\left[b_{1},\cdots,b_{M}\right]^{T}\) are solutions to the linear equations \[\left(\textbf{K}+\sigma^{2}\textbf{I}\right)\cdot\textbf{w}+\textbf{p}\cdot\textbf{b}=\textbf{y}. \tag{54}\] Enforcing the interpolation condition \(f(\textbf{x})\approx\hat{f}(\textbf{x})\) at the \(N\) nodes leads to a system of \(N\) linear equations in the \(N+M\) unknown coefficients \(w_{i}\) and \(b_{j}\), which is closed by the moment conditions \[\sum_{i=1}^{N}w_{i}p_{j}(\textbf{x}_{i})=0,\;\;j=1,\cdots,M, \tag{55}\] where **K** is the matrix with entries \(K_{ij}=K(\left\|\textbf{x}_{i}-\textbf{x}_{j}\right\|_{2})\), and \(\sigma\) is a smoothing parameter that controls how closely the approximation of the target \(f(\textbf{x})\) fits the observations **y**. If **K** is positive definite and **p** has full column rank, the solution for **w** and **b** is unique. If the chosen **K** is conditionally positive definite of order \(m^{\prime}\) and **p** has full column rank, the solution is unique provided that the degree of the monomial terms is at least \(m^{\prime}-1\)[40, 41].
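As a concrete sketch of the block system (54)-(55), the following assembles and solves the augmented symmetric system; the Gaussian kernel and the linear polynomial tail \(p(\textbf{x})=(1,x_{1},\cdots,x_{s})\) (so \(m^{\prime}=2\)) are illustrative assumptions, not the only admissible choices:

```python
import numpy as np

def fit_rbf(x, y, kernel, sigma=0.0):
    # Solve (K + sigma^2 I) w + P b = y together with P^T w = 0,
    # where P holds a constant-plus-linear polynomial tail.
    N = x.shape[0]
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    K = kernel(r) + sigma**2 * np.eye(N)
    P = np.hstack([np.ones((N, 1)), x])           # N x M with M = s + 1
    M = P.shape[1]
    A = np.block([[K, P], [P.T, np.zeros((M, M))]])
    c = np.linalg.solve(A, np.concatenate([y, np.zeros(M)]))
    return c[:N], c[N:]                           # w, b

def rbf_eval(xq, x, w, b, kernel):
    # Evaluate sum_i w_i K(||xq - x_i||) + sum_j b_j p_j(xq).
    r = np.linalg.norm(xq[:, None, :] - x[None, :, :], axis=-1)
    P = np.hstack([np.ones((xq.shape[0], 1)), xq])
    return kernel(r) @ w + P @ b
```

With \(\sigma=0\) the interpolation conditions hold exactly at the nodes, and the computed **w** satisfies the moment conditions (55) up to round-off.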
Here, our goal is to interpolate scattered observations of the Franke function with smoothed parameters for the sphere [39], defined by \[\begin{split} f(x,y,z):=&\frac{3}{4}\exp(-\frac{(9x-2)^{2}}{4}-\frac{(9y-2)^{2}}{4}-\frac{(9z-2)^{2}}{4})\\ &+\frac{3}{4}\exp(-\frac{(9x+1)^{2}}{49}-\frac{(9y+1)^{2}}{10}-\frac{(9z+1)^{2}}{10})\\ &+\frac{1}{2}\exp(-\frac{(9x-7)^{2}}{4}-\frac{(9y-3)^{2}}{4}-\frac{(9z-5)^{2}}{4})\\ &-\frac{1}{5}\exp(-\frac{(9x-4)^{2}}{4}-(9y-7)^{2}-(9z-5)^{2}),\;\;\;(x,y,z)^{T}\in\mathbb{S}^{2}.\end{split} \tag{56}\] Here, we consider Pycke's statistic \(K_{A}(\eta_{i},\eta_{j})=-\frac{1}{4\pi}\ln\frac{e}{2}(1-\eta_{i}\cdot\eta_{j})\) to interpolate the target, together with its first-order derivative \(K_{A}^{(1)}(\eta_{i},\eta_{j})\) and second-order derivative \(K_{A}^{(2)}(\eta_{i},\eta_{j})\), respectively. Inspired by [42], in order to scale the evaluations with unique points, suppose that we have already generated \(n\) points; the interpolation points are then generated sequentially by \[\eta_{n+1}=\underset{\eta\in\mathbb{S}^{2}}{\text{arg}\min}\sum_{i=1}^{n}K(\eta_{i},\eta). \tag{57}\] We choose the initial point as \[\eta_{1}=\underset{\eta\in\mathbb{S}^{2}}{\text{arg}\max}\,\phi(\eta-\eta_{i}), \tag{58}\] where \(\phi(\cdot)\) is a Gaussian density. Thus, the formulas to generate the interpolation points can be written as follows.
\[\begin{split}\eta_{n+1}&=\underset{\eta\in\mathbb{S}^{2}}{\text{arg}\min}\sum_{i=1}^{n}-\frac{1}{4\pi}\ln\frac{e}{2}(1-\eta_{i}\cdot\eta),\\ \eta_{n+1}^{(1)}&=\underset{\eta\in\mathbb{S}^{2}}{\text{arg}\min}\sum_{i=1}^{n}\frac{1}{1-\eta_{i}\cdot\eta},\\ \eta_{n+1}^{(2)}&=\underset{\eta\in\mathbb{S}^{2}}{\text{arg}\min}\sum_{i=1}^{n}\frac{1}{(1-\eta_{i}\cdot\eta)^{2}}.\end{split} \tag{59}\] We use a spherical coordinate system \((r,\theta,\varphi)\), where \(r=1\) since the points lie on the unit sphere, the polar angle \(\theta\in[0,\pi]\) is the angle with respect to the polar axis, and the azimuthal angle \(\varphi\in[0,2\pi)\) is the angle of rotation from the initial meridian plane. The Cartesian coordinates can be recovered from the spherical coordinates by \[\begin{split} x&=\sin\theta\cos\varphi,\\ y&=\sin\theta\sin\varphi,\\ z&=\cos\theta,\end{split} \tag{60}\] so the Cartesian coordinate of a point on the sphere is \(\eta=(x,y,z)\). We plot the interpolant in spherical coordinates under the three different kernel interpolations in Figure 1; the point system with the minimum discrepancy is the one generated by \(K_{A}^{(2)}(\eta_{i},\eta_{j})\). Table 1 provides the computed values of the generalized discrepancy for the different kernels from the same family. Among them, the best point system is the one from the second order of derivatives. We further estimate the kernel parameter \(\varepsilon\) with \(N=1000\) by minimizing the mean square error of a fit to the data based on an interpolant. From (53), (54) and (55), the coefficient vector \(\textbf{w}=\left[w_{1},\cdots,w_{N}\right]^{T}\) is determined by interpolating the observational data \(\textbf{y}=\left[y_{1},\cdots,y_{N}\right]^{T}\):
\[f(x_{i})=y_{i},\ i=1,\cdots,N, \tag{61}\] which is equivalent to solving the linear system \[Q\textbf{c}=\textbf{y},\ \ Q=g(K(\|x_{i}-x_{j}\|)),\] for \(\textbf{c}=\left[\textbf{w},\textbf{b}\right]^{T}\), \begin{table} \begin{tabular}{|c|c|c|c|} \hline \# of points & \(D(\{\eta_{1},\cdots,\eta_{N}\}\,;K_{A})\)[20] & \(D(\{\eta_{1},\cdots,\eta_{N}\}\,;K_{A}^{(1)})\) & \(D(\{\eta_{1},\cdots,\eta_{N}\}\,;K_{A}^{(2)})\) \\ \hline \(15\) & 0.68137655 & 0.68339536 & 0.69559213 \\ \hline \(43\) & 0.59310219 & 0.59549629 & 0.61001457 \\ \hline \(86\) & 0.54524042 & 0.54779668 & 0.56333339 \\ \hline \(151\) & 0.51181878 & 0.51446924 & 0.53060611 \\ \hline \(206\) & 0.49792697 & 0.50061168 & 0.51696945 \\ \hline \(313\) & 0.47840121 & 0.48112904 & 0.47823923 \\ \hline \(529\) & 0.45436048 & 0.45713295 & 0.45419589 \\ \hline \(719\) & 0.44430551 & 0.44709377 & 0.44413999 \\ \hline \(998\) & 0.43388233 & 0.43668512 & 0.43371597 \\ \hline \end{tabular} \end{table} Table 1: The generalized discrepancy of integrated nodes. Figure 1: Partition of unity property of the interpolant and the corresponding error distribution. Left: The discrepancy is the average error of 0.43376 for \(K_{A}(\eta_{i},\eta_{j})\) with 1000 nodes[20]. Middle: The discrepancy is the average error of 0.43659 for \(K_{A}^{(1)}(\eta_{i},\eta_{j})\) with 1000 nodes. Right: The discrepancy is the average error of 0.43361 for \(K_{A}^{(2)}(\eta_{i},\eta_{j})\) with 1000 nodes. where \(g(x)\) is a function of \(x\). Inspired by [43], let \(U^{(v)}\) be the subset obtained by removing the point \(x_{v}\) from \(U\), and let \(\textbf{y}^{(v)}=\left[y_{1}^{(v)},\cdots,y_{v-1}^{(v)},y_{v+1}^{(v)},\cdots,y_{N}^{(v)}\right]^{T}\) be the vector obtained by removing the element \(y_{v}\) from **y**.
From the perspective of the interpolant, \[f^{(v)}(x)=\sum_{j=1,j\neq v}^{N}w_{j}^{(v)}g(K(\|x_{j}-x\|)), \tag{62}\] where \(\textbf{w}^{(v)}=\left[w_{1}^{(v)},\cdots,w_{v-1}^{(v)},w_{v+1}^{(v)},\cdots,w_{N}^{(v)}\right]^{T}\) is determined by the interpolation conditions \[f^{(v)}(x_{i})=y_{i},\ i=1,\cdots,N,\ i\neq v,\] which is equivalent to solving \[Q^{(v)}\textbf{w}^{(v)}=\textbf{y}^{(v)}, \tag{63}\] where \(Q^{(v)}\) is obtained from \(Q\) by removing the \(v\)-th row and \(v\)-th column. The \(v\)-th leave-one-out error term is \[\varepsilon_{v}=y_{v}-f^{(v)}(x_{v}). \tag{64}\] As each linear system (63) is of order \((N-1)\times(N-1)\), computing all \(N\) error terms by lower-upper decomposition costs \(\mathcal{O}(N^{4})\). Fortunately, in real applications, these error components simplify to \[\varepsilon_{v}=\frac{c_{v}}{G_{vv}^{-1}}, \tag{65}\] where \(c_{v}\) is the \(v\)-th coefficient in the interpolant based on the full dataset, and \(G_{vv}^{-1}\) is the \(v\)-th diagonal element of the inverse of the corresponding interpolation matrix; since both \(c_{v}\) and \(G_{vv}^{-1}\) are available at an overall cost of \(\mathcal{O}(N^{3})\), the computational load is reduced greatly [43]. In Figure 2, the optimal \(\varepsilon=2.48\) with the minimum mean square error of \(7.55\times 10^{-6}\) is attained for \(K_{A}(\eta_{i},\eta_{j})\). The point system generated from the first-order derivative performs worse, with an MSE of \(2.88\times 10^{-5}\) at \(\varepsilon=4.17\), while the second-order derivative performs best at \(\varepsilon=2.75\) with an MSE of \(7.24\times 10^{-6}\). **Point systems for different kernels on the sphere.** Vlasiuk proposes an algorithm to generate high-dimensional points by a combination of quasi-Monte Carlo methods and weighted Riesz energy minimization embedding with a nearest-neighbor distance function [44].
For node generation on the unit sphere, we simplify the process to random sampling followed by normalization, which projects each node onto the sphere and thus ensures that the nodes are restricted to the compact set \(\mathbb{S}^{2}\). The schema can be described as follows. (a) 3D nodes are generated randomly and normalized so that they lie on the unit sphere. (b) Find the \(K^{\prime}\) nearest neighbors of each node, with distances \(r=\|x-x_{i}\|\). (c) Compute the Riesz weight for each node from the corresponding \(r\) and normalize it. (d) Sum all the weights and take the mean as the discrepancy, \(D=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}K(\|x_{i}-x_{j}\|)\). Figure 2: Partition of unity property of the interpolant, as a function of the kernel parameter \(\varepsilon\) in \(d=3\) for \(K_{A}(\eta_{i},\eta_{j})\)[20] (left), \(K_{A}^{(1)}(\eta_{i},\eta_{j})\) (middle) and \(K_{A}^{(2)}(\eta_{i},\eta_{j})\) (right). (e) Perform \(T\) iterations of partial gradient descent on the Cui-Freeden discrepancy kernel \(K=2-2\log(1+\frac{r}{2})\). Let the configuration at the \(t\)-th iteration be \(x_{i}^{(t)}\), with \(x_{i}^{(0)}=x_{i},i=1,\cdots,N\), where \(N\) denotes the number of nodes. Given a node \(x_{i}^{(t)}\) with \(K^{\prime}\) nearest neighbors \(x_{j(i,k)}^{(t)},k=1,\cdots,K^{\prime}\), the weighted vector sum is \[g_{i}^{(t)}=s\sum_{k=1}^{K^{\prime}}\frac{x_{i}^{(t)}-x_{j(i,k)}^{(t)}}{\left\|x_{i}^{(t)}-x_{j(i,k)}^{(t)}\right\|^{s+2}},1\leq i\leq N, \tag{66}\] and the neighbor indices \(j(i,k)\) are updated after every few iterations. The \((t+1)\)-th iterate can be written as \[x_{i}^{(t+1)}=x_{i}^{(t)}+\frac{\Delta(x_{i}^{(t)})}{t+C_{2}}\frac{g_{i}^{(t)}}{\left\|g_{i}^{(t)}\right\|},x_{i}\in\mathbb{S}^{2}, \tag{67}\] where \(C_{2}=19\) denotes a fixed offset that controls the step size between \(x_{i}^{(t)}\) and \(x_{i}^{(t+1)}\).
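The discrepancy of step (d) and one sweep of the update (66)-(67) can be sketched as follows. This is only an illustrative sketch, not the reference implementation of [44]; in particular, \(\Delta(x_{i}^{(t)})\) is taken here to be the nearest-neighbour distance of \(x_{i}^{(t)}\), which is an assumption:

```python
import numpy as np

def discrepancy(x):
    # Step (d): D = (1/N^2) * sum_i sum_j K(||x_i - x_j||) with the
    # Cui-Freeden kernel K(r) = 2 - 2*log(1 + r/2) from step (e).
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return np.mean(2.0 - 2.0 * np.log1p(r / 2.0))

def riesz_step(x, t, s=2.0, k=8, C2=19.0):
    # One iteration of (66)-(67): weighted repulsion from the k nearest
    # neighbours, a step of length Delta(x_i)/(t + C2) along its direction,
    # then re-projection onto the unit sphere.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]            # indices j(i, 1..k)
    g = np.zeros_like(x)
    for i in range(x.shape[0]):
        diff = x[i] - x[nbrs[i]]                   # x_i^(t) - x_{j(i,k)}^(t)
        norms = np.linalg.norm(diff, axis=1)
        g[i] = s * (diff / norms[:, None] ** (s + 2)).sum(axis=0)   # eq. (66)
    delta = d.min(axis=1)                          # nearest-neighbour distance
    step = (delta / (t + C2))[:, None]
    x_new = x + step * g / np.linalg.norm(g, axis=1, keepdims=True)  # eq. (67)
    return x_new / np.linalg.norm(x_new, axis=1, keepdims=True)
```

Iterating `riesz_step` for \(t=0,\cdots,T-1\) and monitoring `discrepancy` reproduces the schema (a)-(e) in outline.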
Figure 3 shows the node distributions obtained from the different kernels; the discrepancy is calculated as the average summation of the kernel over the point distribution on the sphere with 1000 nodes, and the point system generated from the second order of derivatives has the minimum discrepancy. Table 2 provides the computed values of the generalized discrepancy for different numbers of nodes and for the different kernels from the same family. Among them, the best point system is again the one from the second order of derivatives.

## 6 Conclusion

Generating equidistributed point sets on the sphere is of practical importance. It generally involves pseudodifferential operators and Beltrami operators to give a quantifying criterion, which is limited to kernels that admit a closed-form expression. We exploit Legendre's ODE and further explore latent point systems within error bounds. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \# of points & \(D(\left\{x_{1},\cdots,x_{N}\right\};K)\)[20] & \(D(\left\{x_{1},\cdots,x_{N}\right\};K^{(1)})\) & \(D(\left\{x_{1},\cdots,x_{N}\right\};K^{(2)})\) \\ \hline \(15\) & 0.26549542 & 0.19032185 & 0.09912521 \\ \hline \(43\) & 0.17015372 & 0.10908718 & 0.04564855 \\ \hline \(86\) & 0.13308495 & 0.08021861 & 0.02974657 \\ \hline \(151\) & 0.1125596 & 0.06505319 & 0.02221502 \\ \hline \(206\) & 0.1074984 & 0.06141475 & 0.02050324 \\ \hline \(313\) & 0.0989759 & 0.05538551 & 0.01775417 \\ \hline \(529\) & 0.0882991 & 0.04801519 & 0.01455116 \\ \hline \(719\) & 0.0864768 & 0.04677885 & 0.01403181 \\ \hline \(998\) & 0.0844954 & 0.04544187 & 0.01347624 \\ \hline \end{tabular} \end{table} Table 2: The generalized discrepancy of discretized nodes. Figure 3: The node rendering distribution generated from different kernels on the sphere. Left: The discrepancy is 0.092856 for \(K\)[20]. Middle: The discrepancy is 0.051135 for \(K^{(1)}\). Right: The discrepancy is 0.015885 for \(K^{(2)}\).
We combine the continuity and differentiability properties of the kernel with Legendre's ODE and spherical harmonic theory to obtain a new criterion for equidistributed point sets under which the discrepancy becomes smaller, and we propose a generalized minimum discrepancy. Our kernel-derivative model can explore latent point systems attaining the minimum discrepancy in an operator-free manner, which has been verified by several quantitative tests in our experiments.

## Acknowledgments

This work was supported in part by the BRBytes project.
2309.05009
Dual of the Hopf Algebra Consisting of the Adjacency Matrices
In this article we discuss the Hopf algebras spanned by the adjacency matrices in detail. We show that there are two Hopf algebraic structures concerning the adjacency matrices: one is a copy of the Connes-Kreimer Hopf algebra, the other is a copy of the dual of the Connes-Kreimer Hopf algebra.
Zhou Mai
2023-09-10T12:00:53Z
http://arxiv.org/abs/2309.05009v1
# Dual of the Hopf Algebra Consisting of the Adjacency Matrices ###### Abstract In this article we discuss the Hopf algebras spanned by the adjacency matrices in detail. We show that there are two Hopf algebraic structures concerning the adjacency matrices: one is a copy of the Connes-Kreimer Hopf algebra, the other is a copy of the dual of the Connes-Kreimer Hopf algebra. ###### Contents * 1 Introduction * 2 Hopf algebras of adjacency matrices * 2.1 The basic notations and the connectivity about the adjacency matrices * 2.2 Quotient * 2.3 The coproduct * 3 Insertion of the adjacency matrices * 4 The algebraic structure of \(\mathcal{H}^{*}_{adj}\) * 4.1 Basic notations and the primitive elements * 4.2 The product on \(\mathcal{H}^{*}_{adj}\)

## 1 Introduction

It is well known that the adjacency matrices indicate the multigraphs (see [1]), which can be regarded as Feynman diagrams without external lines. To indicate the general Feynman diagrams with external lines, we introduce the notion of the extended adjacency matrices. In the present article we discuss the Hopf algebras over \(\mathbb{C}\) spanned by the set of all adjacency matrices. More precisely, the vector spaces under consideration, denoted by \(\mathcal{H}_{adj}\) (or \(\mathcal{H}_{adj(e)}\) in the situation of the extended adjacency matrices), are the ones spanned by the equivalent classes of the adjacency matrices. The equivalent relations are the usual and natural ones ([1]) describing the isomorphism classes of the graphs (or of Feynman diagrams). Due to the correspondence between the adjacency matrices and Feynman diagrams ([1, 5]), the vector spaces in our setting are another version of the ones in Connes-Kreimer theory ([2, 3, 4]). We prove that there are two Hopf algebraic structures on \({\cal H}_{adj}\) (or on \({\cal H}_{adj(e)}\)).
The first Hopf algebra, denoted by \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\) (or \(({\cal H}_{adj(e)},\oplus,u,\triangle,\eta,S)\)), is a copy of the Connes-Kreimer Hopf algebra ([2, 3, 4]). The commutative multiplication \(\oplus\) is reduced from the direct sum of the matrices, corresponding to the disjoint union of the graphs. The coproduct \(\triangle\) is defined in terms of the quotient, which is a copy of the quotient of Feynman diagrams. \(u\) and \(\eta\) are the unit and the co-unit respectively. \(S\) is the antipode. In this article we focus on the second Hopf algebra, denoted by \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\) (or \(({\cal H}_{adj(e)},\bullet,u,\triangle_{1},\eta,S_{1})\)), which is isomorphic to the dual Hopf algebra of \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\). The multiplication \(\bullet\) in \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\) is defined with the help of the notion of the insertion, which is a copy of the insertion of Feynman diagrams ([2, 3, 4]). We detail the multiplication \(\bullet\) and the coproduct \(\triangle_{1}\). Moreover, the structure of \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\) is described in an explicit way. The unit \(u\) and the co-unit \(\eta\) of \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\) are the same as those of \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\). Because \(\oplus\) is commutative, \(\triangle_{1}\) is co-commutative. Both \(\triangle\) and \(\triangle_{1}\) are conilpotent, therefore the antipodes \(S\) and \(S_{1}\) can be given by the standard formula concerning the products and reduced coproducts ([6]). The present paper is organized as follows. In Section 2 we discuss the Hopf algebra consisting of the adjacency matrices, which is a different version of the Connes-Kreimer Hopf algebra by means of the matrix.
At the beginning of this section we discuss some basic subjects concerning the adjacency matrices (or the extended adjacency matrices), for example, the equivalent relation, the direct sum and the connectivity. Then we discuss the quotient of the adjacency matrices, which is parallel to the quotient of Feynman diagrams in Connes-Kreimer theory. In addition, based on the notion of the quotient, we can define the coproduct on \({\cal H}_{adj}\), or on \({\cal H}_{adj(e)}\), such that they become the Hopf algebras \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\) or \(({\cal H}_{adj(e)},\oplus,u,\triangle,\eta,S)\). In Section 3 we consider the insertion of the adjacency matrices, or of the extended adjacency matrices, which can be regarded as the translation of the insertion of Feynman diagrams into the language of matrices. The properties of the insertion are discussed in detail. In Section 4 we turn to the dual of \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\) (or \(({\cal H}_{adj(e)},\oplus,u,\triangle,\eta,S)\)). We prove that the dual of \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\) can be realized on \({\cal H}_{adj}\), i.e. there is a Hopf algebra \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\) isomorphic to the dual of \(({\cal H}_{adj},\oplus,u,\triangle,\eta,S)\). The product \(\bullet\) and the coproduct \(\triangle_{1}\) are described in detail. Moreover we have \({\cal H}_{adj}=U({\bf P}({\cal H}_{adj}))\), where \({\bf P}({\cal H}_{adj})\) is the Lie algebra consisting of the primitive elements of \(({\cal H}_{adj},\bullet,u,\triangle_{1},\eta,S_{1})\). The situation of the extended adjacency matrices is similar.

## 2 Hopf algebras of adjacency matrices

In this section we will discuss the Hopf algebra consisting of the adjacency matrices. For simplicity we focus on the adjacency matrices with zero diagonal. The general situation is similar.
To indicate Feynman diagrams with external lines, we introduce the notion of the extended adjacency matrices, which are adjacency matrices divided into an internal part and an external part. Actually, a more general situation, the complex matrices with zero diagonal, was discussed in [7].

### 2.1 The basic notations and the connectivity about the adjacency matrices

At the beginning of this subsection we introduce some notations. In this article we set \([m]=\{1,\cdots,m\}\) for a positive integer \(m\). For a finite set \(I\), we let \(|I|\) denote the number of the elements in \(I\), and \(\mathbf{Part}(I)\) denotes the set of all partitions of \(I\), i.e. \[\mathbf{Part}(I)=\{\{I_{i}\}_{i=1}^{k}|I_{i}\subset I,I=\bigcup_{i=1}^{k}I_{i},I_{i}\cap I_{i^{\prime}}=\emptyset,i\neq i^{\prime},1\leq i,i^{\prime}\leq k, k\leq|I|\}.\] The symbol \(\mathbf{part}(I)\) denotes the set of all sequences of disjoint subsets of \(I\), i.e. \[\mathbf{part}(I)=\{\{I_{i}\}_{i=1}^{k}|\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}(\bigcup_{i=1}^{k}I_{i}),\bigcup_{i=1}^{k}I_{i}\subset I\}.\] For two sequences of disjoint subsets \(\{I_{i}\},\{J_{j}\}\in\mathbf{part}(I)\), we say \(\{I_{i}\}\subset\{J_{j}\}\) if for each \(I_{i}\) there is a \(J_{j}\) such that \(I_{i}\subset J_{j}\). We now turn to the discussion of the adjacency matrices. **Definition 2.1**.: * _An adjacency matrix_ \(M=(m_{ij})_{m\times m}\) _is a symmetric matrix with non-negative integer entries and zeros along the main diagonal. We call_ \(\sum_{i<j}m_{ij}\) _the degree of_ \(M\)_, denoted by_ \(\mathbf{deg}M\)_. The set of adjacency matrices of_ \(m\times m\) _is denoted by_ \(M_{adj}(m,\mathbb{N})\)_._ * _Let_ \(M\in M_{adj}(m,\mathbb{N})\) _be an adjacency matrix,_ \(a=(a_{1},\cdots,a_{m})\in\mathbb{N}^{m}\) _be a multiple index.
Then, an extended adjacency matrix_ \((M,a)\) _is defined to be an adjacency matrix of order_ \(m+1\) _with the following form,_ \[(M,a)=\begin{pmatrix}M&a^{T}\\ a&0\end{pmatrix},\] (2.1) _where_ \(M\) _is called the internal part of_ \((M,a)\)_, and_ \(a\) _is called the external part of_ \((M,a)\)_. The degree of an extended adjacency matrix_ \((M,a)\) _is the same as that of its internal part, i.e._ \(\mathbf{deg}(M,a)=\mathbf{deg}M\)_. The set of all extended adjacency matrices of order_ \(m+1\) _is denoted by_ \(M_{adj}(m+1,\mathbb{N})_{(e)}\)_._ **Remark 2.1**.: _An adjacency matrix \(M\in M_{adj}(m,\mathbb{N})\) indicates a Feynman diagram without external lines and loops, or a graph without loops. For an extended adjacency matrix \((M,b)\), \(b\) indicates \(|b|=b_{1}+\cdots+b_{m}\) external lines, where the \(i\)th vertex of the Feynman diagram is assigned \(b_{i}\) external lines (\(i=1,\cdots,m\))._ **Proposition 2.1**.: _Under the addition of the matrices, \(M_{adj}(m,\mathbb{N})\) is a monoid with generators \(\{M(i,j)\}\), where \(M(i,j)=(m_{kl})_{m\times m}\) satisfies \(m_{kl}=m_{lk}=\delta_{ik}\delta_{jl}\), \(i\leq j,\,k\leq l\)._ Recall that every row and every column of a permutation matrix contains exactly one nonzero entry, which is \(1\). Now we define an equivalent relation on \(M_{adj}(m,\mathbb{N})\) as follows. Let \(M_{1},M_{2}\in M_{adj}(m,\mathbb{N})\), then \[M_{1}\sim M_{2}\iff M_{1}=PM_{2}P^{T}, \tag{2.2}\] where \(P\) is a permutation matrix. The equivalent relation mentioned above can be described in a different way. Let \(M=(m_{ij})_{m\times m}\in M_{adj}(m,\mathbb{N})\), \(\pi\in\mathbf{S}_{m}\) be a permutation \(\pi:\{1,\cdots,m\}\rightarrow\{1,\cdots,m\}\), \[\pi=\begin{pmatrix}1&2&\cdots&m\\ \pi(1)&\pi(2)&\cdots&\pi(m)\end{pmatrix}.\] Then, the action of \(\pi\) on \(M\) is defined to be the adjacency matrix \(\pi(M)=(m^{\prime}_{ij})_{m\times m}\) satisfying \(m^{\prime}_{ij}=m_{\pi(i)\pi(j)}\).
Let \(M_{1},M_{2}\in M_{adj}(m,\mathbb{N})\), then \(M_{1}\sim M_{2}\) if and only if there is a \(\pi\in\mathbf{S}_{m}\) such that \(M_{1}=\pi(M_{2})\). Thus, the equivalent classes under the above equivalent relation are the orbits of the permutation group \(\mathbf{S}_{m}\) acting on \(M_{adj}(m,\mathbb{N})\). Let \(M\in M_{adj}(m,\mathbb{N})\); we denote the equivalent class of \(M\), or the orbit of \(M\), by \(\{M\}\); then \(\{M\}=\{\pi(M)|\pi\in\mathbf{S}_{m}\}\). The set of equivalent classes is denoted by \(M_{adj}(m,\mathbb{N})\diagup\sim\). It is obvious that \(\mathbf{deg}M=\mathbf{deg}(PMP^{T})\), where \(P\) is a permutation matrix. Thus we define \(\mathbf{deg}\{M\}=\mathbf{deg}M\). We will mainly focus on the equivalent classes, or orbits, below. The equivalent relation concerning the adjacency matrices can be generalized to the situation of the extended adjacency matrices. Let \[(M_{i},b_{i})=\begin{pmatrix}M_{i}&b_{i}^{T}\\ b_{i}&0\end{pmatrix}\in M_{adj}(m+1,\mathbb{N})_{(e)}\] be two extended adjacency matrices of order \(m+1\) (\(i=1,2\)); we say \((M_{1},b_{1})\) is equivalent to \((M_{2},b_{2})\) if and only if there is a permutation matrix \(P\) of order \(m\) such that \[\begin{pmatrix}M_{1}&b_{1}^{T}\\ b_{1}&0\end{pmatrix}=\begin{pmatrix}P&0\\ 0&1\end{pmatrix}\begin{pmatrix}M_{2}&b_{2}^{T}\\ b_{2}&0\end{pmatrix}\begin{pmatrix}P^{T}&0\\ 0&1\end{pmatrix}.\] Let \((M,b)\in M_{adj}(m+1,\mathbb{N})_{(e)}\), \(\pi\in\mathbb{S}_{m}\); we define \(\pi((M,b))=(\pi(M),\pi(b))\), where \(\pi(b)=(b_{\pi(1)},\cdots,b_{\pi(m)})\). Similar to the previous situation, we consider the equivalent class \[\{(M,b)\}=\{\pi((M,b))|\pi\in\mathbb{S}_{m}\}.\] Thus, each equivalent class is an orbit of the action of \(\mathbb{S}_{m}\). Let \[M_{adj}(+\infty,\mathbb{N})=(\bigcup_{m\geq 2}(M_{adj}(m,\mathbb{N})\diagup\sim)\setminus\{0\})\cup\{0\}. \tag{2.3}\] In \(M_{adj}(+\infty,\mathbb{N})\), we do not distinguish the zero matrices with different order.
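The action \(\pi(M)\) and the equivalence \(M_{1}\sim M_{2}\) can be checked mechanically for small \(m\). The following brute-force sketch (0-based indices, purely illustrative and not part of the theory) searches the whole orbit:

```python
import numpy as np
from itertools import permutations

def act(pi, M):
    # Action of a permutation pi on an adjacency matrix:
    # (pi(M))_{ij} = m_{pi(i) pi(j)}, i.e. a simultaneous row/column permutation,
    # which is exactly the conjugation P M P^T of (2.2).
    pi = np.asarray(pi)
    return M[np.ix_(pi, pi)]

def equivalent(M1, M2):
    # Brute-force test of M1 ~ M2 (same S_m-orbit); feasible only for small m.
    m = M1.shape[0]
    if M2.shape[0] != m:
        return False
    return any(np.array_equal(M1, act(p, M2)) for p in permutations(range(m)))
```

For example, two labelings of the path on three vertices (center vertex labeled second versus third) lie in the same orbit, while the triangle does not.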
Actually, from the viewpoint of graph theory, the zero matrix corresponds to the empty set. Let \(M_{i}\in M_{adj}(m_{i},\mathbb{N})\) (\(i=1,2\)); then the direct sum \(M_{1}\oplus M_{2}\in M_{adj}(m_{1}+m_{2},\mathbb{N})\). Actually, the direct sum \(M_{1}\oplus M_{2}\) can be realized by a block diagonal matrix \[M_{1}\oplus M_{2}=\mathbf{diag}(M_{1},M_{2})=\begin{pmatrix}M_{1}&0\\ 0&M_{2}\end{pmatrix}.\] The direct sum mentioned above can be extended to \(M_{adj}(+\infty,\mathbb{N})\). Let \(M_{1}\in M_{adj}(m_{1},\mathbb{N})\), \(M_{2}\in M_{adj}(m_{2},\mathbb{N})\). It is obvious that \[\mathbf{diag}(M_{1},M_{2})\sim\mathbf{diag}(M_{2},M_{1}).\] Furthermore, we have \[\{\mathbf{diag}(\pi_{1}(M_{1}),\pi_{2}(M_{2}))|\pi_{i}\in\mathbb{S}_{m_{i}},i=1,2\}\] \[\subset\{\pi(\mathbf{diag}(M_{1},M_{2}))|\pi\in\mathbb{S}_{m_{1}+m_{2}}\}.\] Therefore, we can define \[\{M_{1}\}\oplus\{M_{2}\}=\{M_{1}\oplus M_{2}\}. \tag{2.4}\] Based on the previous discussion, we have \[\{M_{1}\}\oplus\{M_{2}\}=\{M_{2}\}\oplus\{M_{1}\}.\] Moreover, it is easy to check that for \(M_{i}\in M_{adj}(m_{i},\mathbb{N})\), \(i=1,2,3\), we have \[(\{M_{1}\}\oplus\{M_{2}\})\oplus\{M_{3}\}=\{M_{1}\}\oplus(\{M_{2}\}\oplus\{M_{3}\})=\{\mathbf{diag}(M_{1},M_{2},M_{3})\}.\] Thus the direct sum (2.4) is associative and commutative. On the other hand, it is obvious that \[\mathbf{deg}(\{M_{1}\}\oplus\{M_{2}\})=\mathbf{deg}\{M_{1}\}+\mathbf{deg}\{M_{2}\}.\] Similarly, in the situation of the extended adjacency matrices, we take \[M_{adj}(+\infty,\mathbb{N})_{(e)}=(\bigcup_{m\geq 2}(M_{adj}(m,\mathbb{N})_{(e)}\diagup\sim)\setminus\{0\})\cup\{0\}. \tag{2.5}\] Let \(a=(a_{1},\cdots,a_{m})\in\mathbb{N}^{m}\), \(b=(b_{1},\cdots,b_{n})\in\mathbb{N}^{n}\) be two multiple indices; we define the direct sum of \(a\) and \(b\), denoted by \(a\boxplus b\), to be the multiple index in \(\mathbb{N}^{m+n}\) \[a\boxplus b=(a_{1},\cdots,a_{m},b_{1},\cdots,b_{n})\in\mathbb{N}^{m+n}.
\tag{2.6}\] In particular, for \(k,l\in\mathbb{N}\) we define \(k\boxplus a=(k,a_{1},\cdots,a_{m})\) and \(k\boxplus l=(k,l)\). For two extended adjacency matrices \((M_{i},b_{i})\) (\(M_{i}\in M_{adj}(m_{i},\mathbb{N}),\,b_{i}\in\mathbb{N}^{m_{i}}\)), we define their direct sum in the following way: \[(M_{1},b_{1})\oplus(M_{2},b_{2})=(M_{1}\oplus M_{2},b_{1}\boxplus b_{2}). \tag{2.7}\] \((M_{1},b_{1})\oplus(M_{2},b_{2})\) is also expressed by a block matrix as follows, \[(M_{1},b_{1})\oplus(M_{2},b_{2})=\begin{pmatrix}M_{1}&0&b_{1}^{T}\\ 0&M_{2}&b_{2}^{T}\\ b_{1}&b_{2}&0\end{pmatrix}.\] It is obvious that \[\{\pi_{1}((M_{1},b_{1}))\oplus\pi_{2}((M_{2},b_{2}))|\pi_{i}\in\mathbb{S}_{m_{i}},i=1,2\}\subset\{(M_{1},b_{1})\oplus((M_{2},b_{2}))\},\] thus, we do not need to distinguish \(\{(M_{1},b_{1})\}\oplus\{(M_{2},b_{2})\}\) and \(\{(M_{1},b_{1})\oplus(M_{2},b_{2})\}\). In other words, we have \[\{(M_{1},b_{1})\}\oplus\{(M_{2},b_{2})\}=\{(M_{1},b_{1})\oplus(M_{2},b_{2})\}.\] In the situation of the equivalent classes, the direct sum is commutative, i.e. we have \[\{(M_{1},b_{1})\}\oplus\{(M_{2},b_{2})\}=\{(M_{2},b_{2})\}\oplus\{(M_{1},b_{1})\}.\] **Definition 2.2**.: _Let \(\{M\}\in M_{adj}(m,\mathbb{N})\diagup\sim\)._ * _When_ \(m\geq 4\)_, if there are_ \(M_{1}\in M_{adj}(k,\mathbb{N})\)_,_ \(M_{2}\in M_{adj}(m-k,\mathbb{N})\)_, such that_ \[\{M\}=\{M_{1}\}\oplus\{M_{2}\},\] _where_ \(M_{1},M_{2}\neq 0\)_,_ \(k\geq 2\)_,_ \(m-k\geq 2\)_, we say_ \(\{M\}\) _is disconnected. Otherwise, we say_ \(\{M\}\) _is connected._ * _When_ \(2\leq m\leq 3\)_, if on each row (column) of_ \(M\) _there is a non-zero entry, we say_ \(\{M\}\) _is connected._ * _An adjacency matrix_ \(M\in M_{adj}(m,\mathbb{N})\) _is called a connected one if_ \(\{M\}\) _is connected._ **Remark 2.2**.: * _It is well known that the adjacency matrices arise from graph theory to characterize the graphs.
In other words, the adjacency matrices can be regarded as "coordinates" of the graphs, and the graphs indicate the geometric meaning of the adjacency matrices. The condition of zero diagonal indicates the graphs without loops. The connectivity of the adjacency matrices defined in Definition_ 2.2 _is equivalent to the connectivity of the graphs._ * _Let_ \(M\in M_{adj}(2,\mathbb{N})\)_, then_ \[M\ is\ connected\Leftrightarrow M\neq 0.\] * _We say an extended adjacency matrix_ \((M,a)\) _is connected if_ \(M\) _is connected._ * _We define the zero matrix to be connected._ **Proposition 2.2**.: _Let \(\{M\}\in M_{adj}(m,\mathbb{N})\diagup\sim\) be disconnected; then \(\{M\}\) admits a decomposition as follows_ \[\{M\}=\{M_{1}\}\oplus\cdots\oplus\{M_{k}\}, \tag{2.8}\] _where each \(\{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\diagup\sim\) is connected (\(i=1,\cdots,k,\ m_{1}+\cdots+m_{k}=m\))._ **Corollary 2.1**.: _Let \(M\in M_{adj}(m,\mathbb{N})\); then \(M\) is disconnected if and only if there is a partition \(\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}([m])\) (\(k\geq 2\)) such that each \(M_{I_{i}}\) is connected (\(i=1,\cdots,k\)) and \(M\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{k}}\)._ **Corollary 2.2**.: _Under the direct sum (2.4), \(M_{adj}(+\infty,\mathbb{N})\) is a commutative monoid generated by all connected classes._ **Corollary 2.3**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(I,J\subset[m]\) be two subsets satisfying:_ * \(J\subset I\)_,_ * \(M_{I}\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{k}}\)_, where_ \(\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}(I)\)_, each_ \(M_{I_{i}}\) _is connected (_\(i=1,\cdots,k\)_),_ * \(M_{J}\sim M_{J_{1}}\oplus\cdots\oplus M_{J_{l}}\)_, where_ \(\{J_{j}\}_{j=1}^{l}\in\mathbf{Part}(J)\)_, each_ \(M_{J_{j}}\) _is connected (_\(j=1,\cdots,l\)_)._ _Then, for each \(J_{j}\), there is an \(I_{i}\) such that \(J_{j}\subset I_{i}\) (\(1\leq i\leq k,\)\(1\leq j\leq l\))._ All discussions about connectedness can be generalized to the situation of the extended
adjacency matrices.

### 2.2 Quotient

Let \(M\in M_{adj}(m,\mathbb{N})\), \(I=\{i_{1},\cdots,i_{k}\}\subset[m]\) (\(k\geq 2,\,0<i_{1}<\cdots<i_{k}\)); then \(I\) determines a diagonal submatrix \(M_{I}=(m_{i_{a},i_{b}})_{k\times k}\) of \(M\). In fact the subset \(I\) determines a homomorphism of monoids \[\mathfrak{R}_{I}:M_{adj}(m,\mathbb{N})\longrightarrow M_{adj}(k,\mathbb{N}),\,\mathfrak{R}_{I}:M\mapsto M_{I}.\] Conversely, for the given subset \(I\subset[m]\) as above, we can define an embedding \(\iota_{I}:M_{adj}(k,\mathbb{N})\hookrightarrow M_{adj}(m,\mathbb{N})\) in the following way. Let \(N=(n_{ij})_{k\times k}\in M_{adj}(k,\mathbb{N})\); then \(\iota_{I}N\in M_{adj}(m,\mathbb{N})\) is the matrix \(\iota_{I}N=(m^{\prime}_{ij})_{m\times m}\) satisfying \(m^{\prime}_{i_{i}i_{j}}=n_{ij}\,(i,j=1,\cdots,k)\), \(m^{\prime}_{pq}=0\) (\(p\in I^{c}\) or \(q\in I^{c}\), \(I^{c}=[m]\setminus I\)). It is obvious that \((\iota_{I}M_{I})_{I}=M_{I}\). For another subset \(J\subset[m]\), if \(J\subset I\), then \(M_{J}=(M_{I})_{J}\). We now define the quotient of \(M\) by \(M_{I}\) as follows. **Definition 2.3**.: _Let \(m\geq 2\) be an integer, \(I=\{i_{1},\cdots,i_{k}\}\subset[m]\), \(I^{c}=[m]\setminus I=\{j_{1},\cdots,j_{m-k}\}\) (\(m\geq k\geq 2,\,0<i_{1}<\cdots<i_{k},\,j_{1}<\cdots<j_{m-k}\))._ * _The quotient is a map_ \[\mathcal{Q}_{m,I}:M_{adj}(m,\mathbb{N})\longrightarrow M_{adj}(m-k+1,\mathbb{N}).\] _For_ \(M=(m_{ij})_{m\times m}\in M_{adj}(m,\mathbb{N})\)_,_ \(\mathcal{Q}_{m,I}(M)\) _is called the quotient of_ \(M\) _by_ \(M_{I}\)_, defined by the following expression:_ \[\mathcal{Q}_{m,I}(M)=\left(\begin{array}{ccc}0&m_{1^{*},j_{1}}\cdots m_{1^{*},j_{m-k}}\\ m_{j_{1},1^{*}}&\\ \vdots&M_{I^{c}}\\ m_{j_{m-k},1^{*}}&\end{array}\right),\] (2.9) _where_ \(m_{j_{b},1^{*}}=m_{1^{*},j_{b}}=\sum_{a=1}^{k}m_{i_{a},j_{b}}\) _(_\(b=1,\cdots,m-k\)_). The matrix (_2.9_) is also denoted by_ \(M\diagdown M_{I}\)_.
We define_ \(M\diagdown M=0\)_,_ \(M\diagdown 0=M\)_._
* _Let_ \((M,b)\in M_{adj}(m+1,\mathbb{N})_{(e)}\)_, the quotient of_ \((M,b)\) _by_ \(M_{I}\) _is defined to be an extended adjacency matrix in_ \(M_{adj}(m-k+2,\mathbb{N})_{(e)}\)_, denoted by_ \((M,b)\diagdown M_{I}\)_, with the following form,_ \[(M,b)\diagdown M_{I}=(M\diagdown M_{I},b_{*}\boxplus b_{I^{c}}),\] (2.10) _where_ \(b_{*}=\sum_{i\in I}b_{i}\)_,_ \(b_{I^{c}}=(b_{j_{1}},\cdots,b_{j_{m-k}})\)_._

There is a basic property as follows.

**Lemma 2.1**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(I\subset K\subset[m]\). Then we have_ \[M_{K}\diagdown M_{I}=(M\diagdown M_{I})_{J},\] _where \(J=(K\setminus I)\cup\{1^{*}\}\)._

Proof.: Without loss of generality, we assume \(I=[n]\), \(K=[n+r]\)\((n+r<m)\). Thus we have \(J=\{1^{*}\}\cup\{n+1,\cdots,n+r\}\). Let \[M\diagdown M_{I}=\left(\begin{array}{cccc}m_{1^{*}1^{*}}&m_{1^{*}2}\cdots m_{1^ {*}r+1}&m_{1^{*}r+2}\cdots m_{1^{*}m-n+1}\\ m_{21^{*}}&&&\\ \vdots&M_{K\setminus I}&M_{12}^{T}\\ m_{r+11^{*}}&&&\\ m_{r+21^{*}}&&&\\ \vdots&M_{12}&M_{K^{c}}&\\ m_{m-n+11^{*}}&&&\end{array}\right),\] then \[(M\diagdown M_{I})_{J}=\left(\begin{array}{cccc}m_{1^{*}1^{*}}&m_{1^{*}2} \cdots m_{1^{*}r+1}\\ m_{21^{*}}&&&\\ \vdots&M_{K\setminus I}&\\ m_{r+11^{*}}&&&\end{array}\right).\] By a straightforward calculation we can get the conclusion of the lemma. 

**Remark 2.3**.:
* _In the present article, we focus on the adjacency matrices with zero diagonal, which correspond to the graphs without loops. In the general situation, the entries on the diagonal may be non-zero. The quotient in definition 2.3 can be generalized to the situation of the adjacency matrices with non-zero diagonal. For instance, from the geometric viewpoint, we consider the quotient of a Feynman diagram by a subdiagram.
Recall that a subdiagram of a Feynman diagram is determined by (i.e. spanned by) a subset of the internal lines, while a subgraph is spanned by a subset of the vertices; therefore, the adjacency matrix characterizing this quotient should be of the form_ \[\mathbf{diag}(m_{1^{*}1^{*}},0,\cdots,0)+M\diagdown M_{I},\] _where_ \(M_{I}\) _indicates the subgraph with the same vertices as the subdiagram mentioned above,_ \(m_{1^{*}1^{*}}\) _indicates the number of loops arising from the procedure of the quotient (_\(0\leq m_{1^{*}1^{*}}<\mathbf{deg}M_{I}\)_). The above adjacency matrix shows that when we discuss the quotient, it is enough for us to consider the situation of the graphs without loops. The situation of the extended adjacency matrices is similar._
* _Let_ \(M\in M_{adj}(m,\mathbb{N})\) _(or_ \((M,a)\in M_{adj}(m,\mathbb{N})_{(e)}\)_),_ \(\{I_{i}\}_{i=1}^{k}\) _be a sequence consisting of disjoint subsets of_ \([m]\)_, i.e._ \(I_{i}\cap I_{i^{\prime}}=\emptyset\) _(_\(i\neq i^{\prime},\,1\leq i,i^{\prime}\leq k\)_). We can take the quotient repeatedly as follows,_ \[(\cdots((M\diagdown M_{I_{1}})\diagdown M_{I_{2}})\cdots)\diagdown M_{I_{k}},\] _or_ \[(\cdots(((M,a)\diagdown M_{I_{1}})\diagdown M_{I_{2}})\cdots)\diagdown M_{I_{k}}.\] _We denote the above quotient by \(M\diagdown(M_{I_{i}})\) (resp. \((M,a)\diagdown(M_{I_{i}})\)) for short. If \(|I_{1}|+\cdots+|I_{k}|=n\), then \(M\diagdown(M_{I_{i}})\in M_{adj}(m-n+k,\mathbb{N})\).
Precisely, we have_ \[(\cdots((M\diagdown M_{I_{1}})\diagdown M_{I_{2}})\cdots)\diagdown M_{I_{k}}=\left(\begin{array}{ccccccc}0&m_{1^{*}2^{*}}&\cdots&m_{1^{*}k^{*}}&m_{1^{*}\,k+1}&\cdots&m_{1^{*}\,m-n+k}\\ m_{2^{*}1^{*}}&0&\cdots&m_{2^{*}k^{*}}&m_{2^{*}\,k+1}&\cdots&m_{2^{*}\,m-n+k}\\ \vdots&\vdots&\ddots&\vdots&\vdots&&\vdots\\ m_{k^{*}1^{*}}&m_{k^{*}2^{*}}&\cdots&0&m_{k^{*}\,k+1}&\cdots&m_{k^{*}\,m-n+k}\\ m_{k+1\,1^{*}}&m_{k+1\,2^{*}}&\cdots&m_{k+1\,k^{*}}&&&\\ \vdots&\vdots&&\vdots&&M_{I^{c}}&\\ m_{m-n+k\,1^{*}}&m_{m-n+k\,2^{*}}&\cdots&m_{m-n+k\,k^{*}}&&&\end{array}\right),\] _where \(I=\bigcup_{i=1}^{k}I_{i}\), and the \(1^{*}\)th\(,\cdots,k^{*}\)th rows (or columns) of \(M\diagdown(M_{I_{i}})\) consist of the new entries arising from the quotient. It is obvious that we have \(M\diagdown(M_{I_{i}})\sim M\diagdown(M_{I_{\sigma(i)}})\) for each \(\sigma\in\mathbf{S}_{k}\). The situation of the extended adjacency matrices is similar._

The notion of the quotient can be extended to the situation of equivalence classes. Actually, we have the following proposition.

**Proposition 2.3**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(I\subset[m]\). Then, each \(\pi\in\mathbf{S}_{m}\) induces a permutation \(\pi_{I}\in\mathbf{S}_{|I|}\) such that_ \[\pi_{I}(M_{I})=\pi(M)_{\pi^{-1}(I)},\] _and_ \[M\diagdown M_{I}\sim\pi(M)\diagdown\pi_{I}(M_{I}). \tag{2.11}\]

Proof.: Let \(|I|=k\), \(I=\{i_{1},\cdots,i_{k}\}\subset[m]\), \(I^{c}=[m]\setminus I=\{j_{1},\cdots,j_{m-k}\}\) (\(m\geq k\geq 2,\,0<i_{1}<\cdots<i_{k}\), \(0<j_{1}<\cdots<j_{m-k}\)), \(\pi\in\mathbf{S}_{m}\). Then \(\pi\) induces a permutation \(\pi_{I}\in\mathbf{S}_{k}\) acting on \(I\). Actually, let \(\pi^{-1}(I)=\{l_{1},\cdots,l_{k}\}\), where \(\pi^{-1}(i_{a})=l_{a}\) (\(a=1,\cdots,k\)), then \(\pi(M)_{\pi^{-1}(I)}=(m_{\pi(l_{\alpha_{a}})\pi(l_{\alpha_{b}})})_{k\times k}\), where \(0<l_{\alpha_{1}}<\cdots<l_{\alpha_{k}}\).
Thus, we get a permutation \(\pi_{I}\in\mathbf{S}_{k}\), \[\pi_{I}=\begin{pmatrix}1&\cdots&k\\ \alpha_{1}&\cdots&\alpha_{k}\end{pmatrix}.\] Let \(\pi_{I}\) act on \(I\) in such a way that \(\pi_{I}(i_{a})=i_{\pi_{I}(a)}=i_{\alpha_{a}}\) (\(a=1,\cdots,k\)); then it is obvious that we have \[\pi(M)_{\pi^{-1}(I)}=\pi_{I}(M_{I}).\] Similarly, \(\pi\) induces a permutation \(\pi_{I^{c}}\in\mathbf{S}_{m-k}\) such that \[\pi_{I^{c}}(M_{I^{c}})=\pi(M)_{\pi^{-1}(I^{c})}.\] By a straightforward calculation, we can get \[\pi(M)\diagdown\pi_{I}(M_{I})=\left(\begin{array}{ccc}0&m_{1^{*},j_{\tau(1)}}, \cdots,m_{1^{*},j_{\tau(m-k)}}\\ m_{j_{\tau(1)},1^{*}}&\\ \vdots&\tau(M_{I^{c}})\\ m_{j_{\tau(m-k)},1^{*}}&\end{array}\right),\] where \(\tau=\pi_{I^{c}}\). Let \(P_{\tau}\) be a permutation matrix of order \(m-k\) corresponding to the permutation \(\tau\in\mathbf{S}_{m-k}\); it is obvious that \[\pi(M)\diagdown\pi_{I}(M_{I})=\mathbf{diag}(1,P_{\tau})(M\diagdown M_{I})\mathbf{ diag}(1,P_{\tau}^{T}).\] 

**Corollary 2.4**.: _Let \((M,a)\in M_{adj}(m+1,\mathbb{N})_{(e)}\), \(I\subset[m]\), \(\pi\in\mathbf{S}_{m}\). Then, there is \(\pi_{I}\in\mathbf{S}_{|I|}\) such that_ \[(\pi_{I}(M_{I}),\pi_{I}(a_{I}))=(\pi(M)_{\pi^{-1}(I)},\pi(a)_{\pi^{-1}(I)}),\] _and_ \[(M,a)\diagdown M_{I}\sim(\pi(M),\pi(a))\diagdown\pi_{I}(M_{I}).\]

**Remark 2.4**.:
* _Let_ \(m\) _be a positive integer,_ \(I\subset[m]\)_,_ \(M\in M_{adj}(m,\mathbb{N})\)_, we call_ \[\{\pi(M)_{\pi^{-1}(I)}|\,\pi\in\mathbf{S}_{m}\}\] _the diagonal sub-class of_ \(\{M\}\) _corresponding to_ \(I\)_, denoted by_ \(\{M\}_{I}\)_.
By proposition 2.3, we know that for \(M\in M_{adj}(m,\mathbb{N})\), \(I\subset[m]\) with \(|I|=k\,(k\geq 2)\), the quotient \(M\diagdown M_{I}\) defines a map \[\{\pi(M)|\,\pi\in\mathbf{S}_{m}\}\mapsto\{\pi(M)\diagdown\pi(M)_{\pi^{-1}(I)}|\, \pi\in\mathbf{S}_{m}\}\subset\{M\diagdown M_{I}\},\] thus, a map \[\{M\}\mapsto\{M\diagdown M_{I}\}.\]
* _From the definition of the quotient, it is easy to see_ \[\mathbf{deg}\{M\}=\mathbf{deg}\{M_{I}\}+\mathbf{deg}\{M\diagdown M_{I}\}.\]

In definition 2.3 we do not require that \(M\) and \(M_{I}\) be connected. From now on, when we discuss the quotient \(M\diagdown M_{I}\) given by the expression (2.9), we assume both \(M\) and \(M_{I}\) are connected. In the situation where \(M\) is connected and \(M_{I}\) is disconnected, \(M_{I}\) admits a decomposition \(M_{I}\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{k}}\), \(\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}(I)\), each \(M_{I_{i}}\) is connected (\(i=1,\cdots,k\)). In this situation, the quotient \(M\diagdown M_{I}\) will be regarded as \[M\diagdown M_{I}=M\diagdown(M_{I_{i}}). \tag{2.12}\] If \(M\) is disconnected, but \(M_{I}\) is connected, \(M\) admits a decomposition \(M\sim M_{J_{1}}\oplus\cdots\oplus M_{J_{l}}\), where \(\{J_{j}\}\in\mathbf{Part}([m])\), and each \(M_{J_{j}}\) is connected (\(j=1,\cdots,l\)). In this situation, there is some \(J_{j^{\prime}}\) such that \(I\subset J_{j^{\prime}}\). The quotient \(M\diagdown M_{I}\) should satisfy \[M\diagdown M_{I}\sim(M_{J_{j^{\prime}}}\diagdown M_{I})\oplus(\bigoplus_{j\neq j ^{\prime}}M_{J_{j}}). \tag{2.13}\] The situation of the extended adjacency matrices is similar. We now give an explicit description of the quotient.

**Proposition 2.4**.: _Let \(M\in M_{adj}(m,\mathbb{N})\),_ \[M\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{k}},\] _where each \(M_{I_{i}}\) is connected (\(i=1,\cdots,k\)), \(\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}([m])\).
For a subset \(J\subset[m]\), let \(M_{J}\sim M_{J_{1}}\oplus\cdots\oplus M_{J_{l}}\), \(\{J_{j}\}_{j=1}^{l}\in\mathbf{Part}(J)\), and each \(M_{J_{j}}\) be connected (\(j=1,\cdots,l\)). Then, the quotient of \(M\) by \(M_{J}\) is of the following form,_ \[M\diagdown M_{J}\sim(\bigoplus_{I_{i}\cap J=\emptyset}M_{I_{i}})\oplus( \bigoplus_{I_{i}\cap J\neq\emptyset}M_{I_{i}}\diagdown(M_{J_{j}})_{J_{j} \subset I_{i}}). \tag{2.14}\]

Proof.: First, we consider \(M\diagdown M_{J_{1}}\). In this situation, by corollary 2.3, we know that there is some \(I_{i^{\prime}}\) such that \(J_{1}\subset I_{i^{\prime}}\). From definition 2.3 we know that \[M\diagdown M_{J_{1}}\sim(\bigoplus_{i\neq i^{\prime}}M_{I_{i}})\oplus(M_{I_{i ^{\prime}}}\diagdown M_{J_{1}}).\] Because \(M_{I_{i^{\prime}}}\diagdown M_{J_{1}}\) is connected, for \(J_{2}\) there are two possibilities.
* There is some \(I_{i^{\prime\prime}}\) such that \(J_{2}\subset I_{i^{\prime\prime}}\) (\(i^{\prime\prime}\neq i^{\prime}\)). Then we have \[M\diagdown(M_{J_{j}})_{j=1,2}\sim(\bigoplus_{i\neq i^{\prime},i^{\prime\prime} }M_{I_{i}})\oplus(M_{I_{i^{\prime}}}\diagdown M_{J_{1}})\oplus(M_{I_{i^{ \prime\prime}}}\diagdown M_{J_{2}}).\]
* \(J_{2}\subset I_{i^{\prime}}\setminus J_{1}\), then we have \[M\diagdown(M_{J_{j}})_{j=1,2}\sim(\bigoplus_{i\neq i^{\prime}}M_{I_{i}})\oplus( M_{I_{i^{\prime}}}\diagdown(M_{J_{j}})_{j=1,2}).\]

Repeating the above procedure inductively, we can prove formula (2.14). 

**Proposition 2.5**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(I\subset[m]\). Then \(M\diagdown M_{I}\) is connected if and only if \(M\) is connected._

Proof.: Without loss of generality, we assume \(M_{I}\) is connected.
First we assume \(M\) is connected, but \(M\diagdown M_{I}\) is disconnected; then there is a subset \(J\subset(\{1^{*}\}\cup\{1,\cdots,m-k\})\), such that \(1^{*}\in J\), \((M\diagdown M_{I})_{J}\) is connected, and \(M\diagdown M_{I}\sim(M\diagdown M_{I})_{J}\oplus(M\diagdown M_{I})_{J^{c}}\), where \(J^{c}=\{1,\cdots,m-k\}\setminus J\), \(k=|I|\). It is obvious that \(J^{c}\subset I^{c}\), thus \((M\diagdown M_{I})_{J^{c}}=M_{J^{c}}\). Due to definition 2.3, it is easy to check that \(M\sim M_{J^{\prime}\cup I}\oplus M_{J^{c}}\), where \(J^{\prime}=J\setminus\{1^{*}\}\). Thus we reach a contradiction. Conversely, suppose \(M\diagdown M_{I}\) is connected; by proposition 2.4, \(M\) is also connected. 

Regarding the quotient as an operation, we will prove that the quotient is compatible with the direct sum of the adjacency matrices. For two adjacency matrices \(M\in M_{adj}(m,\mathbb{N})\) and \(N\in M_{adj}(n,\mathbb{N})\), we can identify \(M\oplus N\) with \(\mathbf{diag}(M,N)\), which means we embed \([n]\) into \([m+n]\). In this situation we will identify \([n]\) with \(\{m+1,\cdots,m+n\}\). Thus, for a subset \(J\subset[n]\), we do not distinguish between \(J\) and \(\{j+m|j\in J\}\subset\{m+1,\cdots,m+n\}\). Conversely, for any subset \(K\subset[m+n]\), we have a decomposition \(K=K_{1}\cup K_{2}\), where \(K_{1}=K\cap[m]\) and \(K_{2}=K\cap\{m+1,\cdots,m+n\}\); \(K_{2}\) can be regarded as a subset of \([n]\). Let \(M\) admit the decomposition \(M\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{k}}\), and \(N\) admit the decomposition \(N\sim N_{J_{1}}\oplus\cdots\oplus N_{J_{l}}\), where \(\{I_{i}\}\in\mathbf{Part}([m])\), \(\{J_{j}\}\in\mathbf{Part}([n])\), each \(M_{I_{i}}\) and each \(N_{J_{j}}\) are connected (\(i=1,\cdots,k\), \(j=1,\cdots,l\)).
Then we have \[M\oplus N\sim(\bigoplus_{i=1}^{k}M_{I_{i}})\oplus(\bigoplus_{j=1}^{l}N_{J_{j }}).\] On the other hand, if \(M_{K_{1}}\sim\bigoplus_{\alpha=1}^{p}M_{D_{\alpha}}\), and \(N_{K_{2}}\sim\bigoplus_{\beta=1}^{q}N_{E_{\beta}}\), where \(\{D_{\alpha}\}\in\mathbf{Part}(K_{1})\), \(\{E_{\beta}\}\in\mathbf{Part}(K_{2})\), each \(M_{D_{\alpha}}\) and each \(N_{E_{\beta}}\) are connected, it is obvious that \((M\oplus N)_{K}\) admits the following decomposition, \[(M\oplus N)_{K}\sim(\bigoplus_{\alpha=1}^{p}M_{D_{\alpha}})\oplus(\bigoplus_{ \beta=1}^{q}N_{E_{\beta}}).\] Due to proposition 2.4, it is easy to check that \[\begin{array}{c}(M\oplus N)\diagdown(M\oplus N)_{K}\\ \sim(M\oplus N)\diagdown((M_{D_{\alpha}})\cup(N_{E_{\beta}}))\\ \sim(M\diagdown(M_{D_{\alpha}}))\oplus(N\diagdown(N_{E_{\beta}}))\\ =(M\diagdown M_{K_{1}})\oplus(N\diagdown N_{K_{2}}).\end{array}\] Summarizing the previous discussion, we reach the following conclusion.

**Lemma 2.2**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(N\in M_{adj}(n,\mathbb{N})\), \(K\subset[m+n]\). Then, we have_ \[(M\oplus N)\diagdown(M_{K_{1}}\oplus N_{K_{2}})\sim(M\diagdown M_{K_{1}}) \oplus(N\diagdown N_{K_{2}}),\] _where \(K_{1}=K\cap[m]\), \(K_{2}=K\cap\{m+1,\cdots,m+n\}\), and we identify \(K_{2}\) with the set \(\{k-m|k\in K_{2}\}\subset[n]\)._

Equivalently, we have

**Corollary 2.5**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(N\in M_{adj}(n,\mathbb{N})\), \(K\subset[m+n]\). Then, we have_ \[\{(M\oplus N)\diagdown(M_{K_{1}}\oplus N_{K_{2}})\}=\{M\diagdown M_{K_{1}}\}\oplus \{N\diagdown N_{K_{2}}\},\] _where \(K_{1}=K\cap[m]\), \(K_{2}=K\cap\{m+1,\cdots,m+n\}\), and we identify \(K_{2}\) with the set \(\{k-m|k\in K_{2}\}\subset[n]\)._

Now we turn to a more complicated situation of the quotient.
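The basic quotient operation of Definition 2.3 and its compatibility with the direct sum (Lemma 2.2) can be experimented with on explicit matrices. The following Python sketch is illustrative only: the function names `quotient` and `direct_sum` are ours, matrices are nested lists, and index sets are 0-based; `quotient` collapses the given index set to a single new vertex \(1^{*}\) placed first, so it corresponds to quotienting by a connected \(M_{I}\), and iterating it realizes \(M\diagdown(M_{I_{i}})\) of Remark 2.3.

```python
def quotient(M, I):
    """Quotient M \\ M_I of Definition 2.3: collapse the rows/columns
    indexed by I (0-based) to a single new vertex 1*, placed first."""
    I = set(I)
    Ic = [j for j in range(len(M)) if j not in I]
    Q = [[0] * (len(Ic) + 1) for _ in range(len(Ic) + 1)]
    for b, j in enumerate(Ic, start=1):
        # m_{1*, j_b} = sum_a m_{i_a, j_b}
        Q[0][b] = Q[b][0] = sum(M[i][j] for i in I)
    for a, p in enumerate(Ic, start=1):
        for b, q in enumerate(Ic, start=1):
            if a != b:
                Q[a][b] = M[p][q]
    return Q

def direct_sum(M, N):
    """Block-diagonal direct sum M + N (identified with diag(M, N))."""
    m, n = len(M), len(N)
    S = [[0] * (m + n) for _ in range(m + n)]
    for i in range(m):
        for j in range(m):
            S[i][j] = M[i][j]
    for i in range(n):
        for j in range(n):
            S[m + i][m + j] = N[i][j]
    return S

# Path graph on 3 vertices, collapsing the edge {0, 1}:
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(quotient(P3, [0, 1]))  # [[0, 1], [1, 0]]
```

As a quick check of Lemma 2.2 in the special case \(K\subset[m]\), quotienting the direct sum agrees with quotienting the first block and then summing: `quotient(direct_sum(M, N), K)` equals `direct_sum(quotient(M, K), N)`. The degree identity \(\mathbf{deg}\{M\}=\mathbf{deg}\{M_{I}\}+\mathbf{deg}\{M\diagdown M_{I}\}\) can be verified the same way.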
**Proposition 2.6**.: _Let \(m\geq 2\) be an integer, and_
* \(\{I_{i}\}_{i=1}^{p}\in\mathbf{part}([m])\)_,_ \(\bigcup_{i=1}^{p}I_{i}=I\subset K\subset[m]\)_,_
* \(M\in M_{adj}(m,\mathbb{N})\)_._

_If we take \(J=(K\setminus I)\cup\{1^{*},\cdots,p^{*}\}\), then we have_
* \[(M\diagdown(M_{I_{i}}))_{J}=M_{K}\diagdown(M_{I_{i}}),\]
* \[(M\diagdown(M_{I_{i}}))\diagdown(M\diagdown(M_{I_{i}}))_{J}=M\diagdown M_{K}.\]

_Particularly, if each \(M_{I_{i}}\) is connected (\(i=1,\cdots,p\)), then \((M\diagdown(M_{I_{i}}))_{J}\) is connected if and only if \(M_{K}\) is connected._

Proof.: We will prove the conclusion by induction on \(p\). The situation of \(p=1\) has been proven in lemma 2.1. Assuming the conclusion is valid for \(p>0\), we consider the situation of \(p+1\). Noting that \[M\diagdown(M_{I_{i}})_{i=1}^{p+1}=(M\diagdown(M_{I_{i}})_{i=1}^{p})\diagdown M_{I_{p +1}}=\tilde{M}\diagdown M_{I_{p+1}},\] where \(\tilde{M}=M\diagdown(M_{I_{i}})_{i=1}^{p}\), there is \(J^{\prime}=((K\setminus I)\cup I_{p+1})\cup\{1^{*},\cdots,p^{*}\}\), such that \[(\tilde{M}\diagdown M_{I_{p+1}})\diagdown(\tilde{M}\diagdown M_{I_{p+1}})_{J}= \tilde{M}\diagdown\tilde{M}_{J^{\prime}}.\] By the inductive hypothesis, we know that the conclusion of the proposition is valid. 

The following conclusion concerns the general situation.

**Proposition 2.7**.: _Let \(M\in M_{adj}(m,\mathbb{N})\) (\(m\geq 2\)), \(\{I_{i}\}_{i=1}^{p},\{K_{k}\}_{k=1}^{q}\in\mathbf{part}([m])\), \(\{I_{i}\}\subset\{K_{k}\}\).
If we take \(\{J_{j}\}\in\mathbf{part}(([m]\setminus I)\cup\{1^{*},\cdots,p^{*}\})\) with the following form_ \[J_{j}=(K_{k}\setminus(\bigcup_{I_{i}\subset K_{k}}I_{i}))\cup\{i^{*}\}_{I_{i} \subset K_{k}},\,K_{k}\neq I_{i}\,(1\leq i\leq p),\] _where \(I=\bigcup_{i}I_{i}\), then we have_
* \[(M\diagdown(M_{I_{i}}))_{J_{j}}=M_{K_{k}}\diagdown(M_{I_{i}})_{I_{i}\subset K_{k}},\,K_{ k}\neq I_{i}\,(1\leq i\leq p),\]
* \[(M\diagdown(M_{I_{i}}))\diagdown((M\diagdown(M_{I_{i}}))_{J_{j}})=M\diagdown(M_{K_{k}}).\]

Proof.: Let \(\{I_{i^{\prime}}\}=\{I_{i}\}_{I_{i}\cap J^{\prime}=\emptyset}\), where \(J^{\prime}=(\bigcup_{j}J_{j})\setminus\{1^{*},\cdots,p^{*}\}\). Without loss of generality, we can replace \(M\) by \(M\diagdown(M_{I_{i^{\prime}}})\). Thus, we can assume that each \(K_{k}\) (\(1\leq k\leq q\)) satisfies the following condition: \[|\{I_{i}|I_{i}\subset K_{k}\}|=1\,\Rightarrow\,K_{k}\setminus I\neq\emptyset.\] We now prove the conclusion for each \[J_{k}=(K_{k}\setminus(\bigcup_{I_{i}\subset K_{k}}I_{i}))\cup\{i^{*}\}_{I_{i }\subset K_{k}},\,k=1,\cdots,p.\] For instance, we consider \(J_{1}\). For simplicity, we assume \[J_{1}=J_{1}^{\prime}\cup\{1^{*},\cdots,r^{*}\},\,1\leq r\leq p,\] where \(J_{1}^{\prime}=K_{1}\setminus(\bigcup_{i=1}^{r}I_{i})\). We want to prove \[(M\diagdown(M_{I_{i}}))\diagdown(M\diagdown(M_{I_{i}}))_{J_{1}}=(M\diagdown(M_{I_{i}})_{ i>r})\diagdown M_{K_{1}},\] and \[(M\diagdown(M_{I_{i}}))_{J_{1}}=M_{K_{1}}\diagdown(M_{I_{i}})_{1\leq i\leq r}.\] It is obvious that we have \[M\diagdown(M_{I_{i}})=(M\diagdown(M_{I_{i}})_{i>r})\diagdown(M_{I_{i}})_{1\leq i\leq r},\] and \[(M\diagdown(M_{I_{i}})_{i>r})_{K_{1}}=M_{K_{1}}.\] If we take \(\tilde{M}=M\diagdown(M_{I_{i}})_{i>r}\), the situation is reduced to proposition 2.6. 

Conversely, we have,

**Proposition 2.8**.: _Let \(M\in M_{adj}(m,\mathbb{N})\) (\(m\geq 2\)), \(\{I_{i}\}_{i=1}^{p}\in\mathbf{part}([m])\), \(\{J_{j}\}_{j=1}^{q}\in\mathbf{part}(([m]\setminus I)\cup\{1^{*},\cdots,p^{*}\})\), where \(I=\bigcup_{i=1}^{p}I_{i}\).
If we take \(\{K_{k}\}\in\mathbf{part}([m])\) with the following form_ \[K_{k}=\left\{\begin{array}{cc}I_{i},&I_{i}\cap J^{\prime}=\emptyset,\\ J_{j},&J_{j}\cap\{1^{*},\cdots,p^{*}\}=\emptyset,\\ J^{\prime}_{j}\cup(\bigcup_{i^{*}\in J_{j}}I_{i}),&J_{j}\cap\{1^{*},\cdots,p^ {*}\}\neq\emptyset,\end{array}\right.\] _where \(J^{\prime}=(\bigcup_{j}J_{j})\setminus\{1^{*},\cdots,p^{*}\}\), \(J^{\prime}_{j}=J_{j}\cap J^{\prime}\), then, we have_
* \[(M\diagdown(M_{I_{i}}))_{J_{j}}=\left\{\begin{array}{cc}M_{K_{k}}\diagdown(M_{I_{i} })_{I_{i}\subset K_{k}},&J_{j}\cap\{1^{*},\cdots,p^{*}\}\neq\emptyset,\\ M_{J_{j}},&J_{j}\cap\{1^{*},\cdots,p^{*}\}=\emptyset.\end{array}\right.\]
* \[(M\diagdown(M_{I_{i}}))\diagdown((M\diagdown(M_{I_{i}}))_{J_{j}})=M\diagdown(M_{K_{k}}).\]

Proof.: Let \(\{J_{j^{\prime}}\}=\{J_{j}\}_{J_{j}\cap\{1^{*},\cdots,p^{*}\}=\emptyset}\); then we can replace \(\{I_{i}\}\) by \(\{I_{i}\}\cup\{J_{j^{\prime}}\}\). Thus, we assume \[\{K_{k}\}=\{I_{i}\}_{I_{i}\cap J^{\prime}=\emptyset}\cup\{J^{\prime}_{j}\cup( \bigcup_{i^{*}\in J_{j}}I_{i})\}.\] For instance, we consider the situation of \(J_{1}\), and for simplicity we assume \(\bigcup_{i^{*}\in J_{1}}I_{i}=\{1,\cdots,r\}\) and \(K_{1}=J^{\prime}_{1}\cup\{1,\cdots,r\}\). For the same reason as in proposition 2.6, we know that \((M\diagdown(M_{I_{i}}))_{J_{1}}=M_{K_{1}}\diagdown(M_{I_{i}})_{1\leq i\leq r}\). With arguments similar to the ones in the proofs of proposition 2.6 and proposition 2.7, we can prove \[(M\diagdown(M_{I_{i}}))\diagdown((M\diagdown(M_{I_{i}}))_{J_{j}})=M\diagdown(M_{K_{k}}).\] 

**Corollary 2.6**.: _Let \(M\in M_{adj}(m,\mathbb{N})\) be a connected adjacency matrix.
For two subsets \(I,K\subset[m]\), if_
* \(I\subset K\)_,_
* \(M_{K}\sim M_{K_{1}}\oplus\cdots\oplus M_{K_{p}}\)_, where_ \(\{K_{k}\}\in\mathbf{Part}(K)\)_, and each_ \(M_{K_{k}}\) _is connected (_\(k=1,\cdots,p\)_),_
* \(M_{I}\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{q}}\)_, where_ \(\{I_{i}\}\in\mathbf{Part}(I)\)_, each_ \(M_{I_{i}}\) _is connected (_\(i=1,\cdots,q\)_),_

_then, there is a subset \(J\subset(K\setminus I)\cup\{1^{*},\cdots,q^{*}\}\) such that_
* \[M_{K}\diagdown M_{I}=(M\diagdown M_{I})_{J},\]
* \[(M\diagdown M_{I})\diagdown(M\diagdown M_{I})_{J}=M\diagdown M_{K}.\] (2.15)
* \((M\diagdown M_{I})_{J}\sim(M\diagdown M_{I})_{J_{1}}\oplus\cdots\oplus(M\diagdown M_{I })_{J_{l}}\)_, where_ \(\{J_{j}\}\) _is the same as the one in proposition 2.7, and each_ \((M\diagdown M_{I})_{J_{j}}\) _is connected (_\(j=1,\cdots,l\)_)._

Conversely, we have the following conclusion.

**Corollary 2.7**.: _Let \(M\in M_{adj}(m,\mathbb{N})\) be a connected adjacency matrix, \(I\subset[m]\), \(M_{I}\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{p}}\), \(\{I_{i}\}_{i=1}^{p}\in\mathbf{Part}(I)\), each \(M_{I_{i}}\) is connected (\(i=1,\cdots,p\)). Then, for each subset \(J\subset([m]\setminus I)\cup\{1^{*},\cdots,p^{*}\}\), there is a subset \(K\subset[m]\) satisfying the following conditions:_
* \(K=I\cup(J\cap[m])\)_,_
* \[(M\diagdown M_{I})\diagdown(M\diagdown M_{I})_{J}=M\diagdown M_{K}.\] (2.16)

_More precisely, if_ \[(M\diagdown M_{I})_{J}\sim(M\diagdown M_{I})_{J_{1}}\oplus\cdots\oplus(M \diagdown M_{I})_{J_{q}},\] _where \(\{J_{j}\}_{j=1}^{q}\in\mathbf{Part}(J)\) and each \((M\diagdown M_{I})_{J_{j}}\) is connected (\(j=1,\cdots,q\)), then_ \[M_{K}\sim M_{K_{1}}\oplus\cdots\oplus M_{K_{r}},\] _where \(\{K_{k}\}\in\mathbf{Part}(K)\) and each \(M_{K_{k}}\) is connected (\(k=1,\cdots,r\)); additionally, \(\{K_{k}\}\) and \(M_{K_{k}}\) satisfy:_
* \[\{K_{k}\}=\{I_{i}\}_{i^{*}\notin J}\cup\{J_{j}\}_{J_{j}\cap\{1^{*},\cdots,p^{*}\}=\emptyset}\cup\{L_{J_{j}}\}_{J_{j}\cap\{1^{*},\cdots,p^{*}\}\neq\emptyset},\] _where \(L_{J_{j}}=J_{j}^{\prime}\cup(\bigcup_{i^{*}\in J_{j}}I_{i})\), \(J_{j}^{\prime}=J_{j}\cap[m]\)._
* \[(M\diagdown M_{I})_{J_{j}}=\left\{\begin{array}{cl}M_{L_{J_{j}}}\diagdown(M_{I_{ i}})_{i^{*}\in J_{j}},&J_{j}\cap\{1^{*},\cdots,p^{*}\}\neq\emptyset,\\ M_{J_{j}},&J_{j}\cap\{1^{*},\cdots,p^{*}\}=\emptyset.\end{array}\right.\]

### The coproduct

Let \[\mathcal{H}_{adj}=\mathbf{Span}_{\mathbb{C}}(M_{adj}(+\infty,\mathbb{N})). \tag{2.17}\] The direct sum in \(M_{adj}(+\infty,\mathbb{N})\) can be extended to tensors, thus to tensors over \(\mathcal{H}_{adj}\). Let \(\{M_{i}\},\{N_{i}\}\in M_{adj}(+\infty,\mathbb{N})\) (\(i=1,2\)); it is natural for us to define the direct sum of tensors in the following way: \[(\{M_{1}\}\otimes\{M_{2}\})\oplus(\{N_{1}\}\otimes\{N_{2}\})=(\{M_{1}\} \oplus\{N_{1}\})\otimes(\{M_{2}\}\oplus\{N_{2}\}).\] The above direct sum is obviously well defined and can be extended to tensors with multiple factors. We now define the coproduct on \(\mathcal{H}_{adj}\).

**Definition 2.4**.: _The coproduct on \(\mathcal{H}_{adj}\) is defined as follows._
* _Let_ \(\{M\}\in M_{adj}(m,\mathbb{N})\diagup\sim\) _be connected,_ \(\{M\}\neq 0\)_, we define the coproduct as follows,_ \[\triangle\{M\}=\{M\}\otimes\{0\}+\{0\}\otimes\{M\}+\sum_{I\subset[m],\,I\neq[ m]}\{M_{I}\}\otimes\{M\diagdown M_{I}\}.\] (2.18)
* \[\triangle\{0\}=\{0\}\otimes\{0\}.\]
* _Let_ \(\{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\)_, each_ \(\{M_{i}\}\) _be connected (_\(i=1,\cdots,n\)_). Then we define_ \[\triangle(\bigoplus_{i=1}^{n}\{M_{i}\})=\bigoplus_{i=1}^{n}\triangle\{M_{i}\}.\]

We now prove the co-associativity of the coproduct \(\triangle\).
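The combinatorial identities that co-associativity rests on — for \(I\subset K\), \((M\diagdown M_{I})_{J}=M_{K}\diagdown M_{I}\) and \((M\diagdown M_{I})\diagdown(M\diagdown M_{I})_{J}=M\diagdown M_{K}\) with \(J=(K\setminus I)\cup\{1^{*}\}\) (corollaries 2.6 and 2.7) — can be checked numerically on small examples. A hedged Python sketch (illustrative only; matrices are 0-based nested lists, `quotient` collapses an index set to one new vertex placed first as in Definition 2.3, and `restrict` extracts a diagonal submatrix):

```python
def quotient(M, I):
    """Collapse the rows/columns indexed by I (0-based) to a new first vertex."""
    I = set(I)
    Ic = [j for j in range(len(M)) if j not in I]
    Q = [[0] * (len(Ic) + 1) for _ in range(len(Ic) + 1)]
    for b, j in enumerate(Ic, start=1):
        Q[0][b] = Q[b][0] = sum(M[i][j] for i in I)
    for a, p in enumerate(Ic, start=1):
        for b, q in enumerate(Ic, start=1):
            if a != b:
                Q[a][b] = M[p][q]
    return Q

def restrict(M, I):
    """Diagonal submatrix M_I."""
    return [[M[p][q] for q in I] for p in I]

# Path 0-1-2-3;  I = {0,1} contained in K = {0,1,2};  J = {1*, 2},
# i.e. rows/columns 0 and 1 of M \ M_I.
P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
Q_I = quotient(P4, [0, 1])                        # M \ M_I, vertices (1*, 2, 3)
lhs = restrict(Q_I, [0, 1])                       # (M \ M_I)_J
rhs = quotient(restrict(P4, [0, 1, 2]), [0, 1])   # M_K \ M_I
assert lhs == rhs
assert quotient(Q_I, [0, 1]) == quotient(P4, [0, 1, 2])  # (M\M_I)\(M\M_I)_J = M\M_K
```

This is exactly the bookkeeping used in the proof of Theorem 2.1 below: each term \(\{M_{I}\}\otimes\{M_{K}\diagdown M_{I}\}\otimes\{M\diagdown M_{K}\}\) appears on both sides of the co-associativity formula.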
**Theorem 2.1**.: _The coproduct \(\triangle\) satisfies the following formula,_ \[(1\otimes\triangle)\triangle=(\triangle\otimes 1)\triangle\,.\]

Proof.: Let \(\{M\}\in M_{adj}(m,\mathbb{N})\diagup\sim\) be connected, \(\{M\}\neq 0\); we first consider the left side of the formula in theorem 2.1. It is \[(1\otimes\triangle)\triangle\{M\}\] \[=(1\otimes\triangle)(\{M\}\otimes\{0\}+\{0\}\otimes\{M\}+ \sum_{I\subset[m],\,I\neq[m]}\{M_{I}\}\otimes\{M\diagdown M_{I}\})\] \[=\{M\}\otimes\{0\}\otimes\{0\}+\{0\}\otimes\triangle\{M\}+\sum_{I \subset[m],\,I\neq[m]}\{M_{I}\}\otimes\triangle\{M\diagdown M_{I}\},\] where \[\triangle\{M\diagdown M_{I}\}=\{M\diagdown M_{I}\}\otimes\{0\}+\{0\}\otimes\{M \diagdown M_{I}\}+\] \[\sum_{J}\{(M\diagdown M_{I})_{J}\}\otimes\{(M\diagdown M_{I})\diagdown(M\diagdown M_{I})_{J}\},\] and the sum runs over the proper subsets \(J\subset([m]\setminus I)\cup\{1^{*},\cdots,p^{*}\}\). In the above sum, the positive integer \(p\) arises from the decomposition of \(M_{I}\), that is, \(M_{I}\sim M_{I_{1}}\oplus\cdots\oplus M_{I_{p}}\), where each \(M_{I_{i}}\) is connected (\(i=1,\cdots,p\)). According to corollary 2.6 and corollary 2.7, we know that for each \(J\subset([m]\setminus I)\cup\{1^{*},\cdots,p^{*}\}\) there is \(K\subset[m]\) such that \((M\diagdown M_{I})_{J}=M_{K}\diagdown M_{I}\), \((M\diagdown M_{I})\diagdown(M\diagdown M_{I})_{J}=M\diagdown M_{K}\), and vice versa. In summary, we have \[(1\otimes\triangle)\triangle\{M\}=\{M\}\otimes\{0\}\otimes\{0\}+\{0\} \otimes\triangle\{M\}+\] \[\sum_{I\subset[m],\,I\neq[m]}\{M_{I}\}\otimes(\{M\diagdown M_{I}\}\otimes\{0 \}+\{0\}\otimes\{M\diagdown M_{I}\})+\] \[\sum_{I\subset K\subset[m],\,K\neq[m]}\{M_{I}\}\otimes\{M_{K}\diagdown M_{I} \}\otimes\{M\diagdown M_{K}\}.\] We now consider the right side of the formula in theorem 2.1.
We have \[(\triangle\otimes 1)\triangle\{M\}\] \[=(\triangle\otimes 1)(\{M\}\otimes\{0\}+\{0\}\otimes\{M\}+\sum_{K \subset[m],K\neq[m]}\{M_{K}\}\otimes\{M\diagdown M_{K}\})\] \[=\triangle\{M\}\otimes\{0\}+\{0\}\otimes\{0\}\otimes\{M\}+\sum_{K \subset[m],K\neq[m]}\triangle\{M_{K}\}\otimes\{M\diagdown M_{K}\},\] where \[\triangle\{M_{K}\}=\{M_{K}\}\otimes\{0\}+\{0\}\otimes\{M_{K}\}+\sum_{I \subset K,\,I\neq K}\{M_{I}\}\otimes\{M_{K}\diagdown M_{I}\}.\] Comparing both sides of the formula in theorem 2.1, we reach the conclusion of theorem 2.1. 

The unit \(u\) and counit \(\eta\) of \(\mathcal{H}_{adj}\) are defined as follows: \[u:c\mapsto c\{0\},\ c\in\mathbb{C}, \tag{2.19}\] \[\eta:\{0\}\mapsto 1,\ \eta:M\mapsto 0,\ for\ M\neq 0. \tag{2.20}\] It is easy to check that the tuple \((\mathcal{H}_{adj},\oplus,u,\triangle,\eta)\) is a bialgebra. Let \(\overline{\mathcal{H}}_{adj}=\mathbf{ker}(\eta)\), and let \(\overline{\triangle}\) denote the reduced coproduct on \(\overline{\mathcal{H}}_{adj}\), \[\overline{\triangle}\{M\}=\triangle\{M\}-\{M\}\otimes\{0\}-\{0\}\otimes\{M\}.\] Then we have the following conclusion.

**Proposition 2.9**.: \(\overline{\triangle}\) _is conilpotent, i.e. for any connected \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\) (\(M\neq 0\)), there is a positive integer \(n\) such that_ \[\overline{\triangle}^{n}\{M\}=0,\] _where \(\overline{\triangle}^{n}\) is defined by_ \[\overline{\triangle}^{n+1}=(\overline{\triangle}\otimes\ \ \underbrace{1 \otimes\cdots\otimes 1}_{n\ \text{times}}\ \ )\overline{\triangle}^{n}.\]

According to the previous discussion we know that \(\mathcal{H}_{adj}\) is a conilpotent bialgebra, thus a Hopf algebra. Actually, when the reduced coproduct is conilpotent, the antipode \(S\) can be expressed by the reduced coproduct (see ?).
Setting \[\begin{array}{c}\oplus^{n}:\mathcal{H}_{adj}^{\otimes^{n}}\to\mathcal{H}_{ adj},\ \oplus^{n}:\{M_{1}\}\otimes\cdots\otimes\{M_{n}\}\mapsto\{M_{1}\}\oplus\cdots \oplus\{M_{n}\},\\ \{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\diagup\sim,\,i=1,\cdots,n,\end{array}\] the antipode \(S\) will be of the following form, \[S(\{M\})=-\{M\}+\sum_{n\geq 1}(-1)^{n}\oplus^{n}(\overline{\triangle}^{n-1} \{M\}),\ \{M\}\in M_{adj}(+\infty,\mathbb{N}).\] All previous discussions in this subsection can be generalized to the situation of the extended adjacency matrices. Let \[\mathcal{H}_{adj(e)}=\mathbf{Span}_{\mathbb{C}}(M_{adj}(+\infty,\mathbb{N})_ {(e)}).\] Replacing the adjacency matrices with the extended adjacency matrices in definition 2.4 and in all previous conclusions, we can prove that \(\mathcal{H}_{adj(e)}\) is a Hopf algebra.

## 3 Insertion of the adjacency matrices

In this section we will discuss the insertion of the adjacency matrices. To define the insertion of the adjacency matrices we need to introduce the decomposing map for non-negative integers or multiple indices. The decomposing map for the non-negative integers is a map \(\iota:\mathbb{N}\rightarrow\mathbb{N}^{l}\), \[\iota:a\mapsto(a_{1},\cdots,a_{l}),\,a,a_{1},\cdots,a_{l}\in\mathbb{N},\,a_{1 }+\cdots+a_{l}=a,\] where \(l\) is a positive integer. In the situation of multiple indices, the decomposing map can be defined in terms of matrices as follows. Let \((m_{1},\cdots,m_{k})\in\mathbb{N}^{k}\), then \[\iota:\begin{pmatrix}m_{1}\\ \vdots\\ m_{k}\end{pmatrix}\mapsto\begin{pmatrix}a_{11}&\cdots&a_{1l}\\ \vdots&\ddots&\vdots\\ a_{k1}&\cdots&a_{kl}\end{pmatrix},\] where \(\iota(m_{i})=(a_{i1},\cdots,a_{il})\in\mathbb{N}^{l}\), \(a_{i1}+\cdots+a_{il}=m_{i}\)\((i=1,\cdots,k)\).
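A decomposing map simply chooses, for each value \(a\), one of the finitely many tuples \((a_{1},\cdots,a_{l})\) with \(a_{1}+\cdots+a_{l}=a\); the candidates (weak compositions) can be enumerated by stars and bars. A small Python sketch (illustrative only; the function name is ours):

```python
from itertools import combinations

def decompositions(a, l):
    """All tuples (a_1, ..., a_l) of non-negative integers with
    a_1 + ... + a_l = a, i.e. the possible images of a under a
    decomposing map, enumerated by stars and bars."""
    out = []
    # choose l-1 bar positions among a + l - 1 slots
    for bars in combinations(range(a + l - 1), l - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(a + l - 2 - prev)
        out.append(tuple(parts))
    return out

print(decompositions(2, 2))  # [(0, 2), (1, 1), (2, 0)]
```

Applying such a map row by row to a column vector \((m_{1},\cdots,m_{k})^{T}\) produces exactly the matrix form of \(\iota\) above.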
**Definition 3.1**.: _Let \(M\in M_{adj}(m,\mathbb{N})\) (or \((M,b)\in M_{adj}(m+1,\mathbb{N})_{(e)}\)), \(N\in M_{adj}(n,\mathbb{N})\), we define the insertion of the adjacency matrices as follows._
* **The situation of \(N\) being connected**: _Let_ \(1\leq i\leq m\)_,_ \(\iota_{i}\) _be a decomposing map,_ \[\iota_{i}:\begin{pmatrix}m_{1i}\\ \vdots\\ m_{i-1i}\end{pmatrix}\mapsto A_{12}=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{i-1\,1}&\cdots&a_{i-1\,n}\end{pmatrix},\] \[\iota_{i}:\begin{pmatrix}m_{i+1i}\\ \vdots\\ m_{mi}\end{pmatrix}\mapsto A_{32}=\begin{pmatrix}a_{i+1\,1}&\cdots&a_{i+1\,n} \\ \vdots&\ddots&\vdots\\ a_{m1}&\cdots&a_{mn}\end{pmatrix},\] _where \((m_{1i},\cdots,m_{i-1i},0,m_{i+1i},\cdots,m_{mi})^{T}\) is the \(i\)th column of \(M\). The insertion of \(N\) into \(M\) at \(i\) by \(\iota_{i}\) is an adjacency matrix in \(M_{adj}(m+n-1,\mathbb{N})\) with the following form:_ \[\begin{pmatrix}M_{11}&A_{12}&M_{13}\\ A_{12}^{T}&N&A_{32}^{T}\\ M_{13}^{T}&A_{32}&M_{33}\end{pmatrix}, \tag{3.1}\] _where \(M_{11}=M_{I_{1}}\) (\(I_{1}=\{1,\cdots,i-1\}\)), \(M_{33}=M_{I_{2}}\) (\(I_{2}=\{i+1,\cdots,m\}\)),_ \[M_{13}=\begin{pmatrix}m_{1i+1}&\cdots&m_{1m}\\ \vdots&\ddots&\vdots\\ m_{i-1\,i+1}&\cdots&m_{i-1\,m}\end{pmatrix}.\] _The block matrix (3.1) is denoted by \((N\hookrightarrow_{(i,\iota_{i})}M)\) and \(i\) is called the position of the above insertion._ _The situation of the extended adjacency matrices is similar.
Let \((M,a)\in M_{adj}(m+1,\mathbb{N})_{(e)}\), \((N,b)\in M_{adj}(n+1,\mathbb{N})_{(e)}\), the insertion of \((N,b)\) into \((M,a)\) at \(i\) by \(\iota_{i}\) is defined to be_ \[((N,b)\hookrightarrow_{(i,\iota_{i})}(M,a))=((N\hookrightarrow_{(i,\iota_{i} )}M),a_{I_{1}},\iota_{i}(a_{i}),a_{I_{2}}), \tag{3.2}\] _where \(\iota_{i}(a_{i})=(a_{i1},\cdots,a_{in})\) (\(a_{i1}+\cdots+a_{in}=a_{i}\))._ _We define_ \[(0\hookrightarrow_{i}M)=M,\,(M\hookrightarrow 0)=M,\] \[(0\hookrightarrow_{i}(M,a))=(M,a),\,((M,a)\hookrightarrow 0)=(M,a).\]
* **The situation of \(N\) being disconnected**: _Let \(N\sim N_{1}\oplus\cdots\oplus N_{k}\) (\(2\leq k\leq n-1\)), each \(N_{j}\) be connected (\(j=1,\cdots,k\)), then we define the insertion of \(N\) into \(M\), or \((M,b)\), by \(\iota_{i_{1}},\cdots,\iota_{i_{k}}\) at \(i_{1},\cdots,i_{k}\) as_ \[(N_{k}\hookrightarrow_{(i_{k},\iota_{i_{k}})}(\cdots(N_{2}\hookrightarrow_{(i_ {2},\iota_{i_{2}})}(N_{1}\hookrightarrow_{(i_{1},\iota_{i_{1}})}M))\cdots)),\] (3.3) _or_ \[(N_{k}\hookrightarrow_{(i_{k},\iota_{i_{k}})}(\cdots(N_{2}\hookrightarrow_{(i_ {2},\iota_{i_{2}})}(N_{1}\hookrightarrow_{(i_{1},\iota_{i_{1}})}(M,b)))\cdots)),\] (3.4) _where \(i_{a}\neq i_{b}\) (\(a\neq b\)). We denote the matrix in (3.3), or (3.4), by_ \[(N_{1}\oplus\cdots\oplus N_{k}\hookrightarrow_{(i_{1},\cdots,i_{k},\iota_{i_{1 }},\cdots,\iota_{i_{k}})}M),\] _or_ \[(N_{1}\oplus\cdots\oplus N_{k}\hookrightarrow_{(i_{1},\cdots,i_{k},\iota_{i_{1 }},\cdots,\iota_{i_{k}})}(M,b)).\]

The elementary subjects concerning the adjacency matrices (or the extended adjacency matrices) in this article are connectedness, quotient and insertion. We have seen, and will see, that the properties of the adjacency matrices and the extended adjacency matrices are almost the same. Thus, we will focus on the situation of the adjacency matrices below.

**Remark 3.1**.:
* _For convenience, we introduce some compact symbols for the direct sum and insertion.
Sometimes the direct sum_ \(N_{1}\oplus\cdots\oplus N_{n}\) _will be denoted by_ \((N)_{[n]}\) _below. Similarly, for a subset_ \(\Lambda\subset[n]\)_, the direct sum_ \(\bigoplus_{j\in\Lambda}N_{j}\) _will be denoted by_ \((N)_{\Lambda}\) _for short. Furthermore, the insertion_ \[(N_{k}\hookrightarrow_{(i_{k},\iota_{i_{k}})}(\cdots(N_{2}\hookrightarrow_{(i_ {2},\iota_{i_{2}})}(N_{1}\hookrightarrow_{(i_{1},\iota_{i_{1}})}M))\cdots))\] _will be denoted by_ \[((N)_{[n]}\hookrightarrow_{(i_{[n]},\iota_{i_{[n]}})}M),\] _where_ \(i_{[n]}=(i_{1},\cdots,i_{n})\)_,_ \(\iota_{i_{[n]}}=(\iota_{i_{1}},\cdots,\iota_{i_{n}})\)_._
* _If_ \(M\in M_{adj}(m,\mathbb{N})\) _is disconnected, then_ \(M\sim M_{1}\oplus\cdots\oplus M_{k}\)_, where each_ \(M_{i}\) _is connected (_\(i=1,\cdots,k\)_). Then, for_ \(1\leq i\leq m\)_, there is_ \(j\) _(_\(1\leq j\leq k\)_), such that_ \(i\) _is an index of the rows (or columns) of_ \(M_{j}\)_. By definition 3.1, it is easy to see that_ \[(N\hookrightarrow_{(i,\iota_{i})}M)=(M)_{[k]\setminus\{j\}}\oplus(N \hookrightarrow_{(i,\iota_{i})}M_{j}),\] _where_ \(N\) _is a connected adjacency matrix._

**Proposition 3.1**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(N\in M_{adj}(n,\mathbb{N})\), \(M\) and \(N\) be connected, \((N\hookrightarrow_{(i,\iota_{i})}M)\) be the insertion of \(N\) into \(M\) at \(i\) by \(\iota_{i}\), where \(1\leq i\leq m\), \(\iota_{i}\) is the decomposing map, \(\sigma\in\mathbf{S}_{n}\), \(\tau\in\mathbf{S}_{m}\). Then, we have_ \[(N\hookrightarrow_{(i,\iota_{i})}M)\sim(\sigma(N)\hookrightarrow_{(\tau^{-1} (i),\sigma(\iota_{i}))}\tau(M)), \tag{3.5}\] _where, based on the block matrix (3.1), \(\sigma(\iota_{i})(m_{ji})=(a_{j\sigma(1)},\cdots,a_{j\sigma(n)})\)._

Proof.: By definition, we know that \(\tau(M)=(m_{\tau(a)\tau(b)})_{m\times m}\); the entries in the \(i\)th column of \(M\) will be in the \(\tau^{-1}(i)\)th column of \(\tau(M)\).
Precisely, letting \(a=\tau^{-1}(i)\), we have \[\tau(M)=\begin{pmatrix}&m_{\tau(1)i}&&\\ V_{11}&\vdots&V_{12}&\\ &&m_{\tau(a-1)\,i}&\\ m_{i\tau(1)}\cdots m_{i\tau(a-1)}&0&m_{i\tau(a+1)}\cdots m_{i\tau(m)}\\ &&m_{\tau(a+1)\,i}&\\ V_{21}&\vdots&V_{22}&\\ &&m_{\tau(m)\,i}&\end{pmatrix},\] where \[V_{11} =\begin{pmatrix}m_{\tau(1)\tau(1)}&\cdots&m_{\tau(1)\tau(a-1)}\\ \vdots&\ddots&\vdots\\ m_{\tau(a-1)\tau(1)}&\cdots&m_{\tau(a-1)\tau(a-1)}\end{pmatrix},\] \[V_{12} =\begin{pmatrix}m_{\tau(1)\tau(a+1)}&\cdots&m_{\tau(1)\tau(m)}\\ \vdots&\ddots&\vdots\\ m_{\tau(a-1)\tau(a+1)}&\cdots&m_{\tau(a-1)\tau(m)}\end{pmatrix},\] \[V_{22} =\begin{pmatrix}m_{\tau(a+1)\tau(a+1)}&\cdots&m_{\tau(a+1)\tau(m)} \\ \vdots&\ddots&\vdots\\ m_{\tau(m)\tau(a+1)}&\cdots&m_{\tau(m)\tau(m)}\end{pmatrix},\] \(V_{21}=V_{12}^{T}\). If \(\sigma(N)=PNP^{T}\), where \(P\) is an \(n\times n\) permutation matrix, then we have \[(\sigma(N)\hookrightarrow_{(\tau^{-1}(i),\sigma(\iota_{i}))}\tau(M))= \begin{pmatrix}V_{11}&B_{12}&V_{12}\\ B_{12}^{T}&PNP^{T}&B_{32}^{T}\\ V_{21}&B_{32}&V_{22}\end{pmatrix}.\] If we express the decomposing map \(\iota_{i}\) as a matrix, i.e. 
\[A=\begin{pmatrix}a_{11}&\cdots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{i-1\,1}&\cdots&a_{i-1\,n}\\ a_{i+1\,1}&\cdots&a_{i+1\,n}\\ \vdots&\ddots&\vdots\\ a_{m1}&\cdots&a_{mn}\end{pmatrix}=\begin{pmatrix}A_{12}\\ A_{32}\end{pmatrix},\] and let \(\tau(M)=P_{1}MP_{1}^{T}\), where \(A_{12}\) and \(A_{32}\) are given in definition 3.1, \(P_{1}\) is an \(m\times m\) permutation matrix, then \[B=\begin{pmatrix}B_{12}\\ B_{32}\end{pmatrix}=P_{1}AP^{T}=\begin{pmatrix}a_{\tau(1)\sigma(1)}&\cdots&a_{ \tau(1)\sigma(n)}\\ \vdots&\ddots&\vdots\\ a_{\tau(a-1)\,\sigma(1)}&\cdots&a_{\tau(a-1)\,\sigma(n)}\\ a_{\tau(a+1)\,\sigma(1)}&\cdots&a_{\tau(a+1)\,\sigma(n)}\\ \vdots&\ddots&\vdots\\ a_{\tau(m)\sigma(1)}&\cdots&a_{\tau(m)\sigma(n)}\end{pmatrix}.\] Comparing the expression of \((\sigma(N)\hookrightarrow_{(\tau^{-1}(i),\sigma(\iota_{i}))}\tau(M))\) with the block matrix (3.1), we know that the formula (3.5) is valid. We denote the set \(\{(\sigma(N)\hookrightarrow_{(\tau^{-1}(i),\sigma(\iota_{i}))}\tau(M))\}_{\sigma \in\mathbf{S}_{n},\tau\in\mathbf{S}_{m}}\) by \(\{N\}\hookrightarrow_{(i,\iota_{i})}\{M\}\). The formula (3.5) means that \[\{(\sigma(N)\hookrightarrow_{(\tau^{-1}(i),\sigma(\iota_{i}))}\tau(M))\}_{ \sigma\in\mathbf{S}_{n},\tau\in\mathbf{S}_{m}}\subset\{(N\hookrightarrow_{i, \iota_{i}}M)\}.\] Thus, we do not distinguish \(\{(\sigma(N)\hookrightarrow_{(\tau^{-1}(i),\sigma(\iota_{i}))}\tau(M))\}_{ \sigma\in\mathbf{S}_{n},\tau\in\mathbf{S}_{m}}\) and \(\{(N\hookrightarrow_{(i,\iota_{i})}M)\}\). In the sense of the previous discussions, the insertion is a well-defined map \[(\{N\},\{M\})\longrightarrow\{(N\hookrightarrow_{(i,\iota_{i})}M)\}.\] **Proposition 3.2**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(N\in M_{adj}(n,\mathbb{N})\), both \(M\) and \(N\) be connected, then \((N\hookrightarrow_{(i,\iota_{i})}M)\) is also connected._ Proof.: We assume \((N\hookrightarrow_{(i,\iota_{i})}M)\) is disconnected. We want to show that this assumption will result in a contradiction. 
Let \(O=(N\hookrightarrow_{(i,\iota_{i})}M)\) for short, then \(O\) admits a decomposition \[O\sim O_{I_{1}}\oplus\cdots\oplus O_{I_{k}}.\] where \(\{I_{i}\}_{i=1}^{k}\in\mathbf{Part}([m+n-1])\), each \(O_{I_{i}}\) is connected \((i=1,\cdots,k,\,k\geq 2)\). Without loss of generality, we assume \(i=1\). Then, \((N\hookrightarrow_{(i,\iota_{i})}M)\) will be of the following form, \[(N\hookrightarrow_{(1,\iota_{1})}M)=\begin{pmatrix}N&A^{T}\\ A&M_{I}\end{pmatrix},\] where \(I=\{2,\cdots,m\}\). Because both \(M\) and \(N\) are connected, we know that \(A\neq 0\) and there is some \(I_{i^{\prime}}\) such that \([n]\subset I_{i^{\prime}}\), \([n]\neq I_{i^{\prime}}\). For simplicity, we assume \(i^{\prime}=1\) and identify \(I\) with \(\{n+1,\cdots,m+n-1\}\). Then, we know that \(I_{2}\cup\cdots\cup I_{k}\subset I\). Let \[\pi(O)=\mathbf{diag}(O_{I_{1}},\cdots,O_{I_{k}}),\,\pi\in\mathbf{S}_{m+n-1}.\] Noting \([n]\subset I_{1}\), we can assume \[O_{I_{1}}=\begin{pmatrix}N&B^{T}\\ B&O_{I_{1}^{\prime}}\end{pmatrix},\] where \(I_{1}^{\prime}=I_{1}\setminus[n]\). The above expression means that the permutation \(\pi\) keeps the positions of \(1,\cdots,n\); equivalently, \(\pi\) is of the following form \[\pi=\begin{pmatrix}1&\cdots&n&n+1&\cdots&n+m-1\\ 1&\cdots&n&\pi(n+1)&\cdots&\pi(n+m-1)\end{pmatrix}.\] Thus \(\pi\) induces a permutation on \([m]\) denoted by \(\pi^{\prime}\), \[\pi^{\prime}=\begin{pmatrix}1&2&\cdots&m\\ 1&\pi(n+1)-n+1&\cdots&\pi(n+m-1)-n+1\end{pmatrix}.\] If \(\pi^{\prime}\) corresponds to a \((m-1)\times(m-1)\) permutation matrix \(P\), then \(\pi\) will correspond to the permutation matrix \[P_{1}=\begin{pmatrix}E_{n}&0\\ 0&P\end{pmatrix},\] where \(E_{n}\) denotes the unit matrix of order \(n\). 
Therefore we have \[P_{1}OP_{1}^{T}=\begin{pmatrix}N&A^{T}P^{T}\\ PA&PM_{I}P^{T}\end{pmatrix}=\mathbf{diag}(O_{I_{1}},\cdots,O_{I_{k}}).\] The above expression implies \[PM_{I}P^{T}=\mathbf{diag}(O_{I_{1}^{\prime}},O_{I_{2}},\cdots,O_{I_{k}}).\] By recovering \(M\) from \(P_{1}OP_{1}^{T}\) we know that \[M\sim M_{1}\oplus O_{I_{2}}\oplus\cdots\oplus O_{I_{k}}.\] Finally, we reach a contradiction. We now turn to the situation of \(\{((\bigoplus_{j}N_{j})\hookrightarrow_{\{*\}}(\bigoplus_{i}M_{i}))\}\). **Proposition 3.3**.: _Let \(\{M_{i}\}\), \(\{N_{j}\}\) be connected adjacency matrices (\(i=1,\cdots,m,\)\(j=1,\cdots,n\)). Then, there is a subset \(I\subset[m]\) assigned to a sequence of subsets of \([n]\), \(\{J_{i}\}_{i\in I}\in\mathbf{Part}([n])\), such that_ \[\{((N)_{[n]}\hookrightarrow_{(i_{[n]},i_{[n]})}(M)_{[m]})\}=\{((N)_{[n]} \hookrightarrow_{(i_{[n]},i_{[n]})}(M)_{I})\}\oplus\{(M)_{I^{c}}\}, \tag{3.6}\] _and_ \[\{((N)_{[n]}\hookrightarrow_{(i_{[n]},i_{[n]})}(M)_{I})\}=\bigoplus_{i\in I} \{((N)_{J_{i}}\hookrightarrow_{(i_{J_{i}},i_{J_{i}})}M_{i})\},\] _where \(I^{c}=[m]\setminus I\)._ Proof.: Let \(M=\bigoplus_{i=1}^{m}M_{i}\in M_{adj}(p,\mathbb{N})\), then there is a sequence of the subsets \(\{I_{i}\}_{1\leq i\leq m}\in\mathbf{Part}([p])\) such that \(M_{I_{i}}=M_{i}\)\((i=1,\cdots,m)\). We take \[I=\{i\in[m]|\,\exists j\in[n],i_{j}\in I_{i}\}.\] Then, from definition 3.1 we have that \[\{((N)_{[n]}\hookrightarrow_{(i_{[n]},i_{[n]})}(M)_{[m]})\}\] \[=\{((N)_{[n]}\hookrightarrow_{(i_{[n]},i_{[n]})}((M)_{I}\oplus(M )_{I^{c}}))\}\] \[=\{((N)_{[n]}\hookrightarrow_{(i_{[n]},i_{[n]})}(M)_{I})\}\oplus \{(M)_{I^{c}}\}.\] If we take \[J_{i}=\{j\in[n]|\,i_{j}\in I_{i}\},\] then we have \[\{((N)_{[n]}\hookrightarrow_{(i_{[n]},\iota_{i_{[n]}})}(M)_{I})\}= \bigoplus_{i\in I}\{((N)_{J_{i}}\hookrightarrow_{(i_{J_{i}},\iota_{i_{J_{i}}})} M_{i})\}.\] This completes the proof of the proposition. 
Regarding the insertion as the inverse operation of the quotient, we have the following conclusion. **Proposition 3.4**.: _Let \(M=(m_{ij})_{m\times m}\), \(N=(n_{ij})_{n\times n}\) and \(Q=(q_{ij})_{q\times q}\) be three connected adjacency matrices, \(q=m+n-1\). Then,_ \[M\sim Q\diagup N\] _if and only if there is a decomposing map \(\iota_{i}:\{m_{ji}\}_{1\leq j\leq m,\,j\neq i}\to\mathbb{N}^{n}\) for some \(i\) (\(1\leq i\leq m\)) such that_ \[Q\sim(N\hookrightarrow_{(i,\iota_{i})}M).\] Proof.: By definition 2.3 and definition 3.1, it is obvious that we have \[(N\hookrightarrow_{(i,\iota_{i})}M)\diagup N\sim M.\] Now we assume \(M\sim Q\diagup N\), then, there is a subset \(I\subset[q]\) (\(|I|=n\)) such that \(Q_{I}=N\). Recalling definition 2.3, we have \[M=\left(\begin{array}{ccc}0&q_{12}^{*}\cdots q_{1m}^{*}\\ q_{21}^{*}&&\\ \vdots&Q_{I^{c}}\\ q_{m1}^{*}&&\end{array}\right).\] Without loss of generality, we assume \(I=[n]\), then \(I^{c}=\{n+1,\cdots,q\}\). By definition 2.3, we know that \(q_{j1}^{*}=\sum_{k=1}^{n}q_{j+n-1\,k}\), \(j=2,\cdots,m\). We can now construct the decomposing map in the following way. \[\iota_{1^{*}}:\begin{pmatrix}q_{21}^{*}\\ \vdots\\ q_{m1}^{*}\end{pmatrix}\mapsto\begin{pmatrix}q_{n+1\,1}&\cdots&q_{n+1\,n}\\ \vdots&\ddots&\vdots\\ q_{q1}&\cdots&q_{qn}\end{pmatrix}=Q_{21},\] then we have \[(N\hookrightarrow_{(1^{*},t_{1^{*}})}M)=\begin{pmatrix}N&Q_{21}^{T}\\ Q_{21}&Q_{I^{c}}\end{pmatrix}=Q.\] In the rest of this section we will discuss the situation of making insertions repeatedly. Let \(M=(m_{ij})_{m\times m}\), \(N=(n_{ij})_{n\times n}\) and \(Q=(q_{ij})_{q\times q}\) be three connected adjacency matrices. 
There are two possible orders in which to make the insertion twice, which are \[((N\hookrightarrow_{(i,\iota_{i})}M)\hookrightarrow_{(j,\tau_{j})}Q)\ and\ (N \hookrightarrow_{(a,\mu_{a})}(M\hookrightarrow_{(b,\nu_{b})}Q)).\] Actually, we are interested in the situation of \((N\hookrightarrow_{(a,\mu_{a})}(M\hookrightarrow_{(b,\nu_{b})}Q))\), which is more complicated than the other. In this situation, there is a subset \(I\subset[m+q-1]\) such that \((M\hookrightarrow_{(b,\nu_{b})}Q)_{I}=M\). For the index \(a\), there are two possibilities, which are \(a\in I\) or \(a\notin I\). When \(a\in I\), it is easy to see that \(a\) corresponds to an index \(a^{\prime}\) of the rows (or columns) of \(M\). Thus, in this situation, we say \(a\in M\). Similarly, when \(a\notin I\), we say \(a\notin M\). We have the following conclusion. **Lemma 3.1**.: _Let \(M=(m_{ij})_{m\times m}\), \(N=(n_{ij})_{n\times n}\) and \(Q=(q_{ij})_{q\times q}\) be three connected adjacency matrices. About the insertion \((N\hookrightarrow_{(a,\mu_{a})}(M\hookrightarrow_{(b,\nu_{b})}Q))\), we have_ \[(N\hookrightarrow_{(a,\mu_{a})}(M\hookrightarrow_{(b,\nu_{b})}Q))=\left\{ \begin{array}{cc}((N\hookrightarrow_{(a^{\prime},\mu^{\prime}_{a^{\prime}})}M) \hookrightarrow_{(b,\nu^{\prime}_{b})}Q),&a\in M,\\ (N\oplus M\hookrightarrow_{(a,\mu_{a}),(b,\nu_{b})}Q),&a\notin M.\end{array}\right.\] Proof.: Here we focus on the situation of \(a\in M\). If \(a\in M\), it is easy to see that \(a\) corresponds to an index \(a^{\prime}\) of the rows (or columns) of \(M\). For simplicity, we assume \(a=b=1\). 
Then we have \[(M\hookrightarrow_{(1,\nu_{1})}Q)=\begin{pmatrix}M&A^{T}\\ A&Q_{1}\end{pmatrix},\] and \[O=(N\hookrightarrow_{(1,\mu_{1})}(M\hookrightarrow_{(1,\nu_{1})}Q))=\begin{pmatrix} N&B_{21}^{T}&B_{31}^{T}\\ B_{21}&M_{1}&B_{32}^{T}\\ B_{31}&B_{32}&Q_{1}\end{pmatrix},\] where we let \(O\) denote \((N\hookrightarrow_{(1,\mu_{1})}(M\hookrightarrow_{(1,\nu_{1})}Q))\) for short, \(M_{1}=(m_{ij})_{2\leq i,j\leq m}\), \(Q_{1}=(q_{ij})_{2\leq i,j\leq q}\). The decomposing map \(\nu_{1}\) is given by \(\nu_{1}:(q_{21},\cdots,q_{q1})^{T}\mapsto A=(a_{ij})_{(q-1)\times m}\). The decomposing map \(\mu_{1}\) is given by \[\mu_{1}:\begin{pmatrix}(m_{21},\cdots,m_{m1})^{T}\\ (a_{21},\cdots,a_{q1})^{T}\end{pmatrix}\mapsto\begin{pmatrix}B_{21}\\ B_{31}\end{pmatrix}.\] If we take \(I=[m+n-1]\) regarded as a subset of \([m+n+q-2]\), then \(O_{I}\) should be of the following form. \[O_{I}=\begin{pmatrix}N&B_{21}^{T}\\ B_{21}&M_{1}\end{pmatrix}.\] Thus, \(O_{I}=(N\hookrightarrow_{(1,\mu_{1}^{\prime})}M)\), where \(\mu_{1}^{\prime}=\mu_{1}|_{\{m_{i1}\}_{2\leq i\leq m}}\); precisely, we have \[\mu_{1}^{\prime}:\begin{pmatrix}m_{21}\\ \vdots\\ m_{m1}\end{pmatrix}\mapsto B_{21}.\] The decomposing map \(\nu_{1}^{\prime}\) should be of the following form \[\nu_{1}^{\prime}:\begin{pmatrix}q_{21}\\ \vdots\\ q_{q1}\end{pmatrix}\mapsto\left(B_{31},B_{32}\right).\] Similarly, we can prove a more general conclusion as follows. **Proposition 3.5**.: _Let \(M_{i}\), \(N_{j}\) and \(Q\) be connected adjacency matrices (\(i=1,\cdots,m,\,j=1,\cdots,n,\,n\geq m\)). 
Then we have_ \[\begin{array}{c}((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m]} \hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}Q))\\ =((N)_{\varLambda^{c}}\oplus((N)_{\varLambda}\hookrightarrow_{(i^{\prime}_{ \varLambda},i^{\prime}_{\varLambda})}(M)_{\varGamma})\oplus(M)_{\varGamma^{c }})\\ \hookrightarrow_{(q_{\varLambda^{c}},t_{\varLambda^{c}})\cup(q_{\varGamma}, \tau^{\prime}_{q_{\varGamma}})\cup(q_{\varGamma^{c}},\tau_{q_{\varGamma^{c}}}) }Q),\end{array} \tag{3.7}\] _where,_ \[\varLambda=\{j\in[n]|\,\exists i\in[m],\,s.t.\,\,a_{j}\in M_{i}\},\,\,\varLambda ^{c}=[n]\setminus\varLambda,\] \[\varGamma=\{i\in[m]|\,\exists j\in\varLambda,a_{j}\in M_{i}\},\,\,\varGamma^{c }=[m]\setminus\varGamma.\] Proof.: Observing the insertion \[((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m]}\hookrightarrow_{( q_{[m]},\tau_{q_{[m]}})}Q)),\] by definition 3.1, we know that for each \(N_{j}\) (\(1\leq j\leq n\)), there are two possibilities, which are \(a_{j}\in M_{i}\) for some \(i\), or \(a_{j}\notin M_{i}\) for any \(i\) (\(1\leq i\leq m\)). Thus, we have a decomposition \([n]=\varLambda\cup\varLambda^{c}\), where \[\varLambda=\{j\in[n]|\,\exists i\in[m],\,s.t.\,\,a_{j}\in M_{i}\}.\] Similarly, for each \(M_{i}\), there are two possibilities: there is some \(j\) such that \(a_{j}\in M_{i}\), or \(a_{j}\notin M_{i}\) for any \(j\in[n]\). We can take \[\varGamma=\{i\in[m]|\,\exists j\in[n],a_{j}\in M_{i}\}.\] When \(j\in\Lambda^{c}\), \(N_{j}\) inserts into \(Q\); thus, \(i_{j}\) will be assigned to some \(q_{j}\), where \(q_{j}\) is an index of the rows (or columns) of \(Q\). 
By definition 3.1, it is easy to see that, \[\begin{array}{c}((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m] }\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}Q))\\ =((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((N)_{A^{c}} \oplus(M)_{\Gamma}\oplus(M)_{\Gamma^{c}})\hookrightarrow_{(q_{A^{c}},t_{q_{A^{c }}})\cup(q_{[m]},\tau_{q_{[m]}})}Q))\\ =((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M)_{\Gamma} \hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma}})}O)),\end{array}\] where \[O=(((N)_{\Lambda^{c}}\oplus(M)_{\Gamma^{c}})\hookrightarrow_{(q_{A^{c}},t_{q_{ A^{c}}})\cup(q_{\Gamma^{c}},\tau_{q_{\Gamma^{c}}})}Q).\] When \(j\in\Lambda\), there is some \(i^{\prime}\in[m]\) such that \(N_{j}\) inserts into \(M_{i^{\prime}}\) at \(i_{j}\); thus \(i_{j}\) will be assigned to some \(i^{\prime}_{j}\), where \(i^{\prime}_{j}\) is an index of the rows (or columns) of \(M_{i^{\prime}}\). In a way similar to the one in the proof of lemma 3.1, we can prove that \[\begin{array}{c}(N_{j}\hookrightarrow_{(a_{j},t_{a_{j}})}(M_{i^{\prime}} \hookrightarrow_{(q_{i^{\prime}},\tau_{q_{i^{\prime}}})}((M)_{[m]\setminus\{i^ {\prime}\}}\hookrightarrow_{(q_{i},\tau_{q_{i}})_{i\in([m]\setminus\{i^{ \prime}\})}}O)))\\ =((N_{j}\hookrightarrow_{(i^{\prime}_{j},t^{\prime}_{j}})M_{i^{\prime}}) \hookrightarrow_{(q_{i^{\prime}},\tau^{\prime}_{q^{\prime}_{i^{\prime}}})}((M)_ {[m]\setminus\{i^{\prime}\}}\hookrightarrow_{(q_{i},\tau_{q_{i}})_{i\in([m] \setminus\{i^{\prime}\}}}O)))\end{array}\] Repeating the above argument, we can prove the formula (3.7). **Proposition 3.6**.: _Let \(M_{i}\), \(N_{j}\) and \(Q_{k}\) be connected adjacency matrices (\(i=1,\cdots,m\), \(j=1,\cdots,n\), \(n\geq m\), \(k=1,\cdots,q\)). 
Then we have_ \[\begin{array}{c}((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m]} \hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{[q]}))\\ =(Q)_{\Xi_{1}}\oplus((N)_{\Lambda_{1}}\hookrightarrow_{(q_{\Lambda_{1}},\iota_{a_{\Lambda_{1} }})}(Q)_{\Xi_{2}})\oplus((M)_{\Gamma_{1}}\hookrightarrow_{(q_{\Gamma_{1}},\tau _{q_{\Gamma_{1}}})}(Q)_{\Xi_{3}})\oplus O_{\Lambda,\Gamma,\Xi},\end{array} \tag{3.8}\] _where_ \[\begin{array}{c}O_{\Lambda,\Gamma,\Xi}=((N)_{\Lambda_{2}}\oplus((N)_{ \Lambda_{3}}\hookrightarrow_{(i_{\Lambda_{3}},\kappa_{i_{\Lambda_{3}}})}(M)_{\Gamma_{2}}) \oplus(M)_{\Gamma_{3}}\\ \hookrightarrow_{(q_{\Lambda_{2}},\lambda_{q_{\Lambda_{2}}})\cup(q_{\Gamma_{2}},\tau_{q_{ \Gamma_{2}}})\cup(q_{\Gamma_{3}},\gamma_{q_{\Gamma_{3}}})}(Q)_{\Xi_{4}}), \end{array}\] _and \(\{\Lambda_{1},\Lambda_{2},\Lambda_{3}\}\in\mathbf{Part}([n])\), \(\{\Gamma_{1},\Gamma_{2},\Gamma_{3}\}\in\mathbf{Part}([m])\), \(\{\Xi_{1},\Xi_{2},\Xi_{3},\Xi_{4}\}\in\mathbf{Part}([q])\)._ Proof.: The proof of the formula (3.8) concerns the decomposition of \(((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{[q]}))\) according to the way \(N_{[n]}\) and \(M_{[m]}\) insert into \((Q)_{[q]}\), and thus concerns the decompositions of \([n]\), \([m]\) and \([q]\). 
Firstly, recalling the formula (3.4), we know that there is an obvious decomposition of \([q]\), \([q]=\Xi\cup\Xi^{c}\), where \(\Xi^{c}=[q]\setminus\Xi\), and \[\Xi=\{k\in[q]|\:\exists q_{i}\ s.t.\ q_{i}\in Q_{k}\}.\] Thus we have \[((M)_{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{[q]})\] \[=((M)_{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}((Q)_{\Xi} \oplus(Q)_{\Xi^{c}}))\] \[=((M)_{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{\Xi}) \oplus(Q)_{\Xi^{c}}.\] Similarly, the decomposition \([q]=\Xi\cup\Xi^{c}\) will induce a decomposition of \([n]\), \([n]=\Lambda\cup\Lambda^{c}\), such that \[((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}((M)_{[m]} \hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{[q]}))\] \[=((N)_{[n]}\hookrightarrow_{(a_{[n]},t_{a_{[n]}})}(((M)_{[m]} \hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{\Xi})\oplus(Q)_{\Xi^{c}}))\] \[=((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M )_{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{\Xi}))\oplus((N)_{ \Lambda^{c}}\hookrightarrow_{(a_{\Lambda^{c}},t_{a_{\Lambda^{c}}})}(Q)_{\Xi^{c }}).\] With the help of the formula (3.6) once more, we have \[((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M) _{[m]}\hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{\Xi}))\] \[=((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M )_{\Gamma}\hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma^{c}}})}(Q)_{\Xi^{ \prime}})\oplus((M)_{\Gamma^{c}}\hookrightarrow_{(q_{\Gamma^{c}},\tau_{q_{ \Gamma^{c}}})}(Q)_{\Xi^{\prime\prime}})))\] \[=((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M )_{\Gamma}\hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma^{c}}})}(Q)_{\Xi^{ \prime}}))\oplus((M)_{\Gamma^{c}}\hookrightarrow_{(q_{\Gamma^{c}},\tau_{q_{ \Gamma^{c}}})}(Q)_{\Xi^{\prime\prime}}),\] In fact, by definition of \(\Xi\), we know that \(\Xi\) induces a decomposition of \([m]\), \(\{I_{k}\}_{k\in\Xi}\in\mathbf{Part}([m])\), where \[I_{k}=\{i\in[m]|\,q_{i}\in Q_{k}\},\,k\in\Xi.\] Then, 
\(\Xi^{\prime\prime}\) can be taken as \[\Xi^{\prime\prime}=\{k\in\Xi|\,a_{j}\notin((M)_{I_{k}}\hookrightarrow_{(q_{I_{k }},\tau_{q_{I_{k}}})}Q_{k}),\forall j\in\Lambda\},\] \(\Xi^{\prime}=\Xi\setminus\Xi^{\prime\prime}\). Moreover, we have \(\Gamma=\bigcup_{k\in\Xi^{\prime}}I_{k}\), \(\Gamma^{c}=[m]\setminus\Gamma\). Similarly, we have \[((N)_{\Lambda^{c}}\hookrightarrow_{(a_{\Lambda^{c}},t_{a_{\Lambda^{c}}})}(Q)_{ \Xi^{c}})=((N)_{\Lambda^{c}}\hookrightarrow_{(a_{\Lambda^{c}},t_{a_{\Lambda^{c }}})}(Q)_{\Xi_{c,N}\hookrightarrow Q})\oplus(Q)_{\Xi_{c,Q}}.\] We now pay attention to the term \(((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},t_{a_{\Lambda}})}((M)_{\Gamma} \hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma}})}(Q)_{\Xi^{\prime}}))\). The decomposition \[((M)_{\Gamma}\hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma}})}(Q)_{\Xi^{ \prime}})=\bigoplus_{k\in\Xi^{\prime}}((M)_{I_{k}}\hookrightarrow_{(q_{I_{k}}, \tau_{q_{I_{k}}})}Q_{k})\] induces a decomposition of \(\Lambda\), which is \(\{J_{k}\}_{k\in\Xi^{\prime}}\), where \[J_{k}=\{j\in\Lambda|\,a_{j}\in((M)_{I_{k}}\hookrightarrow_{(q_{I_{k}},\tau_{q_ {I_{k}}})}Q_{k})\}.\] By definition of \(\Xi^{\prime}\), it is easy to see that \(I_{k}\neq\emptyset\), and \(J_{k}\neq\emptyset\) (\(k\in\Xi^{\prime}\)), and \[((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},\iota_{a_{\Lambda}})}((M)_{\Gamma} \hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma}})}(Q)_{\Xi^{\prime}}))\] \[=\bigoplus_{k\in\Xi^{\prime}}((N)_{J_{k}}\hookrightarrow_{(a_{J_{k} },\iota_{a_{J_{k}}})}((M)_{I_{k}}\hookrightarrow_{(q_{I_{k}},\tau_{q_{I_{k}}})} Q_{k})).\] Noting the formula (3.7), we have \[((N)_{J_{k}}\hookrightarrow_{(a_{J_{k}},\iota_{a_{J_{k}}})}((M)_{ I_{k}}\hookrightarrow_{(q_{I_{k}},\tau_{q_{I_{k}}})}Q_{k}))\] \[=((N)_{J^{\prime\prime}_{k}}\oplus((N)_{J^{\prime}_{k}} \hookrightarrow_{(i^{\prime}_{J^{\prime}_{k}},\iota_{i^{\prime}_{J^{\prime}_{k}}})}(M)_ {I^{\prime}_{k}})\oplus(M)_{I^{\prime\prime}_{k}}\hookrightarrow_{(q_{J^{\prime\prime}_{k}},\iota_{q_{J^{\prime\prime}_{k}}})\cup(q_{I_{k}},\tau^{ \prime}_{q_{I_{k}}})}Q_{k}),\] where \(k\in\Xi^{\prime}\), \(J_{k}=J^{\prime}_{k}\cup J^{\prime\prime}_{k}\), \(J^{\prime}_{k}\cap J^{\prime\prime}_{k}=\emptyset\), \(I_{k}=I^{\prime}_{k}\cup I^{\prime\prime}_{k}\), \(I^{\prime}_{k}\cap I^{\prime\prime}_{k}=\emptyset\). If we take \(\Lambda^{\prime}=\bigcup_{k\in\Xi^{\prime}}J^{\prime}_{k}\), \(\Lambda^{\prime\prime}=\bigcup_{k\in\Xi^{\prime}}J^{\prime\prime}_{k}\), \(\Gamma^{\prime}=\bigcup_{k\in\Xi^{\prime}}I^{\prime}_{k}\), \(\Gamma^{\prime\prime}=\bigcup_{k\in\Xi^{\prime}}I^{\prime\prime}_{k}\), then we have \[((N)_{\Lambda}\hookrightarrow_{(a_{\Lambda},\iota_{a_{\Lambda}})}((M)_{ \Gamma}\hookrightarrow_{(q_{\Gamma},\tau_{q_{\Gamma}})}(Q)_{\Xi^{\prime}}))\] \[=(((N)_{\Lambda^{\prime\prime}}\oplus((N)_{\Lambda^{\prime}} \hookrightarrow_{(i_{\Lambda^{\prime}},\kappa_{i_{\Lambda^{\prime}}})}(M)_{\Gamma^{\prime}} )\oplus(M)_{\Gamma^{\prime\prime}})\hookrightarrow_{(q_{\Lambda^{\prime\prime}},\iota _{q_{\Lambda^{\prime\prime}}})\cup(q_{\Gamma},\tau^{\prime}_{q_{\Gamma}})}(Q)_{\Xi^{\prime}}).\] Summarizing the previous discussions, we can reach the formula (3.8). 
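The basic insertion operation that these propositions manipulate can be made concrete with a small computational sketch. Everything below is our own illustrative encoding, not part of the paper's formalism: the function name `insert`, the 0-based indexing, and the representation of the decomposing map \(\iota_{i}\) as a dict are all conventions chosen for the example. The function builds the block matrix \(\begin{pmatrix}N&A^{T}\\ A&M_{I}\end{pmatrix}\) appearing in the proof of Proposition 3.2.

```python
def insert(N, M, i, iota):
    """Sketch of the insertion of N into M at index i (0-based).

    M, N : symmetric adjacency matrices given as lists of lists of
           nonnegative integers (M is m x m, N is n x n).
    iota : the decomposing map, given as a dict sending each index j != i
           to a length-n tuple (a_j1, ..., a_jn) whose entries sum to M[j][i].
    Returns the (m + n - 1) x (m + n - 1) block matrix [[N, A^T], [A, M_I]],
    where M_I is M with row and column i deleted and A collects the
    iota-values as an (m-1) x n block.
    """
    m, n = len(M), len(N)
    rest = [j for j in range(m) if j != i]
    for j in rest:                          # decomposing-map condition
        assert sum(iota[j]) == M[j][i]
    A = [list(iota[j]) for j in rest]       # the (m-1) x n cross block
    # top rows: [ N | A^T ], bottom rows: [ A | M_I ]
    top = [list(N[r]) + [A[j][r] for j in range(m - 1)] for r in range(n)]
    bottom = [A[j] + [M[rest[j]][k] for k in rest] for j in range(m - 1)]
    return top + bottom
```

For example, inserting a single edge \(N=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) into a double edge \(M=\begin{pmatrix}0&2\\ 2&0\end{pmatrix}\) at \(i=0\) with \(\iota\) splitting the entry \(2\) as \((1,1)\) yields the triangle matrix \([[0,1,1],[1,0,1],[1,1,0]]\), which is connected, in agreement with Proposition 3.2.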
**Remark 3.2**.: _In the formula (3.8), we can take \(\Lambda=\Lambda_{3}\), \(\Lambda^{c}=\Lambda_{1}\cup\Lambda_{2}\), \(\Gamma=\Gamma_{2}\), \(\Gamma^{c}=\Gamma_{1}\cup\Gamma_{3}\), \(\Xi=\Xi_{2}\cup\Xi_{3}\cup\Xi_{4}\), \(\Xi^{c}=\Xi_{1}\), then we have_ \[((N)_{[n]}\hookrightarrow_{(a_{[n]},\iota_{a_{[n]}})}((M)_{[m]} \hookrightarrow_{(q_{[m]},\tau_{q_{[m]}})}(Q)_{[q]}))\] \[=((N)_{\Lambda^{c}}\oplus((N)_{\Lambda}\hookrightarrow_{(i_{\Lambda}, \kappa_{i_{\Lambda}})}(M)_{\Gamma})\oplus(M)_{\Gamma^{c}}\] \[\hookrightarrow_{(q_{\Lambda^{c}},\lambda_{q_{\Lambda^{c}}})\cup(q_{\Gamma},\gamma _{q_{\Gamma}})\cup(q_{\Gamma^{c}},\gamma_{q_{\Gamma^{c}}})}(Q)_{\Xi})\oplus(Q)_{\Xi^{c}}.\] ## 4 The algebraic structure of \(\mathcal{H}^{*}_{adj}\) ### Basic notations and the primitive elements Let \[\mathcal{H}_{adj,n}=\mathbf{Span}_{\mathbb{C}}\{\{M\}\in M_{adj}(+\infty, \mathbb{N})|\mathbf{deg}\{M\}=n\},\,n\geq 0,\] where \(\mathcal{H}_{adj,0}=\mathbb{C}\{0\}\cong\mathbb{C}\). Then each \(\mathcal{H}_{adj,n}\) is finite dimensional, and we have \[\mathcal{H}_{adj}=\bigoplus_{n=0}^{+\infty}\mathcal{H}_{adj,n}.\] For \(\{M_{i}\}\in\mathcal{H}_{adj,n_{i}}\)\((i=1,2)\), we have \[\{M_{1}\}\oplus\{M_{2}\}\in\mathcal{H}_{adj,n_{1}+n_{2}}.\] On the other hand, it is easy to check that about coproduct we have \[\triangle:\mathcal{H}_{adj,n}\longrightarrow\bigoplus_{p+q=n}\mathcal{H}_{adj,p} \otimes\mathcal{H}_{adj,q}.\] Therefore, \(\mathcal{H}_{adj}\) is a connected graded Hopf algebra (see?). In this section we will discuss the dual Hopf algebra in the following sense \[\mathcal{H}_{adj}^{*}=\bigoplus_{n=0}^{+\infty}\mathcal{H}_{adj,n}^{*}. \tag{4.1}\] It is well known that, by definition, the coproduct on \(\mathcal{H}_{adj}^{*}\) is dual to the product on \(\mathcal{H}_{adj}\), i.e. for \(f\in\mathcal{H}_{adj}^{*}\) we have \[<\triangle f,\{M_{1}\}\otimes\{M_{2}\}>=<f,\{M_{1}\}\oplus\{M_{2}\}>,\] where \(\{M_{1}\},\{M_{2}\}\in M_{adj}(+\infty,\mathbb{N})\). 
Similarly, the product on \(\mathcal{H}_{adj}^{*}\) is dual to the coproduct on \(\mathcal{H}_{adj}\). Thus, for \(f,g\in\mathcal{H}_{adj}^{*}\) and \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\) we have \[<f\bullet g,\{M\}>=<f\otimes g,\triangle\{M\}>,\] where \(\bullet\) denotes the product on \(\mathcal{H}_{adj}^{*}\). Because the coproduct on \(\mathcal{H}_{adj}\) is not co-commutative, the multiplication \(\bullet\) is not commutative. Let \[\{f_{\{M\}}|\{M\}\in M_{adj}(+\infty,\mathbb{N}),\{M\}\neq 0\}\] denote the set of dual bases of \(\mathcal{H}_{adj}^{*}\), which means each \(f_{\{M\}}\) (\(\{M\}\neq 0\)) satisfies \[<f_{\{M\}},\{N\}>=\left\{\begin{array}{ll}1,&\{N\}=\{M\},\\ 0,&others.\end{array}\right.\] About the dual bases mentioned above, we have the following. **Proposition 4.1**.: _Let \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\), (\(\{M\}\neq 0\)), \(\{M\}=\bigoplus_{i=1}^{k}\{M_{i}\}\), each \(\{M_{i}\}\) be connected (\(i=1,\cdots,k\)). Then we have_ \[\triangle f_{\{M\}}=f_{\{M\}}\otimes\eta+\eta\otimes f_{\{M\}}+\sum_{I\subset[ k],I\neq[k],\emptyset}f_{\bigoplus_{i\in I}\{M_{i}\}}\otimes f_{\bigoplus_{i\in I ^{c}}\{M_{i}\}}, \tag{4.2}\] _where \(\eta\) is the co-unit on \(\mathcal{H}_{adj}\), \(I^{c}=[k]\setminus I\)._ Proof.: Recalling the definition of \(f_{\{M\}}\), \[<f_{\{M\}},\{N\}>=\left\{\begin{array}{ll}1,&\{N\}=\{M\},\\ 0,&others,\end{array}\right.\] we know that when \(\{N_{1}\}\oplus\{N_{2}\}=\{M\}\), \[<\triangle f_{\{M\}},\{N_{1}\}\otimes\{N_{2}\}>=<f_{\{M\}},\{N_{1}\}\oplus\{N _{2}\}>=<f_{\{M\}},\{M\}>\neq 0,\] otherwise, \[<\triangle f_{\{M\}},\{N_{1}\}\otimes\{N_{2}\}>=0.\] The condition \(\{N_{1}\}\oplus\{N_{2}\}=\{M\}\) means that \(\{N_{1}\}=\bigoplus_{i\in I}\{M_{i}\}\), \(\{N_{2}\}=\bigoplus_{i\in I^{c}}\{M_{i}\}\) for some subset \(I\subset[k]\). 
Therefore, it is natural that \(\triangle f_{\{M\}}\) should be of the form \[\triangle f_{\{M\}}=\sum_{I\subset[k]}g_{I}\otimes h_{I^{c}},\] where \(g_{I},h_{I^{c}}\in\mathcal{H}^{*}_{adj}\) satisfy \[<g_{I}\otimes h_{I^{c}},\{N_{1}\}\otimes\{N_{2}\}>=<g_{I},\{N_{1 }\}><h_{I^{c}},\{N_{2}\}>\] \[=\left\{\begin{array}{ll}1,&\{N_{1}\}=\bigoplus_{i\in I}\{M_{i} \},\{N_{2}\}=\bigoplus_{i\in I^{c}}\{M_{i}\},\\ 0&others.\end{array}\right.\] Thus, \(g_{I}\) and \(h_{I^{c}}\) will be \(f_{\bigoplus_{i\in I}\{M_{i}\}}\) and \(f_{\bigoplus_{i\in I^{c}}\{M_{i}\}}\) respectively. Particularly, when \(I=\emptyset\), \(g_{I}=\eta\), when \(I^{c}=\emptyset\), \(h_{I^{c}}=\eta\). **Corollary 4.1**.: _Let \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\), then \(\{M\}\) is connected if and only if_ \[\triangle f_{\{M\}}=f_{\{M\}}\otimes\eta+\eta\otimes f_{\{M\}}.\] Let \(f\in\mathcal{H}^{*}_{adj}\); it is well known that, by definition, if \(f\) satisfies \[\triangle f=f\otimes\eta+\eta\otimes f,\] then it is called a primitive element in \(\mathcal{H}^{*}_{adj}\). Let \(\mathbf{P}(\mathcal{H}^{*}_{adj})\) denote the set of all primitive elements of \(\mathcal{H}^{*}_{adj}\). Then, with the help of corollary 4.1, we have \[\mathbf{P}(\mathcal{H}^{*}_{adj})=\mathbf{Span}_{\mathbb{C}}(\{f_{\{M\}}|\{M \}\ is\ connected\}). \tag{4.3}\] ### The product on \(\mathcal{H}^{*}_{adj}\) About the product on \(\mathcal{H}^{*}_{adj}\) we have the following formula. **Proposition 4.2**.: _Let \(M\in M_{adj}(m,\mathbb{N})\), \(N\in M_{adj}(n,\mathbb{N})\) be two connected adjacency matrices. Then, we have_ \[f_{\{N\}}\bullet f_{\{M\}}=\sum_{i,\iota_{i}}f_{\{(N\hookrightarrow_{i,\iota _{i}}M)\}}+f_{\{M\}\oplus\{N\}}. 
\tag{4.4}\] Proof.: By the definition, the product \(f_{\{N\}}\bullet f_{\{M\}}\) is defined by the following formula, \[<f_{\{N\}}\bullet f_{\{M\}},\{Q\}>=<f_{\{N\}}\otimes f_{\{M\}},\triangle\{Q \}>,\ \{Q\}\in M_{adj}(+\infty,\mathbb{N}).\] It is easy to see that when \(\{Q\}\) is connected, the meaningful choice of \(\{Q\}\) should be \(\{(N\hookrightarrow_{i,\iota_{i}}M)\}\). Actually, we have \[\begin{array}{c}\triangle\{(N\hookrightarrow_{i,\iota_{i}}M)\}\\ =\{(N\hookrightarrow_{i,\iota_{i}}M)\}\otimes 0+0\otimes\{(N\hookrightarrow_{i, \iota_{i}}M)\}+\cdots+\{N\}\otimes\{(N\hookrightarrow_{i,\iota_{i}}M) \diagup N\}+\cdots.\end{array}\] Thus \[<f_{\{N\}}\bullet f_{\{M\}},\{(N\hookrightarrow_{i,\iota_{i}}M)\}>=<f_{\{N\}} \otimes f_{\{M\}},\{N\}\otimes\{M\}>=1.\] In the situation of \(\{Q\}\) being disconnected, the suitable choice of \(\{Q\}\) should be \(\{N\}\oplus\{M\}\). It is obvious that \[<f_{\{N\}}\bullet f_{\{M\}},\{N\}\oplus\{M\}>=1.\] For other \(\{Q\}\), we have \[<f_{\{N\}}\bullet f_{\{M\}},\{Q\}>=0.\] Up to now, we have proved the formula (4.4). Furthermore, we have a more general formula about the product on \(\mathcal{H}^{*}_{adj}\). **Theorem 4.1**.: _Let \(M_{i}\), \(N_{j}\) be connected adjacency matrices (\(i=1,\cdots,m,\,j=1,\cdots,n\)). 
Then we have_ \[\begin{array}{c}f_{\{(N)_{[n]}\}}\bullet f_{\{(M)_{[m]}\}}\\ =\sum\limits_{\Lambda\subset[n],\Lambda\neq\emptyset}\sum\limits_{(i_{\Lambda},\iota_{i_{\Lambda}})}f_{\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{( i_{\Lambda},\iota_{i_{\Lambda}})}(M)_{[m]})\}}+f_{\{(N)_{[n]}\}\oplus\{(M)_{[m]}\}},\end{array} \tag{4.5}\] _where \(\Lambda^{c}=[n]\setminus\Lambda\)._ Proof.: Recalling the definition of the product on \(\mathcal{H}^{*}_{adj}\), we have \[\begin{array}{c}<f_{\{(N)_{[n]}\}}\bullet f_{\{(M)_{[m]}\}},\{Q\}>\\ =<f_{\{(N)_{[n]}\}}\otimes f_{\{(M)_{[m]}\}},\triangle\{Q\}>,\ \{Q\}\in M_{adj}(+ \infty,\mathbb{N}).\end{array}\] In order to prove theorem 4.1, we need to choose \(\{Q\}\) such that \[<f_{\{(N)_{[n]}\}}\otimes f_{\{(M)_{[m]}\}},\triangle\{Q\}>\neq 0.\] Here we are interested in the situation of \(m\geq 2\). Hence, \(\{Q\}\) should be disconnected. Actually, if \(\{Q\}=\{Q_{1}\}\oplus\cdots\oplus\{Q_{p}\}\), where each \(\{Q_{k}\}\) is connected (\(k=1,\cdots,p\)), then \(p\geq m\). We focus on the right factors in the tensor; then \(\triangle\{Q_{k}\}\) (\(k=1,\cdots,p\)) will be required to provide \(\{M_{i}\}\) (\(i=1,\cdots,m\)) on the right factors. For the same reason, \(\triangle\{Q_{k}\}\) should provide \(\{N_{j}\}\) (\(j=1,\cdots,n\)) on the left factors. Therefore, there are only three meaningful possibilities of \(\{Q_{k}\}\) as follows. * \(\{Q_{k}\}=\{((N)_{J}\hookrightarrow_{(i_{J},\iota_{i_{J}})}M_{a})\}\), where \(J\subset[n]\). 
Then \(\triangle\{Q_{k}\}\) will contain the term \[\{(N)_{J}\}\otimes\{M_{a}\}.\] * \(\{Q_{k}\}=\{M_{a}\}\), then \[\triangle\{Q_{k}\}=0\otimes\{M_{a}\}+\cdots.\] * \(\{Q_{k}\}=\{(N)_{J}\}\) for some \(J\subset[n]\), then \[\triangle\{Q_{k}\}=\{(N)_{J}\}\otimes 0+\cdots.\] The previous discussions show that the suitable choices of \(\{Q\}\) should be of the following form: \[\{Q\}=\{(N)_{\Lambda^{c}}\}\oplus(\bigoplus_{i\in I}\{((N)_{J_{i}} \hookrightarrow_{(i_{J_{i}},\iota_{i_{J_{i}}})}M_{i})\})\oplus\{(M)_{I^{c}}\},\] where \(\Lambda\subset[n]\), \(\Lambda^{c}=[n]\setminus\Lambda\), \(\{J_{i}\}_{i\in I}\in\mathbf{Part}(\Lambda)\). Comparing the above expression with the formula (3.6), we know that \(\{Q\}\) should be taken to be \[\{Q\}=\left\{\begin{array}{c}\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda} \hookrightarrow_{(i_{\Lambda},\iota_{i_{\Lambda}})}(M)_{[m]})\},\ \ \Lambda\neq\emptyset,\\ \{(N)_{[n]}\}\oplus\{(M)_{[m]}\}.\end{array}\right.\] The above discussions mean that the formula (4.5) is valid. The formula (4.5) suggests that we define a new multiplication on \(\mathcal{H}_{adj}\). **Definition 4.1**.: _Let \(\{M_{i}\}\), \(\{N_{j}\}\) be connected (\(i=1,\cdots,m,\,j=1,\cdots,n\)). We define the multiplication \(\bullet\) between \(\{M_{1}\}\oplus\cdots\oplus\{M_{m}\}\) and \(\{N_{1}\}\oplus\cdots\oplus\{N_{n}\}\) as follows:_ \[\begin{array}{c}\{(N)_{[n]}\}\bullet\{(M)_{[m]}\}\\ =\sum\limits_{\Lambda\subset[n],\Lambda\neq\emptyset}\ \sum\limits_{(i_{\Lambda},\iota_{i_{\Lambda}})} \{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{i_{\Lambda}})}(M)_{[m]})\} \\ +(\{(N)_{[n]}\}\oplus\{(M)_{[m]}\}),\end{array} \tag{4.6}\] _where \(\Lambda^{c}=[n]\setminus\Lambda\)._ It is easy to see that the multiplication (4.6) is non-commutative. We want to prove the associativity of the product \(\bullet\). **Theorem 4.2**.: _Let \(\{M_{i}\}\), \(\{N_{j}\}\) and \(\{Q_{k}\}\) be connected (\(i=1,\cdots,m,\,j=1,\cdots,n,\,k=1,\cdots,q\)). 
Then we have_ \[\{(N)_{[n]}\}\bullet(\{(M)_{[m]}\}\bullet\{(Q)_{[q]}\})=(\{(N)_{[n]}\}\bullet \{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}. \tag{4.7}\] Proof.: The sum on the right side of (4.6) is over all possible insertions. Therefore, to prove the formula (4.7) we need to know what types of terms will appear on both sides of (4.7). **The situation of the right side** : First, we consider the right side of (4.7). By the formulas (4.6), (3.6), we know that \[(\{(N)_{[n]}\}\bullet\{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}\] \[=\sum\limits_{\Lambda\subset[n],\,\Lambda\neq\emptyset}\ \sum\limits_{(i_{\Lambda},\iota_{i_{\Lambda}})}(\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{ \Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{i_{\Lambda}})}(M)_{\Gamma})\}\oplus\{(M) _{\Gamma^{c}}\})\bullet\{(Q)_{[q]}\}\] \[+(\{(N)_{[n]}\}\oplus\{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}.\] We focus on the terms of the following form, \[(\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{i_{ \Lambda}})}(M)_{\Gamma})\}\oplus\{(M)_{\Gamma^{c}}\})\bullet\{(Q)_{[q]}\}.\ \ \ \ (***)\] In the expression \((***)\), \[\Gamma=\{i\in[m]|\,\exists j\in\Lambda\text{ {s.t.} }i_{j}\in M_{i}\},\] and \(\Lambda\neq\emptyset\), thus \(\Gamma\neq\emptyset\). 
For the same reason, by the formula (4.6), we have \[(\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{\Lambda})}(M)_{\Gamma})\}\oplus\{(M)_{\Gamma^{c}}\})\bullet\{(Q)_{[q]}\}\] \[=\sum\limits_{\Lambda_{c,2},\Lambda_{2},\Gamma_{2},\Gamma_{c,2}} \{(N)_{\Lambda_{c,1}}\}\oplus\{(M)_{\Gamma_{c,1}}\}\oplus\{(Q)_{\Xi^{c}}\}\] \[\oplus\{((N)_{\Lambda_{1}}\hookrightarrow_{(i_{\Lambda_{1}},\iota_{\Lambda_{1}})}(M)_{\Gamma_{1}})\}\oplus\sum\limits_{\{*\}\cup\{*\}\cup\{*\} \cup\{*\}}\{O_{\Lambda_{c,2},\Lambda_{2},\Gamma_{2},\Gamma_{c,2},\Xi,\{*\}\cup \{*\}\cup\{*\}}\}\] \[+\{(N)_{\Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{(i_{ \Lambda},\iota_{\Lambda})}(M)_{\Gamma})\}\oplus\{(M)_{\Gamma^{c}}\}\oplus\{(Q)_{[ q]}\},\] where \(\Lambda_{c,2}\cup\Lambda_{2}\cup\Gamma_{2}\cup\Gamma_{c,2}\neq\emptyset\), and \[\{O_{\Lambda_{c,2},\Lambda_{2},\Gamma_{2},\Gamma_{c,2},\Xi,\{*\} \cup\{*\}\cup\{*\}}\}\] \[=\{[((N)_{\Lambda_{c,2}}\oplus((N)_{\Lambda_{2}}\hookrightarrow_{ (i_{\Lambda_{2}},\iota_{\Lambda_{2}})}(M)_{\Gamma_{2}})\] \[\oplus(M)_{\Gamma_{c,2}})\hookrightarrow_{\{*\}\cup\{*\}\cup\{*\} \cup\{*\}}(Q)_{\Xi}]\},\] moreover, * \(\Lambda=\Lambda_{1}\cup\Lambda_{2}\), \(\Lambda_{1}\cap\Lambda_{2}=\emptyset\). * \(\Lambda^{c}=\Lambda_{c,1}\cup\Lambda_{c,2}\), \(\Lambda_{c,1}\cap\Lambda_{c,2}=\emptyset\). * \(\Gamma=\Gamma_{1}\cup\Gamma_{2}\), \(\Gamma_{1}\cap\Gamma_{2}=\emptyset\). * \(\Gamma^{c}=\Gamma_{c,1}\cup\Gamma_{c,2}\), \(\Gamma_{c,1}\cap\Gamma_{c,2}=\emptyset\). * \([q]=\Xi\cup\Xi^{c}\), where the choice of \(\Xi\) depends on the other decompositions mentioned above. Additionally, we need to consider the term \((\{(N)_{[n]}\}\oplus\{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}\). 
For the same reason as above, we have \[(\{(N)_{[n]}\}\oplus\{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}=\sum_{\Lambda\subset[n],\,\Gamma\subset[m],\,\Lambda\cup\Gamma\neq\emptyset}\{(N)_{\Lambda^{c}}\}\oplus\{(M)_{\Gamma^{c}}\}\oplus\{(Q)_{\Xi^{c}}\}\oplus\{(((N)_{\Lambda}\oplus(M)_{\Gamma})\hookrightarrow_{(q_{\Lambda},\iota_{q_{\Lambda}})\cup(q_{\Gamma},\tau_{q_{\Gamma}})}(Q)_{\Xi})\}+\{(N)_{[n]}\}\oplus\{(M)_{[m]}\}\oplus\{(Q)_{[q]}\}.\] In summary, we get a general expression of the right side of the formula (4.7) as follows. \[(\{(N)_{[n]}\}\bullet\{(M)_{[m]}\})\bullet\{(Q)_{[q]}\}\] \[=\sum_{*}\{(N)_{\Lambda_{1}}\}\oplus\{(M)_{\Gamma_{1}}\}\oplus\{(Q)_{ \Xi^{c}}\}\oplus\{((N)_{\Lambda_{2}}\hookrightarrow_{(i_{\Lambda_{2}},\iota_{i_{\Lambda_{2}}})}( M)_{\Gamma_{2}})\}\] \[\oplus\{[((N)_{\Lambda_{3}}\oplus((N)_{\Lambda_{4}}\hookrightarrow_{(i_{\Lambda_{4}},\iota_{i_{\Lambda_{4}}})}(M)_{\Gamma_{3}})\oplus(M)_{\Gamma_{4}})\hookrightarrow_ {\{*\}\cup\{*\}\cup\{*\}\cup\{*\}}(Q)_{\Xi}]\}\] \[+\{(N)_{[n]}\}\oplus\{(M)_{[m]}\}\oplus\{(Q)_{[q]}\},\] where the sum is over all possible choices of \(\{\Lambda_{i}\}_{i=1}^{4}\), \(\{\Gamma_{i}\}_{i=1}^{4}\) and \(\Xi\), \(\{\Lambda_{i}\}_{i=1}^{4}\in\mathbf{Part}([n])\), \(\{\Gamma_{i}\}_{i=1}^{4}\in\mathbf{Part}([m])\), \(\Lambda_{i}\) or \(\Gamma_{j}\) is allowed to be empty for some \(i\) or \(j\) (\(1\leq i,j\leq 4\)), and \[(\bigcup_{i=1,2,3}\Lambda_{i})\cup(\bigcup_{i=1,2,3}\Gamma_{i})\neq\emptyset.\] **The situation of the left side** : We now consider the left side of the formula (4.7). 
Similarly, we need to focus on the terms with the following form, \[\{(N)_{[n]}\}\bullet(\{(M)_{\Gamma^{c}}\}\oplus\{((M)_{\Gamma}\hookrightarrow_ {(q_{T},\kappa_{q_{\Gamma}})}(Q)_{\Xi})\}\oplus(Q)_{\Xi^{c}}),\quad(**)\] where \(\Gamma\subset[m]\), \(\Gamma\neq\emptyset\), \[\Xi=\{k\in[q]|\,\exists i\in\Gamma,\ s.t.\ q_{i}\in Q_{k}\}.\] Precisely, \(\Xi\) results in a decomposition of \(\Gamma\), \(\{I_{k}\}_{k\in\Xi}\in\mathbf{Part}(\Gamma)\), such that \[\{((M_{i})_{i\in\Gamma}\hookrightarrow_{\{q_{i},\kappa_{q_{i}}\}_{i \in\Gamma}}(\ \bigoplus_{k\in\Xi}Q_{k}))\}\] \[=\bigoplus_{k\in\Xi}\{((M_{i})_{i\in I_{k}}\hookrightarrow_{\{(q_ {i},\kappa_{q_{i}})\}_{i\in I_{k}}}Q_{k})\},\] where \(I_{k}=\{i\in\Gamma|q_{i}\in Q_{k}\}\). Now we give a description of the expression \((**)\) in detail based on the formula (4.6). Due to the formula (4.6), we have \[\{(N)_{[n]}\}\bullet(\{(M)_{\Gamma^{c}}\}\oplus\{((M)_{\Gamma} \hookrightarrow_{(q_{T},\kappa_{q_{T}})}(Q)_{\Xi})\}\oplus(Q)_{\Xi^{c}})\] \[=\sum_{\begin{subarray}{c}\Lambda\subset[n],\Lambda\neq\emptyset \end{subarray}}\{(N)_{\Lambda^{c}}\}\oplus\sum_{(a_{\Lambda},\iota_{a_{A}}) }\{((N)_{\Lambda}\hookrightarrow_{(a_{A},\iota_{a_{A}})}((M)_{\Gamma^{c}} \oplus((M)_{\Gamma}\hookrightarrow_{(q_{T},\kappa_{q_{\Gamma}})}(Q)_{\Xi})\\ \qquad\oplus(Q)_{\Xi^{c}}))\}+\{(N)_{[n]}\}\oplus\{(M)_{\Gamma^{c}}\}\oplus\{ ((M)_{\Gamma}\hookrightarrow_{(q_{T},\kappa_{q_{\Gamma}})}(Q)_{\Xi})\}\oplus( Q)_{\Xi^{c}}.\] We focus on the term \[\{((N)_{\varLambda}\hookrightarrow_{(a_{A},\iota_{a_{A}})}((M)_{\varGamma^{c}} \oplus((M)_{\varGamma}\hookrightarrow_{(q_{\varGamma},\kappa_{q_{\varGamma}})}(Q )_{\varXi})\oplus(Q)_{\varXi^{c}}))\}.\] We divide \(\varLambda\) into three subsets \(\varLambda_{N\hookrightarrow M}\), \(\varLambda_{N\hookrightarrow M\hookrightarrow Q}\) and \(\varLambda_{N\hookrightarrow Q}\) (\(\{\varLambda_{N\hookrightarrow M},\varLambda_{N\hookrightarrow M\hookrightarrow Q },\varLambda_{N\hookrightarrow 
Q}\}\in\mathbf{Part}(\varLambda)\)) such that the above term can be divided into three parts. \[\{((N)_{\varLambda}\hookrightarrow_{(a_{A},\iota_{a_{A}})}((M)_{ \varGamma^{c}}\oplus((M)_{\varGamma}\hookrightarrow_{(q_{\varGamma},\kappa_{q_ {\varGamma}})}(Q)_{\varXi})\oplus(Q)_{\varXi^{c}}))\}\] \[=\{((N)_{\varLambda_{N\hookrightarrow M}}\hookrightarrow_{(i_{ \varLambda_{N\hookrightarrow M}},\iota_{i_{\varLambda_{N\hookrightarrow M}}})}(M)_{ \varGamma^{c}})\}\oplus\] \[\{((N)_{\varLambda_{N\hookrightarrow M\hookrightarrow Q}} \hookrightarrow_{(a_{\varLambda_{N\hookrightarrow M\hookrightarrow Q}},\lambda_{a _{\varLambda_{N\hookrightarrow M\hookrightarrow Q}}})}((M)_{\varGamma} \hookrightarrow_{(q_{\varGamma},\kappa_{q_{\varGamma}})}(Q)_{\varXi})\}\] \[\oplus\{((N)_{\varLambda_{N\hookrightarrow Q}}\hookrightarrow_{(q_{ \varLambda_{N\hookrightarrow Q}},\iota_{q_{\varLambda_{N\hookrightarrow Q}}})}(Q)_{ \varXi^{c}})\}.\] Furthermore, by the formula (3.6) we have: * \[\{((N)_{\varLambda_{N\hookrightarrow M}}\hookrightarrow_{(i_{\varLambda_{N \hookrightarrow M}},\iota_{i_{\varLambda_{N\hookrightarrow M}}})}(M)_{ \varGamma^{c}})\}\] \[=\{((N)_{\varLambda_{N\hookrightarrow M}}\hookrightarrow_{(i_{\varLambda_{N \hookrightarrow M}},\iota_{i_{\varLambda_{N\hookrightarrow M}}})}(M)_{ \varGamma_{c},N\hookrightarrow M})\}\oplus\{(M)_{\varGamma_{c},M}\},\] where \(\varGamma^{c}=\varGamma_{c,N\hookrightarrow M}\cup\varGamma_{c,M}\), \(\varGamma_{c,N\hookrightarrow M}\cap\varGamma_{c,M}=\emptyset\), * \[\{((N)_{\varLambda_{N\hookrightarrow Q}}\hookrightarrow_{(q_{\varLambda_{N \hookrightarrow Q}},\iota_{q_{\varLambda_{N\hookrightarrow Q}}})}(Q)_{\varXi^{c }})\}\] \[=\{((N)_{\varLambda_{N\hookrightarrow Q}}\hookrightarrow_{(q_{\varLambda_{N \hookrightarrow Q}},\iota_{q_{\varLambda_{N\hookrightarrow Q}}})}(Q)_{\varXi_{c,N\hookrightarrow Q}})\}\oplus\{(Q)_{\varXi_{c,Q}}\},\] where \(\varXi=\varXi_{c,N\hookrightarrow Q}\cup\varXi_{c,Q}\), \(\varXi_{c,N\hookrightarrow 
Q}\cap\varXi_{c,Q}=\emptyset\). * Recalling proposition 3.6 and remark 3.2 we have \[\{((N)_{\varLambda_{N\hookrightarrow M\hookrightarrow Q}} \hookrightarrow_{(a_{\varLambda_{N\hookrightarrow M\hookrightarrow Q}},\lambda_{a _{\varLambda_{N\hookrightarrow M\hookrightarrow Q}}})}((M)_{\varGamma} \hookrightarrow_{(q_{\varGamma},\kappa_{q_{\varGamma}})}(Q)_{\varXi}))\}\] \[=\{((N)_{\varLambda^{(1)}}\oplus((N)_{\varLambda^{(2)}} \hookrightarrow_{(i^{\prime}_{\varLambda^{(2)}},\iota^{\prime}_{\varLambda^{(2)}} )}(M)_{\varGamma^{(2)}})\oplus(M)_{\varGamma^{(1)}}\] \[\hookrightarrow_{(**)\cup(**)\cup(**)}(Q)_{\varXi})\}\] Additionally, we need to consider the term \(\{(N)_{[n]}\}\bullet(\{(M)_{[m]}\}\oplus\{(Q)_{[q]}\})\) \[=\sum_{\varLambda\subset[n],\varLambda\neq\emptyset}\{(N)_{\varLambda^{c}} \}\oplus\{(M)_{\varGamma^{c}}\}\oplus\{(Q)_{\varXi^{c}}\}\oplus\{((N)_{ \varLambda}\hookrightarrow_{(a_{\varLambda},\iota_{a_{\varLambda}})}((M)_{\varGamma}\oplus(Q)_{ \varXi}))\}\] \[+\{(N)_{[n]}\}\oplus\{(M)_{[m]}\}\oplus\{(Q)_{[q]}\}.\] In summary, we know that the left side of the formula (4.7) has the same form as that of the right side. Noting that the expressions on both sides are sums and direct sums over all possible insertions, the formula (4.7) is valid. With the help of theorem 3.2, the discussions concerning the product on \(\mathcal{H}_{adj}^{*}\) can be reduced to the situation of \(\mathcal{H}_{adj}\). In our setting, we do not distinguish zero matrices of different orders. By definition 3.1 we have \[\{0\}\bullet\{M\}=\{M\}\bullet\{0\}=\{M\}.\] Thus \((\mathcal{H}_{adj},\bullet,\{0\})\) is a unital algebra over \(\mathbb{K}\). We define a map \(\mathcal{M}\) from \((\mathcal{H}_{adj},\bullet,\{0\})\) to \((\mathcal{H}_{adj}^{*},\bullet,\eta)\) as follows: \[\mathcal{M}:\{M_{1}\}\oplus\cdots\oplus\{M_{m}\}\mapsto f_{\{M_{1}\}\oplus \cdots\oplus\{M_{m}\}},\,\mathcal{M}:\{0\}\mapsto\eta. \tag{4.8}\] In (4.8) each \(\{M_{i}\}\) is connected \((i=1,\cdots,m)\). 
From definition 4.1, theorem 4.1 and theorem 4.2 we immediately have the following conclusion about \(\mathcal{M}\). **Proposition 4.3**.: _The map \(\mathcal{M}\) defined by (4.8) is an algebraic isomorphism from \((\mathcal{H}_{adj},\bullet,\{0\})\) to \((\mathcal{H}_{adj}^{*},\bullet,\eta)\)._ By definition of \(\mathcal{H}_{adj}\), we know that \(M_{adj}(+\infty,\mathbb{N})\) plays the role of a basis of \(\mathcal{H}_{adj}\). On the other hand, we know that \[M_{adj}(+\infty,\mathbb{N})=\{\bigoplus_{i=1}^{m}\{M_{i}\}|\,m\in\mathbb{N}, \,\{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\diagup\sim\ \text{is connected},\,\,1\leq i \leq m\}.\] Thus, the formula (4.2) suggests defining a new coproduct on \(\mathcal{H}_{adj}\) in the following way. **Definition 4.2**.: _Let \(\{M\}=\bigoplus_{i=1}^{m}\{M_{i}\}\), where each \(\{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\) is connected (\(i=1,\cdots,m\)). Then we define the coproduct to be_ \[\triangle_{1}\{M\}=\{M\}\otimes\{0\}+\{0\}\otimes\{M\}+\sum_{I\subset[m],\,I, \,I^{c}\neq\emptyset}\{(M)_{I}\}\otimes\{(M)_{I^{c}}\}, \tag{4.9}\] _where \(I^{c}=[m]\setminus I\). In particular, \(\triangle_{1}\{0\}=\{0\}\otimes\{0\}\)._ The product \(\bullet\) can be extended to the situation of \(\mathcal{H}_{adj}\otimes\mathcal{H}_{adj}\). For \((M)_{[m]},(N)_{[n]},(Q)_{[q]},\)\((R)_{[r]}\in M_{adj}(+\infty,\mathbb{N})\), we define \[((M)_{[m]}\otimes(N)_{[n]})\bullet((Q)_{[q]}\otimes(R)_{[r]})=((M)_{[m]} \bullet(Q)_{[q]})\otimes((N)_{[n]}\bullet(R)_{[r]}).\] It is easy to check that the product defined above is well defined. It is obvious that \(\triangle_{1}\) is co-commutative. First, we prove that \(\triangle_{1}\) is co-associative. **Theorem 4.3**.: _We have_ \[(\triangle_{1}\otimes 1)\triangle_{1}=(1\otimes\triangle_{1})\,\triangle_{1}\,. \tag{4.10}\] Proof.: Let \(\{M\}=\bigoplus_{i=1}^{m}\{M_{i}\}\), where each \(\{M_{i}\}\in M_{adj}(m_{i},\mathbb{N})\diagup\sim\) is connected (\(i=1,\cdots,m\)). 
By a straightforward calculation, we have \[\begin{array}{l}(\triangle_{1}\otimes 1)\,\triangle_{1}\,\{M\}=(1 \otimes\triangle_{1})\,\triangle_{1}\,\{M\}\\ =\sum\limits_{I_{1},I_{2},I_{3}}\{(M)_{I_{1}}\}\otimes\{(M)_{I_{2}}\}\otimes\{ (M)_{I_{3}}\},\end{array}\] where \(I_{1}\cup I_{2}\cup I_{3}=[m]\), \(I_{i}\cap I_{j}=\emptyset\) (\(i\neq j\)), and one or two of \(I_{1},I_{2},I_{3}\) may be empty. The coproduct \(\triangle_{1}\) and the product \(\bullet\) are compatible. **Theorem 4.4**.: _Let \(\{M_{i}\},\{N_{j}\}\in M_{adj}(+\infty,\mathbb{N})\) be connected (\(i=1,\cdots,m,\,j=1,\cdots,n\)). Then, we have_ \[\triangle_{1}(\{(N)_{[n]}\}\bullet\{(M)_{[m]}\})=\triangle_{1}\{(N)_{[n]}\} \bullet\triangle_{1}\{(M)_{[m]}\}. \tag{4.11}\] Proof.: To prove the formula (4.11), we need to calculate both sides of (4.11). **The situation of the left side** : Recalling the formula (4.6) we have \[\{(N)_{[n]}\}\bullet\{(M)_{[m]}\}=\sum\limits_{\Lambda \subset[n],\,\Lambda\neq\emptyset}\sum\limits_{(i_{\Lambda},\iota_{\Lambda})}\{(N)_{ \Lambda^{c}}\}\oplus\{((N)_{\Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{\Lambda})}( M)_{\Gamma})\}\oplus\{(M)_{\Gamma^{c}}\}.\] Therefore \[\begin{array}{c}\triangle_{1}(\{(N)_{[n]}\}\bullet\{(M)_{[m]}\})\\ =\sum\limits_{\Lambda\subset[n],\,\Lambda\neq\emptyset}\sum\limits_{(i_{\Lambda}, \iota_{\Lambda})}\triangle_{1}\{(N)_{\Lambda^{c}}\}\oplus\triangle_{1}\{((N)_ {\Lambda}\hookrightarrow_{(i_{\Lambda},\iota_{\Lambda})}(M)_{\Gamma})\}\oplus\triangle_{1} \{(M)_{\Gamma^{c}}\}\\ =\sum\limits_{\Lambda\subset[n],\,\Lambda\neq\emptyset}\sum\limits_ {(i_{\Lambda},\iota_{\Lambda})}(\sum\limits_{\Lambda_{c,1}\subset\Lambda^{c}}\{(N) _{\Lambda_{c,1}}\}\otimes\{(N)_{\Lambda_{c,2}}\})\oplus(\sum\limits_{\Gamma_ {c,1}\subset\Gamma^{c}}\{(M)_{\Gamma_{c,1}}\}\otimes\{(M)_{\Gamma_{c,2}}\})\\ \oplus(\sum\limits_{\Gamma_{1}}\{((N)_{\Lambda_{1}}\hookrightarrow_{(i_{\Lambda_{1 }},\iota_{\Lambda_{1}})}(M)_{\Gamma_{1}})\}\otimes\{((N)_{\Lambda_{2}}\hookrightarrow_{(i_{\Lambda_{2}},\iota_{\Lambda_{2}})}(M)_{\Gamma_{2}})\})\\ =\sum\limits_{\Lambda\subset[n],\,\Lambda\neq\emptyset}\sum\limits _{(i_{\Lambda},\iota_{\Lambda})}\sum\limits_{\Lambda_{c,1}\subset\Lambda^{c}}\sum\limits _{\Gamma_{c,1}\subset\Gamma^{c}}(\{(N)_{\Lambda_{c,1}}\}\oplus\{((N)_{ \Lambda_{1}}\hookrightarrow_{(i_{\Lambda_{1}},\iota_{\Lambda_{1}})}(M)_{\Gamma_{1}})\}\\ \oplus\{(M)_{\Gamma_{c,1}}\})\otimes(\{(N)_{\Lambda_{c,2}}\}\oplus\{((N)_{ \Lambda_{2}}\hookrightarrow_{(i_{\Lambda_{2}},\iota_{\Lambda_{2}})}(M)_{\Gamma_{2}})\} \oplus\{(M)_{\Gamma_{c,2}}\}),\end{array}\] where \(\{\Lambda_{c,1},\Lambda_{c,2},\Lambda_{1},\Lambda_{2}\}\in\mathbf{Part}([n])\), \(\Lambda_{c,1}\cup\Lambda_{c,2}=\Lambda^{c}\), \(\Lambda_{1}\cup\Lambda_{2}=\Lambda\), \(\{\Gamma_{c,1},\Gamma_{c,2},\Gamma_{1},\Gamma_{2}\}\in\mathbf{Part}([m])\), \(\Gamma_{c,1}\cup\Gamma_{c,2}=\Gamma^{c}\), \(\Gamma_{1}\cup\Gamma_{2}=\Gamma\). Recalling the proof of proposition 3.3, \[\Gamma_{a}=\{i\in\Gamma|\,\exists j\in\Lambda_{a},\,\,s.t.\,\,i_{j}\in M_{i}\}, \,\,a=1,2,\] thus \(\Gamma_{a}\) is determined by \(\Lambda_{a}\) (\(a=1,2\)). Now we take \(\Lambda^{(a)}=\Lambda_{c,a}\cup\Lambda_{a}\) (\(a=1,2\)); thus \(\Gamma^{(a)}=\Gamma_{a}\cup\Gamma_{c,a}\) (\(a=1,2\)). Then we have \[\begin{array}{c}\triangle_{1}(\{(N)_{[n]}\}\bullet\{(M)_{[m]}\})\\ =\sum\limits_{\Lambda^{(1)},\Lambda^{(2)},\Gamma^{(1)},\Gamma^{(2)}}(\{(N)_{ \Lambda^{(1)}}\}\bullet\{(M)_{\Gamma^{(1)}}\})\otimes(\{(N)_{\Lambda^{(2)}} \}\bullet\{(M)_{\Gamma^{(2)}}\}),\end{array}\] where \(\Lambda^{(1)}\) or \(\Lambda^{(2)}\) may be empty; for example, when \(\Lambda^{(1)}=\emptyset\), we define \(\{(N)_{\Lambda^{(1)}}\}=\{0\}\). 
**The situation of the right side** : By definition 4.2 we have \[\triangle_{1}\{(N)_{[n]}\}=\sum_{\varLambda\subset[n]}\{(N)_{\varLambda}\} \otimes\{(N)_{\varLambda^{c}}\},\ \triangle_{1}\{(M)_{[m]}\}=\sum_{\varGamma\subset[m]}\{(M)_{\varGamma}\} \otimes\{(M)_{\varGamma^{c}}\}.\] Therefore we have \[\triangle_{1}\{(N)_{[n]}\}\bullet\triangle_{1}\{(M)_{[m]}\}=\sum_{\varLambda\subset[n],\varGamma\subset[m]}(\{(N)_{\varLambda}\}\bullet \{(M)_{\varGamma}\})\otimes(\{(N)_{\varLambda^{c}}\}\bullet\{(M)_{\varGamma^{ c}}\}).\] Comparing the expressions on both sides of (4.11), we know that the formula (4.11) is valid. Recalling the contents in section 2, we know that the tuple \((\mathcal{H}_{adj},\oplus,\{0\},\triangle,\eta)\) is a bialgebra. It is easy to check that the tuple \((\mathcal{H}_{adj},\bullet,\{0\},\triangle_{1},\eta)\) is also a bialgebra. We consider the reduced coproduct \(\overline{\triangle_{1}}\), \[\overline{\triangle_{1}}\{M\}=\triangle_{1}\{M\}-\{M\}\otimes\{0\}-\{0\} \otimes\{M\},\,\{M\}\in M_{adj}(+\infty,\mathbb{N}),\{M\}\neq\{0\}.\] Due to the formula (4.2), there is an obvious conclusion as follows. **Proposition 4.4**.: _For each \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\) (\(\{M\}\neq\{0\}\)), there is a positive integer \(k\) such that_ \[\overline{\triangle_{1}}^{k}\{M\}=\{0\},\] _where_ \[\overline{\triangle_{1}}^{k+1}=(\overline{\triangle_{1}}\otimes\ \underbrace{1\otimes\cdots\otimes 1} _{k-times}\ \ )\overline{\triangle_{1}}^{k}.\] Proposition 4.4 means that \((\mathcal{H}_{adj},\bullet,\{0\},\triangle_{1},\eta)\) is a conilpotent bialgebra and, therefore, a Hopf algebra. 
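The combinatorial content of Definition 4.2, Theorem 4.3 and Proposition 4.4 can be checked mechanically on a toy model. The following Python sketch is our illustration, not part of the paper: connected summands are modeled as opaque labels, \(\triangle_{1}\) is the sum over all ordered splittings of the multiset of components, and coproducts are stored as multisets of tensor terms. The product \(\bullet\) and its insertions are deliberately not modeled.

```python
from itertools import combinations
from collections import Counter

ZERO = ()  # plays the role of the unit {0}

def coproduct(comps):
    """Delta_1 of formula (4.9): sum over all ordered splittings of the
    multiset of connected components into two complementary groups."""
    comps = tuple(sorted(comps))
    out = Counter()
    for r in range(len(comps) + 1):
        for I in combinations(range(len(comps)), r):
            Ic = [i for i in range(len(comps)) if i not in I]
            out[(tuple(comps[i] for i in I),
                 tuple(comps[i] for i in Ic))] += 1
    return out

def coassoc_sides(comps):
    """Both sides of (4.10), as multisets of tensor triples."""
    lhs, rhs = Counter(), Counter()
    for (a, b), n in coproduct(comps).items():
        for (a1, a2), k in coproduct(a).items():
            lhs[(a1, a2, b)] += n * k
        for (b1, b2), k in coproduct(b).items():
            rhs[(a, b1, b2)] += n * k
    return lhs, rhs

def reduced(comps):
    """Reduced coproduct: drop the terms {M} (x) {0} and {0} (x) {M}."""
    out = coproduct(comps)
    c = tuple(sorted(comps))
    del out[(c, ZERO)], out[(ZERO, c)]
    return out

def reduced_power(comps, k):
    """Apply the reduced coproduct k times to the leftmost factor."""
    tensors = Counter({(tuple(sorted(comps)),): 1})
    for _ in range(k):
        step = Counter()
        for t, n in tensors.items():
            for (a, b), m in reduced(t[0]).items():
                step[(a, b) + t[1:]] += n * m
        tensors = step
    return tensors

# a connected element is primitive: Delta_1{M} = {M}(x){0} + {0}(x){M}
assert coproduct(('a',)) == Counter({(('a',), ZERO): 1, (ZERO, ('a',)): 1})
# co-associativity (Theorem 4.3), also with repeated components
for comps in [('a', 'b', 'c'), ('x', 'x', 'y')]:
    lhs, rhs = coassoc_sides(comps)
    assert lhs == rhs
# conilpotency (Proposition 4.4): m components vanish after m steps
assert reduced_power(('a', 'b', 'c'), 2) and not reduced_power(('a', 'b', 'c'), 3)
```

Since only the splitting structure enters Theorem 4.3 and Proposition 4.4, this toy check exercises exactly the coproduct side of the bialgebra; the compatibility with \(\bullet\) (Theorem 4.4) would additionally require modeling insertions.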
Similar to the situation of \(\mathcal{H}_{adj}^{*}\), the formula (4.2) of the coproduct \(\triangle_{1}\) shows that \(\{M\}\in M_{adj}(+\infty,\mathbb{N})\) is connected if and only if \[\triangle_{1}\{M\}=\{M\}\otimes\{0\}+\{0\}\otimes\{M\}.\] Therefore, we have \[\mathbf{P}(\mathcal{H}_{adj})=\mathbf{Span}_{\mathbb{C}}\{\{M\}\in M_{adj}(+ \infty,\mathbb{N})|\,\{M\}\ \text{is connected}\},\] where \({\bf P}({\cal H}_{adj})\) denotes the set of all primitive elements of \(({\cal H}_{adj},\bullet,\{0\},\triangle_{1},\eta)\). Let \(\{M\},\{N\}\in M_{adj}(+\infty,\mathbb{N})\) be connected; then the product \(\bullet\) induces a Lie bracket as follows, \[[\{M\},\{N\}]=\{M\}\bullet\{N\}-\{N\}\bullet\{M\}. \tag{4.12}\] By the formula (4.4) we have \[[\{M\},\{N\}]=\sum_{(j,\tau_{j})}\{(M\hookrightarrow_{(j,\tau_{j})}N)\}-\sum_{ (i,\iota_{i})}\{(N\hookrightarrow_{(i,\iota_{i})}M)\}. \tag{4.13}\] The formula (4.13) implies that \([\{M\},\{N\}]\in{\bf P}({\cal H}_{adj})\) for \(\{M\},\{N\}\in{\bf P}({\cal H}_{adj})\). Hence \({\bf P}({\cal H}_{adj})\) is a Lie algebra. According to the Milnor-Moore theorem (see?) we know that \[{\cal H}_{adj}\cong U({\bf P}({\cal H}_{adj})),\] i.e. as a Hopf algebra, \(({\cal H}_{adj},\bullet,\{0\},\triangle_{1},\eta)\) is isomorphic to the enveloping algebra of \({\bf P}({\cal H}_{adj})\), \(U({\bf P}({\cal H}_{adj}))\). Actually, with the help of the formula (4.6), we can directly prove that \(\{(M)_{[m]}\}\) can be expressed by a polynomial of the elements in \({\bf P}({\cal H}_{adj})\). Precisely, let \(\{(M)_{[m]}\}=\bigoplus_{i=1}^{m}\{M_{i}\}\), where each \(\{M_{i}\}\) is connected (\(i=1,\cdots,m\)). Then, by induction on \(m\), we can prove that \(\bigoplus_{i=1}^{m}\{M_{i}\}\) can be expressed as a polynomial of \(\{M_{i}\}\) (\(i=1,\cdots,m\)) and their insertions under the multiplication \(\bullet\). 
**Remark 4.1**.: _Based on the correspondence between adjacency matrices and Feynman diagrams, the Hopf algebra \(({\cal H}_{adj},\bullet,\{0\},\triangle_{1},\eta)\) implies that there is another Hopf algebra structure on the set of Feynman diagrams, induced from the dual of the Connes-Kreimer Hopf algebra._
2309.06500
Waveguide QED in the Dipole Gauge
In recent studies on ultrastrong coupling between matter and light in cavities, the significance of gauge choice when employing the widely-used two-level approximation has been highlighted. Expanding upon these investigations, we extend the analysis to waveguide QED, where we demonstrate that truncations performed in the dipole gauge also yield accurate results. To illustrate this point, we consider the case of a dipole coupled to a cavity array. Various numerical and analytical techniques have been employed to investigate the low-energy dynamics of the system. Leveraging these theoretical tools, we argue that single photon scattering is an ideal method for investigating gauge-related issues. Our findings reveal two novel effects in the scattering spectra, which cannot be reproduced in a truncated model using the Coulomb gauge. Firstly, the primary resonance is modified due to a Lamb shift contribution. Secondly, we observe asymmetric transmission amplitudes surrounding this resonance, reflecting the asymmetry of the spectral density in this model. Additionally, we explore other features in the scattering spectra resulting from ultrastrong couplings, such as the emergence of Fano resonances and inelastic channels. Finally, we propose an experimental test of our ideas in the context of circuit QED.
Sergi Terradas-Briansó, Luis Martín-Moreno, David Zueco
2023-09-12T18:14:20Z
http://arxiv.org/abs/2309.06500v1
# Waveguide QED in the Dipole Gauge

###### Abstract

In recent studies on ultrastrong coupling between matter and light in cavities, the significance of gauge choice when employing the widely-used two-level approximation has been highlighted. Expanding upon these investigations, we extend the analysis to waveguide QED, where we demonstrate that truncations performed in the dipole gauge also yield accurate results. To illustrate this point, we consider the case of a dipole coupled to a cavity array. Various numerical and analytical techniques have been employed to investigate the low-energy dynamics of the system. Leveraging these theoretical tools, we argue that single photon scattering is an ideal method for investigating gauge-related issues. Our findings reveal two novel effects in the scattering spectra, which cannot be reproduced in a truncated model using the Coulomb gauge. Firstly, the primary resonance is modified due to a Lamb shift contribution. Secondly, we observe asymmetric transmission amplitudes surrounding this resonance, reflecting the asymmetry of the spectral density in this model. Additionally, we explore other features in the scattering spectra resulting from ultrastrong couplings, such as the emergence of Fano resonances and inelastic channels. Finally, we propose an experimental test of our ideas in the context of circuit QED.

## I Introduction

Photons usually interact weakly with matter, which has led to the adoption of several approximations in quantum optics. The most common ones include the rotating-wave approximation, truncating the matter description to its lowest energy levels, neglecting the \(A^{2}\) term, and the Markovian approximation for computing spontaneous emission, among others. However, numerous experiments have demonstrated that discrete quantum emitters can couple to light beyond the limitations of perturbative coupling. 
This breakthrough has been achieved by coupling these emitters to cavities [1; 2; 3] and waveguides [4; 5; 6; 7]. It has been demonstrated that, in certain cases, the interaction energy can be comparable to the energies of light and matter. As a consequence, many approximations break down, indicating that the coupling strength has entered the ultrastrong regime (USC). One of the primary noticeable effects of entering the USC is the significant emergence of processes that go beyond the interchange of a single photon and matter excitation. As a result, the widely used rotating-wave approximation (RWA) for the interaction loses its validity, leading to the renormalization of the bare emitter parameters and the emergence of a nontrivial ground state. Several interesting phenomena have been discussed in relation to the latter, including the possibility of converting virtual photons into real ones through ground state perturbations [8; 9; 10; 11; 12; 13], the localization-delocalization transition [14; 15], and the potential for performing nonlinear optics at the limits of single and zero photons [16; 17; 18; 19; 20; 21]. To gain comprehensive insights into light-matter interactions in the USC regime, see [22; 23]. More recently, it has been discovered that the USC regime introduces additional complications for conventional approaches used to describe light-matter systems. The commonly employed two-level approximation (TLA) for the matter subsystem has been found to lack gauge invariance in certain descriptions. In fact, truncating a momentum-like coupling operator has been shown to cause significant inconsistencies between complete and truncated models [24; 25; 26]. Instead, employing position-based interactions yields more reliable results when combined with the TLA. 
This research area has sparked numerous studies focusing on different approaches to truncating the matter level subsystems [27], and also on the truncation of photonic levels or the number of modes [28; 29]. Correctly applying the two-level approximation has significant implications for a wide range of system properties. It affects not only the energy levels [24], but also the ground state [30], emission spectra [31; 32], spectral density [32], and various other observables [30; 32]. In this work, we build upon previous approaches to ensure gauge invariance in cavity quantum electrodynamics and apply them to the domain of waveguide QED. We propose that, for a dipole coupled to a waveguide, employing the dipole gauge is more suitable for matter truncation. To illustrate this, we focus on the case of a single dipole coupled to a cavity array and conduct both numerical and analytical calculations. Notably, we demonstrate that scattering experiments are ideal for testing gauge-related issues. Specifically, we reveal that the transmittance minimum exhibits a red-shift as the coupling strength increases, even within the lower range of the ultrastrong coupling regime. This red-shift effect arises from the contribution of Lamb shifts, which contrasts with the constant resonance observed when truncation is performed in the Coulomb gauge. This key result highlights a qualitative distinction between truncation approaches carried out in two different gauges. Furthermore, our analysis reveals that including counterrotating terms enables the occurrence of Fano resonances and inelastic scattering processes. The rest of the manuscript is organized as follows. In Sec. II, we present a general gauge-invariant description of a single particle interacting with the electromagnetic field. We apply this description to the case of waveguide QED and particularize it to a cavity array waveguide. 
Section III introduces the framework used to address the scattering processes in these systems and discusses how gauge invariance affects their description. The different theoretical methods employed in the scattering computations are described in Sec. IV. In section V, we present and analyse the numerical results for the scattering spectra both within and beyond the rotating-wave approximation. Sec. VI proposes an implementation of this setup in a circuit QED platform. Finally, we conclude in section VII. The manuscript also includes four appendices: Appendix A analyses gauge invariance in the weak coupling limit. Appendix B provides the analytical computation of the self-energy in the RWA. Appendix C discusses the convergence of Coulomb gauge and dipole gauge simulations. Appendix D deals with the continuous limit of the dipole gauge model. ## II Gauge invariant formulation of waveguide QED ### Preliminaries In this section, we present a formalism that describes the interaction between a single emitter and the electromagnetic field. A convenient approach to introducing gauge-invariant light-matter systems can be found in Refs. [25; 27; 33]. In the Coulomb gauge, the Hamiltonian is expressed as: \[H_{C}=H_{\rm ph}+U^{\dagger}H_{\rm m}U\;. \tag{1}\] Here, \(H_{\rm ph}\) represents the quantized Hamiltonian for the electromagnetic field, which will be specified below, and \(H_{\rm m}\) refers to the matter Hamiltonian. The unitary operator \(U\), which represents a gauge transformation itself, is given by \[U=e^{iq{\bf A}({\bf x})\cdot{\bf x}/\hbar}\;, \tag{2}\] where \({\bf A}({\bf x})\) is the vector potential and \(q\) denotes the emitter charge. The variable \({\bf x}\) corresponds to the position operator of the emitter. It is worth noting that \({\bf A}({\bf x})\) can explicitly depend on this position. Given \(H_{\rm m}={\bf p}^{2}/2m+V({\bf x})\), Eqs. 
(1) and (2) result in the minimal coupling Hamiltonian \[H_{C}=\frac{1}{2m}\Big{(}{\bf p}-q{\bf A}\Big{)}^{2}+V({\bf x})\;. \tag{3}\] This way of expressing \(H_{C}\) also offers a straightforward understanding of the dipole gauge. In fact, the Hamiltonian in the dipole gauge can be represented by a gauge transformation, equivalent to the Power-Zienau-Woolley (PZW) one, as [Cf. Eq. (1)], \[H_{D}=H_{\rm m}+UH_{\rm ph}U^{\dagger}\;. \tag{4}\] Within this formulation, it is evident that \(H_{D}=UH_{C}U^{\dagger}\), ensuring gauge invariance. Furthermore, the light-matter coupling transforms the _bare_ matter Hamiltonian in the Coulomb gauge, while in the dipole gauge, it transforms the _free_ electromagnetic field Hamiltonian. Up to this point, there should be no ambiguity about working in one gauge or the other. However, in practical calculations, we often need to make approximations in the Hamiltonians, especially when focusing on low-energy dynamics. In such cases, we typically truncate the matter Hamiltonian to some minimum energy states, using methods like the two-level approximation or the single band limit, among others. The problem of gauge ambiguities arises when making these truncations, since applying approximations on the matter subsystem in one gauge may lead to different results than in another. This discrepancy can be understood by examining Eqs. (1) and (4): matter and photonic operators are not the same in \(H_{C}\) as in \(H_{D}\) [cf. Eq. (2)]. For instance, in single-mode cavity QED, where \({\bf A}({\bf x})={\bf A_{0}}({\bf x_{0}})(a+a^{\dagger})\), it has been argued that the correct starting point for performing the truncation is \(H_{D}\), while truncation in \(H_{C}\) yields incorrect results in the USC regime. This is because in the dipole gauge, the coupling is \({\bf E}({\bf x})\cdot{\bf x}\) (\(E({\bf x})\) is the electric field), while in the Coulomb gauge, it goes as \({\bf A}({\bf x})\cdot{\bf p}\). 
As the emitter wave functions are typically localized in space, the momentum-\({\bf p}\) matrix elements cannot be neglected even between well-separated (in energy) eigenstates [24]. The same is expected to occur in waveguide QED, as discussed below.

### Waveguide QED Hamiltonian

EM quantization is convenient in the Coulomb gauge. In what follows, we will assume that the _position_ of the emitters is fixed. This is the case in the majority of situations and facilitates the discussion. To be more precise, we will assume that the emitter position can be written as \({\bf x}={\bf x_{0}}+\delta{\bf x}\) (\({\bf x_{0}}\) is no longer an operator but a vector position). For all the relevant energy scales \({\bf A}({\bf x})\cong{\bf A}({\bf x_{0}})\), which is the long-wavelength approximation. In this scenario, \({\bf A}\) acts only on the Hilbert space of the photons [34, Sect. 3]. \[{\bf A}_{\perp}({\bf x_{0}})=\frac{1}{\sqrt{L}}\sum_{k}\Big{(}{\bf\lambda}_{k }(x_{0},y_{0})\;a_{k}e^{ikz_{0}}+{\rm h.c.}\Big{)}\;. \tag{5}\] We have added the suffix \(\perp\) to emphasize that the vector potential has only transverse components in the Coulomb gauge. In this work, we focus on waveguide QED, which fixes a propagation direction, denoted as \(z\) (cf. the exponentials and the scalar character for the wavevector \(k\)). Additionally, this constrains \(\mathbf{\lambda}_{k}\) to the \(xy\)-plane, which can be expressed as: \[\mathbf{\lambda}_{k}\equiv\sqrt{\frac{\hbar}{2\omega_{k}\epsilon_{0}}}\mathbf{ u}_{k}\;, \tag{6}\] with \(\mathbf{u}_{k}\) normalized functions, satisfying \(\int_{\mathbb{R}^{2}}dxdy\,|\mathbf{u}_{k}|^{2}=1\), and providing the space dependence of \(\mathbf{A}\) around the waveguide. The waveguide is assumed to be surrounded by vacuum, hence the appearance of \(\epsilon_{0}\) above. The frequencies of the different waveguide modes are denoted as \(\omega_{k}\): \[H_{\mathrm{ph}}=\sum_{k}\hbar\omega_{k}a_{k}^{\dagger}a_{k}\;. 
\tag{7}\] Up to this point, we have assumed a waveguide of length \(L\), using a discrete number of modes \(k=m\times\pi/L\), where \(m=0,\pm 1,...\). However, the continuum limit can be obtained by taking \(L\rightarrow\infty\). At this point, it becomes evident that continuing to use the Coulomb gauge comes with certain obstacles. The most evident is the fact that the minimal coupling introduces the \(\mathbf{A}^{2}\)-term which, after Eq. (5), couples all the waveguide modes, resulting in terms like \(a_{k}a_{k^{\prime}}+a_{k}^{\dagger}a_{k^{\prime}}+\mathrm{h.c.}\). As a consequence, the photonic part should be diagonalized using a Bogoliubov transformation. Alternatively, we can switch to the dipole gauge (4). To achieve this, we just need to know how \(a_{k}\) transforms under \(U\) in (2), \[Ua_{k}U^{\dagger}=a_{k}-iq\frac{1}{\sqrt{L}}\mathbf{\lambda}_{k}e^{-ikz_{0}}\cdot \mathbf{x} \tag{8}\] Therefore, the waveguide QED Hamiltonian in the dipole gauge can be written as: \[H_{D}=H_{\mathrm{m}}+\sum_{k}\hbar\omega_{k}a_{k}^{\dagger}a_{k}-iq\frac{1}{ \sqrt{L}}\mathbf{x}\sum_{k}\left(\mathbf{\lambda}_{k}\hbar\omega_{k}e^{ikz_{0}}a_{ k}-\mathrm{h.c.}\right)+\frac{q^{2}}{L}\sum_{k}\hbar\omega_{k}\Big{|}\mathbf{\lambda}_{k}e^{ ikz_{0}}\cdot\mathbf{x}\Big{|}^{2}. \tag{9}\] The last term in Eq. (9) can be absorbed into \(H_{\mathrm{m}}\), resulting in a modified matter Hamiltonian \(H_{\mathrm{m}}^{\prime}=H_{\mathrm{m}}+\frac{q^{2}}{L}\sum_{k}\hbar\omega_{k} \Big{|}\mathbf{\lambda}_{k}e^{ikz_{0}}\cdot\mathbf{x}\Big{|}^{2}\). 
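Mode by mode, the transformation (8) is a bosonic displacement: for \(U=e^{\beta a^{\dagger}-\beta^{*}a}\) one has \(UaU^{\dagger}=a-\beta\). The following numerical check is our illustration, not taken from the paper; the Fock cutoff \(N\) and the value of \(\beta\) (which stands in for the mode-dependent shift in Eq. (8)) are arbitrary. On a truncated Fock space the identity holds up to truncation effects confined to the highest levels, so we compare only the low-lying block:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                      # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # truncated annihilation operator
beta = 0.3 - 0.2j                           # stand-in for the shift in Eq. (8)

# single-mode analogue of the PZW transformation U of Eq. (2):
# the generator is anti-Hermitian, so U is exactly unitary
U = expm(beta * a.conj().T - np.conjugate(beta) * a)

lhs = U @ a @ U.conj().T
rhs = a - beta * np.eye(N)

# the displacement identity fails only near the cutoff; compare low block
M = 20
assert np.allclose(U @ U.conj().T, np.eye(N), atol=1e-10)
assert np.allclose(lhs[:M, :M], rhs[:M, :M], atol=1e-8)
```

The same displacement structure is what shifts every \(a_{k}\) by an \(\mathbf{x}\)-dependent c-number in Eq. (8), producing the \(\mathbf{E}\cdot\mathbf{x}\) coupling and the \(|\mathbf{\lambda}_{k}\cdot\mathbf{x}|^{2}\) term of Eq. (9).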
If \(H_{\mathrm{m}}^{\prime}\) can be described using its two lowest states, say \(\{|0^{\prime}\rangle,|1^{\prime}\rangle\}\), \(H_{D}\) can be expressed in the form of a spin-boson model: \[\mathcal{H}_{D}=\frac{\hbar\Delta^{\prime}}{2}\sigma_{z}+\sum_{k}\hbar\omega_{ k}a_{k}^{\dagger}a_{k}+\sigma_{x}\sum_{k}\left(\hbar g_{k}a_{k}+\mathrm{h.c.}\right) \tag{10}\] where \[g_{k}=\frac{\omega_{k}}{\sqrt{L}}\langle 0^{\prime}|\mathbf{d}|1^{\prime} \rangle\cdot\mathbf{\lambda}_{k}, \tag{11}\] with \(\hbar\Delta^{\prime}\) being the transition energy between the two states and \(\mathbf{d}=q\mathbf{x}\). Here and throughout the text, we use the notation \(\mathcal{H}_{D}\) to distinguish the truncated Hamiltonian in Eq. (10) from the full model \(H_{D}\) in (9). It is customary to define the spectral density for spin-boson models as: \[J_{D}(\omega)=2\pi\sum_{k}|g_{k}|^{2}\delta(\omega-\omega_{k})\;. \tag{12}\] Here, we have used the suffix \(D\) to emphasize that the spectral density is gauge-dependent, since it depends on the light-matter coupling (11). As mentioned earlier, the \(\mathbf{A}^{2}\)-term in the Coulomb gauge prevents us from explicitly expressing the Hamiltonian in the form of (9) and/or (10). However, at low and intermediate couplings, the \(\mathbf{A}^{2}\)-term is usually neglected. By doing so, one can obtain a spin-boson model, but with a different transition frequency \(\Delta\) (the last term of (9) has been absorbed into \(H_{\mathrm{m}}\)) and a different spectral density. Since a spin-boson model is determined by its spectral density, the Coulomb and dipole gauges are not equivalent after truncation, similar to the case in cavity QED. Despite this, it is possible to show that \[\lim_{|\mathbf{\lambda}_{k}|\to 0}J_{C}(\Delta)\to J_{D}(\Delta)\;. \tag{13}\] The proof is provided in Appendix A. Thus, truncation can be safely done in both gauges in the usual scenario of waveguide QED, in which the light-matter coupling is _small_.
However, entering the USC regime requires more caution, and as we will discuss in detail later, the dipole gauge proves to be quite convenient. In addition, we will explore the physical consequences of working in the USC regime and how they manifest in standard experiments, such as single photon scattering.

### The Cavity array case

Up to this point, we have not specified the particular waveguide or emitter we are considering. Moving forward, we will analyze a model that allows for both exact numerical treatments and analytical estimates. To describe the emitter, we will assume it to be a spherically symmetric dipole, which enables us to use a one-dimensional model with position \(x\) and momentum \(p\) operators to represent its Hamiltonian. On the other hand, we will focus on a cavity array for the waveguide, where the emitter is coupled to a single cavity (specifically, the \(n=0\) cavity). We will also assume that the cavities in the array are single-mode, with the vector potential aligned with the dipole in the Coulomb gauge. This setup ensures that the model remains fully one-dimensional. A schematic representation of this system is shown in Fig. 1. As we will discuss, these simplifications can be justified within the circuit QED architecture, where the USC regime has been reached, and our proposed ideas may be implemented. Then, the resulting _full_ light-matter Hamiltonian in the Coulomb gauge can be expressed as: \[H_{C} =\frac{\left[p-qA_{0}(a_{0}+a_{0}^{\dagger})\right]^{2}}{2m}+V(x)\] \[+\hbar\omega_{c}\sum_{n}^{N}a_{n}^{\dagger}a_{n}+\hbar\xi\sum_{n} ^{N}(a_{n}a_{n+1}^{\dagger}+a_{n}^{\dagger}a_{n+1}) \tag{14}\] In this model, we consider \(N\) identical cavities, each with a resonance frequency \(\omega_{c}\). The cavities are coupled to their left and right neighbors with a strength \(\xi\) in a tight-binding fashion (last term of equation (14)).
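The photonic part of Eq. (14) is a tight-binding chain; diagonalizing it on a ring recovers the cosine band \(\omega_{k}=\omega_{c}+2\xi\cos k\) used throughout. A minimal check (the system size is an arbitrary choice; \(\xi/\omega_{c}=-1/\pi\) follows the value quoted later for Fig. 2):

```python
import numpy as np

# photonic part of Eq. (14): N coupled single-mode cavities in the
# single-excitation sector, with periodic boundary conditions so that
# plane waves diagonalize the hopping matrix exactly
N, omega_c, xi = 200, 1.0, -1/np.pi
H = omega_c*np.eye(N) + xi*(np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = xi  # close the ring

w = np.sort(np.linalg.eigvalsh(H))
k = 2*np.pi*np.arange(N)/N
band = np.sort(omega_c + 2*xi*np.cos(k))
err = np.max(np.abs(w - band))
print(err)
```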
Regarding the dipole, we describe it using a double-well potential: \[V(x)=-\mu\frac{x^{2}}{2}+\lambda\frac{x^{4}}{4}. \tag{15}\] Moving to momentum space, \(a_{k}=1/\sqrt{N}\sum_{n}a_{n}e^{ikn}\), and transforming to the dipole gauge following Eqs. (4), (8), we obtain the waveguide Hamiltonian with \(H_{m}^{\prime}=p^{2}/2m+V(x)+\hbar\omega_{c}q^{2}A_{0}^{2}x^{2}\) [Cf. Eqs. (9) and (14)] and \[\lambda_{k}=A_{0}\;. \tag{16}\] Our simulations consider both the non-truncated (which we call the full model) and the truncated version where only the two lowest states, \(\{|0^{\prime}\rangle,|1^{\prime}\rangle\}\), of \(H_{m}^{\prime}\) are retained. In the latter case, the model to consider is the spin-boson one (10) with \[g_{k}=\frac{\omega_{c}qA_{0}|\langle 1^{\prime}|x|0^{\prime}\rangle|}{\sqrt{N}} \frac{\omega_{k}}{\omega_{c}}, \tag{17}\] and then, from Eq. (12), \[J_{D}(\omega)=\frac{2g^{2}}{\sqrt{4\xi^{2}-(\omega-\omega_{c})^{2}}}\frac{ \omega^{2}}{\omega_{c}^{2}}, \tag{18}\] where, to quantify the coupling regime of the system in the truncated dipole gauge, we have defined the coupling strength \(g=qA_{0}\omega_{c}|\langle 1^{\prime}|x|0^{\prime}\rangle|\). Moreover, we find it useful to express the dipole gauge Hamiltonian in position space, as we will employ this representation in our scattering calculations. Using \(a_{n}=1/\sqrt{N}\sum_{k}a_{k}e^{-ikn}\), it yields \[H_{D}= H_{m}^{\prime}+\hbar\omega_{c}\sum_{n}a_{n}^{\dagger}a_{n}+\hbar\xi \sum_{n}(a_{n}^{\dagger}a_{n+1}+\text{h.c.})-i\hbar\omega_{c}qA_{0}(a_{0}^{ \dagger}-a_{0})x-i\hbar\xi qA_{0}\left[(a_{1}^{\dagger}-a_{1})+(a_{-1}^{ \dagger}-a_{-1})\right]x. \tag{19}\] The gauge transformation intertwines light and matter; hence, in this new gauge, the dipole also couples to the adjacent cavities \(n=-1,1\), which, in the Coulomb gauge, are the ones coupled to cavity \(n=0\) via the hopping term.
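A numerical sanity check of Eq. (18): build the discrete couplings of Eq. (17) on a fine \(k\)-grid, broaden the delta functions in the definition (12) into narrow Lorentzians, and compare with the closed form at the band centre. All parameter values below are illustrative choices, not the paper's fits:

```python
import numpy as np

omega_c, xi, g = 1.0, 1/np.pi, 0.1  # illustrative parameters

def J_closed(w):
    # closed-form spectral density of the cavity array, Eq. (18)
    return 2*g**2 / np.sqrt(4*xi**2 - (w - omega_c)**2) * (w/omega_c)**2

# discrete-sum definition J(w) = 2*pi sum_k |g_k|^2 delta(w - w_k), Eq. (12),
# with delta -> Lorentzian of width eta
N, eta = 200_000, 2e-3
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
w_k = omega_c + 2*xi*np.cos(k)
g_k = (g/np.sqrt(N)) * (w_k/omega_c)        # Eq. (17) with the defined g
w = omega_c                                  # evaluate at the band centre
J_num = 2*np.pi*np.sum(np.abs(g_k)**2 * (eta/np.pi)/((w - w_k)**2 + eta**2))
print(J_num, J_closed(w))
```

At the band centre the broadened sum reproduces \(J_{D}(\omega_{c})=g^{2}/|\xi|\) to within the broadening error.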
Figure 1: Schematic representation of a one-dimensional array of coupled cavities via the hopping parameter \(\xi\). The dipole lies within the centre cavity with a relative distance between its charges \(x\).

### Truncation of matter levels in both gauges

In general, Hamiltonians (9), (14) or even the truncated version (10) are non-integrable. Besides, the numerical calculation of the spectrum is rather challenging. Hence, performing a benchmark of the truncation in different gauges with the full model is not viable. To illustrate how the truncation affects gauge invariance, we present the eigenenergies of a minimal system comprising a dipole coupled to three cavities. These are obtained in a gauge-invariant non-truncated representation and within the two-level approximation in both the Coulomb and dipole gauges in Fig. 2. Refs. [24; 25] introduce a convenient dimensionless notation for the matter Hamiltonian, which we also follow in the diagonalization procedure to obtain the spectra of Fig. 2. By defining a length scale \(x_{0}=(\hbar^{2}/(\lambda m))^{1/6}\), we work with the dimensionless variable \(z=x/x_{0}\), allowing us to rewrite the bare matter Hamiltonian \(H_{m}\) as: \[H_{m}=E_{d}\left[\frac{p_{z}^{2}}{2}-\frac{\beta z^{2}}{2}+\frac{z^{4}}{4}\right], \tag{20}\] where \(\beta=m\mu x_{0}^{4}/\hbar^{2}\), \(p_{z}=-i\hbar\partial/\partial z\). Fig. 2 demonstrates that, similar to the single cavity QED case [24; 25], the truncation fails in the Coulomb gauge, while it accurately matches the full model in the dipole gauge. This observation highlights that the truncation in the dipole gauge, with an interaction described by the position operator, preserves the energy levels of the Hamiltonian. In contrast, calculations within the Coulomb gauge are profoundly affected by the truncation, which is consistent with the results obtained in the single-cavity scenario [24; 25].
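The bare dipole levels entering this comparison follow from diagonalizing Eq. (20). A minimal finite-difference sketch, in units of \(E_{d}\) and with \(\beta=3.8\) as quoted for Fig. 2 (the grid, box size and number of retained levels are arbitrary numerical choices):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

beta = 3.8                     # double-well parameter used for Fig. 2
M, zmax = 2000, 6.0            # grid resolution and box size (arbitrary)
z = np.linspace(-zmax, zmax, M)
dz = z[1] - z[0]

# H_m/E_d = p_z^2/2 + V(z) with central-difference kinetic term
V = -beta*z**2/2 + z**4/4
diag = 1.0/dz**2 + V
off = -0.5/dz**2 * np.ones(M - 1)
E = eigh_tridiagonal(diag, off, eigvals_only=True,
                     select='i', select_range=(0, 3))

Delta = E[1] - E[0]            # bare dipole transition, in units of E_d
print(E, Delta)
```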
## III Scattering: Setting the Problem

A caveat in USC is that the distinction between "light" and "matter" subsystems becomes less clear. This is not only due to the creation of strongly correlated light-matter states but also because the definition of these subsystems is gauge-dependent. While physical observables remain gauge-invariant, the choice of matter or light observables can be ambiguous. To overcome these subtleties and provide _clear and measurable signatures of truncation issues in different gauges_, this paper focuses on studying scattering phenomena. Scattering serves as an ideal testbed for highlighting gauge-related problems. The scattering problem can be simplified as follows. The input state, representing our initial condition, is chosen to be the non-normalized quantum state, \[|\Psi_{\text{in}}\rangle=(a_{\phi}^{\dagger})^{N}|\text{GS}\rangle,\qquad a_{ \phi}^{\dagger}=\sum_{x}\phi_{x}^{\text{in}}a_{x}^{\dagger}\,, \tag{21}\] where \(|\text{GS}\rangle\) is the ground state of the system and \(\phi_{x}^{\text{in}}\) is a Gaussian wavepacket centred in \(x_{\text{in}}\) with spatial width \(\theta\), \[\phi_{x}^{\text{in}}=\exp\left(-\frac{(x-x_{\text{in}})^{2}}{2\theta^{2}}+ik_ {\text{in}}x\right)\;. \tag{22}\] Typically, we consider \(x_{\text{in}}\) located on the left-hand side of the scatterer, with the wavepacket moving to the right towards it. The wavepacket (22) is exponentially localized around \(k_{\text{in}}\), with a width of approximately \(\theta^{-1}\). The wave packet evolves in time as \[|\Psi(t)\rangle=U(t,0)|\Psi_{\text{in}}\rangle=\text{e}^{-iH_{\text{tot}}t}| \Psi_{\text{in}}\rangle\;. \tag{23}\] A final time \(t_{\text{out}}\) is chosen, which must be sufficiently large to allow the photons to move freely along the waveguide after interacting with the scatterers. The evolution is then described by the \(S\)-matrix, defined as: \[|\Psi_{\text{out}}\rangle=S|\Psi_{\text{in}}\rangle\,.
\tag{24}\] The scattering matrix is characterized by its momentum components: \[S_{p_{1}\dots p_{N^{\prime}},\,k_{1}\dots k_{N}}=\langle\text{GS}|a_{p_{1}}...a_{p_{N^{\prime}}}\,S\,a_{k_{1}}^{\dagger}...a_{k_{N}}^{\dagger}|\text{GS} \rangle\;. \tag{25}\] Some comments are pertinent here. The ground state wavefunction \(|\text{GS}\rangle\) appears in the definition of \(S\). In the ultrastrong coupling regime, as discussed before, the ground state differs from the vacuum state and contains a non-zero number of excitations [12]. In this regime, the number of excitations is not conserved, so \(N^{\prime}\neq N\) in general in Eq. (25). However, we can expect some simplifications to occur. We specialize our discussion to single-photon wavepackets, both for computational simplicity and because in the experiments, when using low-power coherent classical input/output field states, the transmittance amplitudes coincide with the single photon scattering amplitudes. Moreover, it has been shown [35] that for wavepackets far away from the scatterer, even in the ultrastrong coupling regime, we can approximate \((a_{\phi}^{\dagger})^{N}|\text{GS}\rangle\cong(a_{\phi}^{\dagger})^{N}|\text{vac}\rangle\), where \(|\text{vac}\rangle\) represents the trivial vacuum of the waveguide with \(a_{n}|\text{vac}\rangle=0\) for all \(n\). Lastly, it has been numerically tested that the probability of having more than one photon in the output field is negligible [16; 17]. As a consequence, the single photon amplitudes can be related to the number of photons as follows: \[\frac{\langle\Psi_{\text{out}}|a_{k}^{\dagger}a_{k}|\Psi_{\text{out}}\rangle}{ \langle\Psi_{\text{in}}|a_{k}^{\dagger}a_{k}|\Psi_{\text{in}}\rangle}=|S_{kk }|^{2}\equiv t_{k}.
\tag{26}\] This equation defines the _transmission amplitude_ \(t_{k}\), which is a key quantity in this work. Within this framework, it is easy to understand how scattering is free from ambiguities in the sense mentioned above. Of course, being an observable, the transmission amplitude is gauge invariant. What makes scattering "special" is that both the input and output fields have support only on regions well separated from the scatterer. Consequently, they can be considered as free photon wavepackets created over the QED vacuum of the waveguide, which remains the same in all the gauges: \(U|\Psi_{\text{in}}\rangle=|\Psi_{\text{in}}\rangle\) (the same with \(|\Psi_{\text{out}}\rangle\)). Therefore, one can start with the same initial conditions in both gauges, allow the system to evolve, compute the amplitude and compare the results after performing truncations in both the dipole and Coulomb gauges.

Figure 2: The energy spectra of an array of three cavities with the central one coupled to the dipole, are shown for the full model (solid black line), the truncated dipole gauge (dashed red line) and the truncated Coulomb gauge (dashed-dotted blue line). In this plot, we choose \(\beta=3.8\) and \(E_{dip}/(\hbar\omega_{c})=63.812\) such that at zero coupling, there is a resonance between the cavity energy \(\omega_{c}\) and the first bare dipole transition \(\Delta\). The hopping parameter is given by \(\xi/\omega_{c}=-1/\pi\). For the full model diagonalization, we consider 18 dipole levels and 18 photonic levels in each of the three cavities, while the truncated cases use the same number of photonic excitations and 2 dipole levels.

## IV Theoretical methods

Let us outline the theoretical methods we employed to compute the scattering spectra in the USC regime. Our approaches involve a Polaron-like transformation, numerical simulations based on Matrix Product States, and matching techniques.
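Before turning to those methods, the wavepacket protocol of Eqs. (21)-(26) can be emulated end-to-end in a minimal number-conserving setting: a single-excitation, RWA-truncated version of the cavity-array model, evolved by exact diagonalization. All parameter values below are illustrative choices, not the paper's fits:

```python
import numpy as np

# single-excitation RWA sketch of the scattering protocol of Sec. III
N, n0 = 401, 200                       # cavities; emitter sits at cavity n0
omega_c, xi, g = 1.0, -1/np.pi, 0.1
Delta = omega_c                        # emitter resonant with the band centre

# basis: photon at cavity n (indices 0..N-1); index N = emitter excited
H = np.diag(np.full(N + 1, omega_c + 0j))
H[N, N] = Delta
for n in range(N - 1):
    H[n, n + 1] = H[n + 1, n] = xi
H[N, n0], H[n0, N] = 1j*g, -1j*g                    # local coupling
for n in (n0 - 1, n0 + 1):                          # adjacent-cavity coupling
    H[N, n], H[n, N] = 1j*xi*g/omega_c, -1j*xi*g/omega_c

# Gaussian input packet, Eq. (22), moving right with speed 2|xi|sin(k_in)
x = np.arange(N)
theta, k_in, x_in = 10.0, np.pi/2, 150
psi = np.zeros(N + 1, dtype=complex)
psi[:N] = np.exp(-(x - x_in)**2/(2*theta**2) + 1j*k_in*x)
psi /= np.linalg.norm(psi)

# evolve to t_out by exact diagonalization, Eq. (23)
w, V = np.linalg.eigh(H)
t_out = 150.0
psi_out = V @ (np.exp(-1j*w*t_out)*(V.conj().T @ psi))

T = np.sum(np.abs(psi_out[n0 + 1:N])**2)   # transmitted weight
R = np.sum(np.abs(psi_out[:n0])**2)        # reflected weight
norm = np.vdot(psi_out, psi_out).real
print(T, R, norm)
```

Unitarity guarantees norm conservation; the small weight missing from \(T+R\) is the residual excitation still localized on the emitter at \(t_{\text{out}}\).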
Readers already familiar with these methods, or not interested in technical details, may proceed directly to the next section.

### Polaron picture: effective single excitation dynamics

The polaron formalism provides an effective description of the low-energy sector of spin-boson models such as (10). This approach is based on applying a unitary transformation that disentangles the light and matter subsystems, \[U_{\text{P}}=\exp\left(-\sigma_{x}\sum_{k}\left(f_{k}^{*}a_{k}-f_{k}a_{k}^{\dagger} \right)\right), \tag{27}\] where \(f_{k}\) represents a set of variational parameters. These parameters are found by minimising the ground state energy, which is given by the state with zero excitations \(|\Psi_{\text{P}}^{\text{GS}}\rangle=|0\rangle\otimes|0_{k}\rangle\), see _e.g._[36, 37, 38, 39]. After applying (27), the effective Hamiltonian in the polaron picture is given by \[\mathcal{H}_{P} =\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k}+\frac{\Delta_{r}}{2} \sigma_{z}+2\Delta_{r}\sum_{k}(f_{k}a_{k}\sigma^{+}+\text{h.c.})\] \[-2\Delta_{r}\sigma_{z}\sum_{k,p}f_{k}f_{p}^{*}a_{k}^{\dagger}a_{p}, \tag{28}\] where \[\Delta_{r}=\Delta\exp\left(-2\sum_{k}|f_{k}|^{2}\right), \tag{29}\] and \[f_{k}=\frac{g_{k}}{(\Delta_{r}+\omega_{k})}. \tag{30}\] Here and throughout the manuscript, we use the convention \(\hbar=1\) for simplicity. The variational parameters \(f_{k}\) and \(\Delta_{r}\) are obtained self-consistently from Eqs. (29) and (30). The renormalized frequency \(\Delta_{r}\) is a well-known result from the spin-boson model. It tends to vanish as the coupling increases, eventually leading to the localization-delocalization quantum phase transition, depending on \(J(\omega)\)[40]. The main advantage of the effective Hamiltonian (28) is that it conserves the number of excitations, making the single-particle dynamics relatively straightforward.
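Eqs. (29) and (30) can be solved by plain fixed-point iteration. The sketch below assumes the cavity-array couplings of Eq. (17) and the standard variational (Silbey-Harris) form in which the bare splitting \(\Delta\) multiplies the exponential of Eq. (29); all parameter values are illustrative:

```python
import numpy as np

# illustrative cavity-array parameters (hbar = 1)
omega_c, xi, g, Delta = 1.0, 1/np.pi, 0.3, 1.0
N = 4001
k = np.linspace(-np.pi, np.pi, N, endpoint=False)
omega_k = omega_c + 2*xi*np.cos(k)
g_k = (g/np.sqrt(N))*(omega_k/omega_c)          # couplings as in Eq. (17)

# fixed-point iteration of the self-consistency conditions (29)-(30)
Delta_r = Delta
for _ in range(200):
    f_k = g_k/(Delta_r + omega_k)               # Eq. (30)
    Delta_r_new = Delta*np.exp(-2*np.sum(np.abs(f_k)**2))   # Eq. (29)
    if abs(Delta_r_new - Delta_r) < 1e-12:
        break
    Delta_r = Delta_r_new
print(Delta_r)
```

The converged \(\Delta_{r}\) is renormalized below the bare splitting, and shrinks further as \(g\) grows.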
### Matrix product states

Tensor networks have proven to be effective tools for numerically simulating light-matter systems in the ultrastrong coupling regime [14, 15, 17]. Specifically, for our one-dimensional chain with nearest-neighbor interactions described by (19), we utilize matrix product states in conjunction with a time evolution block decimation (TEBD) algorithm [41] to simulate wavepacket scattering, as discussed in Section III. By simulating the scattering process for a sufficiently long duration \(t_{\text{out}}\), we can obtain the outgoing state (24), as explained in the previous section. Furthermore, with the matrix product state representation of \(|\Psi_{\text{in}}\rangle\) and \(|\Psi_{\text{out}}\rangle\), the computation of the transmittance described by Eq. (26) can be efficiently implemented.

### Matching techniques

Lastly, we also use a matching technique to obtain the transmittance and reflectance amplitudes. The fundamental concept in this framework involves dividing the coupled waveguide into three distinct regions, as depicted in Figure 3.

Figure 3: The coupled waveguide is divided into three regions (I), (II), and (III), which are employed in computing the transmission via matching methods. Regions (I) and (III) are chosen to have negligible ground-state photonic population, even in the ultrastrong coupling regime.

The central region (II) encompasses the dipole, the central cavity, and a sufficient number of additional cavities to contain the virtual photonic excitations resulting from the dressing of the dipole in the ultrastrong coupling regime. Ground state photons exhibit exponential localization around the emitter, with \(\langle a_{n}^{\dagger}a_{n}\rangle_{\rm GS}\sim e^{-\kappa_{\rm GS}|n|/2}\), where \(\kappa_{\rm GS}\) denotes the localization length (which has been calculated in [42] but is irrelevant for our purposes).
Consequently, the cavity regions (I) and (III) can be considered, to a good approximation, to contain no photons in the ground state. In other words, in regions (I) and (III), we assume that \(\langle a_{n}^{\dagger}a_{n}\rangle_{\rm GS}=0\). Bringing everything together, in regions (I) and (III), we describe single-photon transport using the following state \[|\Psi\rangle=\sum_{n,\alpha}\phi_{n,\alpha}a_{n}^{\dagger}|0_{ph},\alpha_{sc} \rangle+\sum_{\alpha}f_{\alpha}|0_{ph},\alpha_{sc}\rangle\;. \tag{31}\] Here, \(|0_{ph},\alpha_{sc}\rangle\) represents the state with zero photonic excitations in regions (I) and (III). The label \(\alpha\) denotes the eigenstates in the region (II), which can be computed numerically. For the parameters considered in this work, we have found that up to 5 or 7 cavities in region (II) are sufficient to obtain an accurate description of the transmittance. The quantity \(\phi_{n,\alpha}\) denotes the amplitude of having a photon in the \(n\)-th cavity while the scatterer remains in state \(\alpha\): \[\phi_{n,\alpha}(k)=\begin{cases}e^{ikn}+r_{k,\alpha}e^{-ikn},&(\text{I})\\ t_{k,\alpha}e^{ikn},&(\text{III})\end{cases}\;. \tag{32}\] By employing the ansatz (31), the time-independent Schrödinger equation can be solved, resulting in the determination of the transmittance and reflectance amplitudes.

## V Results

The complete transmittance spectrum is quite intricate, as we will soon discover. To better understand the gauge issues and the impact of truncation in different gauges, we will initially focus on calculating the scattering coefficients within the rotating-wave approximation. This approach simplifies single photon scattering and allows for fully analytical solutions. Subsequently, we will proceed to solve the full model and comprehensively discuss the complete transmittance spectrum.

### Transmission under the RWA approximation

In order to isolate the effects of the truncation, we will apply the following approximations.
We apply the two-level truncation and the RWA such that \(\mathcal{H}_{D}=\mathcal{H}_{0}+\mathcal{H}_{I}\), with \[\mathcal{H}_{0}=\Delta\sigma^{+}\sigma^{-}+\omega_{c}\sum_{n}a_{n}^{\dagger}a_ {n}+\xi\sum_{n}(a_{n}^{\dagger}a_{n+1}+\text{h.c.}) \tag{33}\] and \[\mathcal{H}_{I}=g(i\sigma^{-}a_{0}^{\dagger}+\text{h.c.})+\frac{\xi g}{\omega_ {c}}\left(\sigma^{-}(ia_{1}^{\dagger}+ia_{-1}^{\dagger})+\text{h.c.}\right) \tag{34}\] Here, \(\sigma^{\pm}\) are the two-level-system ladder operators and \(g=\omega_{c}qA_{0}|\langle 0^{\prime}|x|1^{\prime}\rangle|\). We emphasize that the rotating-wave approximation is valid only in the weak coupling regime, but here we extend our computations to larger couplings to compare the RWA results directly with those obtained from the full model, thereby revealing the role of the counter-rotating terms. Furthermore, we neglect the terms of second order in the couplings, specifically the \(x^{2}\) term in the dipole Hamiltonian. As mentioned, this term introduces a dependence of the TLS transition \(\Delta^{\prime}\) in Eq. (10) on the coupling strength \(g\). However, as shown in Figure 4, although \(\Delta^{\prime}\) increases with \(g\), this change is relatively small compared to the other effects discussed in this section. Therefore, for the RWA calculations, we will not consider this effect, in order to gain a qualitative understanding. The rotating-wave approximation offers several advantages. First, the ground state becomes trivial, with \(|{\rm GS}\rangle=|0\rangle\), implying zero excitations in both the waveguide and the dipole. Moreover, within the RWA, we can work within the single excitation manifold since the number of excitations is conserved, i.e., \([\mathcal{H}_{D},N]=0\), where \(N=\sum_{n}a_{n}^{\dagger}a_{n}+\sigma^{+}\sigma^{-}\). In this case, the matching method introduced in Sec. IV.3 becomes exact using the general _ansatz_ for a quantum state in the single excitation manifold.
\[|\psi\rangle=\sum_{n}\phi_{n}(k)a_{n}^{\dagger}|0\rangle+\phi_{q}\sigma^{+}|0 \rangle\;. \tag{35}\] The eigenvalue problem, \(\mathcal{H}_{D}|\psi\rangle=E|\psi\rangle\), leads to a dipole excited state amplitude given by: \[\phi_{q}=\frac{g}{E-\Delta}\left[\phi_{0}(k)+\frac{\xi}{\omega_{c}}(\phi_{1}( k)+\phi_{-1}(k))\right]. \tag{36}\]

Figure 4: Renormalization of the first dipole transition energy in the dipole gauge \(\Delta^{\prime}\) with increasing coupling, presented in units of the bare energy. The parameters used are the same as in Fig. 2.

In order to find the scattering eigenstates, we use the _ansatz_ for the photonic amplitude at sites \(n\neq 0\) as follows [43], \[\phi_{n}(k)=\begin{cases}e^{ikn}+r_{k}e^{-ikn},&n<0\\ t_{k}e^{ikn},&n>0\end{cases}\;. \tag{37}\] Using the continuity of the wave function, \(\phi_{0^{+}}(k)=\phi_{0^{-}}(k)\), the solution for the transmission amplitude is: \[t_{k}=\frac{\Delta-\omega_{k}-\frac{g^{2}}{\omega_{c}^{2}}(\omega_{c}+\omega_{ k})}{\omega_{k}-\Delta-i\frac{g^{2}\omega_{k}^{2}/\omega_{c}^{2}}{\sqrt{(2 \xi)^{2}-(\omega_{k}-\omega_{c})^{2}}}+\frac{g^{2}}{\omega_{c}^{2}}(\omega_{k} +\omega_{c})}. \tag{38}\] An equivalent computation can be performed for the Coulomb gauge, assuming the same approximations mentioned in Sec. II.2 to obtain a spin-boson model, that is, neglecting the \(A^{2}\) term and considering the RWA. By applying these transformations to Hamiltonian (14), we obtain a model that also conserves the number of excitations. Further details about the resulting model can be found in Appendix A. With this derived number-conserving description of our system in the Coulomb gauge, we can use the _ansatz_ (35) to obtain \(t_{k}\). It yields the following expression (compare with Eq. (38)) \[t_{k}=\frac{\Delta-\omega_{k}}{\omega_{k}-\Delta-\frac{ig_{C}^{2}}{\sqrt{(2 \xi)^{2}-(\omega_{k}-\omega_{c})^{2}}}}, \tag{39}\] where \(g_{C}=qA_{0}\langle 0|p|1\rangle/m\) is the coupling strength in the Coulomb gauge, see App.
A for more details. Figure 5 compares both formulas (38) and (39), with the transmittance \(T=|t_{k}|^{2}\), as a function of the coupling strength \(g\) in the dipole gauge and the incoming photon frequency \(\omega_{k}\). In the truncated Coulomb gauge, it is a known result [43; 16] that the resonance frequency always occurs at the dipole transition, and the width of the transmittance minimum increases as \(g^{2}\), as plotted in Fig. 5 (a). The equivalent computation in the truncated dipole gauge is presented in Figure 5 (b). A notable feature in the dipole gauge is that the resonance moves to lower frequencies as \(g\) increases. The resonant frequency can be obtained by imposing \(t_{k}=0\) in Eq. (38), leading to the formula: \[\omega_{\text{res}}^{\text{RWA}}=\omega_{c}\frac{\Delta\omega_{c}-g^{2}}{ \omega_{c}^{2}+g^{2}}\;. \tag{40}\] This dependence is represented by the dotted cyan line in Figure 5 (b). The resonant frequency shift is a consequence of the coupling-dependent term in the numerator of Eq. (38), which arises from the couplings to the adjacent cavities in the real-space Hamiltonian (19). This frequency shift will be confirmed with the full model in the next section. Another effect of the correct truncation is the modification of the width of the resonance, as given by the imaginary term in (38). In Figure 5 (b), we can observe a transmittance imbalance on both sides of the red-shifted minimum. For a given coupling value, the transmittance is higher at frequencies below the resonance than at frequencies above. To gain a deeper understanding of the resonance shift, we employ the resolvent operator method [44]. This approach allows us to identify the change in resonance as a Lamb shift. The self-energy of our spin-boson model within the RWA can be expressed as: \[\Sigma(E)=\sum_{k}\frac{|g_{k}|^{2}}{E-\omega_{k}}=\frac{g^{2}}{N\omega_{c}^{2 }}\sum_{k}\frac{\omega_{k}^{2}}{E-\omega_{k}}\;.
\tag{41}\] In the continuum limit, this summation can be written in terms of known integrals [45; 46]. Further details on this computation can be found in Appendix B. After performing some manipulations, the self-energy of the system can be expressed as follows: \[\Sigma(E)=-\frac{g^{2}}{\omega_{c}^{2}}(\omega_{c}+E)+i\frac{g^{2}}{\sqrt{(2 \xi)^{2}-(E-\omega_{c})^{2}}}\frac{E^{2}}{\omega_{c}^{2}}\;. \tag{42}\] The real part of the self-energy represents the Lamb shift, causing the red-shift proportional to the square of the normalized coupling strength \(g/\omega_{c}\), as given in Eq. (38). Similarly, the imaginary part of (42) corresponds to half of the spontaneous emission rate, which can also be obtained by evaluating the spectral density \(J_{D}(\omega)\) at the dipole transition \(\Delta\), as shown in Eq. (18).

Figure 5: Transmittance spectra of a single photon within the rotating-wave approximation in both the truncated Coulomb gauge (a) and the truncated dipole gauge (b). The dipole gauge resonant frequency obtained in Eq. (40) is shown as a dotted cyan line. The parameters defining the system are the same as in Fig. 2.

The shift in the resonance frequency (40) is one of the primary findings of this study. Although it has been calculated under the rotating-wave approximation, we will observe a similar effect in the full model. Importantly, this shift is a measurable quantity that sheds light on the issues associated with truncating the matter in the Coulomb gauge.

### Beyond RWA

Having established that the truncation within the Coulomb gauge cannot account for the Lamb shift and the resonant frequency dependence on the coupling, we now turn our attention to the full model without applying the RWA. This allows us to study the transmission in a wide range of parameters. To ensure the accuracy of our results and interpretations, we performed simulations using the matching technique by truncating to different numbers of levels in the dipole gauge.
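As a quick consistency check of the RWA analysis, the sketch below (with illustrative parameter values, not the paper's) verifies that the transmission zero of Eq. (38) sits at Eq. (40), and that the same frequency solves the dressed-pole condition \(\omega=\Delta+\Re\Sigma(\omega)\) with the self-energy of Eq. (42):

```python
import numpy as np
from scipy.optimize import brentq

omega_c, xi, Delta, g = 1.0, 1/np.pi, 1.0, 0.3   # illustrative values

def t_k(w):
    # RWA transmission amplitude in the truncated dipole gauge, Eq. (38)
    num = Delta - w - (g**2/omega_c**2)*(omega_c + w)
    den = (w - Delta
           - 1j*(g**2*w**2/omega_c**2)/np.sqrt((2*xi)**2 - (w - omega_c)**2)
           + (g**2/omega_c**2)*(w + omega_c))
    return num/den

w_res = omega_c*(Delta*omega_c - g**2)/(omega_c**2 + g**2)   # Eq. (40)

# pole condition w = Delta + Re Sigma(w) with Re Sigma from Eq. (42)
re_sigma = lambda w: -(g**2/omega_c**2)*(omega_c + w)
w_pole = brentq(lambda w: w - Delta - re_sigma(w),
                omega_c - 2*xi + 1e-6, omega_c + 2*xi - 1e-6)
print(w_res, w_pole, abs(t_k(w_res)))
```

Both conditions single out the same red-shifted frequency, below the bare transition \(\Delta\).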
Additionally, we conducted MPS simulations, as presented in Appendix C, where it can be observed that truncating the dipole gauge to two levels yields the correct results. Therefore, in the main text, we will only describe this specific case. Without the RWA, the number of excitations is no longer conserved. As a result, processes that involve different numbers of photons between the input and output fields become possible. However, after conducting thorough numerical investigations, we have found that these processes have negligible magnitudes. Therefore, in this study, we can focus on the single photon transmission without significant loss of accuracy. The complete elastic transmittance spectra obtained in the Coulomb and dipole gauges are depicted in Figures 6 (a) and (b), respectively. In the Coulomb gauge spectra shown in Figure 6 (a), the transmittance minimum remains constant, much like in the number-conserving transmittance spectra in Figure 5 (a). One notable feature arising from the counterrotating terms in the Coulomb gauge is a Fano resonance near the center of the band for all coupling strengths. The corresponding transmittance spectra in the dipole gauge are plotted in Figure 6 (b). Several features can be observed in this plot. First, let's discuss the resonance frequency shift, which already occurs within the RWA, as detailed above. To understand this shift in the full model, we can employ the polaron picture and the effective Hamiltonian \(\mathcal{H}_{P}\) (28). This transformed Hamiltonian is number-conserving, enabling us to compute the self-energy similarly to the previous section. In this case, the self-energy is given by: \[\Sigma_{P}(E)=\sum_{k}\frac{4\Delta_{r}^{2}|f_{k}|^{2}}{(E-\omega_{k}-2\Delta_ {r}\sum_{l,p}f_{l}f_{p})}\,. \tag{43}\] The equation for the resonance is now given by [cf. Eq. (40)]: \[\omega_{k}-\Delta_{r}+\Re\left(\Sigma_{P}(\omega_{k})\right)=0\,.
\tag{44}\] Here, \(\Delta_{r}\) represents the renormalized transition frequency (29). In Figure 6 (b), the frequency shift is depicted as a white dashed line, matching the numerical results, and for comparison, we also plot the RWA result (40) with a cyan dotted line. The shift is smaller than in the RWA case, which is due to the competition of two effects. On the one hand, we have the renormalized dipole frequency \(\Delta_{r}\) from (29). On the other hand, there is \(\Re\left(\Sigma_{P}(\omega_{k})\right)\), which tends to shift towards higher frequencies.

Figure 6: (a) Elastic transmittance spectra for a single-photon propagating in a coupled cavity array, including all the interaction terms of the truncated Coulomb gauge, as a function of the dipole gauge coupling strength and the incoming photon frequency. The red dashed line indicates the scatterer transition relevant to the photon transport in this gauge. (b) Equivalent spectra computed in the dipole gauge. The red and orange dashed lines depict the transition energies of the scatterer in the dipole gauge that play a role in the transmission process. Additionally, we plot the resonance predicted in the rotating-wave approximation in a dotted cyan line, as given in equation (40). The resonance predicted utilising polaron techniques, as described in section V, is depicted with a dashed white line.

The main result of this paper is confirming the resonance shift in the full model, which provides a qualitatively measurable feature to test gauge issues related to the truncation of dipole energies. Furthermore, Fano resonances also appear in the spectra. The first resonance occurs at intermediate frequencies and spans most of the range of \(g\) values. Another resonance appears at larger coupling strengths and for the higher frequencies of the band.
These Fano resonances can be explained by considering the interaction between the flying photon and the dipole, which is allowed to access subspaces with a larger number of excitations due to the inclusion of counterrotating terms [16, 1]. As a result, the flying photon can resonate with eigenstates of the scatterer (defined as region (II) in Sec. IV.3) having different numbers of excitations. In Figure 7, we plot the eigenstates of region II as a function of the coupling strength. Two localized eigenstates with odd parity, corresponding to three and five excitations, are identified and denoted as \(E_{3}\) and \(E_{5}\), respectively. We also plot the energy differences between these states and the ground state, i.e., \(E_{3}-E_{0}\) and \(E_{5}-E_{0}\), in Figure 6 (b) as dashed lines in red and orange, respectively. A similar analysis can be done in the Coulomb gauge, where a three-excitation eigenstate can be associated with the Fano resonance observed in that spectrum. The agreement with the numerical results confirms our argument explaining the presence of the Fano resonances. Lastly, the full model exhibits inelastic scattering. The presence of bound states originating from the ultrastrong coupling allows for scattering processes that leave the scatterer in an excited state. In Figure 8, we present the inelastic transmittance spectra obtained from the dipole gauge computation, which reaches a maximum value of 0.25, a fundamental bound as explained in [16]. These inelastic processes correspond to Raman scattering, leaving the dressed dipole in an excited bound state. To delineate the parameter range where inelastic transmission occurs, we use the energy conservation condition: \[\omega_{k}^{in}+E_{0}=E_{n}+\omega_{k}^{out}, \tag{45}\] where \(\omega_{k}^{out}\in[\omega_{c}-2\xi,\omega_{c}+2\xi]\), and \(E_{0}\) and \(E_{n}\) are the energies of the ground state and bound states for the dressed dipole. 
The solid red line in Figure 8 (c) represents the minimum energy of the incoming photon for which inelastic scattering is possible, given by Eq. (46): \[\omega_{ine}^{min}=E_{2}-E_{0}+\omega_{c}-2\xi. \tag{46}\] Our numerical computations verify this condition, providing the separation line beyond which inelastic transmission is possible. ## VI Implementation Finally, we propose a circuit QED architecture that provides an experimental platform to test the ideas developed in this paper. Our proposed circuit, illustrated in Figure 9, consists of an array of \(N\) LC circuits coupled inductively in series. The central circuit contains a superconducting qubit (such as a transmon) capacitively coupled to the LC circuit, as shown in the same figure. By considering the Kirchhoff equations of motion and selecting \(\phi_{0}\) and \(\phi_{q}\) as variables, we can express the current through the capacitor \(C_{r}\) as \(C_{r}(\ddot{\phi}_{0}-\ddot{\phi}_{q})\), which leads to a light-matter coupling via momentum, analogous to the minimal coupling in Eq. (1).

Figure 7: Lowest eigenenergies of Region (II), which includes the dipole and the 5 central cavities, as introduced in Sec. IV.3.

Figure 8: Inelastic transmittance \((1-T-R)/2\) in the dipole gauge. The orange dashed line indicates the same Fano resonance as in Fig. 6. The red solid line indicates the minimum energy of the incoming photon required for inelastic scattering to occur, as given by Eq. (46).

The Hamiltonian for this circuit is given by [47; 48], \[H_{Ch}=\sum_{n}\left[\frac{Q_{n}^{2}}{2C_{r}}+\frac{\phi_{n}^{2}}{2L_{\Sigma}}+\frac{\phi_{n}\phi_{n-1}}{L_{c}}\right]+\frac{(Q_{0}+Q_{q})^{2}}{2C_{J}}+E_{J}\cos(\phi_{q})\;, \tag{47}\] where \(1/2L_{\Sigma}=1/2L_{r}+1/L_{c}\). The suffix \(Ch\) indicates that this Hamiltonian is obtained in the charge gauge, resulting from the choice of dynamical variables \(\phi_{0}\) and \(\phi_{q}\). 
The presence of the minimal coupling-like feature in this circuit makes it analogous to the light-matter Hamiltonian in the Coulomb gauge (14). Similar to dipolar systems, a unitary transformation can also be applied in this case (cf. Eq. (2)) [24]: \[U=e^{i\phi_{q}Q_{0}/\hbar}\;. \tag{48}\] Physically, this transformation changes the dynamical variables (for the Kirchhoff equations) from \(\{\phi_{0},\phi_{q}\}\) to \(\{\delta\phi,\phi_{q}\}\), where \(\delta\phi=\phi_{0}-\phi_{q}\). By utilizing the mode fluxes and charges, \(\phi_{k}=\frac{1}{\sqrt{N}}\sum_{n}\phi_{n}e^{ikn}\), \(Q_{k}=\frac{1}{\sqrt{N}}\sum_{n}Q_{n}e^{ikn}\), and \(\alpha_{k}=\left(\frac{1}{2L_{\Sigma}}+\frac{\cos(k)}{L_{c}}\right)\), along with their quantization [47; 48]: \[Q_{k} =-i\sqrt{\frac{\hbar\omega_{k}C_{r}}{2}}(a_{k}^{\dagger}-a_{k}),\] \[\phi_{k} =\sqrt{\frac{\hbar\omega_{k}}{4\alpha_{k}}}(a_{k}^{\dagger}+a_{k} )\;,\] the transformed Hamiltonian \(H_{Fl}=UH_{Ch}U^{\dagger}\) can be written as: \[H_{Fl}=\frac{Q_{q}^{2}}{2C_{J}}+E_{J}\cos\left(\frac{\phi_{q}- \phi_{ext}}{\Phi_{0}}\right)+\sum_{k}\hbar\omega_{k}a_{k}^{\dagger}a_{k}-\hbar \sum_{k}g_{k}(a_{k}+a_{k}^{\dagger})\phi_{q}+\frac{1}{N}\sum_{k}\alpha_{k}| \phi_{q}|^{2}\;. \tag{49}\] Here, the dispersion relation is \(\omega_{k}=\omega_{r}+2\xi_{r}\cos(k)\), where \(\omega_{r}=(L_{\Sigma}C_{r})^{-1/2}\) and \(\xi_{r}=\omega_{r}L_{\Sigma}/L_{c}\). The coupling constants are given by: \[\hbar g_{k}=\sqrt{\frac{\hbar}{2L_{\Sigma}\omega_{c}N}}\omega_{k}. \tag{50}\] It is worth noting that \(g_{k}\sim\omega_{k}\) as shown in Eq. (17). Moreover, the bare matter Hamiltonian, defined by the first two terms in (49), is also modified by a term scaling with the square of the flux operator. Therefore, this circuit Hamiltonian is equivalent to (4). ## VII Conclusions In this work, we have extended the study of gauge issues in light-matter coupled systems to the realm of waveguide QED. 
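As a quick numerical sanity check of the dispersion and couplings above, one can evaluate \(\omega_k\) and \(g_k\) directly. The circuit element values below are placeholders of our own choosing (we set \(\hbar=1\) and read the \(\omega_c\) in Eq. (50) as \(\omega_r\)):

```python
import numpy as np

# Illustrative circuit parameters (hbar = 1); these are NOT values from the text.
L_r, L_c, C_r, N = 1.0, 5.0, 1.0, 64
L_sigma = 1.0 / (1.0 / L_r + 2.0 / L_c)     # from 1/(2 L_Sigma) = 1/(2 L_r) + 1/L_c
w_r = 1.0 / np.sqrt(L_sigma * C_r)
xi_r = w_r * L_sigma / L_c

k = 2 * np.pi * np.arange(N) / N
w_k = w_r + 2 * xi_r * np.cos(k)            # dispersion stated below Eq. (49)
g_k = np.sqrt(1.0 / (2 * L_sigma * w_r * N)) * w_k   # Eq. (50), with omega_c -> omega_r

# Momentum-like coupling: g_k grows linearly with the mode frequency (cf. Eq. (17))
ratio = g_k / w_k
```

The constant `ratio` makes the \(g_k\sim\omega_k\) scaling explicit, which is the hallmark of the momentum-type coupling discussed in the text.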
Gauge problems present significant technical challenges when studying waveguide QED, particularly in the ultrastrong coupling regime. We have employed various theoretical methods, including numerical techniques (matching and MPS) and analytical approaches (polaron transformation), to describe the system dynamics and compare truncation in different gauges. We argued that scattering, a natural quantity in waveguide QED, is ideal for testing different gauges, and it holds relevance from an experimental perspective. Our investigations have confirmed that the transmittance spectrum exhibits both qualitative and quantitative differences when truncating in different gauges. Numerical results have provided evidence that the dipole gauge is well-suited for truncation, allowing for accurate transmittance spectra over a wide parameter range. On the other hand, the Coulomb gauge is found to be unsuitable for truncation. Figure 6 presents a clear visual representation of these significant differences. The main features of correct transmittance spectra include the coupling-dependent resonant frequency shift, the emergence of Fano-like resonances, and the occurrence of non-elastic scattering. These findings constitute important aspects of our study. Furthermore, we have concluded the article by proposing an experimental implementation of this physics using circuit QED.

Figure 9: Cavity array implemented in a superconducting circuit QED platform. Individual LC oscillators are coupled inductively to their neighbours, while the transmon is directly connected to the capacitance of the central oscillator, labelled 0.

## VIII Acknowledgments The authors acknowledge funding from the Spanish Government Grants PID2020-115221GB-C41/AEI/10.13039/501100011033 and TED2021-131447B-C21 funded by MCIN/AEI/10.13039/501100011033 and the EU "NextGenerationEU"/PRTR, the Gobierno de Aragon (Grant E09-17R Q-MAD) and the CSIC Quantum Technologies Platform PTI-001. 
## Appendix A Gauge Invariance at weak coupling In this section, we demonstrate that in the weak coupling limit, both the Coulomb gauge and dipole gauge formulations can be represented as spin-boson models [see Eq. (10)]. Although their spectral densities are not identical, under the Wigner-Weisskopf approximation, their spontaneous emission rates are equal, as indicated in Eq. (13). By substituting the expression of the vector potential in the Coulomb gauge, as given in the main text equation (5), into the minimal coupling Hamiltonian (3), we obtain an expression similar to that in the dipole gauge (9): \[H_{C}=H_{\rm m}+\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k}-\frac{q}{\sqrt{L}} \frac{\mathbf{p}}{m}\sum_{k}\left(\mathbf{\lambda}_{k}e^{ikz_{0}}a_{k}-{\rm h.c. }\right)+\frac{q^{2}}{2mL}\Big{(}\sum_{k}\mathbf{\lambda}_{k}e^{ikz_{0}}a_{k}+{ \rm h.c.}\Big{)}^{2}. \tag{10}\] To derive effective models in the weak coupling regime, we consider the limit \(\mathbf{\lambda}_{k}\to 0\) in both (9) and (10), retaining only the first-order terms in \(\mathbf{\lambda}_{k}\); we therefore neglect the shift in \(H_{m}\) in the dipole gauge and the \(\mathbf{A}^{2}\)-term in the Coulomb gauge. After taking this limit, we project the effective Hamiltonians into the subspace spanned by the first two energy levels of the bare matter Hamiltonian \(H_{m}\), denoted as \(|0\rangle,|1\rangle\). This procedure results in two spin-boson models, one for each gauge, which can be written in a form similar to the full model presented in the main text (10). In this weak coupling limit, the waveguide modes \(\omega_{k}\) and the two-level system transition frequencies are equal for both models. The difference between the weak coupling Coulomb and dipole spin-boson models lies in their mode couplings. In the dipole gauge, the expression for the full model couplings is given in (11). 
For the approximated model in the dipole gauge, we have \[g_{\mathbf{x},k}=\frac{q\omega_{k}}{\sqrt{L}}\langle 0|\mathbf{x}|1\rangle \cdot\mathbf{\lambda}_{k}. \tag{11}\] The corresponding couplings in the Coulomb gauge can be written as \[g_{\mathbf{p},k}=\frac{q}{m\sqrt{L}}\langle 0|\mathbf{p}|1\rangle \cdot\mathbf{\lambda}_{k} \tag{12}\] We can now compute the spectral densities of these two models using the definition given by Eq. (12) in the main text, obtaining \[J_{D}(\omega) =\frac{q^{2}\omega^{2}}{L}\frac{\big{|}\langle 0|\mathbf{x}|1 \rangle\cdot\mathbf{\lambda}_{k}\big{|}^{2}}{\sqrt{4\xi^{2}-(\omega-\omega_{c})^{ 2}}}, \tag{13}\] \[J_{C}(\omega) =\frac{q^{2}}{m^{2}L}\frac{\big{|}\langle 0|\mathbf{p}|1 \rangle\cdot\mathbf{\lambda}_{k}\big{|}^{2}}{\sqrt{4\xi^{2}-(\omega-\omega_{c})^{ 2}}}. \tag{14}\] Utilizing the general relation \[\langle n|\mathbf{p}|l\rangle=im\Delta_{nl}\langle n|\mathbf{x}|l\rangle, \tag{15}\] where \(\Delta_{nl}\) is the transition frequency between states \(|n\rangle\) and \(|l\rangle\), we can derive the following expression for the spectral density in the Coulomb gauge \[J_{C}(\omega)=\frac{q^{2}\Delta^{2}}{L}\frac{\big{|}\langle 0|\mathbf{x}|1 \rangle\cdot\mathbf{\lambda}_{k}\big{|}^{2}}{\sqrt{4\xi^{2}-(\omega-\omega_{c})^ {2}}}. \tag{16}\] This proves that in the weak coupling limit, the spontaneous emission rate \(J(\Delta)\) is gauge invariant \[\lim_{\mathbf{\lambda}_{k}\to 0}J_{C}(\Delta)=\lim_{\mathbf{\lambda}_{k}\to 0}J_{D}( \Delta). \tag{17}\] ## Appendix B Analytical computation of the Self-Energy of the coupled dipole In this appendix, we provide the derivation of the exact self-energy of the dipole excited level in the RWA (42). We begin the derivation from the expression (41) and assume that the energy \(E\) lies within the band \(E\in[\omega_{c}-2\xi,\omega_{c}+2\xi]\). However, the computation can be easily extended to energies outside the band. 
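The weak-coupling gauge-invariance statement of Appendix A can also be checked numerically. The sketch below uses toy values (charge, mass, matrix element, and transition frequency are all assumed): the two spectral densities are different functions of \(\omega\), but agree exactly on resonance once \(\langle 0|\mathbf{p}|1\rangle=im\Delta\langle 0|\mathbf{x}|1\rangle\) is substituted.

```python
import numpy as np

# Toy parameters, purely illustrative (not from the text)
q, m, L = 1.0, 1.0, 1.0
wc, xi = 1.0, 0.25
Delta = 1.05                 # dipole transition frequency, inside the band
x01_lam = 0.3                # |<0|x|1> . lambda_k|, taken real and k-independent

def band_dos(w):
    """Density-of-states factor of the cosine band."""
    return 1.0 / np.sqrt(4 * xi**2 - (w - wc)**2)

def J_dipole(w):
    # dipole-gauge spectral density: numerator proportional to w^2
    return q**2 * w**2 * x01_lam**2 / L * band_dos(w)

def J_coulomb(w):
    # Coulomb-gauge spectral density, with <0|p|1> = i m Delta <0|x|1>
    p01_lam = m * Delta * x01_lam
    return q**2 * p01_lam**2 / (m**2 * L) * band_dos(w)
```

Evaluating both at \(\omega=\Delta\) gives identical spontaneous emission rates, while at any other frequency (e.g. the band centre \(\omega_c\)) the two curves differ, exactly as the appendix argues.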
As mentioned in the main text, expanding the expression of the dispersion relation in Eq. (41) yields: \[\Sigma(E)=\frac{g^{2}}{N\omega_{c}^{2}}\sum_{k}\frac{\omega_{k}^{2}}{E-\omega_{k}}=\frac{g^{2}}{N\omega_{c}^{2}}\sum_{k}\Bigg{[}\frac{\omega_{c}^{2}}{E-\omega_{k}}-\frac{2\xi\omega_{c}(e^{ik}+e^{-ik})}{E-\omega_{k}}+\frac{\xi^{2}\left(e^{ik}+e^{-ik}\right)^{2}}{E-\omega_{k}}\Bigg{]}. \tag{18}\] We can convert the summations over modes into integrals by taking the continuum limit (\(N\to\infty\)): \[\Sigma(E)=\frac{g^{2}}{2\pi\omega_{c}^{2}}\int_{-\pi}^{\pi}\Bigg{[}\frac{\omega_{c}^{2}+2\xi^{2}}{E-\omega_{k}}-\frac{2\xi\omega_{c}(e^{ik}+e^{-ik})}{E-\omega_{k}}+\frac{\xi^{2}(e^{2ik}+e^{-2ik})}{E-\omega_{k}}\Bigg{]}dk. \tag{10}\] The entire computation in (10) relies on evaluating integrals of the form: \[I(E,n)= \int_{-\pi}^{\pi}\frac{e^{ink}dk}{E-[\omega_{c}-\xi(e^{ik}+e^{-ik})]}. \tag{11}\] Performing the variable change \(z=e^{ik}\) transforms it into a closed integral around a circle of radius \(|z|=1\) in the complex plane \[I(E,n)=\frac{1}{i\xi}\oint_{|z|=1}\frac{z^{n}dz}{z^{2}+2az+1} \tag{12}\] where we defined \(a=(E-\omega_{c})/(2\xi)\). Since the integral is around a closed circle, we have \(I(E,n)=I(E,-n)\). Integral (12) can be solved using the Residue Theorem. If the energy lies within the band (\(|a|<1\)), the corresponding poles are \(z_{\pm}=-a\pm i\sqrt{1-a^{2}}\). These poles correspond to the limits \(\lim_{\eta\to 0}E\pm i\eta\). \[\lim_{\eta\to 0}I(E\pm i\eta,n)=\mp 2\pi i\frac{(-a\pm i\sqrt{1-a^{2}})^{|n|}}{ \sqrt{4\xi^{2}-(E-\omega_{c})^{2}}}. \tag{13}\] Writing the self-energy (10) in terms of the integrals (12) gives: \[\Sigma(E)=\frac{g^{2}}{2\pi\omega_{c}^{2}}\Big{[}(\omega_{c}^{2}+2\xi^{2})I(E,0)-4\xi\omega_{c}I(E,1)+2\xi^{2}I(E,2)\Big{]}. \tag{14}\] From this, after inserting the solution found in (13), we obtain the self-energy as in Eq. (42) of the main text. 
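The contour-integral identity and the resulting self-energy can be verified numerically. The sketch below uses assumed parameters and evaluates everything at a small finite \(\eta\) (where the residue evaluation is exact for the same integrand); the overall \(1/(i\xi)\) factor comes from \(dk=dz/(iz)\), and the relative sign of the \(I(E,1)\) term follows from expanding \(\omega_k^2\) with \(\omega_k=\omega_c-2\xi\cos k\).

```python
import numpy as np

wc, xi, g = 1.0, 0.25, 0.05     # assumed band centre, hopping, and coupling
eta = 1e-3                      # small imaginary part: E -> E + i*eta
E = wc + 0.3 * xi               # an energy inside the band [wc - 2*xi, wc + 2*xi]

def I_numeric(En, n, N=1_000_000):
    """Brute-force quadrature of I(E, n) with dispersion w_k = wc - 2 xi cos k."""
    dk = 2 * np.pi / N
    k = -np.pi + (np.arange(N) + 0.5) * dk   # midpoint rule; periodic integrand
    wk = wc - 2 * xi * np.cos(k)
    return np.sum(np.exp(1j * n * k) / (En - wk)) * dk

def I_residue(En, n):
    """Residue evaluation: one pole of z^n/(z^2 + 2 a z + 1) lies inside |z| = 1."""
    a = (En - wc) / (2 * xi)
    z1, z2 = np.roots([1.0, 2.0 * a, 1.0])
    z_in, z_out = (z1, z2) if abs(z1) < 1 else (z2, z1)
    # I = (1/(i xi)) * 2 pi i * Res = (2 pi / xi) * z_in^n / (z_in - z_out), n >= 0
    return 2 * np.pi / xi * z_in**n / (z_in - z_out)

Ec = E + 1j * eta
max_diff = max(abs(I_numeric(Ec, n) - I_residue(Ec, n)) for n in range(3))

# Assembling Sigma(E) from these integrals reproduces the closed forms for the
# in-band real part and the magnitude of the imaginary part:
Sigma = g**2 / (2 * np.pi * wc**2) * (
    (wc**2 + 2 * xi**2) * I_residue(Ec, 0)
    - 4 * xi * wc * I_residue(Ec, 1)
    + 2 * xi**2 * I_residue(Ec, 2))
re_exact = -g**2 * (wc + E) / wc**2
im_exact = g**2 * E**2 / (wc**2 * np.sqrt(4 * xi**2 - (E - wc)**2))
```

The quadrature and the residue evaluation agree to machine-level precision, and the assembled \(\Sigma(E)\) matches the closed-form Lamb shift and linewidth quoted next.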
The real part of the self-energy gives the Lamb shift of the resonant frequency, whereas the imaginary part provides half its spectral width. \[\text{Re}(\Sigma(E))=-\frac{g^{2}}{\omega_{c}^{2}}(\omega_{c}+E). \tag{15a}\] \[\text{Im}(\Sigma(E))=\frac{g^{2}}{\omega_{c}^{2}}\frac{E^{2}}{\sqrt{4\xi^{2}-( E-\omega_{c})^{2}}}=\frac{J(E)}{2}. \tag{15b}\] ## Appendix C Transmission in the Coulomb gauge In this appendix, we demonstrate the effect of truncation in both gauges by explicitly computing different transmission spectra while considering various numbers of dipole levels \(N_{d}\). As mentioned in the main text, applying the two-level approximation in the dipole gauge can accurately approximate the full model, whereas doing so in the Coulomb gauge does not yield the expected results. However, we were able to perform some calculations in the Coulomb gauge for \(N_{d}>2\) at the intermediate range of the USC regime. In Fig. 10, we plot the transmittance spectra in the Coulomb gauge, including all coupling terms, for different numbers of dipole levels. Fig. 10 (a) shows the spectra obtained using the two-level approximation in the Coulomb gauge, where the resonance is fixed at \(\omega_{k}=\Delta\) and an additional Fano resonance is observed for all couplings. In Fig. 10 (b), we increase the number of dipole levels, resulting in a slight red-shift of the main resonance and an increase in the transmittance at lower frequencies around the main resonance while maintaining the Fano resonance. As the number of dipole levels increases further in Fig. 10 (c) and (d), we observe a strong red-shift in the transmittance minima and an imbalance of the transmittance on both sides of the main resonance. These effects are consistent with what we found and explained in the main text for the dipole gauge. In both Fig. 
10 (c) and (d), we also plot the minima of transmittance obtained in the dipole gauge, as shown in the main text in Fig. 6.

Figure 10: Transmittance spectra in the Coulomb gauge for a different number of dipole levels \(N_{d}\). (a) and (b) correspond to the spectra obtained using the two-level approximation and for four levels, respectively. For a higher number of dipole levels, such as in (c) \(N_{d}=5\) and (d) \(N_{d}=8\), we also plot the RWA resonance as a dotted cyan line [Cf. Eq. (40)] and the USC resonance line predicted with the polaron transformation in the dipole gauge.

The blue dotted line represents the analytical solution in the RWA, as given in Eq. (40), while the white dashed line gives the solution in the effective polaron picture. It is evident that capturing the main features obtained in the dipole gauge with the two-level approximation in the Coulomb gauge requires a significantly higher number of dipole levels. While the main resonance seems to be well described, the Fano resonances do not coincide yet. The Fano resonances directly come from the eigenenergies of the scatterer, which are well approximated in the dipole gauge, as introduced in Sec. II.4. However, obtaining an accurate two-level approximation in the Coulomb gauge from the truncated dipole model is possible, as observed in the single cavity case [25]. Applying a truncated version of the Power-Zienau-Woolley transformation (2), \(\mathcal{U}=\exp(ig/\omega_{c}(a_{0}+a_{0}^{\dagger})\sigma_{x})\), we can recover a Coulomb gauge description from \(\mathcal{H}_{C}=\mathcal{U}\mathcal{H}_{D}\mathcal{U}^{\dagger}\) [Cf. Eqs. 
(33), (34)] \[\mathcal{H}_{C} =\sum_{n}\omega_{c}a_{n}^{\dagger}a_{n}+\xi\sum_{n}(a_{n+1}^{ \dagger}a_{n}+a_{n}^{\dagger}a_{n+1}) \tag{47}\] \[+\frac{\Delta^{\prime}}{2}\left[\sigma_{z}\cos\left(\frac{2g}{\omega_{c}}( a_{0}+a_{0}^{\dagger})\right)+\sigma_{y}\sin\left(\frac{2g}{\omega_{c}}(a_{0}+a_{0}^{ \dagger})\right)\right].\] ## Appendix D Continuous limit of the system Cavity array systems can be experimentally implemented in various platforms, such as photonic crystals and superconducting systems [7, 49], as discussed in Sec. VI. Moreover, these types of models can also be seen as discretizations of general waveguide models. In this appendix, we derive the continuous real-space description of our system in the dipole gauge. This model differs from other standard continuous waveguide QED models [50] due to the couplings to adjacent cavities. To begin, we split the momentum space into left and right propagating momenta as follows: \[H_{ph}=\sum_{j=\{L,R\}}\int dk_{j}\omega(k_{j})a^{\dagger}(k_{j})a(k_{j}) \tag{48}\] where \([a(k_{i}),a^{\dagger}(k_{j}^{\prime})]=\delta(k_{i}-k_{j}^{\prime})\delta_{ij}\). Next, we introduce creation and annihilation operators for right or left propagating photons at position \(r\) as the Fourier transform of their momentum counterparts \[a(k_{R}) :=\int a_{R}(r)e^{-ik_{R}r}dr, \tag{49}\] \[a(k_{L}) :=\int a_{L}(r)e^{-ik_{L}r}dr. \tag{50}\] By linearizing the dispersion relation around a probe wavevector \(k_{0}\), \[\omega_{L}(k) \simeq\omega_{L}(k_{0})-v_{g}(k_{0})k_{L}, \tag{51}\] \[\omega_{R}(k) \simeq\omega_{R}(k_{0})+v_{g}(k_{0})k_{R}, \tag{52}\] with \(k_{L}=k-k_{0}\) and \(k_{R}=k+k_{0}\), we can find an expression for Hamiltonian (48) in real space as \[H_{ph} =\int dr\,a_{R}^{\dagger}(r)\left(\omega_{R}(k_{0})-iv_{g}\frac{ \partial}{\partial r}\right)a_{R}(r)\] \[+\int dr\,a_{L}^{\dagger}(r)\left(\omega_{L}(k_{0})-iv_{g}\frac{ \partial}{\partial r}\right)a_{L}(r). 
\tag{53}\] where we have used the relation \(k_{R}e^{ik_{R}r}=-i\partial_{r}e^{ik_{R}r}\). After applying the same linearization to the interaction term, the full Hamiltonian reads \[H_{D} =\sum_{j}\int dr\,a_{j}^{\dagger}(r)\left(\omega_{j}(k_{0})-iv_{g} \frac{\partial}{\partial r}\right)a_{j}(r)+H_{m}\] \[+iqA_{0}x\sum_{j}\int dr\left(\omega_{j}(k_{0})-iv_{g}\frac{ \partial}{\partial r}\right)a_{j}(r)\] \[-iqA_{0}x\sum_{j}\int dr\left(\omega_{j}(k_{0})-iv_{g}\frac{ \partial}{\partial r}\right)a_{j}^{\dagger}(r). \tag{54}\]
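The linearization in Eqs. (51)-(52) is easy to check against the cosine band of the array. In the sketch below the probe wavevector and band parameters are our own illustrative choices; for \(\omega(k)=\omega_{c}-2\xi\cos k\) the group velocity is \(v_{g}(k_{0})=2\xi\sin k_{0}\).

```python
import numpy as np

wc, xi = 1.0, 0.25          # assumed band parameters
k0 = np.pi / 2              # probe wavevector at the band centre, where w(k0) = wc

def w(k):
    """Cosine dispersion of the cavity array."""
    return wc - 2 * xi * np.cos(k)

v_g = 2 * xi * np.sin(k0)   # dw/dk evaluated at k0

# Right-mover expansion, Eq. (52): w(k0 + dk) ~ w(k0) + v_g * dk
dk = np.linspace(-0.1, 0.1, 201)
linear = w(k0) + v_g * dk
max_err = np.max(np.abs(w(k0 + dk) - linear))
```

Near the band centre the curvature vanishes, so the linear approximation is accurate to better than one part in a thousand over the sampled window, which justifies the chiral real-space Hamiltonian (53).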
2302.14276
On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning
Explicit communication among humans is key to coordinating and learning. Social learning, which uses cues from experts, can greatly benefit from the usage of explicit communication to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks. Emergent communication, a type of explicit communication, studies the creation of an artificial language to encode a high task-utility message directly from data. However, in most cases, emergent communication sends insufficiently compressed messages with little or null information, which also may not be understandable to a third-party listener. This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility to adequately explore sparse social communication scenarios in multi-agent reinforcement learning (MARL). We show that our model is able to i) develop a natural-language-inspired lexicon of messages that is independently composed of a set of emergent concepts, which span the observations and intents with minimal bits, ii) develop communication to align the action policies of heterogeneous agents with dissimilar feature models, and iii) learn a communication policy from watching an expert's action policy, which we term `social shadowing'.
Seth Karten, Siva Kailas, Huao Li, Katia Sycara
2023-02-28T03:23:27Z
http://arxiv.org/abs/2302.14276v1
# On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning ###### Abstract Explicit communication among humans is key to coordinating and learning. Social learning, which uses cues from experts, can greatly benefit from the usage of explicit communication to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks. Emergent communication, a type of explicit communication, studies the creation of an artificial language to encode a high task-utility message directly from data. However, in most cases, emergent communication sends insufficiently compressed messages with little or null information, which also may not be understandable to a third-party listener. This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility to adequately explore sparse social communication scenarios in multi-agent reinforcement learning (MARL). We show that our model is able to i) develop a natural-language-inspired lexicon of messages that is independently composed of a set of emergent concepts, which span the observations and intents with minimal bits, ii) develop communication to align the action policies of heterogeneous agents with dissimilar feature models, and iii) learn a communication policy from watching an expert's action policy, which we term 'social shadowing'. Machine Learning, ICML ## 1 Introduction Social learning (Jaques et al., 2019; Ndousse et al., 2021) agents analyze cues from direct observation of other agents (novice or expert) in the same environment to learn an action policy from others. However, observing expert actions may not be sufficient to coordinate with other agents. Rather, by learning to communicate, agents can better model the intent of other agents, leading to better coordination. 
In humans, explicit communication for coordination assumes a common communication substrate to convey abstract concepts and beliefs directly (Mirsky et al., 2020), which may not be available for new partners. To align complex beliefs, heterogeneous agents must learn a message policy that translates from one theory of mind (Li et al., 2022) to another to synchronize coordination. Especially when there is complex information to process and share, new agent partners need to learn to communicate to work with other agents. Emergent communication studies the creation of artificial language. Often phrased as a Lewis game, speakers and listeners learn a set of tokens to communicate complex observations (Lewis, 1969). However, in multi-agent reinforcement learning (MARL), agents suffer from partial observability and non-stationarity (due to unaligned value functions) (Papoudakis et al., 2019), problems that decentralized learning through communication aims to solve. In the MARL setup, agents, as speakers and listeners, learn a set of tokens to communicate observations, intentions, coordination, or other experiences which help facilitate solving tasks (Karten et al., 2022, 2023). Agents learn to communicate effectively through a backpropagation signal from their task performance (Foerster et al., 2016; Lowe et al., 2017; Lazaridou et al., 2016; Sukhbaatar et al., 2016; Singh et al., 2018). This has been found useful for applications in human-agent teaming (Karten et al., 2023; Marathe et al., 2018; Lake et al., 2019; Lazaridou and Baroni, 2020), multi-robot navigation (Freed et al., 2020), and coordination in complex games such as StarCraft II (Samvelyan et al., 2019). 
Communication quality has been shown to have a strong relationship with task performance (Marlow et al., 2018), leading to a multitude of work attempting to increase the representational capacity by decreasing the convergence rates (Eccles et al., 2019; Lin et al., 2021; Karten et al., 2022; Wang et al., 2020; Tucker et al., 2022). Yet these methods still create degenerate communication protocols (Karten et al., 2023, 2022; Freed et al., 2020), which are uninterpretable due to joined concepts or null (lack of) information, which causes performance degradation. In this work, we investigate the challenges of learning a messaging lexicon to prepare emergent communication for social learning (EC4SL) scenarios. We study the following hypotheses: **H1)** EC4SL will learn faster through structured concepts in messages leading to higher-quality solutions, **H2)** EC4SL aligns the policies of expert heterogeneous agents, and **H3)** EC4SL enables social shadowing, where an agent learns a communication policy while only observing an expert agent's action policy. By learning a communication policy, the agent is encouraged to develop a more structured understanding of intent, leading to better coordination. The setting is very realistic among humans and many computer vision and RL frameworks may develop rich feature spaces for a specific solo task, but have not yet interacted with other agents, which may lead to failure without alignment. We enable a compositional emergent communication paradigm, which exhibits clustering and informativeness properties. We show theoretically and through empirical results that compositional language enables independence properties among tokens with respect to referential information. Additionally, when combined with contrastive learning, our method outperforms competing methods that only ground communication on referential information. 
We show that contrastive learning is an optimal critic for communication, reducing sample complexity for the unsupervised emergent communication objective. In addition to the more human-like format, compositional communication is able to create variable-length messages, meaning that we are not limited to sending insufficiently compressed messages with little information, increasing the quality of each communication. In order to test our hypotheses, we show the utility of our method in multi-agent settings with a focus on teams of agents, high-dimensional pixel data, and expansions to heterogeneous teams of agents of varying skill levels. Social learning requires agents to explore to observe and learn from expert cues. We interpolate between this form of social learning and imitation learning, which learns action policies directly from examples. We introduce a 'social shadowing' learning approach where we use first-person observations, rather than third-person observations, to encourage the novice to learn latently or conceptually how to communicate and develop an understanding of intent for better coordination. The social shadowing episodes are alternated with traditional MARL during training. Contrastive learning, which works best with positive examples, is apt for social shadowing. Originally derived to enable lower complexity emergent lexicons, we find that the contrastive learning objective is apt for agents to develop internal models and relationships of the task through social shadowing. The idea is to enable a shared emergent communication substrate (with minimal bandwidth) to enable future coordination with novel partners. Our contributions are deriving an optimal critic for a communication policy and showing that the information bottleneck helps extend communication to social learning scenarios. In real-world tasks such as autonomous driving or robotics, humans do not necessarily learn from scratch. 
Rather they explore with conceptually guided information from expert mentors. In particular, having structured emergent messages reduces sample complexity, and contrastive learning can help novice agents learn from experts. Emergent communication can also align heterogeneous agents, a social task that has not been previously studied. ## 2 Related Work ### Multi-Agent Signaling Implicit communication conveys information to other agents that is not intentionally communicated (Grupen et al., 2022). Implicit signaling conveys information to other agents based on one's observable physical position (Grupen et al., 2022). Implicit signaling may be a form of implicit communication such as through social cues (Jaques et al., 2019; Ndousse et al., 2021) or explicit communication such as encoded into the MDP through "cheap talk" (Sokota et al., 2022). Unlike implicit signaling, explicit signaling is a form of positive signaling (Li et al., 2021) that seeks to directly influence the behavior of other agents in the hopes that the new information will lead to active listening. Multi-agent emergent communication is a type of explicit signaling which deliberately shares information. Symbolic communication, a subset of explicit communication, seeks to send a subset of pre-defined messages. However, these symbols must be defined by an expert and do not scale to particularly complex observations and a large number of agents. Emergent communication aims to directly influence other agents with a learned subset of information, which allows for scalability and interpretability by new agents. ### Emergent Communication Several methodologies currently exist to increase the informativeness of emergent communication. With discrete and clustered continuous communication, the number of observed distinct communication tokens is far below the number permissible (Tucker et al., 2021). 
As an attempt to increase the emergent "vocabulary" and decrease the data required to converge to an informative communication "language", work has added a bias loss to emit distinct tokens in different situations (Eccles et al., 2019). More recent work has found that the sample efficiency can be further improved by grounding communication in observation space with a supervised reconstruction loss (Lin et al., 2021). Information-maximizing autoencoders aim to maximize the state reconstruction accuracy for each agent. However, grounding communication in observations has been found to easily satisfy these input-based objectives while still requiring a myriad more samples to explore to find a task-specific communication space (Karten et al., 2022). Thus, it is necessary to use task-specific information to communicate informatively. This will enable learned compression for task completion rather than pure compression for input recovery. Other work aims to use the information bottleneck (Tishby and Zaslavsky, 2015) to decrease the entropy of messages (Wang et al., 2020). In our work, we use contrastive learning to increase representation similarity with future goals, which we show optimally optimizes the Q-function for messages. 
Building on the continuous word embedding properties, VQ-VIB (Tucker et al., 2022), an information-theoretic observation grounding based on VQ-VAE properties (Van Den Oord et al., 2017), uses variational properties to provide word embedding properties for continuous emergent tokens. Like discrete prototypes, they exhibit a clustering property based on similar information but are more informative. However, each of these message types determines a single token for communication. Tokens are stringed together to create emergent "sentences". ## 3 Preliminaries We formulate our setup as a decentralized, partially observable Markov Decision Process with communication (DecPOMDP-Comm). Formally, our problem is defined by the tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{M},\mathcal{T},\mathcal{R},\mathcal{O },\Omega,\gamma\rangle\). We define \(\mathcal{S}\) as the set of states, \(\mathcal{A}^{i},\,i\in[1,N]\) as the set of actions, which includes task-specific actions, and \(\mathcal{M}^{i}\) as the set of communications for \(N\) agents. \(\mathcal{T}\) is the transition between states due to the multi-agent joint action space \(\mathcal{T}:\mathcal{S}\times\mathcal{A}^{1},...,\mathcal{A}^{N}\to \mathcal{S}\). \(\Omega\) defines the set of observations in our partially observable setting. Partial observability requires communication to complete the tasks successfully. \(\mathcal{O}^{i}:\mathcal{M}^{1},...,\mathcal{M}^{N}\times\hat{\mathcal{S}}\to\Omega\) maps the communications and local state, \(\hat{\mathcal{S}}\), to a distribution of observations for each agent. \(\mathcal{R}\) defines the reward function and \(\gamma\) defines the discount factor. ### Architecture The policy network is defined by three stages: Observation Encoding, Communication, and Action Decoding. 
The best observation encoding and action decoding architecture is task-dependent, i.e., using multi-layer perceptrons (MLPs), CNNs (LeCun et al., 1995), GRUs (Chung et al., 2014), or transformer (Vaswani et al., 2017) layers are best suited to different inputs. The encoder transforms observation and any sequence or memory information into an encoding \(H\). The on-policy reinforcement learning training uses REINFORCE (Williams, 1992) or a decentralized version of MAPPO (Yu et al., 2021) as specified by our experiments. Our work focuses on the communication stage, which can be divided into three substages: message encoding, message passing (often considered sparse communication), and message decoding. We use the message passing from (Karten et al., 2022). For message decoding, we build on a multi-headed attention framework, which allows an agent to learn which messages are most important (Agarwal et al., 2020). Our compositional communication framework defines the message encoding, as described in section 4. ### Objective Mutual information, denoted as \(I(X;Y)\), looks to measure the relationship between random variables, \[I(X;Y)=\mathds{E}_{p(x,y)}\left[\log\frac{p(x|y)}{p(x)}\right]=\mathds{E}_{p( x,y)}\left[\log\frac{p(y|x)}{p(y)}\right]\] which is often measured through Kullback-Leibler divergence (Kullback, 1997), \(I(X;Y)=D_{KL}(p(x,y)||p(x)\otimes p(y))\). The message encoding substage can be defined as an information bottleneck problem, which defines a trade-off between the complexity of information (compression, \(I(X,\hat{X})\)) and the preserved relevant information (utility, \(I(\hat{X},Y)\)). The deep variational information bottleneck defines a trade-off between preserving useful information and compression (Alemi et al., 2017; Tishby and Zaslavsky, 2015). We assume that our observation and memory/sequence encoder provides an optimal representation \(H^{i}\) suitable for sharing relevant observation and intent/coordination information. 
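The equivalence between the expectation form of \(I(X;Y)\) and its KL form can be checked on a toy joint distribution (the probabilities below are arbitrary illustrative values):

```python
import numpy as np

# An arbitrary 2x2 joint p(x, y); rows index x, columns index y.
p_xy = np.array([[0.30, 0.10],
                 [0.15, 0.45]])
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), shape (2, 1)
p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, 2)

# I(X;Y) as the expectation E_{p(x,y)}[log p(y|x)/p(y)] ...
p_y_given_x = p_xy / p_x
I_cond = np.sum(p_xy * np.log(p_y_given_x / p_y))

# ... and as the KL divergence D_KL(p(x,y) || p(x) (x) p(y))
I_kl = np.sum(p_xy * np.log(p_xy / (p_x @ p_y)))   # p_x @ p_y = outer product
```

Both routes give the same positive number, confirming that mutual information measures the divergence of the joint from the product of its marginals.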
We hope to recover a representation \(Y^{i}\) that contains sufficient information for the desired outputs. In our scenario, the information bottleneck is a trade-off between the complexity of information \(I(H^{i};M^{i})\) (representing the encoded information exactly) and representing the relevant information \(I(M^{j\neq i};Y^{i})\), which is signaled from our contrastive objective. In our setup, the relevant information flows from other agents through communication, signaling a combination of the information bottleneck and a Lewis game. We additionally promote complexity through our compositional independence objective, \(I(M^{i}_{1};\ldots;M^{i}_{L}|H^{i})\). This is formulated by the following Lagrangian, \[\mathcal{L}(\ p(m^{i}|h^{i})\ )=\beta_{u}\hat{I}(M^{j\neq i};Y^{i})-\beta_{c}\hat{I}(H^{i};M^{i})\] ### Message Generation Architecture Now, we can define the pipeline for message generation. The idea is to create an architecture that can generate features that enable independent message tokens. We expand each compressed token into the space of the hidden state \(h\) (a 1-layer linear expansion), since each token has a natural embedding in \(\mathbb{R}^{|h|}\). Then, we perform attention using a softmin to minimize similarity with previous tokens and sample the new token from a variational distribution. See algorithm 1 for complete details. During execution, we can generate messages directly due to equation 1, recovering any computation time lost to sequential compositional message generation. 
``` 1:\(T\leftarrow\texttt{num\_tokens}\) 2:\(m=\mathbf{0}\)\(\{T\times d_{m},d_{m}\leftarrow\texttt{token\_size}\}\) 3:\(Q\leftarrow\texttt{Q\_MLP}(h_{t})\) 4:\(V\leftarrow\texttt{V\_MLP}(h_{t})\) 5: for \(i\gets 1\) to \(T\) do 6:\(K\leftarrow\texttt{K\_MLP}(m)\) 7:\(\hat{h}=\texttt{softmin}\big(\frac{Q^{\intercal}\texttt{mean}(K,1)}{\sqrt{d_{k}}}\big)^{\intercal}V\) 8:\(m_{i}\sim\mathcal{N}(\hat{h};\mu,\sigma)\) 9: end for 10: return \(m\) ``` **Algorithm 1** Compositional Message Gen.(\(h_{t}\)) ## 5 Utility through Contrastive Learning First, note that our Markov Network is as follows: \(H^{j}\to M^{j}\to Y^{i}\gets H^{i}\). We continue to denote \(i\) as the ID of the current agent and \(j\) as the ID of any other agent, \(j\neq i\). We aim to satisfy the utility objective of the information bottleneck, \(I(M^{j};Y^{i})\), through contrastive learning, as shown in figure 1. **Proposition 5.1**.: _Utility mutual information is lower bounded by the contrastive NCE-binary objective, \(I(M,Y)\geq\log\sigma(f(s,m,s_{f}^{+}))+\log\sigma(1-f(s,m,s_{f}^{-})).\)_ The proof is in Appendix A.1. This result shows a need for gradient information to flow backward across agents along communication edge connections. ## 6 Experiments and Results We condition on inputs, especially rich information (such as pixel data), and task-specific information. When evaluating an artificial language in MARL, we are interested in referential tasks, in which communication is _required_ to complete the task. With regard to intent-grounded communication, we study ordinal tasks, which require coordination information between agents to complete successfully. Thus, we consider tasks with a team of agents to foster messaging that communicates coordination information alongside their observations. To test **H1**, that structuring emergent messages enables lower complexity, we test our methodology and analyze the input-oriented information and utility capabilities. 
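Algorithm 1 above can be sketched in NumPy as follows. The one-layer MLPs are stood in for by random linear maps, the dimensions are arbitrary, and the per-dimension softmin attention in line 7 reflects our reading of the listing, so treat this strictly as an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_h, d_m = 4, 8, 8  # num_tokens, hidden size, token size (arbitrary)

def linear(d_in, d_out):
    W = rng.normal(size=(d_in, d_out)) / np.sqrt(d_in)
    return lambda x: x @ W  # stand-in for the one-layer Q/K/V MLPs

q_mlp, v_mlp = linear(d_h, d_m), linear(d_h, d_m)
k_mlp = linear(d_m, d_m)

def softmin(z):
    e = np.exp(-(z - z.min()))  # softmin = softmax of the negated logits
    return e / e.sum()

def generate_message(h, sigma=0.1):
    m = np.zeros((T, d_m))
    Q, V = q_mlp(h), v_mlp(h)
    for i in range(T):
        K = k_mlp(m)                                  # keys from tokens so far
        scores = Q * K.mean(axis=0) / np.sqrt(d_m)    # query vs. mean key
        h_hat = softmin(scores) * V                   # down-weight used directions
        m[i] = h_hat + sigma * rng.normal(size=d_m)   # sample around h_hat
    return m

msg = generate_message(rng.normal(size=d_h))
```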
Next, we analyze the ability of heterogeneous agents to understand differing communication policies (**H2**). Finally, we consider the effect of social shadowing (**H3**), in which agents solely learn a communication policy from an expert agent's action policy. We additionally analyze the role of offline reinforcement learning for emergent communication in combination with online reinforcement learning to further learn emergent communication alongside an action policy. We evaluate each scenario over 10 seeds. ### Environments **Blind Traffic Junction.** We consider a benchmark that requires both referential and ordinal capabilities within a team of agents. The blind traffic junction environment (Singh et al., 2018) requires multiple agents to navigate a junction without any observation of other agents. Rather, they only observe their own state location. Ten agents must coordinate to traverse the lanes without colliding with agents within their lane or in the junction. Our training uses REINFORCE (Williams, 1992). Figure 1: By using contrastive learning, our method seeks similar representations between the state-message pair and future states while creating dissimilar representations with random states, thus satisfying the utility objective of the information bottleneck. The depicted agents are blind and cannot see other cars. **Pascal VOC Game.** We further evaluate the complexity of compositional communication with a game based on Pascal VOC (Everingham et al., 2010). This is a two-agent referential game similar to the Cifar game (Lin et al., 2021) but requires the prediction of multiple classes. During each episode, each agent observes a random image from the Pascal VOC dataset containing exactly two unique labels. Each agent must encode information given only the raw pixels from the original image such that the other agent can recognize the two class labels in the original image. 
An agent receives a reward of 0.25 per correctly chosen class label and will receive a total reward of 1 if both agents guess all labels correctly. See figure 2. Our training uses heterogeneous agents trained with PPO (modified from the MAPPO (Yu et al., 2021) repository). For simplicity of setup, we consider images with exactly two unique labels drawn from a closed subset of five labels of the original Pascal VOC label set. Furthermore, these images must be of size \(375\times 500\) pixels. Thus, the resultant dataset comprises 534 unique images from the Pascal VOC dataset. ### Baselines To evaluate our methodology, we compare our method to the following baselines: (1) no-comm, where agents do not communicate; (2) rl-comm, which uses a baseline communication method learned solely through policy loss (Singh et al., 2018); (3) ae-comm, which uses an autoencoder to ground communication in input observations (Lin et al., 2021); (4) VQ-VIB, which uses a variational autoencoder to ground discrete communication in input observations and a mutual information objective to ensure low-entropy communication (Tucker et al., 2022). ### Input-Oriented Information Results We provide an ablation of the loss parameter \(\beta\) in table 1 for the blind traffic junction scenario. When \(\beta=0\), we use our compositional message paradigm without our derived loss terms. We find that higher complexity and independence losses increase sample complexity. When \(\beta=1\), the model was unable to converge. However, when there is no regularization loss, the model performs worse (with no guarantees about referential representation). We attribute this to the fact that our independence criterion learns a stronger causal relationship: there are fewer spurious features that may cause an agent to take an incorrect action. In order to understand the effect of the independent concept representation, we analyze the emergent language's capacity for redundancy. 
A message token \(m_{l}\) is redundant if there exists another token \(m_{k}\) that represents the same information. With our methodology, the emergent 'language' converges to the exact number of observations and intents required to solve the task. With a soft discrete threshold, the independent information loss naturally converges to a discrete number of tokens in the vocabulary. Our \(\beta\) ablation in table 1 yields a bijection between each token in the vocabulary and the possible emergent concepts, i.e., the enumerated observations and intents. Thus, for \(\beta=0.1\), there is no redundancy. **Sparse Communication.** In corollary 4.3, we assume that there is no mutual information between tokens. In practice, the loss may only be near-zero; our empirical results yield an independence loss around \(1e-4\). In table 1, the size of the messages is automatically compressed to the smallest size that represents the information. Despite a trivially small amount of mutual information between tokens, our compositional method reduces the message size in bits by 2.3x using our derived regularization, for a total of an 8x reduction in message size over non-compositional methods such as ae-comm. Since the base unit for each token is a 32-bit float, we note that each token in the message may be further compressed. We observe that each token uses three significant digits, which may further compress tokens to 10 bits each for a total message length of 20 bits. \begin{table} \begin{tabular}{|l|l l l|} \hline \hline \(\beta\) & Success & \begin{tabular}{l} Message \\ Size in Bits \\ \end{tabular} & Redundancy \\ \hline 0.1 & 1.0 & 64 & 1.0 \\ 0.01 &.996 & 69.52 & 1.06 \\ 0.001 &.986 & 121.66 & 2.06 \\ 0 &.976 & 147.96 & 2.31 \\ non-compositional &.822 & 512 & 587 \\ \hline \hline \end{tabular} \end{table} Table 1: Beta ablation: Messages are naturally sparse in bits due to the complexity loss. Redundancy measures the capacity for a bijection between the size of the set of unique tokens and the enumerated observations and intents. Min redundancy is 1.0 (a bijection). Lower is better. Figure 2: An example of two possible classes, person and horse, from a single observation in the Pascal VOC game. ### Communication Utility Results Due to coordination in MARL, grounding communication in referential features is not enough. Finding the communication utility requires grounding messages in ordinal information. Overall, figure 3 shows that our compositional, contrastive method outperforms all methods focused solely on input-oriented communication grounding. In the blind traffic junction, our method yields a higher average task success rate and achieves it with lower sample complexity. Training with the contrastive update tends to spike to high success without immediately converging, often many episodes before convergence, which leaves room for training improvement. That is, the contrastive update begins to find aligned latent spaces early in training, but the method cannot adapt quickly enough to converge. The exploratory randomness of most of the early online data prevents exploitation of the high-utility \(f^{+}\) examples. This leaves further room for improvement for an adaptive contrastive loss term. **Regularization loss convergence.** After convergence to high task performance, the autoencoder loss increases in order to represent the coordination information. This follows directly from the information bottleneck, where there exists a trade-off between utility and complexity. However, communication, especially referential communication, should have an overlap between utility and complexity. Thus, we should seek to make the complexity loss more convex. Our compositional communication complexity loss does not converge before task performance convergence. 
While the complexity loss tends to spike in the exploratory phase, the normalized value is very small. Interestingly, the method eventually converges once the complexity loss falls below a normalized 0.3. Additionally, the contrastive loss tends to decrease monotonically, showing a very smooth decrease, and converges after the task performance converges. The contrastive \(f^{-}\) loss decreases during training, which may account for success spikes prior to convergence. The method is able to converge after only a moderate decrease in the \(f^{+}\) loss. This provides empirical evidence that the contrastive loss is an optimal critic for messaging. See figure 3. ### Heterogeneous Alignment Through Communication In order to test the ability of our methodology to align heterogeneous agents by learning higher-order concepts from high-dimensional data, we analyze performance on the Pascal VOC game. We compare our methodology against ae-comm to show that concepts should consist of independent information derived directly from the task signal rather than compression aimed at reconstructing inputs. That is, we show an empirical result on pixel data that verifies the premise of the information bottleneck. Our methodology significantly outperforms the observation-grounded ae-comm baseline, as demonstrated by figure 4. The ae-comm methodology, despite using autoencoders to learn observation-grounded communication, performs only slightly better than no-comm. On the other hand, our methodology outperforms both baselines significantly. It is important to note that, based on figure 4, our methodology is able to guess more than two of the four labels correctly across the two agents involved, while the baseline methodologies struggle to guess exactly two of their four labels consistently. This can be attributed to our framework learning compositional concepts that are much more easily discriminated due to mutual independence. 
Figure 4: **Pascal VOC Game** Representing compositional concepts from raw pixel data in images to communicate multiple concepts within a single image. Our method significantly outperforms ae-comm and no-comm due to our framework being able to learn composable, independent concepts. Figure 3: **Blind Traffic Junction** Left: Our method uses compositional complexity and contrastive utility to outperform other baselines in terms of performance and sample complexity. The legend provides the mean \(\pm\) variance of the best performance. Right, Top: success, contrastive, and complexity losses for our method. Right, Bottom: success and autoencoder losses for ae-comm with supervised pretraining. ### Social Shadowing Critics of emergent communication may point to the increased sample complexity due to jointly learning the communication and action policies. In the social shadowing scenario, heterogeneous agents can learn to generate a communication policy without learning the action policy of the watched expert agents. To enable social shadowing, the agent alternates between a batch of traditional MARL (no expert) and a batch of (1st-person) shadowing of an expert agent performing the task in its trajectory. The agent only uses the contrastive objective to update its communication policy during shadowing. In figure 5, the agent that performs social shadowing learns the action policy with almost half the sample complexity required by the online reinforcement learning agent. Our results show that the structured latent space of the emergent communication learns socially benevolent coordination. This tests our hypothesis that learning communication to understand the actions of other agents enables lower sample complexity coordination. Thus, it mitigates the issues of solely observing actions. ## 7 Discussion By using our framework to better understand the intent of others, agents can learn to communicate to align policies and coordinate. 
Any referential-based setup can be performed with a supervised loss, as indicated by the instant satisfaction of referential objectives. Even in the Pascal VOC game, which appears to be a purely referential objective, our results show that intelligent compression is not the only objective of referential communication. The emergent communication paradigm must enable an easy-to-discriminate space for the game. In multi-agent settings, the harder challenge is to enable coordination through communication. Using contrastive communication as an optimal critic aims to satisfy this and has shown solid improvements. Since contrastive learning benefits from good examples, this method is even more powerful when there is access to examples from expert agents. In this setting, the communication may be bootstrapped, since our optimal critic has examples with strong signals from the 'social shadowing' episodes. Additionally, we show that the minimization of our independence objective enables tokens that contain minimal overlapping information with other tokens. Preventing trivial communication paradigms enables higher performance. Each of these objectives is complementary, so they are not trivially minimized during training, which is a substantial advantage over comparative baselines. Unlike prior work, this enables the benefits of training with reinforcement learning in multi-agent settings. In addition to lower sample complexity, the mutual information regularization yields additional benefits, such as small messages, which enables the compression aspect of sparse communication. From a qualitative point of view, the independent information also yields discrete emergent concepts, which can be further made human-interpretable by a post-hoc analysis (Yeh et al., 2021). This is a step towards white-box machine learning in multi-agent settings. 
The interpretability of this learned white-box method could be useful in human-agent teaming as indicated by prior work (Karten et al., 2023). The work here will enable further results in decision-making from high-dimensional data with emergent concepts. The social scenarios described are a step towards enabling a zero-shot communication policy. This work will serve as future inspiration for using emergent communication to enable ad-hoc teaming with both agents and humans.
2309.00171
Invariant subspace problem in Hilbert space: Correlation with the Kadison-Singer problem and the Borel conjecture
This paper explores the intriguing connections between the invariant subspace problem, the Kadison-Singer problem, and the Borel conjecture. The Kadison-Singer problem, originally formulated in terms of pure states on C*-algebras, was later reformulated using projections, establishing a link with the invariant subspace problem. The Borel conjecture, a question in descriptive set theory, connects to the invariant subspace problem through Borel equivalence relations. This paper elucidates these connections, underscoring the interplay of unsolved mathematical problems and the collaborative nature of mathematical research.
Mostafa Behtouei
2023-08-31T23:56:19Z
http://arxiv.org/abs/2309.00171v1
Invariant subspace problem in Hilbert space: Correlation with the Kadison-Singer problem and the Borel conjecture ###### Abstract This paper explores the intriguing connections between the invariant subspace problem, the Kadison-Singer problem, and the Borel conjecture. The Kadison-Singer problem, originally formulated in terms of pure states on C*-algebras, was later reformulated using projections, establishing a link with the invariant subspace problem. The Borel conjecture, a question in descriptive set theory, connects to the invariant subspace problem through Borel equivalence relations. This paper elucidates these connections, underscoring the interplay of unsolved mathematical problems and the collaborative nature of mathematical research. ## 1 Introduction The field of functional analysis grapples with fundamental questions that underlie the structure and properties of linear operators in infinite-dimensional spaces. Among these inquiries, the invariant subspace problem stands as a central enigma, probing the existence of a non-trivial closed invariant subspace for every bounded linear operator on a separable infinite-dimensional Hilbert space [1, 2]. The exploration of this problem examines the intricate interconnection between algebraic and topological concepts, and its resolution holds implications for various mathematical domains. Moreover, its application in physics further highlights the significance of unraveling these relationships [3]. In this paper, we explore the interesting links between the invariant subspace problem and two other important unsolved math problems: the Kadison-Singer problem and the Borel conjecture. The Kadison-Singer problem, originating from the study of pure states on C*-algebras, has evolved to intersect with the invariant subspace problem through its reformulation in terms of projections. This link reveals a deep connection between seemingly disparate problems within functional analysis [4]. 
On a different mathematical subject, the Borel conjecture, rooted in descriptive set theory, raises questions about the structure of real numbers and their subsets. Unexpectedly, this conjecture's ramifications stretch to the invariant subspace problem, creating a bridge between functional analysis and set theory through Borel equivalence relations. By investigating these intricate relationships, we illuminate the broader tapestry of unsolved mathematical problems and underscore the collaborative nature of mathematical research. In the following sections, we will explore the specifics of each problem, examining their formulations, historical contexts, and implications. Through this exploration, we aim to shed light on the interconnectedness of these challenges, demonstrating how progress in one area of mathematics can have far-reaching consequences in seemingly unrelated domains. Ultimately, the pursuit of solutions to these enigmatic problems exemplifies the spirit of mathematical inquiry, where the exchange of ideas and cross-pollination of concepts pave the way towards deeper understanding and potential breakthroughs. The interactions between the invariant subspace problem, Kadison-Singer problem, and Borel conjecture exemplify the collaborative and synergistic nature of mathematical research. As mathematicians from various domains come together to explore these connections, they enrich the mathematical landscape, uncovering hidden relationships and advancing knowledge across disciplines. ## 2 The Kadison-Singer problem and its connection to the invariant subspace problem C*-algebras are a fundamental class of mathematical objects that play a crucial role in functional analysis, operator theory, and quantum mechanics. They provide a rich framework for studying linear operators, their algebraic and topological properties, and their relations to other mathematical structures [5]. 
A C*-algebra is a complex algebra equipped with an involution (conjugate transpose) and a norm that satisfy specific properties. Formally, let \(\mathcal{A}\) be a complex algebra over the field of complex numbers \(\mathbb{C}\). Then, \(\mathcal{A}\) is a C*-algebra if it is equipped with the following structures: 1. Involution: An involution is a map \(*:\mathcal{A}\rightarrow\mathcal{A}\) that assigns to each element \(a\in\mathcal{A}\) its adjoint \(a^{*}\). The involution satisfies the following properties for all \(a,b\in\mathcal{A}\) and \(\alpha\in\mathbb{C}\): \[(a+b)^{*} =a^{*}+b^{*},\] \[(\alpha a)^{*} =\overline{\alpha}a^{*},\] \[(ab)^{*} =b^{*}a^{*}.\] 2. Norm: A norm is a map \(\|\cdot\|:\mathcal{A}\rightarrow[0,\infty)\) that assigns a non-negative real number to each element \(a\in\mathcal{A}\). The norm satisfies the following properties for all \(a,b\in\mathcal{A}\) and \(\alpha\in\mathbb{C}\): \[\|a\| \geq 0,\quad\|a\|=0\text{ if and only if }a=0,\] \[\|\alpha a\| =|\alpha|\|a\|,\] \[\|a+b\| \leq\|a\|+\|b\|,\] \[\|ab\| \leq\|a\|\|b\|.\] 3. C*-Condition: The C*-condition is the key defining property of C*-algebras. It states that for all \(a\in\mathcal{A}\), we have: \[\|a^{*}a\|=\|a\|^{2}.\] C*-algebras provide a versatile framework for studying linear operators, self-adjoint elements, and their properties. They have applications in a wide range of mathematical areas, including functional analysis, operator theory, quantum mechanics, and mathematical physics [6]. The Kadison-Singer problem, introduced in 1959 by Kadison and Singer, originated in the realm of C*-algebras. At its inception, the problem sought to establish whether every pure state on a C*-algebra could be realized as a vector state within a separable infinite-dimensional Hilbert space. 
Formally, for a given C*-algebra \(\mathcal{A}\), the problem was posed as: Problem 1 (Kadison-Singer Problem): Given a pure state \(\omega\) on \(\mathcal{A}\), is it possible to find a separable infinite-dimensional Hilbert space \(\mathcal{H}\) and a vector state \(\phi\in\mathcal{H}\) such that \(\omega(a)=\langle\phi,a\phi\rangle\) for all \(a\in\mathcal{A}\)? The initial formulation of the Kadison-Singer problem captured the essence of exploring pure states within the framework of C*-algebras. However, as the problem evolved, a projection-based approach emerged, revealing an unexpected connection to the invariant subspace problem in functional analysis. The problem, in its initial formulation, revolved around the investigation of pure states on C*-algebras. A pure state \(\omega\) on a C*-algebra \(\mathcal{A}\) is a positive, normalized linear functional that is an extreme point of the state space, i.e., one that cannot be written as a nontrivial convex combination of two distinct states. The problem posed the question of whether every pure state could be realized as a vector state on a separable infinite-dimensional Hilbert space. The reformulated problem asked whether, for a given C*-algebra \(\mathcal{A}\), a collection of finite-dimensional projections \(\{P_{i}\}\) could be found, satisfying specific conditions. These conditions ensured that the projections captured essential properties of the original problem [4, 7]. The projection-based Kadison-Singer problem asks: given a C*-algebra \(\mathcal{A}\), does there exist a collection of finite-dimensional projections \(\{P_{i}\}\) such that: (1) each \(P_{i}\) is finite-dimensional, (2) the sum of the projections is the identity operator, i.e., \(\sum_{i}P_{i}=I\), and (3) for any subset \(J\subseteq\{1,2,\ldots\}\), the operator norm of the sum \(\|\sum_{i\in J}P_{i}\|\) is bounded by 1? Surprisingly, this projection-based formulation established an intimate connection between the Kadison-Singer problem and the invariant subspace problem. 
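Both the C*-condition from the definition above and the vector-state expression \(\omega(a)=\langle\phi,a\phi\rangle\) can be illustrated concretely in the finite-dimensional C*-algebra \(M_{n}(\mathbb{C})\) with the operator (spectral) norm. A small numerical sketch (the matrix and vector are random; this is an illustration, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

op_norm = lambda M: np.linalg.norm(M, 2)  # operator (spectral) norm
Astar = A.conj().T                        # the involution a -> a*

# C*-condition: ||a* a|| = ||a||^2.
cstar_gap = abs(op_norm(Astar @ A) - op_norm(A) ** 2)

# Vector state omega(a) = <phi, a phi> for a unit vector phi:
# normalized (omega(I) = 1) and positive (omega(a* a) >= 0).
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
phi /= np.linalg.norm(phi)
omega = lambda M: phi.conj() @ M @ phi
normalized = omega(np.eye(n))
positivity = omega(Astar @ A)
```

The C*-condition holds because \(\|A\|^{2}\) and \(\|A^{*}A\|\) both equal the largest eigenvalue of the Hermitian matrix \(A^{*}A\).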
The invariant subspace problem, a long-standing question in functional analysis, queries whether every bounded linear operator on a separable infinite-dimensional Hilbert space possesses a non-trivial closed invariant subspace [2]. The projection-based Kadison-Singer problem, with its focus on finite-dimensional projections and their properties, unveiled a profound relationship between the two problems. The unexpected connection between the projection-based Kadison-Singer problem and the invariant subspace problem highlights the intricate and sometimes unforeseen interplay between seemingly disparate mathematical inquiries. The reformulation of the Kadison-Singer problem brought to light a shared mathematical essence, transcending the boundaries of specific problem domains. This connection underscores the rich web of mathematical ideas and the potential for cross-fertilization of concepts across different fields of study. The evolution of the Kadison-Singer problem from its roots in pure states to its projection-based formulation showcases the dynamic nature of mathematical exploration. The subsequent connection to the invariant subspace problem exemplifies the profound unity that can emerge from seemingly unrelated questions. This confluence of ideas reflects the collaborative and synergistic spirit that characterizes mathematical research. The Borel conjecture and its link to the invariant subspace problem The Borel conjecture, introduced by Emile Borel in 1938, resides within the realm of descriptive set theory and delves into the intricate structure of sets of real numbers. The conjecture posits a remarkable relationship between the presence of certain subsets and the distribution of real numbers within them [8]. Statement of the Borel Conjecture: Given any set \(A\) of real numbers with positive Lebesgue measure, there exists a perfect set \(P\subseteq A\) that does not contain any isolated points. 
In other words, \(P\) is a closed set without isolated points, and it is a subset of \(A\). The connection between the Borel conjecture and the invariant subspace problem arises unexpectedly through the lens of Borel equivalence relations, establishing a bridge between different areas of mathematics [9]. Borel equivalence relations are a fundamental concept in descriptive set theory, serving as a bridge between different mathematical areas. These relations play a significant role in understanding the complexity and structure of sets and functions, offering insights into the Borel conjecture and its unexpected connection to the invariant subspace problem [10]. Definition and Basic Properties: Let \((X,\mathcal{B})\) be a standard Borel space, i.e., a measurable space where \(X\) is a topological space and \(\mathcal{B}\) is the Borel \(\sigma\)-algebra generated by the open sets of \(X\). Then, for any subset \(A\subseteq X\), we have: \[A\in\mathcal{B}\iff A\text{ is a Borel measurable set.}\] Here, \((X,\mathcal{B})\) represents the measurable space with a topological space \(X\) and its associated Borel \(\sigma\)-algebra \(\mathcal{B}\); the equation states that a subset \(A\) of \(X\) belongs to the Borel \(\sigma\)-algebra \(\mathcal{B}\) if and only if it is a Borel measurable set. An equivalence relation on \(X\) is a binary relation that is reflexive, symmetric, and transitive. A Borel equivalence relation is one where the equivalence classes and the relation itself are Borel sets in \(X\times X\). Formally, an equivalence relation \(\sim\) on \(X\) is Borel if the sets \(\{(x,y)\in X\times X\mid x\sim y\}\) and \(\{(x,y)\in X\times X\mid x\not\sim y\}\) are Borel sets. Borel equivalence relations are classified based on their complexity within the Borel hierarchy. 
An equivalence relation \(\sim\) is Borel reducible to another equivalence relation \(\approx\) (denoted as \(\sim\leq_{B}\approx\)) if there exists a Borel function \(f:X\to X\) such that \(x\sim y\) if and only if \(f(x)\approx f(y)\). Equivalence relations can be ranked according to the strength of this reducibility, leading to a rich hierarchy of equivalence relations [11]. The connection between the Borel conjecture and the invariant subspace problem arises unexpectedly through the study of Borel equivalence relations. The Borel conjecture posits that any set of real numbers with positive Lebesgue measure contains a perfect set without isolated points. Surprisingly, this conjecture's ramifications extend to the invariant subspace problem [12]. A positive resolution of the Borel conjecture would yield a solution to the invariant subspace problem for specific classes of bounded linear operators. In particular, certain classes of hypercyclic operators would possess no non-trivial closed invariant subspaces if the Borel conjecture were proven [13]. Borel equivalence relations serve as a mathematical bridge, connecting seemingly distant domains such as descriptive set theory, functional analysis, and operator theory. The unexpected connection between the Borel conjecture and the invariant subspace problem underscores the profound interplay of ideas that can emerge from the exploration of Borel equivalence relations [14]. Such relations play a significant role in descriptive set theory, offering a systematic way to classify sets based on their "sameness" under certain operations. The connection between the Borel conjecture and the invariant subspace problem manifests when considering a class of bounded linear operators known as hypercyclic operators. These operators exhibit a chaotic behavior, repeatedly cycling through dense orbits in the underlying Hilbert space. 
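Borel reducibility, defined above, is a statement about uncountable standard Borel spaces, but its defining pattern, \(x\sim y\) if and only if \(f(x)\approx f(y)\), can be mimicked in a finite toy setting (same-parity on the integers reducing to equality via \(x\bmod 2\)); this is only an analogy for the shape of the definition:

```python
# Finite toy for the reducibility pattern x ~ y  iff  f(x) ≈ f(y):
# "same parity" on the integers reduces to equality via f(x) = x % 2.
sim = lambda x, y: (x - y) % 2 == 0   # the relation ~ (same parity)
f = lambda x: x % 2                   # the reducing map
approx = lambda a, b: a == b          # the relation ≈ (equality)

pairs = [(x, y) for x in range(-5, 6) for y in range(-5, 6)]
reduction_holds = all(sim(x, y) == approx(f(x), f(y)) for x, y in pairs)
```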
In a chaotic dynamical system, a small change in initial conditions can lead to drastically different trajectories over time. This phenomenon is often characterized by the Lyapunov exponent, which quantifies the rate of exponential divergence of initially nearby trajectories. Mathematically, for a one-dimensional chaotic map \(f(x)\), the Lyapunov exponent \(\lambda\) can be defined as: \[|\delta x(t)|\approx e^{\lambda t}|\delta x(0)|\] Here, \(|\delta x(t)|\) represents the separation between two initially close trajectories at time \(t\), and \(\lambda\) is the Lyapunov exponent. The Lyapunov exponent provides insight into the sensitivity of a chaotic system to initial conditions. A positive Lyapunov exponent indicates exponential divergence, contributing to the unpredictable and complex behavior observed in chaotic systems. Hypercyclic operators are a fascinating class of bounded linear operators that exhibit a distinct form of chaotic behavior in a Hilbert space. These operators play a pivotal role in understanding the interplay between the Borel conjecture and the invariant subspace problem, shedding light on the complex dynamics that underlie these mathematical inquiries. Definition and Properties: Let \(\mathcal{H}\) be a separable infinite-dimensional Hilbert space. An operator \(T:\mathcal{H}\rightarrow\mathcal{H}\) is said to be hypercyclic if there exists a vector \(x\in\mathcal{H}\) such that the orbit \(\{T^{n}x\}_{n\geq 0}\) is dense in \(\mathcal{H}\). In other words, the iterates of \(x\) under the action of \(T\) come arbitrarily close to every vector in the Hilbert space. Formally, for any \(y\in\mathcal{H}\) and any \(\varepsilon>0\), there exists an \(n\geq 0\) such that \(\|T^{n}x-y\|<\varepsilon\). Hypercyclic operators are known for their chaotic behavior, characterized by the dense orbits they generate. 
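The exponential-divergence rate described above can be estimated numerically for the classic chaotic logistic map \(f(x)=4x(1-x)\), whose Lyapunov exponent is known to be \(\ln 2\). The seed and iteration counts below are illustrative choices:

```python
import math

# Estimate the Lyapunov exponent of the logistic map f(x) = 4x(1 - x)
# as the long-run average of log|f'(x_n)|; the known value is ln 2.
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: abs(4.0 - 8.0 * x)

x = 0.2
for _ in range(1000):       # discard the transient
    x = f(x)

n, acc = 100_000, 0.0
for _ in range(n):
    acc += math.log(df(x))  # log of the local stretching factor
    x = f(x)
lyapunov = acc / n          # converges to ln 2 ~ 0.6931 for r = 4
```

A positive estimate here is exactly the signature of sensitivity to initial conditions discussed in the text.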
The concept of chaos in this context refers to the unpredictability and lack of long-term regularity in the behavior of orbits under the action of the operator \(T\). In other words, an operator \(T\) on a Hilbert space \(\mathcal{H}\) is said to be hypercyclic if there exists a vector \(v\in\mathcal{H}\) whose orbit under the repeated action of \(T\) is dense in the entire Hilbert space, i.e.: \[\overline{\{T^{n}v:n\geq 0\}}=\mathcal{H}.\] Mathematically, the density of the orbit implies that, for any vector \(y\in\mathcal{H}\) and any neighborhood \(U\) of \(y\), there exists an iterate \(n\) such that \(T^{n}v\) belongs to \(U\). This rapid and unbounded spreading of iterates across the Hilbert space contributes to the chaotic nature of hypercyclic operators. The connection between hypercyclic operators and the Borel conjecture is a remarkable example of the interplay between seemingly unrelated mathematical concepts. The Borel conjecture, which concerns the structure of sets of real numbers, unexpectedly intersects with the invariant subspace problem through hypercyclic operators. It has been shown that a positive resolution of the Borel conjecture would imply the existence of hypercyclic operators without non-trivial closed invariant subspaces. In other words, the Borel conjecture's influence extends to the dynamical properties of operators in the Hilbert space. This result directly ties back to the invariant subspace problem, shedding light on its potential resolution for this specific class of operators. Hypercyclic operators provide a lens through which to explore the intricate dynamics and chaos that can arise in mathematical systems. These operators are sensitive to initial conditions, their orbits can be dense in the space, which relates them to notions of chaos and unpredictability, and they exhibit interesting dynamical properties. 
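The mechanism behind such dense orbits can be made concrete with the classical Rolewicz operator \(T=2B\), twice the backward shift on \(\ell^{2}\), which is known to be hypercyclic. The finite sketch below (helper names are ours; true hypercyclicity requires infinite dimensions) demonstrates the core estimate used in hypercyclicity proofs: with \(S\) half the forward shift, \(T^{N}S^{N}y=y\) while \(\|S^{N}y\|=2^{-N}\|y\|\), so arbitrarily small vectors are carried by the iterates of \(T\) onto any prescribed target.

```python
import numpy as np

def T_op(v):
    """Rolewicz operator T = 2B: drop the first coordinate, double the rest."""
    return 2.0 * np.append(v[1:], 0.0)

def S_op(v):
    """S = F/2, half the forward shift; a right inverse of T: T(S(v)) = v."""
    return 0.5 * np.insert(v, 0, 0.0)

y = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.25, 2.0, -0.5])  # target vector

N = 30
x = y.copy()
for _ in range(N):    # x = S^N y: a tiny vector, ||x|| = 2^{-N} ||y||
    x = S_op(x)

z = x.copy()
for _ in range(N):    # T^N x = y: the orbit of x lands exactly on the target
    z = T_op(z)
```

In infinite dimensions this construction, iterated over a countable dense family of targets, is the heart of the proof that \(2B\) admits a dense orbit; no such operator exists on a finite-dimensional space.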
They can be thought of as operators that "mix" and "permute" the elements of the Hilbert space in a way that leads to highly unpredictable behavior. Hypercyclic operators exemplify chaotic behavior within the realm of bounded linear operators. Their connection to the Borel conjecture and the invariant subspace problem adds a layer of depth to our understanding of these inquiries, showcasing the unexpected connections that can emerge in mathematical exploration. In quantum mechanics, hypercyclic operators play a role in understanding the dynamic behavior of quantum systems. Hypercyclic behavior in quantum mechanics is analogous to the concept of chaos in classical systems, where small changes in initial conditions lead to significantly different trajectories. Consider a quantum system described by a Hilbert space \(\mathcal{H}\) and let \(T\) be a bounded operator representing an effective quantum evolution; note that such a \(T\) is necessarily non-unitary, since a unitary operator preserves norms, so its orbits lie on spheres and can never be dense. If there exists a vector \(v\) such that the orbit \(\{T^{n}v:n\in\mathbb{N}\}\) is dense in \(\mathcal{H}\), then the operator \(T\) is hypercyclic in the quantum mechanical sense. In summary, the Borel conjecture interconnects with the invariant subspace problem through the study of Borel equivalence relations. The exploration of this connection reveals a deep relationship between two seemingly disparate mathematical inquiries and underscores the intricate interplay of concepts across different domains of mathematics. Moreover, the insights gained from this connection contribute to our understanding of both problems and exemplify the collaborative nature of mathematical research. ## 4 Conclusion The invariant subspace problem's connections with the Kadison-Singer problem and the Borel conjecture highlight the deep interconnections within mathematics. The evolution of problem formulations and unexpected relationships underscore the collaborative and interdisciplinary nature of mathematical inquiry. 
By exploring these connections, mathematicians can uncover new perspectives and potentially contribute to the resolution of these long-standing problems.
2309.10362
Evaporation-induced temperature gradient in a foam column
Various parameters affect the foam stability: surface and bulk rheology of the solution, gravitational drainage, mechanical vibrations, bubble gas composition, and also evaporation. Evaporation is often considered through the prism of a liquid loss, but it also induces a cooling effect due to the enthalpy of vaporization. In this study, we combine a theoretical and experimental approach to explore the temperature field in a foam column evaporating from the top. We show that a measurable temperature profile exists in this geometry with temperatures at the interface lower than the environmental temperature by few degrees. We demonstrate that the temperature profile is the result of a balance between the enthalpy of vaporization and heat fluxes originating from the thermal conduction of foam and air and the thermal radiation. For small foam thicknesses compared to the radius, we found that the temperature gradient is established over the foam thickness while for large aspect ratios, the gradient is spanning over a lengthscale comparable to the tube radius.
François Boulogne, Emmanuelle Rio, Frédéric Restagno
2023-09-19T06:48:30Z
http://arxiv.org/abs/2309.10362v1
# Evaporation-induced temperature gradient in a foam column ###### Abstract Various parameters affect the foam stability: surface and bulk rheology of the solution, gravitational drainage, mechanical vibrations, bubble gas composition, and also evaporation. Evaporation is often considered through the prism of a liquid loss, but it also induces a cooling effect due to the enthalpy of vaporization. In this study, we combine a theoretical and experimental approach to explore the temperature field in a foam column evaporating from the top. We show that a measurable temperature profile exists in this geometry with temperatures at the interface lower than the environmental temperature by few degrees. We demonstrate that the temperature profile is the result of a balance between the enthalpy of vaporization and heat fluxes originating from the thermal conduction of foam and air and the thermal radiation. For small foam thicknesses compared to the radius, we found that the temperature gradient is established over the foam thickness while for large aspect ratios, the gradient is spanning over a lengthscale comparable to the tube radius. ## 1 Introduction Foam stability [1] is one of the crucial parameters for successful applications [2]. In such an assembly of bubbles, the liquid tends to flow downward due to gravity (drainage) [3], the small bubbles tend to empty into the bigger ones (coarsening) [4, 5], neighboring bubbles tend to merge (coalescence) [6], and liquid evaporation affects the foam wetness. These different mechanisms at the origin of foam aging are either entangled (coarsening and drainage, for example) or poorly understood, like coalescence or evaporation, the latter having been mainly ignored in the literature. To build empirical knowledge, systematic experimental tests have been developed to compare chemical formulations in various physical and chemical conditions (pH, temperature, etc.). 
A common approach is to use foam columns and measure the temporal evolution of the foam height [7, 8]. For instance, some commercial instruments rely on the Bikerman test, measuring the equilibrium height in the presence of a given bubbling velocity [9], or on the Ross-Miles test, based on the time evolution of the height of a foam generated beforehand [10]. Most of the time, these tests are performed in a closed atmosphere. However, in ambient conditions, liquid evaporation also plays a role in the stability of soap films. This aspect has recently received particular attention for soap films, based on the consideration that evaporation induces thinning of the films [11, 12, 13]. On foams, the effect of evaporation has also been noticed by several authors [14, 15, 16, 17]. Li _et al._ suggested that the film thinning comes with an increase of the surfactant concentration where evaporation takes place [18]. Such a concentration gradient leads to a Marangoni flow that further increases the film thinning rate, thus promoting film bursting. The opposite effect, _i.e._ a stability enhanced by evaporation due to Marangoni effects, has been obtained by Chandran Suja _et al._ on oil-based systems [19]. As a result, laboratory tests performed with sealed containers may not reflect the behavior observed in practical conditions of use. Therefore, the effect of evaporation on foams must be carefully considered. More recently, we have evidenced that the temperature of a soap film is not always equal to the ambient temperature, due to evaporation [20]. The cooling effect appears to be significant, up to 8 \({}^{\circ}\)C, for a film of 12 mm in diameter and a relative humidity of about 20 %. To the best of our knowledge, the cooling effect induced by evaporation has been overlooked in foams, although it may have important consequences for understanding and predicting their stability. 
In this paper, we propose to investigate experimentally and theoretically the cooling induced by the evaporation of a foam column. We measure the temperature profile in foams of different aspect ratios. Then, we rationalize these measurements by predicting the steady-state temperature of the foam-air interface and the resulting temperature profile in the foam column. Finally, we discuss the predictions offered by the model and we compare the predicted temperature profiles to our measurements. ## 2 Experimental procedure and observations Figure 1 is a schematic of the experiment. The column of foam is made in a cylindrical tube of radius \(R=17\) mm and of 20 cm total height. Tubes are made of PMMA (Abaque-plast) with a wall thickness of 3 mm. A soap solution, prepared by mixing a commercial dish washing soap (Fairy) at 10 %wt with pure water (resistivity = 18.2 M\(\Omega\cdot\)cm), is poured in the tube such that the liquid interface reaches a distance \(L_{\rm f}>0\) from the rim. The surface tension of the liquid is \(\gamma=25.4\pm 0.1\) mN/m. The foam is produced by bubbling air with a pressure generator (OF1, Elveflow, France) through a 32G blunt needle in the soap solution [21]. This method produces a monodisperse foam with a bubble diameter of \(2b=2\) mm; the bubble size is determined by image analysis of the rising bubbles [22]. The choice of a monodisperse foam is motivated by the limited coarsening process [23] and by the need for a homogeneous, easily reproducible material. As illustrated in figure 1, the foam is generated up to the rim and is placed in an environment controlled in relative humidity at \({\cal R}_{\rm H}=50\) % [24]. The environmental temperature is measured for each experiment and is \(T_{\infty}=21\pm 1\)\({}^{\circ}\)C. The temperature is measured by a thermocouple probe (type K, NiAl-NiCr, diameter 0.2 mm, RS PRO) connected to a digital thermometer (RS PRO 1314). The probe is directly plunged into the foam. 
The position of the probe is controlled by a motorized translation stage (LTS150, Thorlabs) and the temperature is recorded once stabilized at each position. Temperatures are measured with a typical uncertainty of \(\pm 0.1\)\({}^{\circ}\)C and the positions with a typical uncertainty of 1 mm. The relative humidity is measured with a HIH-4021-003 sensor from Honeywell, USA. Typical measurements are presented in figure 2, where we calculate the difference of temperature between the thermocouple at a given position, \(T(z)\), and the ambient temperature \(T_{\infty}\) measured with the same instrument. We show two temperature profiles for foam thicknesses smaller and larger than the tube radius, _i.e._\(L_{\rm f}/R=0.6\) and \(L_{\rm f}/R=3.9\) respectively. Within the duration of the experiments (typically 10 minutes), we have not noticed significant foam aging or bubble bursting that would have changed the typical thickness of the foam layer or the bubble size. The temperatures of the foam/atmosphere interface are nearly equal in both situations, to within the resolution of the temperature measurements and the determination of the position of the interface (\(\pm\) 1 mm). This surface temperature is 2.5 \({}^{\circ}\)C lower than the environmental temperature for both foam heights. However, the characteristic lengthscale of the temperature variation differs with the foam height. For an aspect ratio \(L_{\rm f}/R\simeq 0.6\), the temperature nearly reaches the ambient temperature at the foam-liquid interface, which results in a sharp temperature increase with the penetration in the foam. In contrast, for the thicker foam with an aspect ratio \(L_{\rm f}/R\simeq 3.9\), the temperature variation is smoother and the ambient temperature is reached before the liquid-foam interface. 
Additionally, we checked by weighing samples in identical conditions that the soap solution evaporates at the same rate as pure water, indicating that the solutes do not significantly alter the chemical potential of the solution [20]. In the next section, we aim at predicting the temperature profile in the foam, which includes the determination of the temperature of the interface due to evaporation. ## 3 Model ### Evaporative flux

Figure 1: Foam column of height \(L_{\rm f}\) and radius \(R\) evaporating at the rim, \(z=0\), in the atmosphere characterized by the vapor concentration \(c_{\infty}\) at the temperature \(T_{\infty}\).

The atmosphere is characterized by its temperature \(T_{\infty}\) and its relative humidity \({\cal R}_{\rm H}\), which is defined as \(p_{\infty}/p_{\rm sat}(T_{\infty})\), where \(p_{\infty}\) is the partial vapor pressure at \(T_{\infty}\) and \(p_{\rm sat}(T_{\infty})\) the saturated vapor pressure of water. The temperature variation of the saturated vapor pressure is well described by the Antoine equation \(p_{\rm sat}(T)=p^{\circ}\,10^{A-B/(C+T)}\), where \(p^{\circ}=10^{5}\) Pa and the coefficients \(A=5.341\pm 0.003\), \(B=1807.5\pm 1.6\) K, and \(C=-33.9\pm 0.1\) K are fitted with data from [25]. In a steady state regime, assuming diffusive transfers between the foam and the atmosphere, the vapor concentration field is the solution of the Laplace equation. The total evaporative flux \(Q_{\rm ev}\) of a circular disk of radius \(R\) is proportional to the difference of vapor concentrations \(\Delta c^{\star}\) between the environment at a temperature \(T_{\infty}\) and above the interface at a temperature \(T_{\rm i}\). This evaporative flux writes \[Q_{\rm ev}=4{\cal D}R\Delta c^{\star}, \tag{1}\] where \({\cal D}\) is the diffusion coefficient of vapor in air [26, 27]. The difference of vapor concentrations \(\Delta c^{\star}=c_{\infty}-c_{\rm sat}(T_{\rm i})\) can be related to the difference of vapor pressures. 
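To fix the orders of magnitude, the Antoine fit and the evaporative flux of equation 1 can be evaluated numerically. This is a minimal sketch: the diffusion coefficient \({\cal D}\), air density, and molar masses are standard textbook values assumed here rather than quoted from the text, and the interface is taken at the ambient temperature (no cooling) for simplicity.

```python
import math

def p_sat(T):
    """Antoine fit from the text: pressure in Pa, T in kelvin."""
    A, B, C, p0 = 5.341, 1807.5, -33.9, 1e5
    return p0 * 10 ** (A - B / (C + T))

# assumed standard values (not quoted in the text)
D = 2.4e-5                                # m^2/s, diffusivity of water vapor in air
rho_air, M_air, M_l = 1.2, 0.029, 0.018   # kg/m^3, kg/mol, kg/mol
P = 1e5                                   # Pa, atmospheric pressure

T_inf, RH, R = 294.15, 0.5, 0.017         # 21 C, 50 % humidity, tube radius (m)

c_sat = rho_air * M_l / M_air * p_sat(T_inf) / P   # saturated vapor conc., kg/m^3
# magnitude of the evaporative flux (eq. 1), neglecting the interface cooling:
Q_ev = 4 * D * R * (1 - RH) * c_sat                # kg/s
```

With these numbers \(p_{\rm sat}(21\,^{\circ}{\rm C})\approx 2.5\) kPa, and the flux corresponds to roughly a gram of water per day for the 17 mm column.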
Denoting \(P\) the atmospheric pressure, \(\rho_{\rm air}\) the air density, and \(M_{\ell}\) and \(M_{\rm air}\) the molar weights of the liquid and air respectively, we have \(\Delta c^{\star}\simeq\frac{\rho_{\rm air}M_{\ell}}{M_{\rm air}}\frac{\Delta p ^{\star}}{P}\) with \(\Delta p^{\star}=p_{\infty}-p_{\rm sat}(T_{\rm i})\). Due to the enthalpy of vaporization, the temperature of the interface \(T_{\rm i}\) is lower than the ambient temperature. Thus, the interface receives a heat flux from the environment. Three contributions can be identified: conduction from the surrounding atmosphere, conduction from the foam, and radiation. In the following paragraphs, we model these heat transfers. ### Heat fluxes _Heat flux from the atmosphere._ Similarly to the vapor concentration field, the temperature field is the solution of a Laplace equation with \(T_{\rm i}\) at the interface and \(T_{\infty}\) far from it. Thus, the heat flux from the atmosphere is \[Q_{\rm h1}=4\lambda_{\rm air}R\Delta T^{\star}, \tag{2}\] where \(\lambda_{\rm air}=0.025\) W\(\cdot\)K\({}^{-1}\cdot\)m\({}^{-1}\) is the thermal conductivity of air and \(\Delta T^{\star}=T_{\infty}-T_{\rm i}\) is the temperature difference. Next, to determine the heat flux from the foam, we must comment on its thermal conductivity. _Thermal conductivity of foams._ The foam is assumed to be a continuous medium, which is valid if the lengthscale of the heterogeneities is much smaller than the size of the material, _i.e._ for \(b\ll\{R,L_{\rm f}\}\). The foam and the soap solution are characterized by their thermal conductivities, \(\lambda_{\rm f}\) and \(\lambda_{\ell}\), respectively. Leach proposed a model combining series and parallel contributions to the thermal conductivity [28]. 
At leading order in the liquid fraction \(\varphi\), the thermal conductivity of the foam can be estimated as \[\lambda_{\rm f}=\lambda_{\rm air}+\frac{2}{3}\lambda_{\ell}\varphi, \tag{3}\] with \(\lambda_{\ell}=0.61\) W\(\cdot\)K\({}^{-1}\cdot\)m\({}^{-1}\). The liquid fraction of the foam results from the balance between gravity and capillary suction, which implies that the liquid fraction decreases with the altitude. The liquid fraction profile \(\varphi(z)\) is obtained by solving the drainage equation in a stationary regime (see Eq. (3.89) in [29]), which gives \[\varphi(z)=\hat{\varphi}\left(\frac{L_{\rm f}+z}{\ell_{c}}+\left(\frac{\varphi _{c}}{\hat{\varphi}}\right)^{-1/2}\right)^{-2}, \tag{4}\] where \(\varphi_{c}=0.26\) is the fraction of gaps in a close-packing of hard spheres and \(\hat{\varphi}=\ell_{c}^{2}/(b^{2}\delta^{2})\), with the capillary length \(\ell_{c}=\sqrt{\gamma/\rho g}\) and a geometric constant \(\delta=1.73\). Equation 4 is plotted in the inset of figure 3 for a bubble radius \(b=1\) mm. A wet region located near the liquid-foam interface spans over a few bubble layers, where a sharp decrease of the liquid fraction occurs, followed by a smoother decrease with the altitude [29, 30]. From the liquid fraction profile, we plot in figure 3 the thermal conductivity as a function of the altitude \(z\) with equation 3. Again, we notice a sharp variation of \(\lambda_{\rm f}\) over the first few layers of bubbles, and the thermal conductivity converges quickly to the air conductivity.

Figure 2: Measurement of the temperature profile for two foam thicknesses at a relative humidity \({\cal R}_{\rm H}=48\) %. These thicknesses are materialized by vertical dashed lines at \(-z=L_{\rm f}=11\) and \(66\) mm. The column radius is \(17\) mm, such that the ratios \(L_{\rm f}/R\) are \(0.6\) and \(3.9\), respectively.

Figure 3: The main plot shows the thermal conductivity of the foam as a function of the altitude, computed from equation 3. As an indication, the thermal conductivity values of water and air are also represented. In the inset, the liquid fraction is plotted as a function of the vertical position from equation 4. Computations are performed for a bubble radius \(b=1\) mm and \(\ell_{c}=1.6\) mm. As an indication, at \(L_{\rm f}+z=25\) mm, \(\lambda_{\rm f}/\lambda_{\rm air}=1.04\).

In the following, we consider that the thermal conductivity of the foam is independent of the vertical position \(z\) and we take \(\lambda_{\rm f}\simeq\lambda_{\rm air}\). This approximation allows us to perform analytical calculations that facilitate the interpretation of the predictions made by the model. After a comparison of the predictions with experimental measurements, we will comment further on this approximation in the discussion. _Heat flux from the foam column._ We propose to describe the temperature field in the center of the foam, \(T(z)\). The environment is at a temperature \(T_{\infty}\). Due to the small thickness of the tube wall compared to the tube radius and the larger thermal conductivity of the wall compared to the foam, the temperature variation in the tube wall is neglected. Thus, the temperature profile \(T(z)\) at the center of the column results from the heat transfers from the column periphery at \(T_{\infty}\), the interface at \(T_{\rm i}\), and the liquid, which is far from the liquid-foam interface at \(T(z\to-\infty)=T_{\infty}\). A heat flux balance on a slice between \(z\) and \(z+\mathrm{d}z\) yields the differential equation \[\frac{\mathrm{d}^{2}T}{\mathrm{d}z^{2}}+\frac{2}{R^{2}}\left(T_{\infty}-T(z) \right)=0. \tag{5}\] This equation is valid both in the foam, which lies from \(z=0\) to \(z=-L_{\rm f}\), and in the liquid from \(z=-L_{\rm f}\) to \(z\to-\infty\). 
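The drainage profile of equation 4 and the resulting conductivity of equation 3 are straightforward to evaluate. The sketch below uses the parameters quoted in the text (\(b=1\) mm, \(\gamma=25.4\) mN/m, \(\delta=1.73\), \(\varphi_{c}=0.26\)) and reproduces two stated values: \(\varphi=\varphi_{c}\) at the liquid-foam interface, and \(\lambda_{\rm f}/\lambda_{\rm air}\approx 1.04\) at \(L_{\rm f}+z=25\) mm.

```python
import math

gamma, rho, g = 25.4e-3, 1000.0, 9.81   # N/m, kg/m^3, m/s^2
b, delta, phi_c = 1e-3, 1.73, 0.26      # bubble radius (m), geometric constant
lam_air, lam_liq = 0.025, 0.61          # W/K/m

l_c = math.sqrt(gamma / (rho * g))      # capillary length, ~1.6 mm
phi_hat = l_c ** 2 / (b ** 2 * delta ** 2)

def phi(h):
    """Liquid fraction (eq. 4) vs height h = L_f + z above the liquid-foam
    interface; phi(0) = phi_c by construction."""
    return phi_hat * (h / l_c + (phi_c / phi_hat) ** -0.5) ** -2

def lam_foam(h):
    """Foam thermal conductivity (eq. 3)."""
    return lam_air + (2.0 / 3.0) * lam_liq * phi(h)
```

The quick convergence of `lam_foam` toward `lam_air` a few bubble diameters above the liquid is what justifies the uniform-conductivity approximation adopted in the text.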
The boundary conditions associated with the differential equation 5 are \[T(0)=T_{\rm i}, \tag{6a}\] \[T(z\to-\infty)=T_{\infty}, \tag{6b}\] \[T\left(-L_{\rm f}^{-}\right)=T\left(-L_{\rm f}^{+}\right), \tag{6c}\] \[\lambda_{\rm f}\left.\frac{\mathrm{d}T}{\mathrm{d}z}\right|_{-L_{\rm f}^{+}}=\lambda_{\ell}\left.\frac{\mathrm{d}T}{\mathrm{d}z}\right|_{-L_{\rm f}^{-}}, \tag{6d}\] where equations 6a and 6b are the temperature conditions at the interface and in the liquid far from the foam, respectively; equation 6c corresponds to the temperature continuity at the liquid-foam interface, and 6d to the continuity of the thermal flux there. The solution of equation 5, expressed in the foam and in the liquid respectively, reads \[T_{\rm f,\ell}(z)=T_{\infty}-\Delta T^{\star}\left[\alpha_{\rm f,\ell}e^{-\sqrt{2}z/R}-\beta_{\rm f,\ell}e^{\sqrt{2}z/R}\right], \tag{7}\] where the coefficients \(\alpha_{\rm f}\), \(\beta_{\rm f}\), \(\alpha_{\ell}\), \(\beta_{\ell}\) are determined with the boundary conditions 6a-d. Let us denote \(k^{\pm}=\exp(\pm\sqrt{2}L_{\rm f}/R)\) and \(\Lambda=\lambda_{\rm f}/\lambda_{\ell}\). Then, we have \[\alpha_{\ell}=0, \tag{8a}\] \[\beta_{\ell}=\beta_{\rm f}-\alpha_{\rm f}\frac{k^{+}}{k^{-}}, \tag{8b}\] \[\alpha_{\rm f}=1+\beta_{\rm f}, \tag{8c}\] \[\beta_{\rm f}=\frac{k^{+}(1+\Lambda)}{k^{-}(1-\Lambda)-k^{+}(1+\Lambda)}. \tag{8d}\] The heat flux from the bulk of the foam to the interface is \(Q_{\rm h2}=-\pi R^{2}\lambda_{\rm f}\left.\overrightarrow{z}\cdot\nabla T \right|_{z=0}\). Combined with equation 7, we obtain \[Q_{\rm h2}=-\pi\sqrt{2}R\lambda_{\rm f}\left(1+2\beta_{\rm f}\right)\Delta T^ {\star}. 
\tag{9}\] It is convenient to define the total heat flux by conduction, \(Q_{\rm h}=Q_{\rm h1}+Q_{\rm h2}\), which can be written in the form \[Q_{\rm h}=\lambda_{\rm eff}R\Delta T^{\star}, \tag{10}\] with an effective thermal conductivity \(\lambda_{\rm eff}=4\lambda_{\rm air}-\pi\sqrt{2}\left(1+2\beta_{\rm f}\right) \lambda_{\rm f}\). This effective thermal conductivity depends on the foam aspect ratio \(L_{\rm f}/R\) through the coefficient \(\beta_{\rm f}\), as shown in figure 4. In the limit of large aspect ratios, we have \(\beta_{\rm f}\to-1\). Thus, the effective thermal conductivity becomes independent of the aspect ratio and takes the value \(\lambda_{\rm eff}^{\rm lim}=4\lambda_{\rm air}+\pi\sqrt{2}\lambda_{\rm f}\). Both terms in the expression of \(\lambda_{\rm eff}\) are of the same order of magnitude, which reflects the significance of both \(Q_{\rm h1}\) and \(Q_{\rm h2}\) in the heat exchange with the interface.

Figure 4: The effective thermal conductivity \(\lambda_{\rm eff}\) as defined in equation 10 is plotted as a function of the foam aspect ratio \(L_{\rm f}/R\) in orange. The dashed black line is the limit for large aspect ratios, \(\lambda_{\rm eff}^{\rm lim}=4\lambda_{\rm air}+\pi\sqrt{2}\lambda_{\rm f}\).

_Radiative flux._ Radiation is known to be significant for estimating the cooling effect when the characteristic size of the evaporating surface is larger than typically a few millimeters [31, 20]. We describe the radiative flux bringing heat to the surface by the Stefan-Boltzmann equation \[Q_{\rm rad}=\pi R^{2}\epsilon\sigma(T_{\infty}^{4}-T_{\rm i}^{4}), \tag{11}\] where \(\epsilon\) is the emissivity and \(\sigma\) the Stefan-Boltzmann constant. Below, we comment on the significance of this radiative flux with respect to the conductive heat flux \(Q_{\rm h}\). Now that we have determined the temperature profile in the column and the heat transfers, the temperature of the interface \(T_{\rm i}\) remains to be calculated to close the problem. 
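The closed-form coefficients of the temperature profile and the effective conductivity of equation 10 can be checked numerically. The sketch below assumes \(\lambda_{\rm air}=0.025\) and \(\lambda_{\ell}=0.61\) W K\({}^{-1}\) m\({}^{-1}\) with \(\lambda_{\rm f}\simeq\lambda_{\rm air}\), as in the text; it rebuilds the coefficients from the boundary conditions (with \(\Lambda=\lambda_{\rm f}/\lambda_{\ell}\)), exposes the residuals of the continuity conditions at the liquid-foam interface, and recovers the large-aspect-ratio limit \(\lambda_{\rm eff}^{\rm lim}=4\lambda_{\rm air}+\pi\sqrt{2}\lambda_{\rm f}\).

```python
import numpy as np

lam_air, lam_liq = 0.025, 0.61        # W/K/m
lam_f = lam_air                        # approximation used in the text
Lam = lam_f / lam_liq                  # conductivity ratio

def coefficients(a):
    """Coefficients of T(z) = T_inf - dT*(alpha e^{-sqrt2 z/R} - beta e^{sqrt2 z/R})
    for a foam aspect ratio a = Lf/R, rebuilt from the boundary conditions."""
    kp = np.exp(np.sqrt(2.0) * a)      # k+ = exp(+sqrt2 Lf/R)
    km = 1.0 / kp                      # k- = exp(-sqrt2 Lf/R)
    beta_f = kp * (1 + Lam) / (km * (1 - Lam) - kp * (1 + Lam))
    alpha_f = 1.0 + beta_f             # from T(0) = T_i
    beta_l = beta_f - alpha_f * kp / km  # from continuity at z = -Lf
    return alpha_f, beta_f, beta_l, kp, km

def lam_eff(a):
    """Effective conductivity of eq. 10."""
    beta_f = coefficients(a)[1]
    return 4 * lam_air - np.pi * np.sqrt(2.0) * (1 + 2 * beta_f) * lam_f

lam_lim = 4 * lam_air + np.pi * np.sqrt(2.0) * lam_f  # large-aspect-ratio limit

# experiment-like case: Lf/R = 0.6
alpha_f, beta_f, beta_l, kp, km = coefficients(0.6)
# residual of the temperature continuity at z = -Lf (foam side vs liquid side)
cont_gap = (alpha_f * kp - beta_f * km) - (-beta_l * km)
# residual of the flux continuity: lam_f dT/dz (foam) vs lam_liq dT/dz (liquid)
flux_gap = lam_f * (alpha_f * kp + beta_f * km) - lam_liq * beta_l * km
```

Both residuals vanish to machine precision, and `lam_eff` decreases monotonically toward `lam_lim` as the aspect ratio grows, as in figure 4.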
In the next paragraph, we derive this temperature from the balance between the energy absorbed by the evaporation and the heat transfers. ### Energy balance In a steady state regime, the energy balance writes \[h_{\rm ev}Q_{\rm ev}=-Q_{\rm h}\left(1+\frac{Q_{\rm rad}}{Q_{\rm h}}\right), \tag{12}\] where \(h_{\rm ev}\) is the enthalpy of vaporization. For small temperature differences, _i.e._\(|T_{\rm i}-T_{\infty}|/T_{\infty}\ll 1\), the ratio of radiative and conductive fluxes can be simplified as \[\frac{Q_{\rm rad}}{Q_{\rm h}}=\frac{\pi R\epsilon\sigma(T_{\infty}^{4}-T_{\rm i}^{4})}{\lambda_{\rm eff}(T_{\infty}-T_{\rm i})}\simeq\frac{4\pi R\epsilon\sigma T_{\infty}^{3}}{\lambda_{\rm eff}}. \tag{13}\] Considering large foam aspect ratios, the radiative flux appears to be as significant as the conductive flux for a critical radius \(R_{\rm c}=\lambda_{\rm eff}^{\rm lim}/(4\pi\epsilon\sigma T_{\infty}^{3})\). At 21 \({}^{\circ}\)C, we find \(R_{\rm c}\simeq 7\) mm, which indicates that \(Q_{\rm rad}\) cannot be neglected in our experiments. From equation 12, with equations 1, 10, and 13, we relate the difference of vapor pressures \(\Delta p^{\star}\) to the temperature difference \(\Delta T^{\star}\): \[\Delta p^{\star}=-P{\cal A}\Delta T^{\star}, \tag{14}\] where \({\cal A}\) is the so-called psychrometer coefficient, which for this system is \[{\cal A}=\frac{M_{\rm air}}{\rho_{\rm air}M_{\ell}}\frac{\lambda_{\rm eff}}{4h_{\rm ev}{\cal D}}\left(1+\frac{4\pi R\epsilon\sigma T_{\infty}^{3}}{\lambda_{\rm eff}}\right). \tag{15}\] We can remark that \(\Delta p^{\star}\) depends on the temperature of the interface \(T_{\rm i}\), which is necessary to take into account the variation of the evaporative flux with the temperature of the interface. The temperature dependence of \(\Delta p^{\star}\) being non-linear due to the Antoine equation, the interfacial temperature \(T_{\rm i}\) cannot be determined analytically from equation 14. 
As a consequence, for a given column radius \(R\), ambient temperature \(T_{\infty}\), and relative humidity \({\cal R}_{\rm H}=(p_{\rm sat}(T_{\rm i})+\Delta p^{\star})/p_{\rm sat}(T_{\infty})\), we solve numerically for the temperature \(T_{\rm i}\) from equation 14 combined with the Antoine equation, using a Newton procedure [32]. The temperature profile is then computed with equations 7 and 8. From equation 15, it is worth noting that the psychrometer coefficient depends on lengthscales through (a) \(\lambda_{\rm eff}\), which itself depends on \(L_{\rm f}/R\) for small aspect ratios, and (b) directly on the tube radius \(R\) through the ratio \(Q_{\rm rad}/Q_{\rm h}\) (Eq. 13). Thus, the psychrometer coefficient is a function of \((R,L_{\rm f}/R)\). Therefore, neither the temperature of the interface nor the temperature profiles can be rescaled by the radius \(R\) or by the aspect ratio \(L_{\rm f}/R\) alone. In the next section, we discuss the results of this model, and we compare the predictions to experiments. ## 4 Discussion Before comparing the predictions to the experiments, we start by analyzing the influence of the model parameters on the temperature of the interface. In figure 5(a), we plot the cooling effect as a function of the foam column radius for different values of the relative humidity at a constant foam aspect ratio. Evaporation, and thus the cooling effect, being driven by the relative humidity, a dry atmosphere leads to a lower temperature of the interface. This statement is a direct consequence of the psychrometric equation (Eq. 14). In addition, increasing the foam column radius reduces the cooling effect. Indeed, the ratio \(Q_{\rm rad}/Q_{\rm h}\) increases linearly with the radius (Eq. 13), such that wider columns receive a larger contribution of the radiative transfer, which hinders the cooling effect, as shown in figure 5(a). The foam aspect ratio also has an effect on the temperature of the interface. 
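The numerical closure described above can be sketched as follows. This is an illustrative implementation, not the authors' code: we use bisection instead of the Newton procedure of [32] for robustness, take \(\lambda_{\rm eff}\) at its large-aspect-ratio limit, and assume standard values (not quoted in the text) for the diffusion coefficient, enthalpy of vaporization, emissivity, air density, and molar masses.

```python
import math

def p_sat(T):
    """Antoine fit from the text (T in kelvin, p in Pa)."""
    return 1e5 * 10 ** (5.341 - 1807.5 / (T - 33.9))

# assumed standard constants (not quoted in the text)
D, h_ev, eps, sigma = 2.4e-5, 2.45e6, 0.95, 5.67e-8
rho_air, M_air, M_l, P = 1.2, 0.029, 0.018, 1e5
lam_air = 0.025
lam_eff = 4 * lam_air + math.pi * math.sqrt(2) * lam_air  # large-aspect limit

def psychro(R, T_inf):
    """Psychrometer coefficient A of eq. 15."""
    A0 = (M_air / (rho_air * M_l)) * lam_eff / (4 * h_ev * D)
    return A0 * (1 + 4 * math.pi * R * eps * sigma * T_inf ** 3 / lam_eff)

def interface_temperature(R=0.017, T_inf=294.15, RH=0.5):
    """Solve eq. 14 (dp* = -P A dT*) for T_i by bisection;
    f is decreasing in T_i and changes sign on the bracket."""
    A = psychro(R, T_inf)
    f = lambda Ti: RH * p_sat(T_inf) - p_sat(Ti) + P * A * (T_inf - Ti)
    lo, hi = T_inf - 15.0, T_inf          # f(lo) > 0 > f(hi)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T_i = interface_temperature()
dT = 294.15 - T_i    # predicted cooling, in kelvin
```

For \(R=17\) mm, \(T_{\infty}=21\) \({}^{\circ}\)C and \({\cal R}_{\rm H}=50\) %, this sketch predicts a cooling of about 3 K, comparable to the measured \(\approx 2.5\) \({}^{\circ}\)C at the interface.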
In figure 5(b), we plot the temperature difference as a function of the radius for aspect ratios smaller and larger than unity. For small aspect ratios, typically \(L_{\rm f}/R<1\), the temperature of the interface decreases with the foam thickness at a given radius. This observation is directly linked to the increased effective thermal conductivity \(\lambda_{\rm eff}\) observed in figure 4. Schematically, a small foam thickness reduces the insulation of the foam-vapor interface from the underlying liquid, which has a significantly larger thermal conductivity than air. Thus, increasing the aspect ratio beyond unity has little effect on the interface temperature, as the temperature profile becomes dominated by the heat flux from the wall of the container. Now that the effects of the relative humidity and the size of the tube are clarified, we can focus on the temperature profile in the foam for different aspect ratios. In figure 6, we show the experimental measurements presented in figure 2 with additional foam heights. We also plot the temperature profiles given by equation 7 without any fitting parameter. We observe that the model is in excellent agreement with the measurements, within the experimental uncertainties. The temperature profiles presented in figure 6 clearly illustrate the insulating properties of foam, especially for small aspect ratios, where a clear break in the profile is observed at the liquid-foam interface. In this case, the lengthscale associated with the temperature gradient is the foam thickness. For larger aspect ratios, the profiles tend to collapse on a single curve. This collapse indicates that the temperature results mainly from the radial heat flux rather than from the vertical one. Thus, for large foam aspect ratios, the characteristic lengthscale of the temperature gradient is the tube radius. Finally, we come back to the approximation we made on the thermal conductivity of the liquid foam. 
The assumption of a uniform thermal conductivity to describe the temperature profile in the foam columns is well validated by the experiments, as shown in figure 6. However, we expect this approximation to be less satisfactory when the temperature gradient, and thus the heat flux, at the liquid-foam interface becomes significant, which is the case for thin foam layers. Temperatures of thinner foam layers must deviate from the present model, but this is difficult to probe with our experimental protocol, as the foam would then be composed of only a few bubble layers, conflicting with our continuum approach. ## 5 Conclusions In this work, we have investigated experimentally and theoretically the temperature profile that is established in a foam column evaporating from the top. Experimentally, we observed that a temperature profile is perfectly measurable, with temperature differences between the interface and the ambient temperature of a few degrees. Considering the foam aspect ratio as the ratio between the foam thickness and the tube radius, we found that for thin foams, the lengthscale of the temperature gradient is set by the foam thickness. Conversely, for large aspect ratios, the lengthscale is the tube radius. We successfully modeled the temperature profile in the column with an analytical expression by considering, in a 1D approximation, a heat flux balance between the radial and the axial directions, assuming a constant thermal conductivity in the foam layer. The temperature of the interface results from an energy balance between the energy associated with the enthalpy of vaporization and the thermal flux from the environment. The latter originates from three contributions: the heat conduction by the foam and by the surrounding air, as well as the radiative flux. The good agreement of our theoretical predictions with our experiments validates the aforementioned approximations. 
Further studies will be necessary to evaluate how the temperature gradient, combined with the evaporation, plays a role in the foam stability. The foam drying due to evaporation promotes bubble bursting [33], while the role of thermal gradients is more subtle. For instance, evaporation can increase the concentration of non-volatile compounds, which also increases the bubble stability [34]. Moreover, a Marangoni flow in surface bubbles placed in a cool atmosphere has been found to enhance the stability by thickening the soap film [35]; whether the same mechanism operates in foams remains an open question.

Figure 5: Theoretical prediction of the cooling effect on the interface as a function of the foam column radius. (a) Effect of the relative humidity for \(L_{\mathrm{f}}/R=3\). (b) Influence of the foam height with respect to the foam radius. The ambient temperature is set at \(T_{\infty}=20\)\({}^{\circ}\)C and \(\mathcal{R}_{\mathrm{H}}=50\)\(\%\).

Figure 6: Comparison between the measurement of the temperature profile for various foam thicknesses with the prediction given by equation 7. The experiments are performed in a column of radius \(R=17\) mm at a relative humidity \(\mathcal{R}_{\mathrm{H}}=0.48\). The vertical dashed lines correspond to the positions of the liquid/foam interface \(L_{f}\).

## Acknowledgments The authors thank A. Commereuc and M. Pasquet for stimulating discussions. The authors acknowledge funding support from the French Agence Nationale de la Recherche in the framework of the project AsperFoam (19-CE30-0002-01).
2306.00092
Accretion onto disk galaxies via hot and rotating CGM inflows
Observed accretion rates onto the Milky-Way and other local spirals fall short of that required to sustain star formation for cosmological timescales. A potential avenue for this unseen accretion is an inflow in the volume-filling hot phase ($\sim10^6$ K) of the circumgalactic medium (CGM), as suggested by some cosmological simulations. Using hydrodynamic simulations and a new analytic solution valid in the slow-rotation limit, we show that a hot inflow spins up as it approaches the galaxy, while remaining hot, subsonic and quasi-spherical. At the radius of angular momentum support ($\approx15$ kpc for the Milky-Way) the hot flow flattens into a disk geometry and then cools from $\sim10^6$ K to $\sim10^4$ K at the disk-halo interface. Cooling affects all hot gas, rather than just a subset of individual gas clouds, implying that accretion via hot inflows does not rely on local thermal instability in contrast with 'precipitation' models for galaxy accretion. Prior to cooling and accretion the inflow completes $\sim t_{\rm cool}/t_{\rm ff}$ radians of rotation, where $t_{\rm cool}/t_{\rm ff}$ is the cooling time to free-fall time ratio in hot gas immediately outside the galaxy. The ratio $t_{\rm cool}/t_{\rm ff}$ may thus govern the development of turbulence and enhancement of magnetic fields in gas accreting onto low-redshift spirals. We argue that accretion via hot inflows can explain the observed truncation of nearby thin stellar disks at $\approx4$ disk radii. We also show that if rotating hot inflows are common in Milky-Way size disk galaxies, as predicted, then signatures should be observable with X-ray telescopes and FRB surveys.
Jonathan Stern, Drummond Fielding, Zachary Hafen, Kung-Yi Su, Nadav Naor, Claude-André Faucher-Giguère, Eliot Quataert, James Bullock
2023-05-31T18:10:20Z
http://arxiv.org/abs/2306.00092v2
# Accretion onto disk galaxies via hot and rotating CGM inflows ###### Abstract Observed accretion rates onto the Milky-Way and other local spirals fall short of that required to sustain star formation for cosmological timescales. A potential avenue for this unseen accretion is an inflow in the volume-filling hot phase (\(\sim 10^{6}\) K) of the circumgalactic medium (CGM), as suggested by some cosmological simulations. We derive an approximate axisymmetric analytic solution of such hot CGM accretion flows, and validate it with hydrodynamic simulations. We show that a hot inflow spins up as it approaches the galaxy, while remaining hot, subsonic and quasi-spherical. At the radius of angular momentum support (\(\sim 15\) kpc for the Milky-Way) the hot flow flattens into a disk geometry and then cools from \(\sim 10^{6}\) K to \(\sim 10^{4}\) K at the disk-halo interface. Cooling affects all hot gas, rather than just a subset of individual gas clouds, implying that accretion via hot inflows does not rely on local thermal instability in contrast with 'precipitation' models for galaxy accretion. Prior to cooling and accretion the inflow completes \(\approx t_{\rm cool}/t_{\rm ff}\) radians of rotation, where \(t_{\rm cool}/t_{\rm ff}\) is the cooling time to free-fall time ratio in hot gas immediately outside the galaxy. The ratio \(t_{\rm cool}/t_{\rm ff}\) may thus govern the development of turbulence and enhancement of magnetic fields in gas accreting onto low-redshift spirals. We argue that accretion via hot inflows can explain the observed truncation of nearby thin stellar disks at \(\approx 4\) disk radii. We also show that if rotating hot inflows are common in Milky-Way size disk galaxies, as predicted, then signatures should be observable with X-ray telescopes, kinetic SZ measurements, and FRB surveys. 
## 1 Introduction Observations of neutral gas surrounding the Milky Way and nearby spirals suggest accretion rates of \(0.05-0.2\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), falling short of the \(1-2\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) required to sustain observed star formation rates (SFRs) for cosmological timescales (Sancisi et al., 2008; Putman et al., 2012; Kamphuis et al., 2022). This 'missing accretion' is often attributed to predominantly ionized gas clumps with temperature \(\sim 10^{4}\) K (e.g., Voit et al., 2017), observable mainly in UV absorption. It is however unclear if this phase can provide the necessary fuel for star formation, due to both uncertainties in converting UV absorption features to net accretion rates (e.g. Fox et al., 2019), and since hydrodynamic instabilities may disrupt and evaporate cool gas clumps before they reach the galaxy (Heitsch and Putman, 2009; Armillotta et al., 2017; Tan et al., 2023). An alternative, less explored possibility is that accretion proceeds via a subsonic inflow in the volume-filling hot phase (\(\sim 10^{6}\) K) of the circumgalactic medium (CGM), similar to classic 'cooling flow' solutions discussed in the context of the intracluster medium (Mathews and Bregman, 1978; Fabian et al., 1984). Such hot CGM inflows are evident in modern cosmological simulations such as FIRE (Stern et al., 2021; Hafen et al., 2022) and TNG (ZuHone et al., in prep.; see also figure 9 in Nelson et al., 2019). Since the hot CGM is expected to have a net rotation (e.g., Roskar et al., 2010; Stevens et al., 2017; Oppenheimer, 2018; DeFelippis et al., 2020; Truong et al., 2021; Huscher et al., 2021; Nica et al., 2021), an inflow will cause it to spin up. Stern et al. 
(2020) used an idealized one-dimensional model to show that in Milky-Way mass halos, such a rotating hot inflow will remain hot down to the radius where the rotation velocity approaches the circular velocity \(v_{\rm c}=\sqrt{GM(<r)/r}\), at which point the gas cools to \(\sim 10^{4}\) K and joins the ISM disk. Hafen et al. (2022) demonstrated that this picture applies, and is the dominant accretion mode onto \(z\sim 0\) Milky-Way mass galaxies in the FIRE-2 cosmological zoom simulations (Hopkins et al., 2018). They further showed that the flow forms a coherently spinning disk prior to accretion onto the galaxy, and that this coherence may be a necessary condition for the formation of thin disk galaxies, consistent with conclusions from related FIRE-2 analyses (Stern et al., 2021; Stern et al., 2021; Yu et al., 2021, 2022; Gurvich et al., 2023). It thus appears that a deep understanding of the physics of hot and rotating inflows could be crucial for understanding the evolution of local star forming disks. In the present paper, we complement the cosmological simulation-based analysis of hot and rotating CGM inflows in Hafen et al. (2022), by deriving an idealized, two-dimensional axisymmetric solution for inflowing and rotating hot CGM. Deriving an idealized solution allows identifying its dependence on system parameters and boundary conditions, and provides a basis for assessing the effects of additional physics. Our derivation is built on previous 1D hot inflow solutions which accounted for rotation in a highly approximate manner (Cowie et al., 1980; Birnboim & Dekel, 2003; Narayan & Fabian, 2011; Stern et al., 2020). These studies assumed the centrifugal force is directed outward in the spherical radius direction, so the solution remained spherically symmetric. 
Here we assume the centrifugal force is directed outward in the cylindrical radius direction, and derive a 2D axisymmetric solution which captures the transition from a quasi-spherical flow geometry at large scales where angular momentum support is weak to a disk geometry at small scales where angular momentum support dominates. The idealized nature of our approach implies that insights may be applicable also to other astrophysical disks fed by spherical inflows, such as AGN disks in galaxy centers (e.g., Quataert & Narayan, 2000) or protoplanetary disks in the center of star-forming clouds (e.g., Fielding et al., 2015). The inflowing hot CGM solution derived herein qualitatively differs from hot CGM models which are radially-static ('thermal balance' models, e.g., McCourt et al., 2012; Sharma et al., 2012; Voit et al., 2017; Faerman et al., 2017, 2020; Pezzulli et al., 2017; Sormani et al., 2018), and from hot outflow models (Thompson et al., 2016; Schneider et al., 2020). Thermal balance models explicitly assume that radiative cooling is equal to feedback heating, thus inhibiting the hot inflow, while outflow models require that feedback heating dominates. The present solution focuses on the limit where feedback heating is subdominant. We note that observational evidence for thermal balance is strong in the ICM, since the star formation rate at the cluster center is small relative to the inflow rate \(\dot{M}\) implied by the X-ray emission \(L_{X}\) (the well-known 'cooling flow problem', where \(\dot{M}\approx L_{X}/v_{\rm c}^{2}\gg\) SFR, see McDonald et al. 2018 for a recent revisit). There is, however, no similar cooling flow problem in disc galaxies. 
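The scaling \(\dot{M}\approx L_{X}/v_{\rm c}^{2}\) invoked above is easy to evaluate; the short sketch below (ours, not the paper's code) uses an assumed illustrative \(L_{X}=3\times 10^{40}\) erg s\({}^{-1}\) ("a few \(\times 10^{40}\)") and \(v_{\rm c}=200\) km s\({}^{-1}\):

```python
# Order-of-magnitude sketch of Mdot ~ L_X / v_c^2 for a Milky-Way-like halo.
# L_X = 3e40 erg/s is an assumed illustrative value, not a measurement.
L_X = 3.0e40        # erg/s
v_c = 200.0e5       # circular velocity in cm/s (200 km/s)
MSUN_G = 1.989e33   # g per solar mass
YR_S = 3.156e7      # seconds per year

mdot_cgs = L_X / v_c ** 2                 # g/s
mdot_msun_yr = mdot_cgs * YR_S / MSUN_G   # ~1 Msun/yr, comparable to the SFR
print(f"Mdot ~ {mdot_msun_yr:.1f} Msun/yr")
```

The result, of order \(1\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), illustrates why the X-ray constraints on disk galaxies imply \(\dot{M}\sim\) SFR rather than the \(\dot{M}\gg\) SFR of the ICM cooling-flow problem.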
Upper limits on \(L_{X}\) from the hot CGM of Milky-Way mass galaxies are a few \(\times 10^{40}\,{\rm erg\ s^{-1}}\)(Li & Wang, 2013; Li et al., 2014; Anderson et al., 2015; Comparat et al., 2022), and recent results based on eROSITA data suggest the actual emission is comparable to this value (Chadayamburi et al., 2022). For \(v_{\rm c}\approx 200\,{\rm km\ s^{-1}}\) this \(L_{X}\) implies \(\dot{M}\approx 1\,{\rm M_{\odot}\,yr^{-1}}\,\sim\) SFR, in contrast with \(\dot{M}\gg\) SFR deduced for the ICM. More massive spirals in which the hot CGM is detected in individual objects with \(L_{X}\gtrsim 10^{41}\) erg s\({}^{-1}\) have SFR \(\approx 10\,{\rm M_{\odot}\,yr^{-1}}\) and hence also satisfy \(\dot{M}\sim\) SFR (Anderson et al., 2016; Bogdan et al., 2017; Das et al., 2019). The upper limits and detections of X-ray emission around disk galaxies thus allow for the possibility that the hot CGM is inflowing and providing the fuel for star formation. The paper is organized as follows. In section 2 we derive an approximate analytic solution of hot and rotating CGM, while in section 3 we derive a numerical solution. In section 4 we consider the effect of additional physical mechanisms which were not included in the basic analysis, and in section 5 we derive several observables of hot rotating CGM. Implications of our results are discussed in section 6 and section 7 provides a summary. ## 2 The structure of hot and rotating CGM - analytic considerations The flow equations for radiating, ideal gas with adiabatic index \(\gamma=5/3\) subject to an external gravitational potential \(\Phi\) are \[\nabla\cdot(\rho\vec{v}) = -\frac{\partial\rho}{\partial t} \tag{1}\] \[\left(\frac{\partial}{\partial t}+\vec{v}\cdot\nabla\right)\vec{v} = -\frac{1}{\rho}\nabla P-\nabla\Phi\] (2) \[\left(\frac{\partial}{\partial t}+\vec{v}\cdot\nabla\right)\ln K = -\frac{1}{t_{\rm cool}} \tag{3}\] where \(\rho,P\) and \(\vec{v}\) are respectively the gas density, pressure, and velocity. 
We use \(K\equiv P\rho^{-5/3}\) for the 'entropy' (up to an exponent and a constant) and \(t_{\rm cool}\) for the cooling time, defined as \[t_{\rm cool}=\frac{3}{2}\frac{P}{n_{\rm H}^{2}\Lambda}\;, \tag{4}\] where \(n_{\rm H}\) is the hydrogen density, \((3/2)P\) is the energy per unit volume, and \(\Lambda\) is the cooling function defined such that \(n_{\rm H}^{2}\Lambda\) is the energy lost to radiation per unit volume. Equations (1)-(3) neglect conduction, viscosity and magnetic fields, the potential effect of which will be assessed below. We also do not include a heating term in equation (3) since we search for a solution in the limit that heating is subdominant to cooling (see introduction). ### Hot CGM without angular momentum We start with a brief review of steady-state (\(\partial/\partial t=0\)) hot inflow solutions without angular momentum, which were studied extensively mainly in the context of the inner ICM (classic 'cooling flows', e.g., Mathews & Bregman, 1978) and adapted to galaxy-scale halos by Stern et al. (2019). When angular momentum is neglected spherical symmetry can be assumed, and hence eqns. (1)-(3) reduce to \[4\pi r^{2}\rho v_{r} = \dot{M} \tag{5}\] \[\frac{1}{2}\frac{dv_{r}^{2}}{dr} = -\frac{1}{\rho}\frac{d\left(P_{\rm th}+P_{\rm turb}\right)}{dr}-\frac{v_{\rm c}^{2}}{r} \tag{6}\] \[v_{r}\frac{d\ln K}{dr} = -\frac{1}{t_{\rm cool}} \tag{7}\] where \(r\) is the spherical radius, \(\dot{M}\) is the mass flow rate (constant with radius in steady-state, down to the radius where stars form), \(P_{\rm turb}=\rho\sigma_{\rm turb}^{2}\) is the turbulent pressure, and \(\sigma_{\rm turb}\) is the turbulent velocity. Multiplying eqn. (7) by \(\sqrt{2}r/v_{\rm c}\) we get \[\frac{\sqrt{2}rv_{r}}{v_{\rm c}}\frac{d\ln K}{dr}=-\frac{t_{\rm ff}}{t_{\rm cool}}\;, \tag{8}\] where the free-fall time is defined as \[t_{\rm ff}=\frac{\sqrt{2}r}{v_{\rm c}}\;. 
\tag{9}\] Assuming that cooling is slow relative to the free-fall time implies that either the flow is isentropic with \(d\ln K/dr\approx 0\) as in the Bondi (1952) solution, or that the inflow velocity is small, i.e., \[\frac{v_{r}}{v_{\rm c}}\sim\left(\frac{t_{\rm cool}}{t_{\rm ff}}\right)^{-1}\ll 1\;. \tag{10}\] The solutions discussed here correspond to the latter type of solutions where \(v_{r}\ll v_{\rm c}\). Hydrodynamic simulations show that initially static gas with \(t_{\rm cool}\gg t_{\rm ff}\) converges onto a cooling flow solution within a timescale \(t_{\rm cool}\), rather than onto an isentropic flow (e.g., Stern et al., 2019). To derive an analytic approximation, one can neglect in eqn. (6) the small inertial term \(v_{r}^{2}\) and the turbulent term which is also expected to be small \(P_{\rm turb}\sim(v_{r}/v_{\rm c})^{2}P_{\rm th}\) (see section 4.2 below). Further approximating the gravitational potential as isothermal with some constant \(v_{\rm c}\) then gives (see section 2 in Stern et al., 2019): \[c_{\rm s}^{2} = \frac{10}{9}v_{\rm c}^{2}\] \[T = 2.0\cdot 10^{6}\,v_{\rm c,200}^{2}\,{\rm K}\] \[n_{\rm H} = 0.8\cdot 10^{-3}\,r_{\rm 10}^{-1.5}v_{\rm c,200}\dot{M}_{1}^{0.5}\Lambda_{-22}^{-0.5}\,{\rm cm}^{-3}\] \[t_{\rm cool} = 370\,r_{\rm 10}^{1.5}v_{\rm c,200}\dot{M}_{1}^{-0.5}\Lambda_{-22}^{-0. 
5}\,{\rm Myr}\] \[-v_{r}=\frac{r}{t_{\rm cool}} = 27\,r_{\rm 10}^{-0.5}v_{\rm c,200}^{-1}\dot{M}_{1}^{0.5}\Lambda_{-22}^{0.5}\,{\rm km\ s}^{-1}\] \[-\mathcal{M}_{r}=\sqrt{\frac{9}{20}}\frac{t_{\rm ff}}{t_{\rm cool}} = 0.13\,r_{\rm 10}^{-0.5}v_{\rm c,200}^{-2}\dot{M}_{1}^{0.5}\Lambda_{-22}^{0.5} \tag{11}\] where \(c_{\rm s}\) is the sound speed, \(T\) is the temperature, \(\mathcal{M}_{r}\equiv v_{r}/c_{\rm s}\) is the radial Mach number of the flow, and \(r_{\rm 10}=r/10\,{\rm kpc}\), \(v_{\rm c,200}=v_{\rm c}/200\,{\rm km\ s}^{-1}\), \(\dot{M}_{1}=\dot{M}/1\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), and \(\Lambda_{-22}=\Lambda/10^{-22}\,{\rm erg\,cm}^{3}\,{\rm s}^{-1}\). Conversely, one could treat the CGM mass or CGM density as a free parameter, and then \(\dot{M}\) follows from the density relation in eqn. (11). The numerical values used in eqn. (11) are appropriate for the Milky Way CGM: \(\dot{M}\) is taken to be roughly half the star formation rate (SFR) of \(\approx 1.5-2\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) (Bland-Hawthorn & Gerhard, 2016), as expected in steady-state where the ISM mass is constant with time, and \(\approx 40\%\) of the stellar mass formed is ejected back into the ISM via winds and supernovae (e.g. Lilly et al., 2013). This \(\dot{M}\) is also consistent with X-ray absorption and emission constraints on the hot CGM of the Milky-Way (see introduction). The value of \(\Lambda\) is appropriate for \(T=2\cdot 10^{6}\,{\rm K}\) gas with metallicity \(Z_{\rm CGM}=Z_{\odot}/3\) (Miller & Bregman, 2015). Eqn. (11) reveals several properties of the non-rotating solution. The inflow velocity \(v_{r}\) is equal to \(r/t_{\rm cool}\), so the accretion time equals the cooling time, as expected. This also implies that the entropy drops linearly with decreasing radius (see eqn. 7). Additionally, the inflow has a temperature which is independent of radius and roughly equal to the halo virial temperature, despite radiative losses. 
This is a result of compressive heating during the inflow balancing radiative cooling. The solution in eqn. (11) also highlights that the parameter \(t_{\rm cool}/t_{\rm ff}\) sets the Mach number of the flow, and thus also the sonic radius of the flow where \(|\mathcal{M}_{r}|=1\): \[r_{\rm sonic}\approx r(t_{\rm ff}=1.5t_{\rm cool})=0.17\,v_{\rm c,200}^{-4}\dot{M}_{1}\Lambda_{-22}\,{\rm kpc}\;, \tag{12}\] where we used \(\sqrt{20/9}=1.5\). Near and within the sonic radius the assumption of a quasi-hydrostatic flow is invalid, and the flow transitions into a cool (\(T\approx 10^{4}\,{\rm K}\)) free-falling flow. Equation (12) indicates that \(r_{\rm sonic}\) is well within the galaxy for Milky-Way parameters, though it can be on CGM scales in lower mass galaxies where \(v_{\rm c}\) is lower, or at higher redshift where \(\dot{M}\) is higher. In this paper we focus on systems with \(r_{\rm sonic}\) smaller than the galaxy scale, so the quasi-hydrostatic approximation is valid throughout the CGM. Another important scale of cooling flows is the cooling radius \(R_{\rm cool}\) where the cooling time equals the system age. For the above parameters \(t_{\rm cool}=10\,{\rm Gyr}\) occurs at \(r=110\,{\rm kpc}\). This scale is not part of the steady-state solution, since beyond it gas does not have time to cool and the assumption of steady-state is invalid. The cooling radius appears when accounting for the time-dependence of the problem at large CGM radii (Bertschinger, 1989). In this work we focus on inner halo radii (\(\lesssim 50\,{\rm kpc}\)) where the dynamical effects of angular momentum are most pronounced, so the hot gas cooling time is short relative to cosmological timescales and thus steady-state is more likely to be achieved. This inner CGM region is also less susceptible to cosmological effects not included in our analysis, such as non-spherical accretion and satellite galaxies (Fielding et al., 2020). 
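The internal consistency of the non-rotating solution can be checked with a few lines of arithmetic; the sketch below (our own, not the paper's code) evaluates eqn. (11) at the fiducial Milky-Way values \(r_{10}=v_{\rm c,200}=\dot{M}_{1}=\Lambda_{-22}=1\) and recovers the sonic radius of eqn. (12):

```python
import math

KPC_KM = 3.086e16   # km per kpc
MYR_S = 3.156e13    # seconds per Myr

v_c = 200.0         # km/s
r = 10.0            # kpc
t_cool = 370.0      # Myr, from eqn. (11)

# Inflow velocity: -v_r = r / t_cool  (eqn. 11), expected ~27 km/s
v_r = r * KPC_KM / (t_cool * MYR_S)
# Sound speed c_s^2 = (10/9) v_c^2 and free-fall time t_ff = sqrt(2) r / v_c
c_s = math.sqrt(10.0 / 9.0) * v_c
t_ff = math.sqrt(2.0) * r * KPC_KM / (v_c * MYR_S)   # Myr

mach = v_r / c_s    # radial Mach number, expected ~0.13
# Sonic radius: |M_r| = 0.13 (r / 10 kpc)^-0.5 = 1  =>  r_sonic = 10 * 0.13^2
r_sonic = 10.0 * 0.13 ** 2                            # kpc, expected ~0.17
print(f"-v_r = {v_r:.1f} km/s, |M_r| = {mach:.2f}, r_sonic = {r_sonic:.2f} kpc")
```

The printed values reproduce the coefficients quoted in eqns. (11)-(12) to within rounding.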
### Rotating hot CGM - the circularization radius Given a small net angular momentum in the hot gas, for example due to torques applied by neighboring halos, the rotation velocity will increase as the gas inflows. We can hence define a circularization radius \(R_{\rm circ}\) as an approximate radius where the rotational velocity equals the circular velocity and the flow becomes rotationally supported: \[R_{\rm circ}\equiv r\left(v_{\phi}=v_{\rm c}\right)\;. \tag{13}\] The value of \(R_{\rm circ}\) can be estimated based both on cosmological considerations and on observations. In a \(\Lambda\)CDM universe, a given dark matter halo is expected to have a spin parameter which on average equals: \[\lambda\equiv\frac{J}{\sqrt{2}M_{\rm vir}v_{\rm vir}R_{\rm vir}}\sim 0.035 \tag{14}\] (e.g., Bullock et al., 2001; Rodriguez-Puebla et al., 2016), where \(M_{\rm vir}\), \(v_{\rm vir}\), \(R_{\rm vir}\), and \(J\) correspond to the halo mass, virial velocity, virial radius, and angular momentum, respectively. Assuming the hot CGM has roughly the same spin as the dark matter halo as suggested by cosmological simulations (e.g., Stewart et al., 2013, 2017; DeFelippis et al., 2020), and assuming that near the disk \(v_{\rm c}=f_{v_{\rm c}}v_{\rm vir}\) with \(f_{v_{\rm c}}\gtrsim 1\) we get \[R_{\rm circ}\approx\sqrt{2}\lambda f_{v_{\rm c}}^{-1}R_{\rm vir}\sim 15f_{v_{\rm c}}^{-1}\left(\frac{R_{\rm vir}}{300\,{\rm kpc}}\right)\,{\rm kpc}\;. \tag{15}\] We show below (section 6.4) that this estimate of \(R_{\rm circ}\) is supported by observed truncations of nearby thin stellar disks at \(\approx 4\) disk radii. Comparison of eqn. (15) with eqn. (12) implies that \(R_{\rm circ}\gg r_{\rm sonic}\) in Milky-Way halos. Thus, a hot CGM inflow in a Milky-Way halo is expected to become rotation-supported well before it transitions into a supersonic flow. This conclusion is also apparent if we estimate the radial Mach number near \(R_{\rm circ}\). Using eqn. 
(11) we have \[\frac{t_{\rm cool}}{t_{\rm ff}}(R_{\rm circ})=6.3\,\left(\frac{R_{\rm circ}}{15\,{\rm kpc}}\right)^{0.5}\,v_{\rm c,200}^{2}\dot{M}_{1}^{-0.5}\Lambda_{-22}^{-0.5}\;, \tag{16}\] and hence \[\mathcal{M}_{r}(R_{\rm circ})\approx 0.7\left(t_{\rm cool}/t_{\rm ff}\right)^{-1}\approx 0.1\;. \tag{17}\] The difference between CGM with \(r_{\rm sonic}<R_{\rm circ}\) and CGM with \(r_{\rm sonic}>R_{\rm circ}\) was discussed by Stern et al. (2020), and is related to the classic distinction between 'hot mode' and 'cold mode' accretion in quasi-spherical systems (White & Rees, 1978; Birnboim & Dekel, 2003; Fielding et al., 2017). In this paper we focus on systems with \(r_{\rm sonic}<R_{\rm circ}\) so the hot accretion mode dominates. ### Rotating hot CGM - fluid equations We now search for a steady-state and axisymmetric solution to the flow equations which accounts for angular momentum, using a spherical coordinate system (\(r,\theta,\phi\)) where \(\theta\) is the angle relative to the rotation axis and \(\phi\) is the azimuthal angle. The momentum equations in the \(r\) and \(\theta\) directions are thus \[\frac{\partial P}{\partial r} = -\rho\frac{v_{\rm c}^{2}}{r}+\rho\Omega^{2}r\sin^{2}\theta \tag{18}\] \[\frac{\partial P}{\partial\theta} = \rho\Omega^{2}r^{2}\sin\theta\cos\theta \tag{19}\] where \(\Omega\) is the angular frequency, and \(v_{r}\), \(v_{\theta}\), and \(v_{\phi}=\Omega r\sin\theta\) are the velocity vector components. We neglect the inertial \(v_{r}^{2}\) term since its magnitude relative to the other terms is of order \(\mathcal{M}_{r}^{2}\approx(t_{\rm cool}/t_{\rm ff})^{-2}\). We similarly neglect the \(v_{\theta}^{2}\) and \(v_{r}v_{\theta}\) terms, since motion in the \(\theta\) direction is a result of the combination of radial and rotational motions, and hence \(v_{\theta}\) is of the same order as \(v_{r}\) or smaller. 
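The circularization-radius estimates of eqns. (15)-(17) above can be reproduced with a few lines; this is a sketch under the same fiducial assumptions (\(\lambda=0.035\), \(R_{\rm vir}=300\) kpc, \(f_{v_{\rm c}}=1\), and unit scaled parameters in eqn. 16):

```python
import math

lam, R_vir, f_vc = 0.035, 300.0, 1.0
# eqn. (15): R_circ ~ sqrt(2) lambda R_vir / f_vc, expected ~15 kpc
R_circ = math.sqrt(2.0) * lam * R_vir / f_vc

# eqn. (16) at r = R_circ, for v_c200 = Mdot_1 = Lambda_-22 = 1
ratio = 6.3 * (R_circ / 15.0) ** 0.5          # t_cool / t_ff
# eqn. (11): |M_r| = sqrt(9/20) (t_ff / t_cool)... i.e. ~0.67 / ratio ~ 0.1
mach = math.sqrt(9.0 / 20.0) / ratio
print(f"R_circ = {R_circ:.1f} kpc, t_cool/t_ff = {ratio:.1f}, |M_r| = {mach:.2f}")
```

The flow at \(R_{\rm circ}\) is thus still deeply subsonic, consistent with the hot-mode regime assumed in this paper.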
The momentum equation in the \(\phi\) direction is \[v_{r}\frac{\partial}{\partial r}(\Omega r^{2}\sin^{2}\theta)=-v_{\theta}\frac{\partial}{r\partial\theta}(\Omega r^{2}\sin^{2}\theta)\, \tag{20}\] which indicates that the specific angular momentum \(j=\Omega r^{2}\sin^{2}\theta\) is conserved along flowlines, as expected under our assumption of axisymmetry. The mass and entropy equations (eqns. 1 and 3) become \[\frac{1}{r^{2}}\frac{\partial}{\partial r}(\rho v_{r}r^{2})+\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(\rho v_{\theta}\sin\theta) = 0 \tag{21}\] \[v_{r}\frac{\partial\ln K}{\partial r}+v_{\theta}\frac{\partial\ln K}{r\partial\theta} = -\frac{1}{t_{\rm cool}}. \tag{22}\] At large radii where the centrifugal terms in eqns. (18)-(19) are small, the solution will approach the no-angular momentum solution discussed in the previous section. In this limit \(v_{\theta}\to 0\), so eqn. (20) implies that \(\Omega r^{2}\) is constant for a given \(\theta\). We thus write \[\Omega_{1}(r,\theta)=\frac{v_{\rm c}R_{\rm circ}F(\theta)}{r^{2}} \tag{23}\] where the subscript '1' denotes that this relation holds true at large radii where rotational support is small (see below). In eqn. (23) we chose \(v_{\rm c}R_{\rm circ}F(\theta)\) for the constant of integration, where \(F\) is some function that satisfies \(F(\pi/2)=1\). This definition of \(F\) implies that flowlines in the midplane (\(\theta=\pi/2\)) have \(\Omega_{1}=v_{\rm c}R_{\rm circ}/r^{2}\) and achieve full rotation support at a cylindrical radius equal to \(R_{\rm circ}\). Similarly, flowlines which originate from a general polar angle \(\theta_{0}\) at large radii achieve full rotation support at a cylindrical radius \(R_{\rm circ}F(\theta_{0})\sin^{2}\theta_{0}\). The form of \(F(\theta)\) will be determined by the outer boundary condition, which sets the dependence of angular momentum on polar angle at large radii. 
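The statement that a flowline from polar angle \(\theta_{0}\) becomes rotation-supported at cylindrical radius \(R_{\rm circ}F(\theta_{0})\sin^{2}\theta_{0}\) follows directly from eqn. (23); a quick numerical check (ours, with illustrative parameter values):

```python
import math

v_c, R_circ = 200.0, 15.0   # km/s, kpc (illustrative values)
F, theta0 = 1.0, 1.2        # F(theta0) = 1 (rigid-shell case); arbitrary angle

# Angular momentum conservation along the flowline: Omega = v_c R_circ F / r^2
# v_phi = Omega r sin(theta0) reaches v_c at spherical radius r = R_circ F sin(theta0)
r = R_circ * F * math.sin(theta0)
v_phi = (v_c * R_circ * F / r ** 2) * r * math.sin(theta0)
assert abs(v_phi - v_c) < 1e-9
# ... i.e. at cylindrical radius r sin(theta0) = R_circ F sin^2(theta0)
print(f"rotation support at cylindrical radius {r * math.sin(theta0):.2f} kpc "
      f"= R_circ F sin^2(theta0) = {R_circ * F * math.sin(theta0) ** 2:.2f} kpc")
```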
### Analytic solution in the slow rotation limit In this section we derive a solution to equations (18)-(22) which is accurate to lowest order in the effects of rotation. A similar approach was employed to study meridional flows in the Sun (Sweet, 1950; Tassoul, 2007). The dynamical effects of rotation on hot CGM inflows increase with decreasing \(r\), and by definition become dominant near \(R_{\rm circ}\). The deviations of hydrodynamic properties from the radial solution (eqn. 11) can thus be approximated by keeping only terms which depend on \(r/R_{\rm circ}\) to the lowest order. It is straightforward to show that there are no terms of order \((r/R_{\rm circ})^{-1}\), since the lowest order of \(\Omega\) is proportional to \((r/R_{\rm circ})^{-2}\) (eqn. 23) and rotation enters the other flow equations only via the term \(\Omega^{2}r^{2}\) (eqns. 18-19). We thus define a perturbation parameter \[\epsilon=\left(\frac{r}{R_{\rm circ}}\right)^{-2}\, \tag{24}\] and search for a solution of the form \[P_{1} = P_{0}(r)\left[1+\epsilon(r)f_{P}(\theta)\right]\] \[\rho_{1} = \rho_{0}(r)\left[1+\epsilon(r)f_{\rho}(\theta)\right]\] \[v_{r,1} = v_{r,0}(r)\left[1+\epsilon(r)f_{v_{r}}(\theta)\right]\] \[v_{\theta,1} = v_{r,0}(r)\epsilon(r)f_{v_{\theta}}(\theta)\] \[\Omega_{1} = \frac{v_{\rm c}}{R_{\rm circ}}\epsilon(r)F(\theta). \tag{25}\] Here, a subscript '0' denotes the solution without angular momentum (eqn. 11), a subscript '1' denotes the approximate solution which we wish to find, and \(f_{P},f_{\rho},f_{v_{r}},f_{v_{\theta}}\) are some functions of \(\theta\). The motivation for the form of \(v_{\theta,1}\) will become apparent below. The solution for \(\Omega_{1}\) is equivalent to eqn. (23). We emphasize that the assumption of mild rotation is on top of the assumption of quasi-hydrostatic conditions, which allowed neglecting the quadratic velocity terms. 
Together, these assumptions imply that we assume the following conditions on timescales in the system: \[t_{\rm sc}\approx t_{\rm ff},\ t_{\rm cool}\gg t_{\rm ff},\ t_{\rm rot}^{2}\gg t_{\rm ff}^{2}\, \tag{26}\] where \(t_{\rm sc}\) is the sound crossing time which is approximately equal to \(t_{\rm ff}\) since the flow is quasi-hydrostatic, and \(t_{\rm rot}=\Omega^{-1}\) is the rotation time. Note that observations suggest that these conditions are roughly satisfied in the hot inner CGM of the Milky-Way, with \((t_{\rm rot}/t_{\rm ff})^{2}\sim 4\) (Hodges-Kluck et al., 2016) and \(t_{\rm cool}/t_{\rm ff}\approx 6\) (eqn. 16). In order to reduce the complexity of the analytic derivation, in this section we find a solution assuming \[F(\theta)=1\, \tag{27}\] i.e. \(\Omega_{1}\) is independent of \(\theta\) so shells at large radius rotate as rigid bodies. Solutions for other \(F(\theta)\) are discussed below. Using the form of the flow variables in eqn. (25), the first-order terms in \(\epsilon\) in eqn. (19) are \[P_{0}\frac{\partial f_{P}}{\partial\theta}=\rho_{0}v_{\rm c}^{2}\sin\theta\cos\theta. \tag{28}\] Using \(P_{0}=(2/3)\rho_{0}v_{\rm c}^{2}\) (based on \(P=(3/5)\rho c_{\rm s}^{2}\) and eqn. 11) then gives \[f_{P}=\frac{3}{4}\sin^{2}\theta+C\, \tag{29}\] where \(C\) is a constant of integration to be determined below. Next, the first order terms in eqn. (18) give \[f_{P}P_{0}\frac{\partial\ln\epsilon}{\partial r}+f_{P}\frac{\partial P_{0}}{\partial r}=-f_{\rho}\rho_{0}\frac{v_{\rm c}^{2}}{r}+\rho_{0}\frac{v_{\rm c}^{2}}{r}\sin^{2}\theta. \tag{30}\] Since \(P_{0}\propto r^{-3/2}\) (eqn. 11) and \(\epsilon\propto r^{-2}\) (eqn. 24) we get \[f_{\rho}=\frac{11}{4}\sin^{2}\theta+\frac{7}{3}C\, \tag{31}\] where again we used \(P_{0}=(2/3)\rho_{0}v_{\rm c}^{2}\). 
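As a sanity check of this algebra (our own, not part of the original derivation), one can verify numerically that \(f_{P}\) of eqn. (29) satisfies eqn. (28) with \(P_{0}=(2/3)\rho_{0}v_{\rm c}^{2}\), i.e. \(\partial f_{P}/\partial\theta=(3/2)\sin\theta\cos\theta\), and that the density coefficient of eqn. (31) then follows from eqn. (30) via \(f_{\rho}=(7/3)f_{P}+\sin^{2}\theta\):

```python
import math

C = -5.0 / 8.0   # the integration constant fixed later in the derivation

def f_P(theta):      # eqn. (29)
    return 0.75 * math.sin(theta) ** 2 + C

def f_rho(theta):    # eqn. (31)
    return 2.75 * math.sin(theta) ** 2 + (7.0 / 3.0) * C

h = 1e-6
for theta in (0.4, 1.0, 2.0):
    # eqn. (28): df_P/dtheta = (3/2) sin(theta) cos(theta)
    dfP = (f_P(theta + h) - f_P(theta - h)) / (2 * h)
    assert abs(dfP - 1.5 * math.sin(theta) * math.cos(theta)) < 1e-8
    # eqn. (30) with P0 ~ r^-3/2 and eps ~ r^-2 reduces to f_rho = (7/3) f_P + sin^2
    assert abs(f_rho(theta) - ((7.0 / 3.0) * f_P(theta) + math.sin(theta) ** 2)) < 1e-12
print("eqns. (29) and (31) are consistent with eqns. (28) and (30)")
```

Both checks hold for any \(C\), since the constant drops out of the derivative and enters both sides of the \(f_{\rho}\) relation identically.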
We can also define \(T_{1}=T_{0}(1+\epsilon f_{T})\) and \(K_{1}=K_{0}(1+\epsilon f_{K})\) and hence \[f_{T} = f_{P}-f_{\rho}=-2\sin^{2}\theta-\frac{4}{3}C \tag{32}\] \[f_{K} = f_{P}-\frac{5}{3}f_{\rho}=-\frac{23}{6}\sin^{2}\theta-\frac{26}{9}C. \tag{33}\] Eqns. (29), (31) and (32) indicate that the pressure and density increase when traversing from the rotation axis to the midplane at a fixed \(r\), while the temperature decreases. The increase in pressure in the major axis is due to the higher density, which overcomes the lower effective gravity along the major axis which tends to decrease the pressure. In the entropy equation (22), the second term on the left hand side is of order \(\epsilon^{2}\) and can be neglected. The first order terms of this equation are hence \[v_{r,0}f_{K}\frac{\partial\ln\epsilon}{\partial r}+v_{r,0}f_{v_{r}}\frac{\partial\ln K_{0}}{\partial r}=-\frac{1}{t_{\rm cool,0}}\left[f_{\rho}-(1+l)f_{T}\right] \tag{34}\] where we use \(t_{\rm cool}\propto T/\rho\Lambda\), and approximate the temperature dependence of the cooling function as \(\Lambda\propto T^{-l}\). Using \(K_{0}\propto r\), \(t_{\rm cool,0}=-r/v_{r,0}\), and the above relations for \(f_{K}\), \(f_{\rho}\), \(f_{T}\) we get \[f_{v_{r}}=\left(-\frac{35}{12}+2l\right)\sin^{2}\theta+\left(-\frac{19}{9}+\frac{4}{3}l\right)C\approx-\frac{23}{12}\sin^{2}\theta-\frac{13}{9}C. \tag{35}\] where in the second equality we use \(l=0.5\), appropriate for gas with \(T\sim 10^{6}\,\mathrm{K}\) and a characteristic CGM metallicity of \(Z\approx 0.3Z_{\odot}\) (Miller and Bregman, 2015). Last, we use the continuity equation (21) to derive \(v_{\theta}\), which we cast in the form \(v_{\theta,1}=v_{r,0}\epsilon f_{v_{\theta}}\) (see eqn. 25). 
Keeping only first order terms we get \[\frac{\rho_{0}\epsilon v_{r,0}}{r\sin\theta}\frac{\partial}{\partial\theta}\left(f_{v_{\theta}}\sin\theta\right)=-\frac{f_{\rho}+f_{v_{r}}}{r^{2}}\frac{\partial}{\partial r}\left(\epsilon\rho_{0}v_{r,0}r^{2}\right). \tag{36}\] Using the definition of \(\epsilon\) and that \(\rho_{0}v_{r,0}r^{2}\) is independent of \(r\), we get \[\frac{\partial}{\partial\theta}\left(\sin\theta f_{v_{\theta}}\right)=2\sin\theta(f_{\rho}+f_{v_{r}}) \tag{37}\] so \[f_{v_{\theta}} = \frac{1}{\sin\theta}\int\left(\frac{5}{3}\sin^{3}\theta+\frac{16}{9}C\sin\theta\right)d\theta \tag{38}\] \[= \frac{1}{9\sin\theta}\left[\cos\theta\left(5\cos^{2}\theta-15-16C\right)+\mathcal{D}\right]\.\] where \(\mathcal{D}\) is another constant of integration. We further require \(v_{\theta}(\pi/2)=v_{\theta}(0)=0\), in order to avoid a discontinuity at the rotation axis and to enforce symmetry with respect to the midplane. This gives \(\mathcal{D}=0\) and \(C=-5/8\), and hence \[f_{v_{\theta}}=-\frac{5}{18}\,\sin 2\theta. \tag{39}\] Note that since \(v_{r,0}\) is negative, then \(v_{\theta,1}=v_{r,0}f_{v_{\theta}}\epsilon\) is positive for \(\theta<\pi/2\) and negative for \(\theta>\pi/2\), indicating that rotation diverts the flow towards the disc plane, as expected. To summarize our solution we use the derived \(C=-5/8\) in eqns. 
(29), (31), (32) and (35) and get \[P_{1} = P_{0}(r)\left(1+\frac{R_{\mathrm{circ}}^{2}}{r^{2}}\left(\frac{3}{4}\sin^{2}\theta-\frac{5}{8}\right)\right)\] \[\rho_{1} = \rho_{0}(r)\left(1+\frac{R_{\mathrm{circ}}^{2}}{r^{2}}\left(\frac{11}{4}\,\sin^{2}\theta-\frac{35}{24}\right)\right)\] \[T_{1} = T_{0}\left(1-\frac{R_{\mathrm{circ}}^{2}}{r^{2}}\left(2\sin^{2}\theta-\frac{5}{6}\right)\right)\] \[v_{r,1} = v_{r,0}(r)\left(1-\frac{R_{\mathrm{circ}}^{2}}{r^{2}}\left(\frac{23}{12}\sin^{2}\theta-\frac{65}{72}\right)\right)\] \[v_{\theta,1} = -v_{r,0}(r)\cdot\frac{5}{18}\,\frac{R_{\mathrm{circ}}^{2}}{r^{2}}\sin(2\theta)\] \[\Omega_{1} = \frac{v_{\mathrm{c}}R_{\mathrm{circ}}}{r^{2}} \tag{40}\] where the zero-order terms are given by eqn. (11). For a given \(v_{\mathrm{c}}\), the solution in eqn. (40) depends on three parameters: \(\dot{M}\) and \(\Lambda\) (or equivalently CGM mass and metallicity) which set the non-rotating solution, and \(R_{\mathrm{circ}}\) which sets the corrections due to rotation. The solution in eqn. (40) is for an outer boundary condition in which \(\Omega_{1}\) is independent of \(\theta\) (i.e., \(F(\theta)=1\)). In Appendix A we give several solutions for \(\Omega_{1}\propto F(\theta)=\sin^{n}(\theta)\) with integer \(n\), i.e., the rotation frequency at the outer boundary increases with angle from the rotation axis. In these solutions, the \(\theta\)-dependent term in the solution for \(P_{1}\) is multiplied by a factor of \(\sin^{2n}(\theta)/(n+1)\) relative to that in eqn. (40), while the corresponding term for \(T_{1}\) is multiplied by a factor of \(\sin^{2n}(\theta)\cdot(n+2)/(n+1)\). Thus, the result above that \(P\) and \(\rho\) increase towards the midplane, while \(T\) decreases, holds also when \(\Omega_{1}\) increases with \(\theta\). These deviations from spherical symmetry however tend to become weaker, and more concentrated near the midplane, with increasing \(n\). 
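The closed forms above are straightforward to verify numerically. The sketch below (ours) checks that the integral of eqn. (38) with \(C=-5/8\) and \(\mathcal{D}=0\) reduces to eqn. (39), and then evaluates the angular corrections of eqn. (40) at \(r=2R_{\rm circ}\):

```python
import math

C, D = -5.0 / 8.0, 0.0

def f_vtheta_integral(th):   # evaluated integral in eqn. (38)
    return (math.cos(th) * (5 * math.cos(th) ** 2 - 15 - 16 * C) + D) / (9 * math.sin(th))

def f_vtheta_closed(th):     # eqn. (39)
    return -(5.0 / 18.0) * math.sin(2 * th)

for th in (0.3, 1.0, math.pi / 2, 2.5):
    assert abs(f_vtheta_integral(th) - f_vtheta_closed(th)) < 1e-12

# First-order corrections of eqn. (40) at r = 2 R_circ (eps = 1/4):
eps = 0.25
for th, label in ((math.pi / 2, "midplane"), (0.05, "near pole")):
    s2 = math.sin(th) ** 2
    dP = eps * (0.75 * s2 - 5.0 / 8.0)        # relative pressure correction
    drho = eps * (2.75 * s2 - 35.0 / 24.0)    # relative density correction
    dT = -eps * (2.0 * s2 - 5.0 / 6.0)        # relative temperature correction
    print(f"{label}: dP/P = {dP:+.3f}, drho/rho = {drho:+.3f}, dT/T = {dT:+.3f}")
```

The printed values illustrate the trend stated in the text: at fixed \(r\), pressure and density are higher at the midplane than near the pole, while the temperature is lower.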
### Number of revolutions in CGM inflows

The number of revolutions around the rotation axis completed by a flowline can be derived from the ratio \(v_{\phi}/v_{r}\): \[\frac{v_{\phi}}{v_{r}}=\frac{\Omega r\sin\theta}{v_{r}}=\frac{v_{\mathrm{c}}R_{\mathrm{circ}}\sin\theta}{rv_{r,0}}+\mathcal{O}\left(\left(\frac{R_{\mathrm{circ}}}{r}\right)^{3}\right). \tag{41}\] Using \(v_{r,0}=r/t_{\mathrm{cool}}+\mathcal{O}((R_{\mathrm{circ}}/r)^{2})\) and \(t_{\mathrm{ff}}=\sqrt{2}r/v_{\mathrm{c}}\) we thus get \[\frac{v_{\phi}}{v_{r}}=\sqrt{2}\frac{t_{\mathrm{cool}}}{t_{\mathrm{ff}}}\frac{R_{\mathrm{circ}}}{r}\sin\theta+\mathcal{O}\left(\left(\frac{R_{\mathrm{circ}}}{r}\right)^{3}\right). \tag{42}\] It is thus evident that in solutions with larger \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) the flowlines are more tightly wound, i.e. the flow rotates more prior to accreting onto the galaxy. The total number of radians a flowline rotates can be approximated with the following integral \[\int\Omega dt=\int\frac{v_{\phi}}{r\sin\theta}\frac{dr}{v_{r}}\approx\sqrt{2}\int\frac{t_{\mathrm{cool}}}{t_{\mathrm{ff}}}\frac{R_{\mathrm{circ}}}{r^{2}}dr=1.9\frac{t_{\mathrm{cool}}}{t_{\mathrm{ff}}}\left(R_{\mathrm{circ}}\right)\, \tag{43}\] where in the first approximation we used eqn. (42) and neglected the \((R_{\mathrm{circ}}/r)^{3}\) term, and in the second approximation we used \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\propto r^{1/2}\) (eqn. 11) and integrated over the range \(r/R_{\mathrm{circ}}=1-7\). The upper bound of this range corresponds to the cooling radius (see section 2.1), though since the integrand scales as \(r^{-3/2}\) most of the rotation happens near \(R_{\mathrm{circ}}\), and the choice of upper limit does not significantly affect the result. Equation (43) shows that the number of rotations is set by \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) near \(R_{\mathrm{circ}}\).
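The integral in eqn. (43) is straightforward to evaluate numerically; a sketch with a composite-Simpson quadrature, assuming \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\propto r^{1/2}\) normalized at \(R_{\mathrm{circ}}\) (this simple quadrature gives a prefactor of \(\approx 1.8\), consistent with the \(\approx 1.9\) quoted above given the approximate nature of the estimate):

```python
import math

def integrand(x):
    # x = r / R_circ; t_cool/t_ff = (t_cool/t_ff)(R_circ) * x**0.5, so the
    # integrand of eqn. (43) per unit (t_cool/t_ff)(R_circ) is sqrt(2)*x^(-3/2)
    return math.sqrt(2) * x**0.5 / x**2

def simpson(f, a, b, n=10000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# integrate over r/R_circ = 1..7 as in the text
prefactor = simpson(integrand, 1.0, 7.0)

# closed form: 2*sqrt(2)*(1 - 7**-0.5) ~= 1.8 radians per unit (t_cool/t_ff)(R_circ)
assert abs(prefactor - 2 * math.sqrt(2) * (1 - 1 / math.sqrt(7))) < 1e-6
```

Raising the upper limit to \(r/R_{\mathrm{circ}}=10\) raises the prefactor to \(\approx 1.9\), illustrating the weak sensitivity to the choice of cooling radius noted above.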
This can be understood intuitively since the cooling time tracks the inflow time (\(t_{\mathrm{cool}}=r/|v_{r}|\)) while the free fall time tracks the rotation time near \(R_{\mathrm{circ}}\) (\(t_{\mathrm{ff}}=\sqrt{2}r/v_{\mathrm{c}}\sim r/v_{\phi}(R_{\mathrm{circ}})\)). For the MW halo in which \(t_{\mathrm{cool}}/t_{\mathrm{ff}}(R_{\mathrm{circ}})\approx 6\) (eqn. 16), we thus get that an accreting element rotates \(\approx 12\) radians prior to accretion. It is informative to extend the result in eqn. (43) also to cases where \(t_{\mathrm{cool}}<t_{\mathrm{ff}}\) and the volume-filling phase is cool and in free-fall with \(v_{r}\approx-v_{\mathrm{c}}\), as expected at halo masses \(\ll 10^{12}\,\mathrm{M}_{\odot}\) (e.g., Stern et al., 2020). Using \(-v_{r}=\min(r/t_{\mathrm{cool}},v_{\mathrm{c}})\) in eqn. (43) we thus get \[\int\Omega dt=\max(1.9\frac{t_{\mathrm{cool}}}{t_{\mathrm{ff}}}\left(R_{\mathrm{circ}}\right),0.9) \tag{44}\] Equation (44) demonstrates that only in the hot accretion mode where pressure-support slows down accretion relative to free-fall, the CGM has time to rotate significantly before accreting. In contrast, free-falling cold flows rotate merely by \(\approx 1\) radian prior to accretion.

## 3 The structure of hot and rotating CGM - numerical solution

In this section we derive a numerical solution for hot and rotating CGM inflows. To this end we run a 3D hydrodynamic simulation and let it converge onto a steady-state where mass continuously flows through the hot CGM and then cools and accretes onto the disk. Using this method to find the numerical solution has the advantage that it demonstrates that the solution is an attractor. We then present the properties of this solution and compare it to the approximate analytic solution derived in section 2.4.
### Setup

We use the meshless finite-mass ('MFM') mode of GIZMO (Hopkins, 2015), a Lagrangian method with no inter-element mass flux, which enables us to track the history of each resolution element. The code accounts for self-gravity of the gas and stars, to which we add an acceleration term \(-(v_{\rm c}^{2}/r)\hat{r}\) with \(v_{\rm c}=200\,{\rm km\ s^{-1}}\). This term approximates the gravitational field in the inner halo due to unmodelled dark matter and stars. Optically thin radiative cooling is calculated using the \(z=0\) tables from Wiersma et al. (2009) down to \(T=10^{4}\) K, while optically thick radiative cooling to lower temperatures is disabled. All gas resolution elements with densities above \(n_{\rm SF}=10\,{\rm cm^{-3}}\) are converted into stellar particles. All stellar feedback processes are disabled. The density, temperature and radial velocity of gas is initialized with a spherical, non-rotating hot inflow solution from Stern et al. (2019), to which we add rotation corresponding to some \(R_{\rm circ}\). This solution is found by integrating the 1D spherically-symmetric and steady-state flow equations, starting at \(r_{\rm sonic}=0.1\,{\rm kpc}\) and proceeding outward. The integration uses the same \(v_{\rm c}=200\,{\rm km\ s^{-1}}\) and cooling function with \(Z=0.3\,{\rm Z_{\odot}}\) as in the simulation. The 1D solution has \(\dot{M}=1\,{\rm M_{\odot}\,yr^{-1}}\) and at radii \(r\gg r_{\rm sonic}\) is well approximated by eqn. (11), with \(T=2\cdot 10^{6}\) K and \(\Lambda_{-22}=0.3\). We then randomly select initial positions in \((r,\phi,\theta)\) for the initial location of gas resolution elements, such that the radial mass profile reproduces that in the spherically symmetric solution. To add a net rotation to the gas, all resolution elements at \(r>R_{\rm circ}\) are initialized with \(v_{\phi}=200\sin\theta(r/R_{\rm circ})^{-1}\,{\rm km\ s^{-1}}\), with \(R_{\rm circ}=15\,{\rm kpc}\).
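The rotation initialization above can be sketched in a few lines (illustrative code, not the actual GIZMO initial-conditions setup; units are km s\(^{-1}\) and kpc). It shows that the imposed \(v_{\phi}\) profile corresponds to an \(r\)-independent specific angular momentum, so the initialized gas already carries the angular momentum it conserves while inflowing:

```python
import math

v_c, R_circ = 200.0, 15.0  # km/s, kpc (the fiducial values quoted above)

def v_phi(r, theta):
    # initial rotation profile applied to gas at r > R_circ
    return v_c * math.sin(theta) * (r / R_circ) ** -1

# specific angular momentum j = v_phi * r * sin(theta) is independent of radius:
# j = v_c * R_circ * sin^2(theta) everywhere in the initialized region
theta = 1.0
js = [v_phi(r, theta) * r * math.sin(theta) for r in (20.0, 40.0, 80.0)]
assert all(abs(j - v_c * R_circ * math.sin(theta) ** 2) < 1e-9 for j in js)

# rotation support becomes complete at R_circ in the midplane
assert abs(v_phi(R_circ, math.pi / 2) - v_c) < 1e-12
```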
This addition of rotation implies that the initial conditions are not in steady-state, since the initial pressure structure does not account for rotation support. As discussed below the simulation adjusts to a new steady-state within a cooling time of \(<1\,{\rm Gyr}\), with a somewhat larger \(\dot{M}\) of \(1.5\,{\rm M_{\odot}\,yr^{-1}}\). The mass of individual resolution elements is set to \(m_{b}=1000\,{\rm M_{\odot}}\) for elements at \(r<100\,{\rm kpc}\). This mass resolution implies a characteristic size of \(\approx(3m_{b}/4\pi m_{\rm H}n_{\rm H})^{1/3}\approx 0.2n_{-3}^{-1/3}\,{\rm kpc}\) for typical hot gas densities of \(n_{\rm H}\approx 10^{-3}n_{-3}\,{\rm cm^{-3}}\) near the disk scale, and smaller sizes for the denser cool gas. For comparison, the height of the \(\approx 10^{4}\) K gaseous disk which forms from the cooling of the hot gas is \(\approx(10^{4}\,{\rm K}/T_{\rm hot})^{1/2}R_{\rm circ}\approx 1.3\,{\rm kpc}\), where \(T_{\rm hot}\approx 2\cdot 10^{6}\) K is the hot gas temperature. Beyond \(100\,{\rm kpc}\) the gas does not participate in the inflow since cooling times are too long, but needs to be included in the simulation in order to confine gas at smaller radii from expanding outward (in a realistic halo this confinement is achieved either by such hot gas with a long cooling time or by the ram pressure of infalling gas outside the accretion shock). To avoid investing too much computing time in this confining outer gas, we sample the spherically-symmetric solution beyond \(100\,{\rm kpc}\) with resolution elements whose masses increase by a factor of three every factor of \(\sqrt{2}\) in radius, out to \(3.2\,{\rm Mpc}\) where the sound-crossing time equals \(10\,{\rm Gyr}\). In total the CGM is simulated with \(3.1\times 10^{7}\) resolution elements. We also add a galaxy to the initial conditions, using the MakeDisk code (Springel et al., 2005) with the following parameters.
The stellar disk is initialized with mass \(M_{\rm*}=10^{8}\,{\rm M_{\odot}}\), cylindrical radial scale length of \(R_{\rm d}=R_{\rm circ}/4=3.75\,{\rm kpc}\) spanning \(0.03-4R_{\rm d}\), and a vertical scale length of \(0.1R_{\rm d}\). The gaseous disk has a mass \(M_{\rm disk\ gas}=0.2M_{\rm*}\) and the same exponential distribution as the stars, while the bulge has a mass \(M_{\rm bulge}=2\cdot 10^{7}\,{\rm M_{\odot}}\) and scale length \(0.1\,{\rm kpc}\). We include in the MakeDisk calculation the same isothermal gravitational field used in the hydro simulation, and stellar and gas particles in the disk have the same \(m_{b}=1000\,{\rm M_{\odot}}\) resolution as in the CGM. The choice of galaxy parameters is inconsequential as long as the initial mass is small compared to the accreted mass, which at \(t=1\,{\rm Gyr}\) is \(\dot{M}t\sim 10^{9}\,{\rm M_{\odot}}\). The simulation is run for \(3.5\,{\rm Gyr}\), with snapshots saved every \(5\,{\rm Myr}\). At all times and radii the gravitational field is dominated by the included isothermal gravitational field with \(v_{\rm c}=200\,{\rm km\ s^{-1}}\), rather than by the simulated gas and stars. Our setup is loosely based on the setup of Su et al. (2019, 2020), which simulated the behavior of gas in group and cluster-sized halos. A similar setup to ours for Milky-Way mass halos was employed by Kaufmann et al. (2006) using an SPH code. This code was later found to over-predict artificial clumping of the cool gas (Agertz et al., 2007; Kaufmann et al., 2009). Our use of the MFM code addresses this numerical issue (see Hopkins, 2015 for code tests). Additionally, since in our simulation _all_ the hot gas cools once it inflows past \(R_{\rm circ}\), the numerical details may affect the distribution of clumps and their typical sizes, but not the total mass which cools.

Figure 1: Temperature map (color) and flowlines (black lines) in a hot CGM inflow.
Left and middle panels show the solution in the \(R_{\rm cyl}-z\) and \(x-y\) planes, respectively. The right panel depicts three specific flowlines as 3D ‘tubes’, where the cross-section along each tube scales as \((\rho v)^{-1}\) and hence illustrates the compression of the flow. Note that the hot \(\sim 10^{6}\) K phase inflows along helical paths, and cools to \(\sim 10^{4}\) K just prior to joining the ISM disk.

### Results

#### 3.2.1 Overview

Figure 1 shows temperature maps in the simulation at \(t>1\,\)Gyr, after the hot CGM phase converged onto an axisymmetric steady-state solution within \(r\approx 40\,\)kpc. Steady-state and axisymmetry are evident from the small dispersion in hot CGM properties with time and \(\phi\), as shown below. The left and middle panels respectively show the \(R_{\rm cyl}-z\) plane and the \(x-y\) plane (mass-weighted over \(-10<z<10\,\)kpc). The figure shows that the hot gas fills the volume except in the disk region at \(R_{\rm cyl}\leq R_{\rm circ}=15\,\)kpc and \(|z|\lesssim 1\,\)kpc. Black lines depict flowlines in the two planes, derived as described below. Three of these flowlines are also depicted in the right panel as 3D 'tubes', where the tube cross-section scales as \((\rho v)^{-1}\) and thus illustrates the compression of the flow. Figure 1 shows that the flowlines in the hot gas are helical, with the hot gas spiraling onto the galaxy. While inflowing, the gas initially remains hot with \(T\sim T_{\rm vir}\), and then cools to \(\sim 10^{4}\,\)K just prior to joining the ISM disk. This cooling is accompanied with strong compression of the flow, as evident from the sharp decrease in the width of the flow tubes in the right panel (blue thickness should drop to \(\lesssim 0.01\) pixels upon cooling according to the \((\rho v)^{-1}\) scaling, though it is plotted with one pixel for visibility). Figure 2 plots radial shell-averaged velocities in the simulation after steady-state is achieved.
The top panel shows that at radii \(r>R_{\rm circ}\) the sound speed \(c_{\rm s}\) (red) approximately equals \(v_{\rm c}\) (gray), indicating the hot gas is to first-order supported against gravity by thermal pressure, as also indicated by the slow inflow velocities \(|v_{r}|\ll v_{\rm c}\) (magenta). At these radii the rotation velocity increases inward roughly as \(r^{-1}\) due to conservation of angular momentum, reaching \(v_{\phi}=v_{\rm c}\) at \(R_{\rm circ}\). Within \(R_{\rm circ}\) the gas is fully rotationally supported and cool with \(c_{s}\ll v_{\rm c}\), and the radial velocity drops to zero. The associated change in geometry is evident in the bottom panel, which plots the average absolute height above the midplane \(|z|\) in different radial shells. The gas distribution is close to spherically-symmetric at \(r>R_{\rm circ}\) in contrast with a thin disk distribution at \(r<R_{\rm circ}\).

Figure 3: Gas properties along flowlines in hot rotating CGM, versus time since a fluid element is at \(r=40\,\)kpc. Panels show radius, polar angle, temperature and rotation velocity. Different lines and bands correspond to medians and \(16^{\rm th}-84^{\rm th}\) percentiles of flowlines with different polar angles at large radii \(\theta_{0}\). Vertical dotted lines indicate times where \(T\) drops to \(10^{5}\,\)K. Initially the flowlines have roughly constant \(\theta\approx\theta_{0}\). About \(200\,\)Myr prior to cooling the flow geometry flattens (\(\theta\to\pi/2\)), and the temperature increases mainly at small \(\theta_{0}\). At cooling \(v_{\phi}\) reaches \(v_{\rm c}\approx 200\,\)km s\({}^{-1}\) and \(r\) becomes constant, indicating a transition from quasi-thermal pressure support against gravity to full rotational support. Dispersion in the hot gas prior to cooling is small, demonstrating the hot inflow is steady and axisymmetric.

Figure 2: Radially-averaged kinematics and geometry of a hot CGM inflow. _Top:_ Lines show sound speed (red), rotation velocity (blue), and inflow velocity (magenta). _Bottom:_ Black line shows the absolute height \(|z|\) above the midplane. Rotation velocity increases inward due to conservation of angular momentum. At the radius \(R_{\rm circ}\) where the inflow becomes fully rotation supported (\(v_{\phi}=v_{\rm c}\)) the hot inflow cools out, the inflow halts (\(|v_{r}|\to 0\)), and the geometry transitions from quasi-spherical to a disk.

#### 3.2.2 Accretion of the hot CGM onto the cool ISM

Figure 3 provides a Lagrangian view of hot inflowing CGM, by plotting median properties of resolution elements versus time since the element was at \(r=40\) kpc. The properties depend on the initial polar angle of the flowline \(\theta_{0}\), so for each \(\theta_{0}\) in \((0.1\pi,0.3\pi,0.4\pi,0.5\pi)\) we group all resolution elements that reside at \(40<r<41\) kpc and \(|\theta-\theta_{0}|<0.025\pi\) at times \(1<t<1.5\) Gyr. Then, for each \(\theta_{0}\) group we plot the median and \(16-84\) percentile ranges of \(r\), \(\theta\), \(T\), and \(v_{\phi}\). The \(16-84\) percentile range thus accounts for the dispersion both with \(\phi\) and with \(t\), and specifically a small \(16-84\) percentile range indicates that the solution is axisymmetric and in steady-state. Fig. 3 shows that at \(r\gtrsim 25\) kpc (early times in this plot), the gas is hot and inflowing, with a somewhat larger inflow velocity near the rotation axis (\(\theta_{0}=0.1\pi\)). Rotation is sub-Keplerian (\(<200\) km s\({}^{-1}\)) but growing with time as indicated by Fig. 2. The value of \(\theta\) remains roughly constant and equal to \(\theta_{0}\) for a given flowline. The gas properties then go through a transition when the flowlines reach radii of \(r\lesssim 20\) kpc. The gas initially heats up and is diverted to the midplane (\(\theta=\pi/2\)), and then abruptly cools.
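The flowline binning used for Fig. 3 can be sketched schematically as follows (synthetic stand-in data; the tuple layout and selection cuts are illustrative only, not the simulation's actual data structures):

```python
import math, random

random.seed(0)
# synthetic stand-ins for resolution-element snapshots: (r [kpc], theta, T [K])
particles = [(random.uniform(35, 45), random.uniform(0, math.pi / 2),
              10 ** random.uniform(5.5, 6.5)) for _ in range(20000)]

def flowline_group(parts, theta0, dr=(40, 41), dtheta=0.025 * math.pi):
    # select elements at 40 < r < 41 kpc within a polar-angle band around theta0
    return [p for p in parts if dr[0] < p[0] < dr[1] and abs(p[1] - theta0) < dtheta]

def percentiles(vals, qs=(16, 50, 84)):
    vals = sorted(vals)
    return [vals[int(q / 100 * (len(vals) - 1))] for q in qs]

for theta0 in (0.1 * math.pi, 0.3 * math.pi, 0.4 * math.pi, 0.5 * math.pi):
    group = flowline_group(particles, theta0)
    assert len(group) > 0
    p16, p50, p84 = percentiles([math.log10(p[2]) for p in group])
    assert p16 <= p50 <= p84  # the median and 16-84 band plotted per group
```

A narrow 16-84 band computed this way over both \(\phi\) and \(t\) is what indicates axisymmetry and steady-state in the actual analysis.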
The initial heating is more pronounced near the rotation axis, and is a result of the compression during the transition to a disk geometry (see below). Cooling occurs simultaneously with \(v_{\phi}\) reaching \(v_{\rm c}\approx 200\) km s\({}^{-1}\) and \(r\) becoming constant (see bottom and top panels), indicating a transition from quasi-pressure support which allows a slow inflow to full rotational support in which the inflow stops. We thus identify this abrupt transition in the hot inflow as the time of accretion from the hot CGM onto the cool ISM. We emphasize that this cooling and accretion is a global transition in the flow, rather than being specific to individual clumps of gas. Note also that prior to cooling the \(16-84\) percentile ranges in individual flowlines are small, indicating the hot CGM is axisymmetric and in steady-state.

Figure 4: Gas properties along flowlines in hot rotating CGM, versus time since gas in the flowline cools. The time of cooling is equal to the time of accretion onto the ISM. From left to right and top to bottom the panels plot cylindrical radius, height above the midplane, total rotation since \(r=40\) kpc, density, temperature, pressure, entropy, density dispersion, and specific angular momentum. Different lines and bands correspond to medians and \(16^{\rm th}-84^{\rm th}\) percentile ranges of flowlines with different polar angles at large radii \(\theta_{0}\). When the flow is hot (negative times in the panels) the entropy is slowly decreasing due to the radiative cooling, density fluctuations are small, and the specific angular momentum is conserved. Just prior to cooling the geometry flattens and the gas somewhat heats up due to compressive heating (see \(z\) and \(T\) panels). At cooling gas densities increase by \(\times 300\) or more and density fluctuations become significant.
The result that cooling of the hot CGM is roughly coincident with flattening of the geometry from a spherical flow to a disk is consistent with the conclusions of Hafen et al. (2022), which studied hot accretion in cosmological zoom simulations of low-redshift Milky Way-mass galaxies from the FIRE project. Figure 4 plots gas properties along flowlines versus \(t-t(T=10^{5}\,\mathrm{K})\), where \(t(T=10^{5}\,\mathrm{K})\) is defined as the time at which the temperature in the flowline equals \(10^{5}\,\mathrm{K}\). This time is also marked with vertical lines in Fig. 3, and as mentioned above is equivalent to the time of accretion onto the ISM. The nine panels show cylindrical radius \(R_{\mathrm{cyl}}\), \(z\), total rotation \(\phi-\phi_{0}\) where \(\phi_{0}\equiv\phi(r=40\,\mathrm{kpc})\), density, temperature, pressure, entropy, density dispersion, and specific angular momentum. Fig. 4 shows that while the flow is hot (\(t<t(T=10^{5}\,\mathrm{K})\)) the density and pressure of the hot inflow mildly increase with time, while its entropy decreases, as expected due to the slow radial contraction and the radiative energy losses. Density fluctuations are small while the flow is hot (\(\langle\delta\rho/\bar{\rho}\rangle\ll 1\)) as in a non-rotating cooling flow (Balbus and Soker, 1989; Stern et al., 2019), and the specific angular momentum is conserved since the system is axisymmetric. The \(\phi-\phi_{0}\) panel shows that a typical flowline completes roughly one full revolution prior to cooling. At \(t\approx t(T=10^{5}\,\mathrm{K})\) when the flow abruptly cools, the density increases by a factor of \(\approx 300\) for the flowline in the midplane (\(\theta_{0}=0.5\pi\)), and by a larger factor in flowlines with smaller \(\theta_{0}\). The specific angular momentum somewhat increases, likely as a result of the interaction with stars and preexisting disk gas.
Also apparent is that density fluctuations become strong just before the gas cools (\(t-t(T=10^{5}\,\mathrm{K})\approx-25\,\mathrm{Myr}\)), and remain of order unity after cooling, in contrast with the weak density fluctuations when the flow is hot. The transition to a disk geometry occurs somewhat earlier (\(t-t(T=10^{5}\,\mathrm{K})\approx-250\) Myr, see \(z\) panel), as also indicated by Fig. 3. The eventual drop in temperature from \(T\approx 2\cdot 10^{6}\,\mathrm{K}\) to \(T\approx 10^{4}\,\mathrm{K}\) occurs over a short timespan of \(\Delta t\approx 20\,\mathrm{Myr}\) (see zoom-in on these times in Figure 11). This layer corresponds to the disc-halo interface (e.g., Fraternali and Binney, 2008) which in our simulation extends \(\approx\pm 1\,\mathrm{kpc}\) from the midplane. The rapid cooling is made possible since the density increases to \(n_{\mathrm{H}}\sim 0.01\,\mathrm{cm}^{-3}\) while the inflow is still hot, shortening the cooling timescale to tens of Myr. We defer to future work a study of the implications of our results for the disc-halo interface, which requires accounting also for fountain flows from the disk (see discussion section 6.5 below). The result that density fluctuations are small when the flow is hot (bottom-middle panel of Fig. 4) is apparently in tension with the conclusion of Sormani and Sobacchi (2019) that rotation in hot CGM enhances the development of thermal instabilities. Sormani and Sobacchi reached this conclusion using a linear analysis of perturbations in hot rotating CGM. A potential reason for this difference is that thermal perturbations do not have sufficient time to grow, despite that hot rotating CGM are formally unstable. The growth time of thermal instability is the cooling time, the same timescale as the inflow timescale on which _all_ the hot gas cools and accretes onto the ISM.
A similar argument explains why significant density perturbations do not develop spontaneously in non-rotating cooling flows, despite that the hot gas is formally unstable (Balbus and Soker, 1989).

#### 3.2.3 Deviations from spherical symmetry in the hot CGM

Figure 5 plots the dependence of hot CGM properties on polar angle \(\theta\) at radii of \(45\,\mathrm{kpc}\) and \(25\,\mathrm{kpc}\). From top to bottom the different rows plot angular frequency, temperature, hydrogen density and thermal pressure. Magenta lines denote the non-rotating analytic solution (eqn. 11), blue lines denote the slow rotating analytic solution (eqn. 40), and black lines the solution in the simulation after steady-state is achieved (the same solution used in Figs. 1-4). The perturbation parameter \(\epsilon=(r/R_{\mathrm{circ}})^{-2}\) (see eqn. 24) is noted at the top. The slow-rotating solution accounts only for the lowest-order terms in this quantity. Fig. 5 demonstrates how the properties of the hot gas deviate from spherical symmetry due to the rotation, and more so at radii approaching \(R_{\mathrm{circ}}\) where rotation support is more significant. At \(r=25\,\mathrm{kpc}\) in the numerical solution, the temperature at the rotation axis is almost a factor of two lower than in the midplane, while the density is a factor of two higher. Note also that the slow-rotating analytic solution rather accurately reproduces the simulation at \(r=45\,\mathrm{kpc}\) where \(\epsilon=0.1\). At \(r=25\,\mathrm{kpc}\) where \(\epsilon=0.4\) the analytic solution is qualitatively consistent with the trends of \(T\), \(n_{\mathrm{H}}\), and \(P\) versus \(\theta\), though there are quantitative differences, potentially since high-order terms in \(\epsilon\) are neglected in the analytic solution.

Figure 5: Deviations from spherical symmetry in hot rotating CGM. Panels show from top to bottom the hot gas angular frequency, temperature, hydrogen density, and pressure, versus angle from the rotation axis \(\theta\), at \(r=45\,\mathrm{kpc}\) (_left_) and \(r=25\,\mathrm{kpc}\) (_right_). Black lines are based on the simulation after steady-state is achieved (the simulation also used in Figs. 1–4). Magenta lines plot the analytic non-rotating solution (eqn. 11), while blue lines plot the analytic slow-rotating solution (eqn. 40) which accounts only for lowest-order terms in \(\epsilon=(r/R_{\mathrm{circ}})^{-2}\) (\(\epsilon\) noted on top). In the rotating solutions density and pressure increase towards the midplane, while temperature decreases.

#### 3.2.4 Revolutions in inflow versus \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\)

To test the relation between total rotation in the inflow and \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) measured at \(R_{\mathrm{circ}}\) (section 2.5), we run several simulations with different combinations of \(\dot{M}\), \(Z_{\mathrm{CGM}}\), and \(v_{\mathrm{c}}\) which yield different \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) via eqn. (11). The parameters of the simulations are listed in Table 1. For \(\dot{M}\) we use the value measured through a shell at \(2R_{\mathrm{circ}}\) in snapshots after the simulations achieve steady-state, which is typically \(10-75\%\) larger than \(\dot{M}\) in the initial conditions (see §3.1). On these snapshots we also measure the average rotation \(\Delta\phi=\int\Omega dt\) a fluid element completes as it inflows from \(10R_{\mathrm{circ}}\) to \(R_{\mathrm{circ}}\), and plot them in Figure 6. The figure shows that the simulations roughly follow the analytic estimate from eqn. (44), confirming that the number of rotations in hot rotating CGM is proportional to \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\).

## 4 Additional Considerations

In this section we consider the effect of additional physical mechanisms and effects which were not included in the simulation above.
### Viscosity

Viscous forces in the flow may in principle cause angular momentum transport along directions where there is shear in the flow. In our solution \(\Omega\propto r^{-2}\) up to corrections of order \(\epsilon\) (eqn. 40) and thus there is shear in the radial direction. Here we show that for standard kinematic viscosity the expected angular momentum transport due to this shear is small. The specific angular momentum of the flow is \(j=\Omega r^{2}\sin^{2}\theta=v_{\mathrm{c}}R_{\mathrm{circ}}\sin^{2}\theta\) (eqn. 40). Viscous forces in the radial direction will cause an angular momentum loss per unit time of \[\frac{dj}{dt}=\nu r^{2}\sin^{2}\theta\frac{d^{2}\Omega}{dr^{2}}=6\nu\Omega\sin^{2}\theta\, \tag{45}\] where \(\nu\) is the kinematic viscosity: \[\nu=0.56\,T_{6}^{5/2}\left(\frac{n_{\mathrm{H}}}{10^{-3}\,\mathrm{cm}^{-3}}\right)^{-1}\xi_{\nu}\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1} \tag{46}\] and \(\xi_{\nu}\) is the reduction of the viscosity relative to the Spitzer value. We can estimate the fractional angular momentum loss due to viscous forces by multiplying \(dj/dt\) by the flow time \(\approx t_{\mathrm{cool}}\) and dividing by \(j\): \[\frac{\Delta j_{\mathrm{visc}}}{j}\approx\frac{dj}{dt}\cdot\frac{t_{\mathrm{flow}}}{j}=\frac{6\nu\Omega t_{\mathrm{cool}}}{v_{\mathrm{c}}R_{\mathrm{circ}}}\approx 0.092\,\xi_{\nu}r_{10}v_{\mathrm{c},200}^{5}\dot{M}_{1}^{-1} \tag{47}\] where we used the analytic solution in eqn. (40) and neglected corrections of order \((R_{\mathrm{circ}}/r)^{2}\). For typical estimates of \(\xi_{\nu}\sim 0.1\) (Narayan & Medvedev, 2001) this value is substantially smaller than unity, indicating that viscous forces can generally be neglected.

### Turbulence

Assuming that turbulence is seeded at large CGM radii, for example by cosmological accretion or due to stirring by subhalos, what would be the fate of these turbulent motions in the inner CGM inflow explored here?
\begin{table} \begin{tabular}{c c c c c} \(v_{\mathrm{c}}\) & \(R_{\mathrm{circ}}\) & \(\dot{M}\) & \(Z_{\mathrm{CGM}}\) & \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) at \(R_{\mathrm{circ}}^{\mathrm{(a)}}\) \\ [\(\mathrm{km}\,\mathrm{s}^{-1}\)] & [kpc] & [\(\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\)] & [\(\,Z_{\odot}\)] & \\ \hline 200(b) & 15 & 1.5 & 0.3 & 9.3 \\ 200(c) & 15 & 3.2 & 0.3 & 6.5 \\ 200 & 10 & 2.7 & 0.1 & 8.5 \\ 200 & 10 & 8.3 & 0.1 & 4.9 \\ 200 & 1 & 1.8 & 0.1 & 3.3 \\ 200 & 1 & 4.2 & 3.0 & 0.5 \\ 200 & 1 & 5.4 & 20 & 0.2 \\ 230 & 18 & 2.9 & 0.3 & 12.2 \\ 210 & 18 & 3.5 & 0.3 & 8.1 \\ 150 & 10 & 1.3 & 0.3 & 3.6 \\ 150 & 10 & 2.7 & 0.3 & 2.5 \\ 150 & 10 & 8.8 & 0.3 & 1.4 \\ 100 & 5 & 0.2 & 0.3 & 2.4 \\ 100 & 5 & 0.8 & 0.3 & 1.3 \\ \end{tabular} \end{table} Table 1: Parameters of simulations used in Figure 6.

Figure 6: Total rotation completed by a fluid element in the CGM prior to accreting onto the ISM. Markers denote mean values in simulations with different \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) (measured at \(R_{\mathrm{circ}}\)), while the thin line denotes the analytic estimate from eqn. (44). Hot CGM with longer \(t_{\mathrm{cool}}\) have slower inflow velocities, and hence fluid elements rotate more prior to accretion.

In a non-rotating inflow, we expect a balance between dissipation of turbulence on a timescale \(t_{\mathrm{diss}}=r/\sigma_{\mathrm{turb}}\) and 'adiabatic heating' of turbulence due to the contraction of the inflow on a timescale \(t_{\mathrm{flow}}=r/v_{r}\) (Robertson & Goldreich, 2012). This balance suggests that contracting turbulent fluids converge to \(t_{\mathrm{flow}}\sim t_{\mathrm{diss}}\) and hence \(\sigma_{\mathrm{turb}}\sim v_{r}\), since more rapid turbulent motions will dissipate while slower turbulence will heat up (Murray & Chang, 2015; Murray et al., 2017). In a steady-state cooling flow where \(t_{\mathrm{flow}}\approx t_{\mathrm{cool}}\), we thus expect also \(t_{\mathrm{diss}}\sim t_{\mathrm{cool}}\).
Using \(t_{\rm ff}\sim r/v_{\rm c}\) and \(v_{\rm c}\approx c_{\rm s}\), it thus follows that \[\frac{\sigma_{\rm turb}}{c_{\rm s}}\sim\frac{t_{\rm ff}}{t_{\rm cool}}. \tag{48}\] Since \(t_{\rm ff}<t_{\rm cool}\), eqn. (48) suggests that turbulence is subsonic, i.e., turbulent support is subdominant to thermal support, as assumed in section 2.1. Furthermore, this relation suggests that the relative importance of turbulent motions decreases with increasing \(t_{\rm cool}/t_{\rm ff}\). Note that eqn. (48) is based on the assumption that the dominant turbulence driving mechanism at inner CGM radii is adiabatic 'heating' of pre-existing turbulence in the inflow. This assumption is similar to the underlying assumption of our solution that the dominant heating mechanism is compression of the CGM inflow, rather than other heating sources such as feedback. In a rotating inflow, turbulence may also be induced by the shear between adjacent shells. Radial displacements due to such turbulent motions are not subject to restoring Coriolis forces, since the non-perturbed solution is angular momentum conserving (Fig. 4) and hence the epicyclic frequency \(\kappa\) is zero. Balbus et al. (1996) showed that for a rotationally-dominated disk with \(\kappa=0\) such turbulence develops with an e-folding time roughly equal to an orbit time. Given that hot CGM inflows complete \(1-2\) orbits before accretion (Fig. 6) it is thus unclear if shear-induced turbulence has sufficient time to develop, especially since growth times may differ somewhat between the rotationally-supported disks simulated in Balbus et al. (1996) and the pressure-supported flows discussed here.
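As a rough worked example of eqn. (48), using the fiducial Milky-Way-like numbers quoted earlier in the text (\(t_{\rm cool}/t_{\rm ff}\approx 6\) at \(R_{\rm circ}\), \(c_{\rm s}\approx v_{\rm c}=200\) km s\(^{-1}\); an order-of-magnitude sketch only):

```python
# order-of-magnitude implication of eqn. (48) for the fiducial halo
t_cool_over_t_ff = 6.0   # at R_circ, for the Milky-Way-like solution
c_s = 200.0              # km/s, since c_s ~ v_c in the hot inflow

# eqn. (48): sigma_turb / c_s ~ t_ff / t_cool
sigma_turb = c_s / t_cool_over_t_ff

assert sigma_turb < c_s          # subsonic: turbulent support subdominant
assert 25.0 < sigma_turb < 40.0  # implied scale of ~30 km/s
```

The implied \(\sigma_{\rm turb}\sim 30\) km s\(^{-1}\) is comparable to the seeded turbulence amplitude tested in the simulation below.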
As a preliminary test of the effect of turbulence on our hot rotating CGM solution, we run another simulation with similar initial conditions as in our fiducial simulation, to which we add a turbulent velocity field with amplitude \(\sigma_{\rm turb}(t=0)=30\,{\rm km~{}s^{-1}}\) and a lognormal power spectrum peaking at a wavelength of \(50\,{\rm kpc}\) and a logarithmic width of 1. Figure 7 compares the results of this simulation after steady-state is achieved with the results of the fiducial simulation shown in Figs. 1-5. The figure shows that adding turbulence both increases the density fluctuations in the hot inflow, and causes the angular momentum to decrease by \(\lesssim 25\%\) over the last \(2\,{\rm Gyr}\) before cooling, where the latter effect is more pronounced at large \(\theta_{0}\) near the disk plane. Given the associated change in radius this decrease in angular momentum corresponds to \(\kappa\approx 0.2\), which is potentially sufficient to stem any further development of turbulence (Balbus et al., 1996). Other properties of the solution not shown in Fig. 7 are similar to those in the non-turbulent simulation. This result suggests that turbulence in the CGM at the strength explored only mildly affects conservation of angular momentum in hot CGM inflows. The question of development / dissipation of turbulence in hot and rotating CGM inflows can be cast also in terms of the angular momentum distribution of gas accreting onto disk galaxies. This follows since a large turbulent velocity implies a broad angular momentum distribution, and vice-versa. Using the FIRE zoom simulations, Hafen et al. (2022) argued that a narrow angular momentum distribution in accreting gas may be a necessary condition for the formation of thin disk galaxies, pointing to potentially important implications of this question for galaxy evolution. We defer a more detailed exploration of turbulence in hot rotating CGM to future work. 
Figure 7: The effect of turbulence on hot rotating CGM. The panels show density fluctuations (_top_) and specific angular momentum (_bottom_) along flowlines, versus time since the hot CGM cools and accretes onto the ISM. Dashed lines denote the fiducial simulation shown in Figs. 1-5 while solid lines denote a similar simulation with turbulence added to its initial conditions. Line color denotes the initial polar angle of the flowline as in Fig. 4. Note the mild decrease in specific angular momentum in the turbulent simulation in contrast with the constant angular momentum in the fiducial simulation. 

### Magnetic fields

The contraction and rotation of the hot CGM is expected to enhance magnetic fields present at the outer radius of the inflow. In appendix C we estimate this enhancement using the rotating inflow solution derived above, assuming ideal MHD and ignoring potential dynamical effects of the magnetic field on the flow. Defining \(r_{0}\) as the outer radius of the inflow we find that \[\frac{B_{r}}{B(r_{0})} = \left(\frac{r}{r_{0}}\right)^{-2}\] \[\frac{B_{\theta}}{B(r_{0})} = \left(\frac{v_{r}r}{v_{r}(r_{0})r_{0}}\right)^{-1}\] \[\frac{B_{\phi}}{B(r_{0})} = \sqrt{2}\,\frac{t_{\rm cool}}{t_{\rm ff}}\,\frac{r_{0}^{2}R_{\rm circ}}{r^{3}}\sin\theta \tag{49}\] where for simplicity we assume the seed field \(B(r_{0})\) is isotropic. These derived ratios are accurate up to corrections of order \((R_{\rm circ}/r)^{2}\). For comparison, the enhancement of the magnetic field in a non-rotating spherical inflow is \(B_{r}\propto r^{-2}\) and \(B_{\theta}\propto B_{\phi}\propto(v_{r}r)^{-1}\) (e.g., Shapiro, 1973), which can be understood intuitively as a result of conservation of magnetic flux through patches moving with the flow. Eqn. (49) thus implies that rotation mainly affects \(B_{\phi}\), due to the winding of the field by the rotation. The enhancement of \(B_{\phi}\) is a product of \((t_{\rm cool}/t_{\rm ff}\cdot R_{\rm circ}/r)\) and \((r_{0}/r)^{2}\), where the former tracks the number of radians rotated by the inflow (eqn. 43) and the latter tracks the contraction of an inflowing shell. For \(r_{0}/R_{\rm circ}\sim 6\) and \(t_{\rm cool}/t_{\rm ff}\sim 6\) eqn. (49) suggests an increase in \(B_{\phi}\) of order \(\sim 200\) by the time the hot gas reaches \(R_{\rm circ}\), just prior to accreting onto the galaxy. For comparison, the thermal pressure increases over the same range of radii as \(P(R_{\rm circ})/P(r_{0})\approx(R_{\rm circ}/r_{0})^{-3/2}\sim 15\) (eqn. 11), and thus the ratio of thermal to magnetic pressure \(\beta\propto P/B^{2}\) is expected to decrease by a large factor of \(\sim 3000\). Current upper limits on the magnetic field in the inner CGM of \(\sim L^{\star}\) galaxies at \(z\lesssim 0.3\) suggest magnetic pressure is subdominant to the thermal pressure (\(\beta>1\)), at least along the major axis where most of the accretion is expected (Prochaska et al., 2019; Lan & Prochaska, 2020; Heesen et al., 2023). It thus follows that if the hot CGM is accreting as suggested in this work, seed magnetic fields are sufficiently small that they do not dominate even after the enhancement induced by contraction and rotation. The eventual cooling of the inner hot CGM onto the ISM will further enhance \(B\). Fig. 4 suggests that the gas density increases by a factor of \(\lesssim 1000\) as it cools, which would increase \(B\) by a further factor \(\lesssim 1000^{2/3}=100\) in the limit of ideal MHD. Another potentially interesting implication of the hot CGM solution concerns the development of turbulence due to the magneto-rotational instability (MRI). The MRI amplitude growth rate is \(\sim\Omega\) (e.g. Balbus & Hawley, 1998; Masada & Sano, 2008), so the result that \(\int\Omega\,dt\sim t_{\rm cool}/t_{\rm ff}\) (Fig. 6, eqn. 
43) implies that prior to accretion the MRI can grow by \(e^{t_{\rm cool}/t_{\rm ff}}\), i.e. a factor of \(10^{4}\) for \(t_{\rm cool}/t_{\rm ff}\approx 10\). The solution may thus change considerably as \(t_{\rm cool}/t_{\rm ff}\) exceeds some critical value where MRI becomes fully developed. We defer analysis of accretion via magnetized hot rotating CGM to future work. 

## 5 Observational Implications

In this section we discuss several observational signatures of hot and rotating CGM inflows. Given the idealized nature of the solution, the estimates of the signal strength are at the order-of-magnitude level. More realistic calculations based on cosmological simulations would be a useful next step. 

### Angle dependence of X-ray emission and temperature

Predicted deviations from spherical symmetry induced by angular momentum support (Fig. 5) are potentially detectable by measuring the dependence of CGM X-ray emission on angle from the minor axis of a galaxy. The top panel in Figure 8 shows the predicted soft X-ray surface brightness versus angle from the minor axis and impact parameter \(R_{\perp}\), assuming CGM rotation is oriented edge-on in the plane of the sky. The surface brightness is calculated using version 4.2.0 of the pyXSIM package (ZuHone & Hallman, 2016) on a simulation with \(\dot{M}=3\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), \(v_{\rm c}=200\,{\rm km}\,{\rm s}^{-1}\), and \(R_{\rm circ}=15\,{\rm kpc}\), in a snapshot after the simulation converged onto a steady-state. These parameters are chosen to simulate the signal around NGC 891, which has a similar circular velocity and size as the Milky Way, but a higher SFR of \(\approx 3.8\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) (Popescu et al., 2004). The figure shows that the soft X-ray brightness (\(\propto n_{\rm H}^{2}\)) increases towards the major axis at small \(R_{\perp}\), since rotation induces a higher CGM density near the midplane (eqn. 40, Fig. 8). The bottom panel in Fig. 
8 shows the ratio of the luminosities of the OVIII and OVII emission lines, which is sensitive to the gas temperature. This ratio decreases with angle from the minor axis, due to the higher temperatures at the rotation axis induced by rotational support (see Fig. 5). The parameter \(R_{\rm circ}\) mainly sets the size scale, and the predicted asymmetry for different values of \(R_{\rm circ}\) can be deduced by appropriately scaling \(R_{\perp}\). We emphasize that the solution in this work predicts the X-ray emission absent any heating or asymmetry induced by ongoing feedback, such as the asymmetric feedback effects discussed in Nica et al. (2021) and Truong et al. (2021). The trends and emissivities in Fig. 8 can thus be used as a benchmark for estimating feedback effects in cosmological simulations and in the real Universe. 
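The sense of the predicted asymmetry can be illustrated with a toy line-of-sight integration of \(n_{\rm H}^{2}\). The density law below is a hypothetical stand-in with a midplane enhancement, not the actual profile of eqn. (40), and the calculation is far cruder than the pyXSIM treatment used for Fig. 8:

```python
import numpy as np

# Toy line-of-sight integration illustrating why soft X-ray brightness
# (proportional to n_H^2) rises towards the major axis.  The density field
# below is a hypothetical axisymmetric profile with a midplane boost; it is
# an illustrative stand-in, NOT the actual solution of eqn. (40).

R_CIRC = 15.0  # kpc, fiducial circularization radius from the text

def n_H(x, y, z):
    """Hypothetical hydrogen density [cm^-3] at position (x, y, z) in kpc."""
    r = np.sqrt(x**2 + y**2 + z**2) + 1e-3          # avoid r = 0
    sin2_theta = (x**2 + y**2) / r**2               # sin^2 of polar angle
    base = 1e-3 * (r / 10.0) ** -1.5                # power-law envelope
    boost = 1.0 + (R_CIRC / r) ** 2 * sin2_theta    # stronger near midplane
    return base * boost

def surface_brightness(R_perp, angle_from_minor, r_max=100.0, n_steps=4001):
    """Relative SB ~ integral of n_H^2 along the y (line-of-sight) axis."""
    x = R_perp * np.sin(angle_from_minor)   # major-axis sky coordinate
    z = R_perp * np.cos(angle_from_minor)   # minor-axis sky coordinate
    y = np.linspace(-r_max, r_max, n_steps)
    dy = y[1] - y[0]
    return float(np.sum(n_H(x, y, z) ** 2) * dy)

sb_minor = surface_brightness(20.0, 0.0)           # towards the minor axis
sb_major = surface_brightness(20.0, np.pi / 2.0)   # towards the major axis
print(f"SB(major)/SB(minor) at R_perp = 20 kpc: {sb_major / sb_minor:.2f}")
```

Sightlines towards the major axis pierce the densest midplane gas and hence yield a higher \(n_{\rm H}^{2}\)-weighted integral, mirroring the trend in the top panel of Fig. 8.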
Figure 8: Observables sensitive to deviations from spherical symmetry in the hot CGM. _Top_: predicted soft X-ray emission from the CGM of an edge-on galaxy, versus radius and angle from the minor axis. Emissivity increases with angle due to higher densities in the midplane induced by angular momentum support. _Bottom_: OVIII/OVII emission line ratio. The ratio decreases with angle due to higher temperatures at the rotation axis (see Fig. 5). 

### Measuring the hot gas rotation profile using the kSZ effect

The change in temperature of cosmic microwave background (CMB) photons passing through the CGM induced by the kinetic Sunyaev & Zeldovich (1972) effect is equal to \[\frac{\Delta T_{\rm kSZ}}{T_{\rm CMB}}=\frac{\sigma_{\rm T}}{c}\int n_{\rm e}v_{\rm los}dl \tag{50}\] where \(T_{\rm CMB}=2.7\,{\rm K}\) is the average CMB temperature, \(\sigma_{\rm T}\) is the Thomson scattering cross-section, \(n_{\rm e}\approx 1.2n_{\rm H}\) is the electron density, \(v_{\rm los}\) is the peculiar velocity projected along the line of sight \(dl\), and we neglected corrections due to finite optical depths to Compton scattering which are small in the CGM. If the CGM rotation is viewed edge-on, then \(v_{\rm los}\) would have opposite signs at opposite sides of the galaxy, and the sign of \(\Delta T\) would also be opposite. One could thus identify that the CGM is rotating by measuring the difference in CMB temperature between CGM sightlines on two opposite sides of the galaxy. 
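An order-of-magnitude evaluation of eqn. (50) can be sketched by approximating the integral as \(\tau\,v_{\rm los}/c\) with optical depth \(\tau=\sigma_{\rm T}n_{\rm e}L\); the density, velocity and path length below are illustrative guesses, not values from the simulation:

```python
# Order-of-magnitude evaluation of the kSZ signal in eqn. (50),
# Delta T / T_CMB = (sigma_T / c) * integral(n_e * v_los * dl),
# approximated here as tau * (v_los / c) with tau = sigma_T * n_e * L.
# The density, velocity and path length below are illustrative guesses.

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
C = 2.998e10          # speed of light [cm/s]
KPC = 3.086e21        # cm

n_e = 3e-3            # electron density [cm^-3] (assumed)
v_los = 1.5e7         # line-of-sight velocity, 150 km/s (assumed)
L = 30.0 * KPC        # effective path length (assumed)

tau = SIGMA_T * n_e * L
dT_over_T = tau * v_los / C
print(f"tau = {tau:.2e},  Delta T / T_CMB = {dT_over_T:.2e}")
```

With these assumed numbers \(\Delta T/T_{\rm CMB}\sim 10^{-7}\), the same order as the predicted signal in Fig. 9.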
This differencing method also has the advantage that it removes the kSZ effect due to the peculiar velocity of the galaxy halo, and removes foreground fluctuations with an angular scale larger than the angular size of the target galaxy. The top panel of Figure 9 shows the predicted kSZ signal, derived by calculating the integral in eqn. (50) through the simulation mentioned in the previous section. Pixel size in this plot is chosen to be 3 kpc, corresponding to the expected \(1^{\prime}\) angular resolution of CMB-S4 experiments (Battaglia et al., 2017) at the NGC 891 distance of 9.8 Mpc. We mask the signal from \(R<20\) kpc and \(|z|<3\) kpc as this region would likely be contaminated by dust emission from the galaxy (e.g., Smith et al., 2016). In the bottom panel we plot the difference in the signal between the approaching and receding sides of the CGM, in the midplane. Fig. 9 shows that the signal can exceed \(\Delta T/T_{\rm CMB}\approx 5\cdot 10^{-7}\), comparable to the expected sensitivity of \(\Delta T\approx 1\mu\)K in CMB-S4 experiments (Battaglia et al., 2017). Stacking different pixels or different galaxies could potentially further increase the sensitivity. It thus may be possible to detect the kSZ signal from hot rotating CGM with upcoming CMB experiments, and thus test the rotation velocity profile predicted by the hot and rotating CGM solution. Bregman et al. (2022) recently measured the thermal Sunyaev-Zeldovich effect (tSZ) in nearby massive spirals using data from the Planck satellite. At radii of 30 kpc accessible by the \(10^{\prime}\) resolution of Planck they detected the tSZ signal with a 'y-parameter' (\(=\Delta T_{\rm tSZ}/T_{\rm CMB}\)) of \(3\cdot 10^{-7}\) in the CGM of NGC 891, and \(y\approx 10^{-7}\) in a stack of the CGM of 11 other galaxies. On a scale of 30 kpc the predicted kSZ signal due to rotation is \(\approx 3\cdot 10^{-7}\) in the midplane (Fig. 9, bottom panel\({}^{2}\)), similar to the Bregman et al. 
(2022) result and hence potentially detectable. 

Footnote 2: In our solution the kSZ and tSZ signals are similar near \(R_{\rm circ}\) since \[\frac{\Delta T_{\rm kSZ}}{\Delta T_{\rm tSZ}}=\frac{\frac{\sigma_{\rm T}}{c}\int n_{\rm e}v_{\rm los}dl}{\frac{\sigma_{\rm T}}{m_{\rm e}c^{2}}\int Pdl}\approx 1.0\cdot v_{\rm c,200}^{-1}\frac{v_{\phi}}{v_{\rm c}}=1.0\cdot v_{\rm c,200}^{-1}\frac{R_{\rm circ}}{r}\sin\theta \tag{51}\] where \(m_{\rm e}\) is the electron mass and the numerical evaluation follows from \(P=2n_{\rm e}kT\), \(T=2\cdot 10^{6}(v_{\rm c}/200\,{\rm km\,s^{-1}})^{2}\) K (eqn. 11), and \(v_{\rm los}\approx v_{\phi}\). 

### Measuring the hot gas rotation profile using emission line centroiding

An X-ray microcalorimeter with high spectral resolution may also be able to measure the rotational velocity field of the hot CGM. The middle panel of Fig. 9 plots the projected line of sight velocity in the simulation used in the previous section, weighted by the luminosity of the OVII He\(\alpha\) line. As above, we calculate the line emissivity based on the density and temperature in the simulation using pyXSIM, and then integrate along the line of sight. Pixel size in this panel is 3 kpc, corresponding to the planned \(15^{\prime\prime}\) resolution of the proposed Line Emission Mapper probe (LEM, Kraft et al., 2022) for a target at a distance of 40 Mpc. The bottom panel shows the line centroid difference between the approaching and receding sides of the CGM, in the midplane. The velocity difference between both sides of the disk approaches \(200\,\mathrm{km\,s^{-1}}\), higher than the centroiding accuracy of \(\lesssim 70\,\mathrm{km\,s^{-1}}\) planned for LEM. High spectral resolution X-ray probes may thus be able to measure the rotation velocity profile in the hot CGM, and test whether it is consistent with the above solution. 

Figure 9: Predicted kSZ effect (_top_) and centroid shift of the OVII 0.56 keV emission line (_middle_) from a hot rotating CGM surrounding an edge-on disk galaxy. Assumed pixel size is 3 kpc, corresponding either to \(1^{\prime}\) at a distance of 10 Mpc as planned for CMB-S4 or to \(15^{\prime\prime}\) at 40 Mpc as planned for the LEM probe. The central disk has been masked in order to focus on the signal from the hot gas. The _bottom_ panel shows the signal difference between the two sides of the disk. The predicted signal is comparable to the planned sensitivity of \(\sim 1\mu\)K pixel\({}^{-1}\) in CMB-S4 and to the line centroiding accuracy of \(\lesssim 70\,{\rm km\,s^{-1}}\) planned for LEM. These observations could thus potentially test the hot CGM rotation velocity profile predicted by our solution. 

### Dispersion Measure in FRBs

Observations of the dispersion measure of Fast Radio Bursts (FRBs) constrain electron column densities in the CGM (Prochaska & Zheng, 2019). Current upper limits from the Milky-Way CGM are \(52-111\,\mathrm{cm^{-3}}\) pc (Cook et al., 2023; Ravi et al., 2023), and these limits could improve substantially with upcoming FRB surveys (see Ravi et al., 2023). Fig. 10 plots the predicted dispersion measure for the Milky-Way CGM, based on our fiducial simulation. These were calculated by integrating the electron density in the fiducial simulation from \((R,z)=(8\,\mathrm{kpc},0)\) out to \(r=300\,\mathrm{kpc}\), for different Galactic directions \((b,l)\). The predicted dispersion measures are \(11-16\,\mathrm{cm^{-3}}\,\mathrm{pc}\) and increase towards lower \(b\) due to the higher densities near the midplane (eqn. 40). These columns scale as \((\dot{M}_{1}/Z_{0,3})^{0.5}\) due to the dependence of hot CGM density on these quantities, where \(Z_{0,3}=Z_{\mathrm{CGM}}/0.3Z_{\odot}\) (eqn. 11). 
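The dispersion-measure integral \({\rm DM}=\int n_{\rm e}\,dl\) can be sketched for a hypothetical power-law hot-CGM density profile; the normalization and slope below are assumptions chosen for illustration (Fig. 10 instead integrates the actual simulation):

```python
import numpy as np

# Sketch of a dispersion-measure estimate, DM = integral(n_e dl), for a
# hypothetical power-law hot-CGM density profile.  The normalization n0 and
# slope are assumptions for illustration only.

KPC_IN_PC = 1000.0

def dispersion_measure(n0=7e-4, r0=10.0, slope=-1.5, r_in=8.0, r_out=300.0):
    """DM in pc cm^-3 for n_e = n0 * (r/r0)**slope along a radial path [kpc]."""
    r = np.linspace(r_in, r_out, 100000)        # kpc
    n_e = n0 * (r / r0) ** slope                # cm^-3
    dr = r[1] - r[0]
    return float(np.sum(n_e) * dr * KPC_IN_PC)  # pc cm^-3

print(f"DM ~ {dispersion_measure():.1f} pc cm^-3")
```

With the assumed normalization this gives \({\rm DM}\approx 13\,{\rm cm^{-3}\,pc}\), within the \(11-16\,{\rm cm^{-3}\,pc}\) range quoted above.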
## 6 Discussion

### Differences from previous models of hot CGM

Models of the low-redshift CGM typically assume that the hot CGM phase is static due to feedback heating compensating for radiative losses (Sharma et al., 2012; Voit et al., 2017; Faerman et al., 2017, 2020), which yields an effectively infinite cooling time and hence \(v_{r}=0\) (see eqn. 10). Several recent models have also accounted for rotation (Pezzulli et al., 2017; Sormani et al., 2018; Afruni et al., 2022), though in these latter models too the gas is assumed not to flow in the radial direction. The radially static assumption in these 'thermally-balanced' models is in contrast with the inflowing nature of the hot CGM solution described herein, where heating by ongoing feedback is assumed to be small. A third possibility is that feedback heating dominates over cooling, and the hot CGM forms an outflow (e.g., Thompson et al., 2016; Schneider et al., 2020, though note these studies neglected a pre-existing CGM). As the mechanics of CGM heating by stellar and AGN feedback are currently not well understood, and existing X-ray constraints do not rule out an inflow as in the ICM (see introduction), it is currently unclear which of these three paradigms is more accurate. Since the inflow discussed here is highly subsonic, deviations from hydrostatic equilibrium in the radial direction are small, of order \((t_{\mathrm{cool}}/t_{\mathrm{ff}})^{-2}\), and indeed are neglected in the analytic solution derived in section 2.4. The inflow solution derived here thus satisfies similar hydrostatic equilibrium constraints as static CGM models. However, assuming an inflow also enforces conservation of mass, energy and angular momentum between adjacent shells (eqns. 20-22), and hence the allowed space of inflowing hot CGM solutions is significantly smaller than that of static solutions. 
Specifically, static models have to add another assumption to determine the entropy profile of the hot CGM (e.g., assume a constant \(t_{\mathrm{cool}}/t_{\mathrm{ff}}\) as in Voit et al., 2017 or a constant entropy as in Faerman et al., 2020), while in the inflow solution derived in this study the entropy profile is determined by conservation of energy. Similarly, in the solution described here the rotational velocity profile is constrained by angular momentum conservation, in contrast with weaker constraints based on cosmological accretion considerations in the static solution of Sormani et al. (2018). 

### Accretion via hot inflows versus 'precipitation'

As mentioned in the introduction, accretion onto the Milky Way and nearby spirals is likely dominated either by \(\sim 10^{4}\,\mathrm{K}\) gas clumps or by the hot \(\sim 10^{6}\,\mathrm{K}\) inflows discussed in this work. Since \(\sim 10^{4}\,\mathrm{K}\) gas clumps form via local thermal instability in the hot phase (Fall & Rees, 1985; Maller & Bullock, 2004; Voit et al., 2015; Armillotta et al., 2016) which then lose buoyancy and accrete ('precipitation'), both hot inflows and precipitation originate from the hot phase and would thus be considered 'hot accretion' in the context of the classic distinction between the hot and cold accretion modes (e.g. Nelson et al., 2013). However, in the scenario studied here the CGM inflow remains at \(\sim 10^{6}\,\mathrm{K}\) down to a cylindrical radius \(R_{\mathrm{circ}}\sim 15\,\mathrm{kpc}\) and height of \(\sim\,\mathrm{kpc}\) above the midplane, at which point _all the hot gas_ cools and joins the ISM, rather than just a subset of localized clouds. Hot inflows are thus a type of 'quiet accretion' (Putman et al., 2012) - accretion which becomes accessible to cool gas observations only when it can no longer be distinguished from pre-existing disk gas or from small-scale fountain flows. 
Identification of hot inflows thus requires observing the hot gas directly as discussed in section 5. We note that at outer CGM radii of \(r\gtrsim 100\,\mathrm{kpc}\) a hot inflow is not expected since cooling times are long (section 2.1), so any infall would likely be dominated by cool \(\sim 10^{4}\,\mathrm{K}\) gas. This cool inflow at large radii can potentially join a hot inflow at small CGM radii if the cool clouds are disrupted by hydrodynamic instabilities (e.g., Tan et al., 2023). 

Figure 10: Predicted dispersion measure of the Milky-Way CGM as a function of Galactic coordinates, for the solution derived in this work. The increase towards lower \(b\) is due to higher gas densities near the midplane (eqn. 40). 

### Hot inflows require that the halos of typical \(\sim L^{*}\) spirals are baryon-depleted

As discussed in section 2.1, in steady state we expect a hot CGM inflow rate of \(\dot{M}\approx 0.5\,\)SFR, which is \(\approx 1\,\)M\({}_{\odot}\,\)yr\({}^{-1}\) for the Milky-Way. To achieve this \(\dot{M}\), the CGM density must be lower than expected in a baryon-complete halo. This requirement can be demonstrated by integrating the density profile in eqn. (11) out to the virial radius, and solving for \(\dot{M}\): \[\dot{M}=6.6\frac{M_{\rm CGM}}{10^{11}\,\rm M_{\odot}}\left(\frac{R_{\rm CGM}}{300\,\rm kpc}\right)^{-3}v_{\rm c,200}^{-2}\Lambda_{-22}\,\rm M_{\odot}\,yr^{-1}\,, \tag{52}\] where we assumed a CGM mass \(M_{\rm CGM}=0.16M_{\rm halo}-M_{\rm galaxy}\) with \(M_{\rm halo}=10^{12}\,\rm M_{\odot}\) and a galaxy mass \(M_{\rm galaxy}=6\cdot 10^{10}\,\rm M_{\odot}\), while the CGM size \(R_{\rm CGM}\) is approximated to equal \(R_{\rm vir}\approx 300\,\)kpc. 
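Eqn. (52) can be evaluated directly with the fiducial numbers quoted in the text; the second call below uses the larger CGM size \(R_{\rm CGM}\gtrsim 500\,{\rm kpc}\) suggested by tSZ observations:

```python
# Direct evaluation of eqn. (52) for the hot-inflow accretion rate,
# using the fiducial numbers quoted in the text:
# M_CGM = 0.16 * 1e12 - 6e10 = 1e11 Msun, R_CGM ~ R_vir ~ 300 kpc.

def mdot(M_CGM=1e11, R_CGM=300.0, v_c_200=1.0, Lambda_22=1.0):
    """Accretion rate [Msun/yr] from eqn. (52)."""
    return (6.6 * (M_CGM / 1e11) * (R_CGM / 300.0) ** -3
            * v_c_200 ** -2 * Lambda_22)

# Baryon-complete halo spread to R_vir ~ 300 kpc:
print(f"{mdot():.1f} Msun/yr")             # ~ 6.6, well above the required rate
# Same CGM mass spread over ~500 kpc, as suggested by tSZ measurements:
print(f"{mdot(R_CGM=500.0):.1f} Msun/yr")  # ~ 1.4, consistent with local SFRs
```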
This predicted \(\dot{M}\) is substantially higher than the \(\approx 1\,\rm M_{\odot}\,\)yr\({}^{-1}\) required to sustain star formation in local disks, consistent with the general finding that without feedback the SFR in local \(\sim L^{\star}\) galaxies is overpredicted (e.g., Naab & Ostriker, 2017). It thus follows that for local discs to be fed by hot CGM inflows, gas originally associated with the halo must have expanded beyond \(R_{\rm vir}\). Such an expanded CGM is supported by recent thermal Sunyaev-Zeldovich (tSZ) maps of nearby spirals, which indicate that the baryon budget of the halo is spread over a size of \(R_{\rm CGM}\gtrsim 500\,\)kpc (Bregman et al., 2022). Using this larger observed CGM size in eqn. (52) would suggest \(\dot{M}\lesssim 1.5\,\rm M_{\odot}\,yr^{-1}\), consistent with observed SFRs and hence supporting the scenario that local disk galaxies are fed by hot CGM inflows. There is however an inherent challenge in a scenario where discs are fed by an inflow from an expanded hot CGM. An expanding CGM apparently contradicts an inflowing CGM, and requires feedback heating to dominate radiative cooling, in contrast with the assumption above that radiative cooling dominates. This apparent contradiction can be circumvented if the inflow and expansion are separated either in time or in space. For example it is plausible that feedback was strong at high redshift causing the CGM to expand, and has since subsided so that at low-\(z\) the hot CGM develops an inflow. Such an evolution in feedback strength is predicted by FIRE simulations of Milky-Way mass galaxies (Muratov et al., 2015; Faucher-Giguere, 2018; Stern et al., 2021; Pandya et al., 2021), and is also consistent with stellar winds being strong in \(z\sim 2\) galaxies but weak in typical \(\sim L^{\star}\) galaxies at \(z\sim 0\) (Heckman & Thompson, 2017). 
Alternatively, black hole feedback or stellar feedback which occur in 'bursts' separated by more than \(\sim 1\,\)Gyr would allow a hot inflow to develop between bursts, since this timescale is the cooling time in the inner CGM on which a hot inflow develops (eqn. 11). A third possibility is that feedback mainly heats the outer CGM, allowing the inner CGM to form an inflow. While this may seem counter-intuitive, it is potentially possible if feedback is focused on the rotation axis at small CGM radii and isotropizes only at large CGM radii, thus allowing the hot inner CGM to inflow from the midplane. Another option is that feedback energy is propagated by weak shocks or sound waves which dissipate and dump heat only in the outer CGM. These possibilities for how feedback energy is distributed as a function of space and time have been explored in the context of the ICM (e.g., see recent review in Donahue & Voit, 2022), but less so in the context of the CGM. A thorough test of the viability of these different feedback scenarios would allow an understanding of the conditions under which the hot CGM inflow scenario is viable. 

### Truncation of thin disks at \(R_{\rm cyl}\approx R_{\rm circ}\)

Our results suggest that \(R_{\rm circ}\) corresponds to the 'edge' of galactic disks fed by hot inflows. This is evident from the sharp drop in cool gas mass beyond a cylindrical radius of \(R_{\rm circ}\) (Figs. 1-2), and is a result of hot CGM inflows cooling and accreting only when angular momentum support becomes dominant at radii \(\leq R_{\rm circ}\) (Figs. 3 - 4). While radial motions within the disk could in principle cause disk gas and stars to extend beyond the initial radius of accretion, these effects are not expected to be large in \(\sim L^{\star}\) galaxies at \(z\sim 0\) (Krumholz et al., 2018; Trapp et al., 2022). The prediction of a stellar disk edge at \(R_{\rm cyl}\approx R_{\rm circ}\) can explain the sharp truncations observed in 60 - 80% of thin disc galaxies. 
Such truncations are typically observed at cylindrical radii of \(\approx 3.5-4R_{\rm d}\) where \(R_{\rm d}\) is the disk scale length (Kregel et al., 2002; Comeron et al., 2012; Martin-Navarro et al., 2012)\({}^{4}\). Given that \(3.5R_{\rm d}\approx 0.04R_{\rm vir}\) based on the \(R_{\rm d}-R_{\rm vir}\) relation from Kravtsov (2013), the observed radii of disc edges are consistent with the estimate of \(R_{\rm circ}\sim 0.05R_{\rm vir}\) in eqn. (15), an estimate based on the assumption that the hot CGM spin is similar to the average spin of the dark matter halo. We thus conclude that the hot CGM inflow scenario is supported both by the existence of truncations in thin galactic disks, and by the radii at which truncations are observed. Furthermore, the result that truncations are less prevalent in thick disks (Comeron et al., 2012) suggests that thick disks are fed by a different accretion mechanism, consistent with the result of Hafen et al. (2022) that hot CGM inflows only feed thin galactic disks. 

Footnote 4: The derived fraction of thin disks which exhibit a truncation is based on discs observed edge-on, since at low inclination stellar halos make the truncation harder to observe (Martin-Navarro et al., 2014). 

### Fountain flows and the disk-halo interface

The cooling layer of the solution found above, in which the gas temperature drops from \(\approx 2\cdot 10^{6}\) K to \(\lesssim 10^{4}\) K (e.g., Fig. 1), corresponds to the gas layers surrounding the disk known as the 'disc-halo interface' or 'extraplanar gas' (e.g., Fraternali & Binney, 2008). In this layer accretion from the CGM is expected to mix with fountain flows from the disk driven by feedback, which are not included in our calculation. Thus, the solution derived in this work, and specifically the gas properties at \(t\approx t(T=10^{5}\,\)K) shown in Figs. 3 - 4, provide an outer boundary condition for the disc-halo interface. 
It would be beneficial to add an inflowing hot CGM component to existing disk-halo interface models (e.g., Marasco et al., 2012; Fraternali, 2017). Such an addition could further constrain these models and make them more predictive, based on the mass flux, angular momentum flux, temperature, and pressure as a function of \((R_{\rm cyl},z)\) predicted by the hot CGM solution. This would be especially beneficial given the considerable number of recent surveys which observed this layer, spanning a large range of the electromagnetic spectrum (e.g., Ho et al., 2016; Bizyaev et al., 2017; Levy et al., 2019; Bish et al., 2019, 2021; Reach et al., 2020). Additionally, a combined disk-halo interface + hot CGM model, together with available disk-halo interface observations, may provide further constraints on the properties of the hot CGM. Since the hot CGM solution predicts the pressure at the hot CGM-ISM interface, comparison of this pressure with the measured ISM pressure near the Sun provides a test of the solution's applicability. An analytic estimate of the predicted CGM pressure in the disk plane can be derived from eqn. (40) using \(r=R_{\rm circ}\) and \(\theta=\pi/2\): \[\frac{P_{\rm CGM}(R_{\rm circ},\frac{\pi}{2})}{k_{\rm B}}=2.3n_{\rm H}T=4600\ \dot{M}_{1}^{0.5}v_{\rm c,200}^{3}\Lambda_{-22}^{-0.5}\left(\frac{R_{\rm circ}}{15\,\rm kpc}\right)^{-1.5}\,{\rm cm^{-3}\,K}. \tag{53}\] Similar pressures of \(P/k\sim 2000-5000\,\rm cm^{-3}\,K\) are seen in the hot CGM - ISM interface in the simulation (Fig. 4, \(\theta_{0}=0.3-0.5\pi\) curves at times \(\gtrsim t(T=10^{5}\,\rm K)\)), except near the galaxy center where thermal pressures are higher (\(\theta_{0}=0.1\pi\) curve in the same figure). 
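Eqn. (53) can be evaluated directly for the fiducial parameters (\(\dot{M}_{1}=v_{\rm c,200}=\Lambda_{-22}=1\), \(R_{\rm circ}=15\,{\rm kpc}\)); the ISM value used for comparison is the Jenkins & Tripp (2001) measurement discussed next:

```python
# Predicted CGM pressure at the disk plane, eqn. (53), for fiducial
# parameters, compared with the measured ISM thermal pressure near the Sun.

def p_cgm_over_k(mdot_1=1.0, v_c_200=1.0, lambda_22=1.0, R_circ=15.0):
    """P/k_B [cm^-3 K] at (r, theta) = (R_circ, pi/2), eqn. (53)."""
    return (4600.0 * mdot_1 ** 0.5 * v_c_200 ** 3 * lambda_22 ** -0.5
            * (R_circ / 15.0) ** -1.5)

P_CGM = p_cgm_over_k()
P_ISM = 3800.0  # thermal P/k_B near the Sun (Jenkins & Tripp 2001, eqn. 54)
print(f"P_CGM/k = {P_CGM:.0f} cm^-3 K  vs  P_ISM/k = {P_ISM:.0f} cm^-3 K")
```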
For comparison, the thermal pressure in the ISM was measured by Jenkins & Tripp (2001) using C i absorption lines, who found: \[\frac{P_{\rm ISM}}{k_{\rm B}}(C\,\rm i)=(3800\pm 0.175\,\rm dex)\,\rm cm^{-3}\,K \tag{54}\] To this one should add magnetic field pressure, turbulent pressure and cosmic ray pressure, all of which are comparable to the thermal pressure in the ISM (e.g., Boulares & Cox 1990). The values in equations (53) and (54) are similar (and a somewhat higher ISM pressure is expected due to a vertical pressure gradient in the disk), indicating that the CGM solution derived above is consistent with observed conditions in the Galactic ISM. This result, that a CGM solution with no ongoing feedback roughly predicts the observed ISM pressure, suggests ongoing feedback does not significantly affect conditions in the CGM, at least near the Sun. This conclusion is expected for stellar-driven feedback since outflows are weak in \(\sim L^{\star}\) galaxies at \(z\sim 0\) (e.g., Heckman & Thompson 2017). It also implies that the feedback near the galaxy center evident from the Fermi bubbles has no significant effect on the CGM near the Sun today. The effects of feedback on the CGM near the Galactic center could however be more significant. 

## 7 Summary

In this work we derive an axisymmetric, steady-state solution for hot and rotating CGM inflows, focusing on Milky-Way mass galaxies at \(z\sim 0\). We demonstrate that such accretion flows transition from a quasi-spherical and hot (\(\sim 10^{6}\,\rm K\)) medium largely supported by thermal pressure to a cool (\(\lesssim 10^{4}\,\rm K\)) disk supported by rotation. This cooling occurs at the disc-halo interface, within a cylindrical radius equal to the circularization radius of the flow \(R_{\rm circ}\sim 15\,\rm kpc\) and at heights \(|z|\sim\,\rm kpc\) above the disk. 
Such hot inflows are expected in the inner CGM (\(r\lesssim 50\,\rm kpc\)) if radiative cooling has dominated over feedback heating in the last \(t_{\rm cool}\lesssim 1\,\rm Gyr\). We find both a new analytic solution in the slow rotation limit, which is valid in the limit \((r/R_{\rm circ})^{2}\gg 1\) (section 2.4), and a numerical solution applicable also at \(r\lesssim R_{\rm circ}\) (section 3). These solutions provide an idealized version of the hot CGM inflows identified by Hafen et al. (2022) in FIRE simulations of Milky-Way mass galaxies at \(z\sim 0\). They also provide a basis for adding the effects of unmodelled processes such as AGN feedback, magnetic fields and turbulence (see section 4), and a benchmark for identifying the effects of feedback in cosmological simulations and observations. Our main results can be summarized as follows: * In hot CGM inflows the entire flow cools when it reaches the disc-halo interface, rather than a subset of clouds (Figs. 1-4). Hot inflows thus differ qualitatively from 'precipitation' in which accretion is facilitated by local thermal instability farther out in the halo. * Rotational support induces deviations from spherical symmetry in the density and temperature structure of hot CGM inflows (Fig. 5), qualitatively similar to the rotating but radially-static models of Sormani et al. (2018). These deviations may be detectable with X-ray telescopes (Fig. 8). * The rotation angle traversed prior to accretion in hot CGM inflows is \(\approx t_{\rm cool}/t_{\rm ff}\sim 10\) radians, in contrast with \(\approx 1\) radian in cold flows. Enhancement of magnetic fields and development of turbulence in the hot CGM thus likely depend on the value of \(t_{\rm cool}/t_{\rm ff}\). * Observed SFRs in local spirals constrain typical hot inflow accretion rates to \(\dot{M}\lesssim 1-2\,\rm M_{\odot}\,yr^{-1}\). This requires baryon-depleted CGM, in which the halo baryon budget is spread over \(\sim 2R_{\rm vir}\). 
Evidence for such expanded CGM has been recently reported by Bregman et al. (2022), using tSZ maps around nearby spirals. * Hot CGM inflows with \(\dot{M}\approx 1\,\rm M_{\odot}\,yr^{-1}\) predict \(P/k\sim 5000\,\rm cm^{-3}\,\rm K\) at the disc-halo interface (eqn. 53), similar to the ISM pressure estimated near the Sun. This supports our assumption that feedback today does not significantly affect the hot CGM pressure, at least near the Sun. * Hot inflows predict a sharp disk edge at \(R_{\rm cyl}\approx R_{\rm circ}\sim 0.05R_{\rm vir}\), consistent with the prevalence of observed truncations in local stellar disks at \(\approx\)4 disc scale lengths (e.g., Comeron et al. 2012). * The predicted rotation profile of hot CGM inflows is potentially detectable with kinetic Sunyaev-Zeldovich maps of nearby galaxies with \(\sim 1\mu\)K sensitivity, as planned for CMB-S4 (Fig. 9). * We predict dispersion measures of \(\lesssim 15\,\rm cm^{-3}\,\rm pc\) in the hot Milky-Way CGM, which decrease with Galactic latitude (Fig. 10). These could potentially be detectable with upcoming FRB surveys. 

## Acknowledgements

JS thanks S. Peng Oh and M. Voit for useful discussions. JS was supported by the Israel Science Foundation (grant No. 2584/21). CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-165252; by NASA through grants 17-AIP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; by CXO through grant TM2-32005X; and by the Research Corporation for Science Advancement through a Cottrell Scholar Award. This work was supported in part by a Simons Investigator award from the Simons Foundation (EQ) and by NSF grant AST-2107872. JSB was supported by the National Science Foundation (NSF) grant AST-1910965 and NASA grant 80NSSC22K0827. The computations in this work were run at facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation.
2309.10404
Robust Evidence for the Breakdown of Standard Gravity at Low Acceleration from Statistically Pure Binaries Free of Hidden Companions
It is found that Gaia DR3 binary stars selected with stringent requirements on astrometric measurements and radial velocities naturally satisfy Newtonian dynamics without hidden close companions when projected separation $s \lesssim 2$ kau, showing that pure binaries can be selected. It is then found that pure binaries selected with the same criteria show a systematic deviation from the Newtonian expectation when $s \gtrsim 2$ kau. When both proper motions and parallaxes are required to have precision better than 0.005 and radial velocities better than 0.2, I obtain 2,463 statistically pure binaries within a `clean' $G$-band absolute magnitude range. From this sample, I obtain an observed to Newtonian predicted kinematic acceleration ratio of $\gamma_g=g_{\rm{obs}}/g_{\rm{pred}}=1.49^{+0.21}_{-0.19}$ for acceleration $\lesssim 10^{-10}$ m s$^{-2}$, in excellent agreement with $1.49\pm 0.07$ for a much larger general sample with the amount of hidden close companions self-calibrated. I also investigate the radial profile of stacked sky-projected relative velocities without a deprojection to the 3D space. The observed profile matches the Newtonian predicted profile for $s \lesssim 2$ kau without any free parameters but shows a clear deviation at a larger separation with a significance of $\approx 5.0\sigma$. The projected velocity boost factor for $s\gtrsim 5$ kau is measured to be $\gamma_{v_p} = 1.20\pm 0.06$ (stat) $\pm 0.05$ (sys) matching $\sqrt{\gamma_g}$. Finally, for a small sample of 40 binaries with exceptionally precise radial velocities (fractional error $<0.005$) the directly measured relative velocities in the 3D space also show a boost at larger separations. These results robustly confirm the recently reported gravitational anomaly at low acceleration for a general sample.
Kyu-Hyun Chae
2023-09-19T08:10:09Z
http://arxiv.org/abs/2309.10404v4
Robust Evidence for the Breakdown of Standard Gravity at Low Acceleration from Statistically Pure Binaries Free of Hidden Companions ###### Abstract It is found that Gaia DR3 binary stars selected with stringent requirements on astrometric measurements and radial velocities naturally satisfy Newtonian dynamics without hidden close companions when projected separation \(s\lesssim 2\) kau, showing that pure binaries can be selected. It is then found that pure binaries selected with the same criteria show a systematic deviation from the Newtonian expectation when \(s\gtrsim 2\) kau. When both proper motions and parallaxes are required to have precision better than 0.005 and radial velocities better than 0.2, I obtain 2,463 statistically pure binaries within a 'clean' \(G\)-band absolute magnitude range. From this sample, I obtain an observed to Newtonian predicted kinematic acceleration ratio of \(\gamma_{g}=g_{\rm obs}/g_{\rm pred}=1.49^{+0.21}_{-0.19}\) for acceleration \(\lesssim 10^{-10}\) m s\({}^{-2}\), in excellent agreement with \(1.49\pm 0.07\) for a much larger general sample with the amount of hidden close companions self-calibrated. I also investigate the radial profile of stacked sky-projected relative velocities without a deprojection to the 3D space. The observed profile matches the Newtonian predicted profile for \(s\lesssim 2\) kau without any free parameters but shows a clear deviation at a larger separation with a significance of \(\approx 5.0\sigma\). The projected velocity boost factor for \(s\gtrsim 5\) kau is measured to be \(\gamma_{v_{p}}=1.20\pm 0.06\) (stat) \(\pm 0.05\) (sys) matching \(\sqrt{\gamma_{g}}\). Finally, for a small sample of 40 binaries with exceptionally precise radial velocities (fractional error \(<0.005\)) the directly measured relative velocities in the 3D space also show a boost at larger separations. These results robustly confirm the recently reported gravitational anomaly at low acceleration for a general sample. 
Binary stars (154); Gravitation (661); Modified Newtonian dynamics (1069); Non-standard theories of gravity (1118)

Kyu-Hyun Chae

## 1 Introduction

Wide binaries (widely-separated, long-period, gravitationally-bound binary stars) provide crucial testbeds for probing gravitational dynamics in the low-acceleration regime (e.g., Hernandez et al., 2012; Banik & Zhao, 2018; Pittordis & Sutherland, 2018; Banik & Kroupa, 2019; Pittordis & Sutherland, 2019; Hernandez et al., 2022). A couple of recent studies by Chae (2023) and Hernandez (2023) of wide binary stars selected from Gaia data release 3 (DR3; Vallenari et al., 2023) have reported a gravitational anomaly at low acceleration \(\lesssim 10^{-9}\) m s\({}^{-2}\), or for a sky-projected separation \(s\gtrsim 2\) kau (kilo astronomical unit) for typical binaries with total masses of \(\sim(0.5-2)\)M\({}_{\odot}\). This gravitational anomaly implies a low-acceleration breakdown of both Newtonian dynamics and general relativity and so has immense implications for astrophysics, cosmology, and fundamental physics. Thus, one cannot overemphasize the importance of confirming the claimed anomaly from as many independent studies as possible. Chae (2023) considered wide binaries selected from El-Badry et al. (2021) that are statistically free of both chance-alignment cases and resolved (\(>1^{\prime\prime}\)) triples and higher-order multiples. Because of the initial selection, additional quality cuts, and the availability of dust extinction correction, Chae (2023) used only up to \(26,615\) wide binaries within 200 pc. Chae (2023) then self-calibrated the occurrence rate (\(f_{\rm multi}\)) of triples and higher-order multiples hosting hidden (i.e.
unresolved) close companions by requiring that binaries must satisfy Newtonian dynamics at a close enough separation, or at a high enough acceleration \(\gtrsim 10^{-8}\) m s\({}^{-2}\), as predicted by all currently available plausible theories including modified Newtonian dynamics (MOND; Milgrom, 1983). Chae (2023) also paid a particular attention to projection effects and employed a Monte Carlo (MC) method to deproject measured sky-projected relative velocities \(v_{p}\) to the three-dimensional (3D) space physical velocities \(v\), and compared a kinematic acceleration \(v^{2}/r\) with the corresponding Newtonian prediction. Chae (2023) obtained up to a \(10\sigma\) significance for the gravitational anomaly based on MC analyses. Moreover, the magnitude and trend of the anomaly matched well the prediction of MOND-type modified gravity such as AQUAL (Bekenstein & Milgrom, 1984) and QUMOND (Milgrom, 2010) under the external field effect (EFE) of the Milky Way. Hernandez (2023) took a different approach that tried to remove all cases of both chance alignments and hierarchical multiples. Because Hernandez (2023) applied various strict cuts, his final sample includes only 450 pure binaries in the distance range \(d<125\) pc or \(125<d<170\) pc. Hernandez (2023) calculated the dispersion of one-dimensional velocity components on the plane of the sky and compared it with the Newtonian prediction by Jiang & Tremaine (2010). Hernandez (2023) checked that small-separation (\(s\lesssim 2\) kau) systems matched the Newtonian prediction indicating that kinematic contaminants are indeed negligible and Newtonian dynamics holds in the high-acceleration regime. Then, he found that the observed sky-projected velocities systematically deviated from the Newtonian expectation at large-separation (\(s\gtrsim 2\) kau) systems. 
Because the sample size was small and the observed kinematics of the binaries was compared with simulations for other binaries, Hernandez (2023) did not quantify a statistical significance of the anomaly seen at \(s\gtrsim 2\) kau. Nevertheless, the final result from Hernandez (2023) agreed with that of Chae (2023). Unlike Chae (2023) and Hernandez (2023), another recent study by Pittordis & Sutherland (2023) based on a Gaia database considered only low-acceleration binaries and thus could not calibrate \(f_{\rm multi}\) among their wide binaries. Their analysis was also compounded by their inclusion of chance-alignment cases. Pittordis & Sutherland (2023) erroneously concluded that Newtonian dynamics matched their low-acceleration data with their "fitted" \(f_{\rm multi}\) (without a proper calibration). As Chae (2023) demonstrated, Pittordis & Sutherland (2023) conclusion is not surprising because the dynamics uncovered by Chae (2023) is pseudo-Newtonian with a rescaling of Newton's constant \(G\to 1.4G\) and this boost can be canceled by a higher \(f_{\rm multi}\) (without a proper calibration). Just recently, Banik et al. (2023) argued that Newtonian dynamics was preferred over MOND (using specifically the QUMOND model) based on a method similar to Pittordis & Sutherland (2023). They did not include the Newtonian regime (\(\lesssim 2\) kau) data to calibrate \(f_{\rm multi}\) but claimed that gravity and \(f_{\rm multi}\) could be simultaneously constrained based only on data from the Newton-MOND transition and MOND regimes (see below). They obtained a high value of \(f_{\rm multi}\approx 0.70\) for their preferred Newtonian model although their sample included only binaries passing a strict cut on relative velocities. 
Their value of \(f_{\rm multi}\approx 0.70\) is implausibly high compared with the observed range \(0.3\lesssim f_{\rm multi}\lesssim 0.5\) (e.g., Raghavan et al., 2010; Riddle et al., 2015; Moe & Stefano, 2017) even for general binary samples without kinematic cuts. Moreover, I note that binaries selected by strict kinematic cuts have significantly lower \(f_{\rm multi}\) than that for a general sample. I will further discuss their sample selection, analyses, and results at relevant places (see in particular appendices). Figure 1 shows an AQUAL numerical (Chae & Milgrom, 2022) prediction on the radial acceleration for circular orbits under the EFE of the Milky Way that approximately matches the AQUAL analytic asymptotic limit at an average inclination of the external field. Due to the strong external field of the Milky Way, internal dynamics is expected to switch from the Newtonian regime to the pseudo-Newtonian regime with a boosted Newton's constant. As shown in Figure 1, most of the transition is expected to occur abruptly in the narrow acceleration range of \(-9.6\lesssim\log_{10}(g_{\rm N}/{\rm m\ s}^{-2})\lesssim-8.8\). As shown in Figure 2, for typical Gaia wide binaries with a total mass of \(\approx 1.4M_{\odot}\) the transition acceleration range corresponds approximately to the sky-projected separation range of \(2\lesssim s\lesssim 5\) kau (kilo astronomical units). This MOND prediction was supported by the two published analyses (Chae, 2023; Hernandez, 2023) and is intended to be further tested in this study. Here I consider a new analysis that is complementary to Chae (2023) and Hernandez (2023) and can provide a robust test of gravity. The analysis of Chae (2023) involved a complex chain of steps with various observational inputs for a general sample and obtained a maximal statistical power. The complexity arises largely from modeling the kinematic effects of hidden close companions with currently available observational inputs.
As \(f_{\rm multi}\to 0\), the complexity is gone and thus any possibilities of systematic errors involving close companions can be removed at the cost of losing statistical power. The question is whether a statistically significant sample of \(f_{\rm multi}=0\) can be selected in a systematic and verifiable way so that the gravitational anomaly can be tested robustly and with a sufficient statistical power. Hernandez (2023) obtained a sample that was supposed to be largely free of hierarchical systems but his analysis was limited in two ways. Hernandez (2023) did not carry out a Monte Carlo simulation or any other statistical procedure for his sample to quantify the statistical significance of the observed anomaly (although a comparison with an independent simulation by Jiang & Tremaine (2010) was made). Also, each sample defined by Hernandez (2023) seems too small with just 450 binaries. Here I obtain a much larger sample of 2,463 wide binaries of \(f_{\rm multi}=0\) in a systematic and verifiable way. Then, I test gravity in an acceleration plane with the algorithm developed in Chae (2023) with \(f_{\rm multi}=0\). More importantly, I investigate stacked velocity profiles with an MC simulation to do a quantitative statistical test. In this way, the present work will complement both Chae (2023) and Hernandez (2023). I will show that the results from pure binaries agree excellently with those of Chae (2023) reaffirming the validity of the procedure and conclusion of Chae (2023). The structure of this paper is as follows. Section 2 describes how a sample of pure binaries can be selected in a systematic way. Section 3 describes a Monte Carlo modeling of pure binary stars. Section 4 presents the results. Section 5 discusses any possible source of systematic errors. In Section 6, I offer the conclusion and discuss future works. In Appendix A, I describe a correction to Chae (2023) and revise the representative results.
In Appendix B, I consider some kinematic quality cuts on general samples of wide binaries and their effects on inference on gravity. Python scripts used for this work and the sample of pure binaries can be accessed at Zenodo: doi:10.5281/zenodo.8416435. Figure 1: This figure shows an AQUAL numerical (Chae & Milgrom, 2022) prediction on the internal gravitational field for circular orbits under the external field of the Milky Way as estimated in Chae (2023). The numerical acceleration matches well the analytic asymptotic value when the external field is inclined at an average angle of \(60^{\circ}\) from the orbital axis. Internal dynamics is expected to switch from the Newtonian regime to the MOND regime over the narrow acceleration range indicated by the cyan-colored band. Figure 2: This figure shows the range of sky-projected separation \(s\) corresponding to the Newton-MOND transition range in acceleration shown in Figure 1. The inset shows the Keplerian prediction on the range of orbital periods for Gaia wide binaries used in Chae (2023) and this study. ## 2 A Systematic Selection of Pure Binaries Following Chae (2023), I work with the catalog of one million candidate binaries derived by El-Badry et al. (2021) from Gaia DR3 astrometric measurements. This catalog provides estimated values of chance-alignment probability (\(\mathcal{R}\)) so that chance-alignment cases can be effectively excluded. The catalog also excludes triples and higher-order multiples whose components are all resolved by \(>1^{\prime\prime}\). Thus, by requiring \(\mathcal{R}<0.01\) (or something similar), one can choose binaries that may include only unresolved close companions. 
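As a minimal sketch of this initial selection in code form (the column names here are illustrative placeholders, not the actual schema of the El-Badry et al. (2021) catalog):

```python
import pandas as pd

def select_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """Initial cuts described in the text: chance-alignment probability
    R < 0.01, distance < 200 pc, projected separation 0.2 < s < 30 kau,
    and both components in the 'clean' absolute-magnitude range
    4 < M_G < 14 (dust-extinction corrected)."""
    mask = (
        (df["R_chance_align"] < 0.01)         # statistically free of chance alignments
        & (df["distance_pc"] < 200.0)         # within 200 pc of the Sun
        & df["sep_kau"].between(0.2, 30.0)    # sky-projected separation range
        & df["M_G_A"].between(4.0, 14.0)      # brighter component
        & df["M_G_B"].between(4.0, 14.0)      # fainter component
    )
    return df[mask]
```

The stricter astrometric and radial-velocity requirements that define the pure-binary sample are applied on top of this base selection.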
Specifically, I consider 26,615 binaries with \(\mathcal{R}<0.01\) within 200 pc from the Sun whose components have sky-projected separation (\(s\)) in the range \(0.2<s<30\) kau and have absolute magnitudes in the 'clean range' \(4<M_{G}<14\) defined by Chae (2023) where \(M_{G}\) is the dust-extinction-corrected absolute magnitude in the Gaia \(G\) band. As Chae (2023) showed, samples of binaries selected by a precision threshold of 0.01 or 0.005 imposed on the measured proper motions (PMs) require \(f_{\rm multi}\gtrsim 0.3\) to statistically satisfy Newtonian dynamics at high enough accelerations \(\gtrsim 10^{-8}\) m s\({}^{-2}\). Because binaries must satisfy Newtonian dynamics at high enough accelerations, binaries selected in a systematic way may be regarded as a sample of pure binaries statistically free of hidden close companions if they require \(f_{\rm multi}=0\) to match Newtonian dynamics at accelerations \(\gtrsim 10^{-8}\) m s\({}^{-2}\). For this procedure to be valid, masses of individual stars must be reliably known. Fortunately, Chae (2023) provides a couple of reliable magnitude-mass relations (see figure 7 and table 1 of Chae (2023)) in the Gaia \(G\) band. In search of a sample of pure binaries satisfying \(f_{\rm multi}=0\), various selection criteria have been tried guided by observational and simulation studies (e.g., Belokurov et al., 2020; Penoyre et al., 2022). If a close undetected companion is present, it can have various effects. First, the image may not be well modeled so that Gaia-derived ruwe values may be significantly larger than 1. Second, because the close companion induces additional motions, the measured uncertainties of parallaxes and proper motions (PMs) will become larger. Third, the additional motions will also increase the measurement uncertainties of radial velocities. It turns out that a sample of pure binaries can be obtained by requirements on both astrometric measurements and radial velocities.
The requirements can be specified as follows. Throughout, the brighter (more massive) star is referred to as component A and the fainter star as component B.

1. Reported values of ruwe for both components are smaller than 1.2.
2. Both PM components and parallaxes (thus distances) have relative (i.e. fractional) measurement errors smaller than \(\varepsilon\) with \(\varepsilon\leq 0.005\).
3. Distances of the two components agree within \(2\sigma\) uncertainties and a maximum possible difference from the elliptical orbit \(\Delta d^{\rm max}_{\rm orbit}\), i.e., \[|d_{\rm A}-d_{\rm B}|<\sqrt{4(\sigma_{d_{\rm A}}^{2}+\sigma_{d_{\rm B}}^{2})+(\Delta d^{\rm max}_{\rm orbit})^{2}},\] (1) where \(\Delta d^{\rm max}_{\rm orbit}=6s\) from a 99% statistical limit from random inclinations and orbital phases for elliptical orbits of observational eccentricities.
4. Radial velocities of both components have relative measurement errors smaller than 0.2. In other words, we take only binaries whose components have radial velocities with \(S/N>5\).
5. Radial velocities of the two components agree within \(2\sigma\) uncertainties and a maximum possible difference from the elliptical orbit \(\Delta v^{\rm max}_{r,\rm orbit}\), i.e., \[|v_{r,\rm A}-v_{r,\rm B}|<\sqrt{4(\sigma_{v_{r,\rm A}}^{2}+\sigma_{v_{r,\rm B}}^{2})+(\Delta v^{\rm max}_{r,\rm orbit})^{2}}\] (2) with \[\Delta v^{\rm max}_{r,\rm orbit}=0.9419\ {\rm km\ s^{-1}}\sqrt{\frac{M_{\rm tot}}{s}}\times 1.3\times 1.2,\] (3) where \(M_{\rm tot}\) is the total mass of the binary system in units of solar mass (M\({}_{\odot}\)) and \(s\) is the sky-projected separation between the two components in kau. The factor 1.3 represents a maximum possible value (see Section 3 below) arising from random inclinations and orbital phases for elliptical orbits of observational eccentricities.
The last factor 1.2 allows for a possible boost of velocity in MOND-type modified gravity theories so as not to preclude such theories, though it is practically not important because other uncertainties are larger. In the above, the third and fifth requirements guarantee that the two stars form a true binary system. When \(\varepsilon=0.005\) is used, the total number of pure binary systems is \(N_{\rm tot}=2463\). The distribution of masses of the selected binaries can be found in Figure 3. The total masses are in the range \(0.5\lesssim M_{\rm tot}/{\rm M}_{\odot}\lesssim 2.1\) with a mass ratio \(M_{B}/M_{A}\geq 0.5\) for 88% of the systems. The mean (median) mass for the entire sample is 1.35 (1.36) M\({}_{\odot}\), and the binned mean varies only mildly with \(s\) as the right panel of Figure 3 shows. I also consider a subsample within a narrow range of total mass \(1.1<M_{\rm tot}/{\rm M}_{\odot}<1.8\). This subsample has a mean (median) mass of 1.44 (1.43) \({\rm M}_{\odot}\) and its binned mean does not vary with \(s\). Since the mean mass varies little or not at all with \(s\) in the entire sample or the subsample, it is possible to investigate a stacked radial velocity profile. This is important because a statistical analysis of velocity profiles will be a main part of this work. For the observed right ascension (\(\alpha\)) and declination (\(\delta\)) components of the PMs in a binary, \((\mu^{*}_{\alpha,A},\mu_{\delta,A})\) and \((\mu^{*}_{\alpha,B},\mu_{\delta,B})\) (Footnote 1: Here \(\mu^{*}_{\alpha}\equiv\mu_{\alpha}\cos\delta\) for PM component \(\mu_{\alpha}\)), along with the accurately and precisely measured distances \(d_{A}\) and \(d_{B}\), the magnitude of the plane-of-sky relative velocity \(v_{p}\) is given by \[v_{p}=\left[(\mu^{*}_{\alpha,A}d_{A}-\mu^{*}_{\alpha,B}d_{B})^{2}+(\mu_{\delta,A}d_{A}-\mu_{\delta,B}d_{B})^{2}\right]^{1/2}. 
\tag{4}\] For Equation (4) to be used for actual data, the precision of \(d_{A}\) and \(d_{B}\) must be extremely good to prevent a spurious boost of \(v_{p}\) in some systems caused by random measurement errors. Thus, in practice it is more accurate to use \[v_{p}=4.7404\times 10^{-3}\ {\rm km\ s^{-1}}\times\Delta\mu\times d \tag{5}\] where \(d\) is a representative distance in pc to the binary system, and \[\Delta\mu=\left[(\mu^{*}_{\alpha,A}-\mu^{*}_{\alpha,B})^{2}+(\mu_{\delta,A}-\mu_{\delta,B})^{2}\right]^{1/2}, \tag{6}\] with all PM values given in units of mas yr\({}^{-1}\). For \(d\) I take an error-weighted mean (\(d_{M}\)) of \(d_{A}\) and \(d_{B}\). Tests with the Gaia sample show that velocities estimated with Equations (4) and (5) are statistically equivalent only when the precision of distances is better than \(\varepsilon\approx 0.002\). In this work Equation (5) will be used because up to \(\varepsilon\approx 0.005\) is considered. Note also that because the distance range is \(9\lesssim d/{\rm pc}\lesssim 200\) (Figure 4) and \(8\times 10^{-6}\lesssim s/d\lesssim 2.8\times 10^{-3}\) with a median of \(5\times 10^{-5}\), it is sufficient to assume a plane geometry for the sky region of a binary system. The uncertainty of the PM magnitude (Equation (6)) is estimated following El-Badry et al. (2021) as \[\sigma_{\Delta\mu}=\left[(\sigma_{\mu^{*}_{\alpha,A}}^{2}+\sigma_{\mu^{*}_{\alpha,B}}^{2})(\Delta\mu_{\alpha})^{2}+(\sigma_{\mu_{\delta,A}}^{2}+\sigma_{\mu_{\delta,B}}^{2})(\Delta\mu_{\delta})^{2}\right]^{1/2}/\Delta\mu, \tag{7}\] where \[\begin{split}(\Delta\mu_{\alpha})^{2}&=\ (\mu_{\alpha,A}^{*}-\mu_{\alpha,B}^{*})^{2},\\ (\Delta\mu_{\delta})^{2}&=\ (\mu_{\delta,A}-\mu_{\delta,B})^{2}.\end{split} \tag{8}\] The uncertainty of the sky-projected velocity is given by \[\sigma_{v_{p}}=4.7404\times 10^{-3}\ \mathrm{km\ s^{-1}}\times\sigma_{\Delta\mu}\times d. \tag{9}\] The normalized velocity parameter \(\tilde{v}\) (Banik & Zhao, 2018) and its uncertainty are given by \[\begin{split}\tilde{v}&\equiv\ v_{p}/v_{c},\\ \sigma_{\tilde{v}}&\equiv\ \tilde{v}\sqrt{\left(\frac{\sigma_{v_{p}}}{v_{p}}\right)^{2}+\left(\frac{\sigma_{v_{c}}}{v_{c}}\right)^{2}},\end{split} \tag{10}\] where \(v_{c}\equiv\sqrt{GM_{\rm tot}/s}\) is the Newtonian circular velocity defined at the projected separation \(s\) (Banik & Zhao, 2018) and I take \(\sigma_{v_{c}}/v_{c}=0.05\) assuming a total mass uncertainty of 10%. The uncertainties of \(v_{p}\) and \(\tilde{v}\) are introduced here as additional means to check/control data quality. Figure 5 shows the distributions of the estimated uncertainties of \(v_{p}\) and \(\tilde{v}\) for the pure binaries of the main sample shown in Figure 3. Virtually all pure binaries selected with strict astrometric and kinematic criteria have good signal-to-noise (\(S/N\gtrsim 3\)) with a median of about 40 for \(v_{p}\). Banik et al. (2023) advocate a cut based on \(\sigma_{\tilde{v}}<0.1\max(1,\tilde{v}/2)\). Figure 5 shows that this cut is already satisfied by virtually all pure binaries.

Figure 4: This figure shows the distribution of \(d_{M}\) (the error-weighted mean distance) for the sample of statistically pure binaries shown in Figure 3. Two distance bins are indicated by the vertical dashed lines. These bins will be used to probe any possible systematic effect of distances.

Figure 3: The left panel shows distributions of individual masses and total masses in 2,463 pure binaries of the main sample defined in the text. The dashed black vertical lines define a narrow mass range of total masses \(1.1<M_{\rm tot}/{\rm M}_{\odot}<1.8\). The right panel shows the mean masses in the 6 bins defined by sky-projected separation \(s\) as indicated by vertical red dashed lines. These 6 bins will be used in the analyses of sky-projected velocities.
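The velocity estimate of Equations (5), (6) and (9) can be sketched in code as follows (function names are mine, not from the released scripts):

```python
import numpy as np

# km/s produced by 1 mas/yr of relative PM at 1 pc; Eqs. (5) and (9)
KMS_PER_MASYR_PC = 4.7404e-3

def sky_projected_velocity(pm_ra_A, pm_dec_A, pm_ra_B, pm_dec_B, d_pc):
    """Eq. (6) for the PM difference and Eq. (5) for v_p. The alpha
    components are assumed to already include the cos(delta) factor,
    i.e. they are the mu*_alpha of Footnote 1; all PMs in mas/yr."""
    dmu = np.hypot(pm_ra_A - pm_ra_B, pm_dec_A - pm_dec_B)  # Eq. (6)
    return KMS_PER_MASYR_PC * dmu * d_pc                    # Eq. (5), km/s

def sigma_vp(sigma_dmu, d_pc):
    """Eq. (9): uncertainty of the sky-projected velocity, with
    sigma_dmu propagated from the PM errors as in Eq. (7)."""
    return KMS_PER_MASYR_PC * sigma_dmu * d_pc
```

For example, a relative PM of 1 mas yr\({}^{-1}\) at 100 pc corresponds to \(v_p\approx 0.474\) km s\({}^{-1}\).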
Thus, for the pure binary sample I will not consider any artificial cut using either \(\sigma_{v_{p}}/v_{p}\) or \(\sigma_{\tilde{v}}\). In Appendix B, I will discuss the effects of cuts in general samples. Gaia DR3 radial velocities typically have much less precision than PMs. Thus, most radial velocities cannot be used to measure the relative radial velocity \(v_{r}\) between the two components because large random errors in individual radial velocities can create spurious boosts in many systems. However, for exceptionally precise radial velocities with the measurement precision comparable to that of \(\Delta\mu\) (Equation (6)), two relative velocities \(v_{p}\) and \(v_{r}\) can be combined to reliably estimate the relative physical velocity between the two stars \[v=\sqrt{v_{p}^{2}+v_{r}^{2}}. \tag{11}\] Considering that all PM components have relative errors \(<\varepsilon\), the relative error of \(v_{p,i}\) (\(i=\alpha,\delta\))2 is \(<\sqrt{2}\varepsilon\) for \(\varepsilon=0.005\). To require that the relative error of \(v_{r}\) is comparable with that of \(v_{p,i}\), I require relative errors of individual radial velocities \(<0.005\). Finally, I note that gravitational redshifts (El-Badry, 2022) from the surface gravities of the stars are irrelevant for the stars used in this work because stellar mass-to-radius ratio \(M/R\) varies little for the mass range \(0.1\lesssim M/\mathrm{M_{\odot}}\lesssim 1\)(Demory et al., 2009). Footnote 2: For most confirmed binary systems the two stars can be regarded at the same distance, so the distance may be treated as a constant. ## 3 A Monte Carlo Method of Testing Gravity with Stacked Velocity Profiles of Pure Binaries Testing gravity in an acceleration plane with pure binaries will be done using the algorithm of Chae (2023) with \(f_{\mathrm{multi}}=0\). Here I describe a Monte Carlo method of testing gravity with stacked velocity profiles of pure binaries. 
The description of elliptical orbits will be the same as that of Chae (2023). The pertinent question is how to predict the sky-projected relative velocity \(v_{p}(s)\) and the physical relative velocity \(v(s)\) between the two stars with a sky-projected separation \(s\) that can be compared with the observed velocities, Equations (5) and (11).

Figure 5: The upper panel shows the distribution of \(\sigma_{\tilde{v}}\) (Equation (10)) while the bottom panel shows the distribution of the fractional uncertainty of the sky-projected velocity \(\sigma_{v_{p}}/v_{p}\) for the main sample of 2,463 pure binaries. The upper panel indicates the cut similar to that suggested by Banik et al. (2023). Note that virtually all pure binaries satisfy the Banik et al. (2023) cut and \(S/N>2\) for the sky-projected velocity.

Figure 6 shows an equivalent one-body description of the elliptical orbit of binary dynamics taken from Chae (2023). The orbit is described in the plane polar coordinates \((r,\phi)\) by the equation \[r=\frac{a(1-e^{2})}{1+e\cos(\phi-\phi_{0})}, \tag{12}\] where \(e\) is the eccentricity, \(a\) is the semi-major axis, and \(\phi_{0}\) is the longitude of the periastron. In Newtonian dynamics, the magnitude of the relative physical velocity between the two stars is given by \[v(r)=\sqrt{\frac{GM_{\rm tot}}{r}\left(2-\frac{r}{a}\right)}. \tag{13}\] Physical separation \(r\) is related to the sky-projected separation \(s\) by \[s=r\sqrt{1-\sin^{2}i\sin^{2}\phi}, \tag{14}\] where \(i\) is the inclination and \(\phi\) is the azimuthal angle of the physical separation vector on the orbital plane as shown in Figure 6.
Combining Equations (12), (13), and (14), we can express the magnitude of the relative physical velocity as a function of \(s\), \[v(s)=0.9419\ {\rm km\ s}^{-1}\sqrt{\frac{M_{\rm tot}/M_{\odot}}{s/{\rm kau}}}\sqrt{\sqrt{1-\sin^{2}i\sin^{2}\phi}\left(2-\frac{1-e^{2}}{1+e\cos(\phi-\phi_{0})}\right)}. \tag{15}\] The magnitude of the sky-projected velocity to the observer is given by \[v_{p}(s)=v(s)\sqrt{1-\sin^{2}i\sin^{2}\psi}, \tag{16}\] where \[\psi=\tan^{-1}\left(-\frac{\cos\phi+e\cos\phi_{0}}{\sin\phi+e\sin\phi_{0}}\right)+\pi. \tag{17}\] (I note that the factor \(\pi\) is physically irrelevant and added to match the definition given in Figure 6 exactly.) For the observed set of \((M_{\rm tot},s)\), one MC realization of Newtonian velocities of Equations (15) and (16) follows from MC realizations of \(\phi_{0}\), \(\phi\), \(i\) and \(e\). Because possible ranges of these parameters are broad, the predictions for one binary system cannot be meaningfully compared with the observed velocities to test gravity. However, if a number of binary systems are considered simultaneously, the individual random fluctuations are averaged out and thus the mean of the predictions can be meaningfully compared with the mean of the observed velocities. Moreover, if MC realizations are repeated many times, one can derive the probability distribution of the mean in a sample and thus estimate its statistical uncertainty. This procedure allows one to test gravity in a quantitative way. MC realizations of \(\phi_{0}\), \(\phi\), \(i\) and \(e\) follow those described in Chae (2023).
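Equations (15)-(17) can be transcribed directly (a sketch; `np.arctan2` is used for the inverse tangent, which fixes the quadrant — the added \(\pi\) in Eq. (17) is physically irrelevant since only \(\sin^{2}\psi\) enters Eq. (16)):

```python
import numpy as np

V0 = 0.9419  # km/s, for M_tot in Msun and s in kau

def newtonian_velocities(M_tot, s_kau, e, inc, phi, phi0):
    """Eqns (15)-(17): relative 3D speed v(s) and its sky projection
    v_p(s) for a Keplerian orbit of eccentricity e, seen at inclination
    inc, at orbital azimuth phi with periastron longitude phi0
    (all angles in radians)."""
    proj = np.sqrt(1.0 - np.sin(inc)**2 * np.sin(phi)**2)       # s/r, Eq. (14)
    r_over_a = (1.0 - e**2) / (1.0 + e * np.cos(phi - phi0))    # Eq. (12)
    v = V0 * np.sqrt(M_tot / s_kau) * np.sqrt(proj * (2.0 - r_over_a))  # Eq. (15)
    psi = np.arctan2(-(np.cos(phi) + e * np.cos(phi0)),
                     np.sin(phi) + e * np.sin(phi0)) + np.pi    # Eq. (17)
    v_p = v * np.sqrt(1.0 - np.sin(inc)**2 * np.sin(psi)**2)    # Eq. (16)
    return v, v_p
```

As a sanity check, a circular orbit (\(e=0\)) seen face-on (\(i=0\)) returns the circular speed \(0.9419\sqrt{M_{\rm tot}/s}\) km s\({}^{-1}\) with \(v_{p}=v\).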
They can be summarized as follows: (1) \(\phi_{0}\) is drawn randomly from the range \((0,2\pi)\); (2) \(\phi\) comes from the time along the orbit randomly drawn from \((0,T)\) where \(T\) is the period; (3) \(i\) is randomly drawn from \((0,\pi/2)\) with a probability density \(\sin(i)\); finally, (4) \(e\) is drawn from the individual ranges provided by Hwang et al. (2022) as shown in figure 8 of Chae (2023). Note particularly that the eccentricity distribution for each binary is specified by three values: the most likely value (\(e_{m}\)), a lower-bound value (\(e_{l}\)), and an upper-bound value (\(e_{u}\)). In an MC, eccentricity is drawn using a combination of two truncated Gaussian functions: \(e_{m}\) is taken as the median and each side is assumed to follow a truncated Gaussian function with a "\(\sigma\)" of \(e_{u}-e_{m}\) or \(e_{m}-e_{l}\) with the total range bounded by the limit \(0.001<e<0.999\). I also consider a power-law distribution \(p(e;\alpha)=(1+\alpha)e^{\alpha}\) with a systematically varying \[\alpha=-5.123+4.612x-1.098x^{2}+0.08748x^{3} \tag{18}\] with \(x\equiv\log_{10}(s/\text{au})\) based on figure 7 and table 1 of Hwang et al. (2022). Equation (18) is valid for the range \(2\lesssim x\lesssim 4.5\). For further details, the reader is referred to Chae (2023).

Figure 6: (Adapted from Chae (2023)) The left panel shows a one-particle equivalent description of orbital motions of the two stars in a binary system. The right panel defines the observer's viewpoint at an inclination \(i\).

## 4 Results

The sample of 2,463 pure binaries defined in Section 2 was obtained from a systematic investigation using the code developed in Chae (2023) and revised as described in Appendix A. Various samples defined in Chae (2023) always had \(f_{\text{multi}}>0\) when binary motions were fitted to the Newtonian expectation in the high-acceleration regime \(\gtrsim 10^{-8}\ \text{m s}^{-2}\).
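The parameter draws of \(\phi_{0}\), \(i\) and the power-law eccentricity of Equation (18) can be sketched via inverse-CDF sampling (\(\phi\) is omitted here since it requires solving the Kepler equation for a random time in \((0,T)\); function names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_of_s(s_au):
    """Eq. (18): power-law eccentricity index vs separation,
    valid for 2 <~ log10(s/au) <~ 4.5 (Hwang et al. 2022)."""
    x = np.log10(s_au)
    return -5.123 + 4.612*x - 1.098*x**2 + 0.08748*x**3

def draw_orbit_params(s_au, n):
    """One MC realization of (phi0, i, e) for n binaries at separation s_au.
    p(i) = sin(i) on (0, pi/2) inverts to i = arccos(1 - u);
    p(e) = (1+alpha) e^alpha on (0, 1) inverts to e = u**(1/(1+alpha))."""
    phi0 = rng.uniform(0.0, 2.0*np.pi, n)
    i = np.arccos(1.0 - rng.uniform(0.0, 1.0, n))
    a = alpha_of_s(s_au)
    e = rng.uniform(0.0, 1.0, n) ** (1.0 / (1.0 + a))
    return phi0, i, e
```

For \(p(e;\alpha)=(1+\alpha)e^{\alpha}\) the mean eccentricity is \((1+\alpha)/(2+\alpha)\), which at \(s\approx 3\) kau (\(\alpha\approx 1.3\)) gives \(\langle e\rangle\approx 0.70\), i.e. wider pairs are drawn with systematically higher eccentricities.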
This means that generally defined samples always include undetected close companions as widely appreciated. Also, the values of \(f_{\text{multi}}\approx 0.2-0.5\) obtained in Chae (2023) with the revision of Appendix A agree well with the results from various surveys (Raghavan et al., 2010; Tokovinin, 2014; Riddle et al., 2015; Moe and Stefano, 2017) indicating that the calibration procedure of Chae (2023) is reliable. In Section 4.1, I show the results in the acceleration plane for the sample of pure binaries obtained with the code of Chae (2023). The results clearly show that binary motions match the Newtonian expectation with \(f_{\text{multi}}=0\) in the high-acceleration regime. The results will then provide a new test of gravity in the low-acceleration regime. In Section 4.2, I present the main results of this work, i.e., the stacked velocity profiles compared with the MC predictions of Newtonian gravity. ### Results on the acceleration plane Figure 7 shows one MC result for the kinematic acceleration \(g\equiv v^{2}/r\) defined in Chae (2023) against the Newtonian gravitational acceleration \(g_{\text{N}}\equiv GM_{\text{tot}}/r^{2}\) for the main sample of 2,463 pure binaries. Here stellar masses are estimated through the standard magnitude-mass (\(M_{G}\)-\(M\)) relation (the first choice in table 1 of Chae (2023)) that is based on the Pecaut and Mamajek (2013)\(V\)-band magnitude-mass relation. The Newtonian expectation of the \(g_{\text{N}}\)-\(g\) relation is compared with that for the Gaia data. In particular, the median orthogonal deviations \(\langle\Delta_{\perp}\rangle\) in the acceleration bins are quantitatively compared, as shown in the bottom panels of Figure 7. Another MC gives different \(\langle\Delta_{\perp}\rangle\) values, and probability distributions of \(\langle\Delta_{\perp}\rangle\) can be derived from a number of MC results as demonstrated in Chae (2023). Figure 8 shows the distributions from 400 MC results. 
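As a sketch of the comparison statistic used here (my reading of \(\langle\Delta_{\perp}\rangle\): the median signed orthogonal distance of the \((\log_{10}g_{\rm N},\log_{10}g)\) points from the one-to-one Newtonian line, positive above the line):

```python
import numpy as np

def median_orthogonal_deviation(log_gN, log_g):
    """Median orthogonal deviation <Delta_perp> of points in the
    (log10 g_N, log10 g) plane from the line y = x, where
    g = v^2/r is the kinematic acceleration and g_N = G M_tot / r^2
    the Newtonian gravitational acceleration."""
    delta_perp = (np.asarray(log_g) - np.asarray(log_gN)) / np.sqrt(2.0)
    return np.median(delta_perp)
```

With this convention, \(\delta_{\rm obs-newt}\) is the difference of \(\langle\Delta_{\perp}\rangle\) between the observed sample and its Newtonian MC counterpart in a given acceleration bin, and a uniform boost \(G\to 1.4G\) would shift \(\langle\Delta_{\perp}\rangle\) by \(\log_{10}1.4/\sqrt{2}\approx 0.10\).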
Those shown in the left column are from the MC results with the standard \(M_{G}\)-\(M\) relation while those in the right are with the \(J\)-band based \(M_{G}\)-\(M\) relation (the second choice in table 1 of Chae (2023)). It is clearly seen that the Gaia result naturally matches the Newtonian expectation in the highest acceleration bin at \(x_{0}\approx-8\) with \(f_{\text{multi}}=0\) (i.e. without any calibration) whichever \(M_{G}\)-\(M\) relation is used. The parameter \(\delta_{\text{obs-newt}}\equiv\langle\Delta_{\perp}\rangle_{\text{observed}}-\langle\Delta_{\perp}\rangle_{\text{Newton}}\) defined in paper I has values of \(\delta_{\text{obs-newt}}=-0.004\pm 0.019\) and \(-0.009\pm 0.021\) at \(x_{0}\approx-8\), well consistent with zero. This agreement is remarkable considering that there are no free parameters. To check that the above agreement with the Newtonian expectation at \(x_{0}\approx-8\) in the main sample is genuine rather than a coincidence, I consider subsamples selected with more stringent astrometric requirements (along with the same requirement on radial velocities). When relative errors of PMs and parallaxes are required to be \(<\varepsilon\), I consider \(\varepsilon=0.004\) and \(0.0025\). The MC results on the acceleration plane for these subsamples can be found in Figure 9. These results agree well with \(\delta_{\text{obs-newt}}=0\) at \(x_{0}\approx-8\) though with larger statistical errors due to smaller sample sizes. The above results are based on individual eccentricities estimated by Hwang et al. (2022) and thus represent the currently most likely results. Figure 10 shows a result based on eccentricities drawn statistically from a power-law distribution with a varying index given by Equation (18). Because binary-specific eccentricities are replaced by statistical eccentricities, the deviation is somewhat diluted as already noted in Chae (2023). However, the result still favors the AQUAL model over Newton.
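The statistical eccentricity draw of Equation (18) and the inclination draw described in Section 3 are both simple inverse-CDF samplings. The sketch below is my own illustration of those two draws (the function names are mine), not the actual code of Chae (2023):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inclination(n):
    """p(i) = sin(i) on (0, pi/2): CDF is 1 - cos(i), so i = arccos(1 - u)."""
    return np.arccos(1.0 - rng.random(n))

def alpha_of_s(s_au):
    """Equation (18): power-law eccentricity index as a function of x = log10(s/au).

    Valid for roughly 2 <= x <= 4.5."""
    x = np.log10(s_au)
    return -5.123 + 4.612 * x - 1.098 * x**2 + 0.08748 * x**3

def sample_eccentricity_powerlaw(s_au, n):
    """p(e; a) = (1 + a) e**a on (0, 1): CDF is e**(1+a), so e = u**(1/(1+a))."""
    a = alpha_of_s(s_au)
    return rng.random(n) ** (1.0 / (1.0 + a))
```

For example, at \(s=1000\) au the index is \(\alpha\approx 1.19\), so the drawn eccentricities cluster toward high values, consistent with the trend of Hwang et al. (2022) that wider binaries have more eccentric orbits.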
So far I have considered three bins so that each bin has a maximal number of MC points for bins of significantly different accelerations. Now I consider finer bins to test any dependence of the results on binning. Figure 11 shows the results. The result with the standard inputs including individual eccentricities follows the AQUAL curve remarkably well across all the bins. If the astrometric requirements or the requirement on radial velocities are relaxed progressively from PM and parallax relative errors \(<0.005\) or RV relative errors \(<0.2\), one can check that \(\delta_{\rm obs-newt}\) at \(x_{0}=-8\) with \(f_{\rm multi}=0\) increases progressively from zero. Note that the astrometric and RV requirements can be less stringent than the presently chosen requirements depending on the tolerance for the value of \(\delta_{\rm obs-newt}\) at \(x_{0}=-8\) with \(f_{\rm multi}=0\). For example, one could allow \(\delta_{\rm obs-newt}\) to be consistent with zero only within the MC estimated \(1\sigma\). Here I am very conservative and require \(\delta_{\rm obs-newt}\) to be consistent with zero within a small fraction of the MC estimated \(1\sigma\). The above results verify that the main sample with the presently chosen astrometric and RV requirements is _statistically_ free of hierarchical systems. Of course, there still can be a few individual systems that have minor undetected close companions. However, even if they are present, they are statistically negligible. The above results also reassure that the whole procedure, the Gaia data, and the empirical \(M_{G}\)-\(M\) relations are all reliable. Now the derived values of \(\delta_{\rm obs-newt}\) at lower accelerations are not consistent with zero, indicating that Newtonian gravity breaks down in the low-acceleration regime.
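For concreteness, the orthogonal-deviation statistic used above, and its conversion to an acceleration boost factor (Equation (19) below), can be sketched as follows. This is my own illustrative reconstruction, assuming \(\Delta_{\perp}\) is the signed perpendicular distance of a point \((\log_{10}g_{\rm N},\log_{10}g)\) from the diagonal \(y=x\) as depicted in Figure 7; it is not the actual code of Chae (2023):

```python
import numpy as np

def orthogonal_deviation(log_gN, log_g):
    """Signed perpendicular distance of (log10 gN, log10 g) from the diagonal y = x."""
    return (np.asarray(log_g) - np.asarray(log_gN)) / np.sqrt(2.0)

def median_deviation_in_bins(log_gN, log_g, edges):
    """Median Delta_perp in bins of the along-diagonal coordinate x0 = (x + y)/2."""
    x0 = 0.5 * (np.asarray(log_gN) + np.asarray(log_g))
    d = orthogonal_deviation(log_gN, log_g)
    idx = np.digitize(x0, edges)
    return np.array([np.median(d[idx == k]) if np.any(idx == k) else np.nan
                     for k in range(1, len(edges))])

def acceleration_boost(delta_obs_newt):
    """Equation (19): gamma_g = 10**(sqrt(2) * delta_obs-newt)."""
    return 10.0 ** (np.sqrt(2.0) * delta_obs_newt)
```

With the quoted \(\delta_{\rm obs-newt}=0.122\pm 0.041\), `acceleration_boost` applied to \(0.122\), \(0.163\), and \(0.081\) reproduces the quoted \(\gamma_{g}=1.49^{+0.21}_{-0.19}\).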
Because the same astrometric and RV requirements are imposed on all binaries regardless of the separation \(s\), it is unreasonable to imagine that only more widely (\(s\gtrsim 2\) kau) separated binaries preferentially have large numbers of undetected companions while the less widely (\(s\lesssim 1\) kau) separated binaries have none. Thus, these results provide robust evidence for two aspects of gravity: (1) Newtonian gravity holds for acceleration \(\gtrsim 10^{-8}\) m s\({}^{-2}\), (2) Newtonian gravity (and thus general relativity) breaks down in the low-acceleration regime \(\lesssim 10^{-9}\) m s\({}^{-2}\). While the evidence from statistically pure binaries is robust, the statistical significance is much weaker than in Chae (2023) due to the much smaller sample size.

Figure 7: MC realized distributions of 2,463 pure binaries in the acceleration plane defined in Chae (2023). The quantity \(g_{\rm N}\equiv GM_{\rm tot}/r^{2}\) is the Newtonian gravitational acceleration between the two stars and \(g\equiv v^{2}/r\) is an empirical kinematic acceleration, where \(r\) and \(v\) are deprojected 3D separation and relative velocity. The left panel shows a Newton-predicted distribution while the right panel shows a distribution from Gaia measurements from _one_ MC realization. Big dots indicate the medians in the orthogonal bins indicated by magenta dotted lines. The orthogonal deviation \(\Delta_{\perp}\) of a point from the diagonal line is indicated in the inset of the upper left panel. The bottom panels show the medians of \(\Delta_{\perp}\) in the bins. In the right panels, the Gaia medians are compared with the Newtonian medians.

Figure 8: The upper left panel shows distributions of median orthogonal deviations \(\langle\Delta_{\perp}\rangle\) (defined in Figure 7) from 400 MC results with the standard \(M_{G}\)-\(M\) relation for the main sample of 2,463 pure binaries. The upper right panel is with the \(J\)-band-based \(M_{G}\)-\(M\) relation.
The bottom panels show distributions of the difference \(\delta_{\rm obs-newt}\equiv\langle\Delta_{\perp}\rangle_{\rm obs}-\langle\Delta_{\perp}\rangle_{\rm N}\). In the highest acceleration bin at \(x_{0}\approx-8.0\), \(\delta_{\rm obs-newt}=0\) is naturally satisfied without any adjustment of \(f_{\rm multi}\). The magenta curve in the bottom panels represents the AQUAL prediction for circular orbits with the Milky Way external field (see Figure 1).

Figure 9: The same as the left panel of Figure 8 but for subsamples with stricter astrometric requirements than the main sample. The results have larger statistical uncertainties and are consistent with those for the main sample.

From the results with the standard \(M_{G}\)-\(M\) relation, \(\delta_{\rm obs-newt}=0.122\pm 0.041\) at \(x_{0}\approx-10.3\) with a significance of \(3.0\sigma\). At \(x_{0}\approx-9.1\), \(\delta_{\rm obs-newt}=0.033\pm 0.022\) with a significance of \(1.5\sigma\). Taken together, these results rule out Newtonian gravity with a \(>3\sigma\) confidence. At \(x_{0}\approx-10.3\), the acceleration boost factor is \[\gamma_{g}\equiv\frac{g_{\rm obs}}{g_{\rm pred}}=10^{\sqrt{2}\delta_{\rm obs-newt}}=1.49^{+0.21}_{-0.19}, \tag{19}\] where \(g_{\rm obs}=v^{2}/r\) is the MC realized kinematic acceleration from the Gaia data, and \(g_{\rm pred}\) is the corresponding Newtonian prediction. This value is in excellent agreement with the values obtained in Chae (2023) for general samples with \(f_{\rm multi}\) self-calibrated.

### Profiles of stacked velocities

Here I present profiles of stacked velocities for the pure binary sample and compare them with the Newtonian MC predictions as described in Section 3. I pay most attention to sky-projected velocity (\(v_{p}\)) profiles because radial velocities (\(v_{r}\)) as precise as \(v_{p}\) are quite rare.
However, I will also consider, for the first time, profiles of physical velocities \(v\) (Equation (11)) for a few dozen binaries with exceptionally precise \(v_{r}\) measurements.

#### 4.2.1 Testing the general sample including impure binaries

Before presenting results on pure binaries, I test the general sample of 26,615 binaries defined in Chae (2023) that includes "impure" binaries hosting unresolved hidden close companions. Figure 12 compares the observed sky-projected velocities with the Newton-predicted values without taking into account hidden close companions. As expected, the observed velocities are higher than the Newton-predicted values regardless of the separation between the two stars. This is a clear indication that the observed velocities are largely affected by additional masses from hidden companions. However, Figure 12 also reveals that the observed binned medians do not follow the Keplerian scaling of \(\propto s^{-0.5}\). The measured scaling has a slope of \(-0.437\pm 0.003\), exhibiting a \(21\sigma\) deviation from \(-0.5\). This indicates that \(f_{\rm multi}\) must increase sharply with \(s\) to be consistent with the Keplerian scaling if a gravitational anomaly or other factors are not permitted. I note that any bias arising from distances cannot explain this slope because a subsample with a narrow distance range shows a similar slope. Thus, we are in a situation where "general" binaries at the same distances selected with the same criteria show strong differences in kinematics depending only on \(s\). This motivates investigations of "pure" binaries in the following sections.

#### 4.2.2 Main results

Figure 13 shows the profile of stacked \(v_{p}\) values for all 2,463 pure binaries in the main sample and compares it with the Newtonian prediction with the standard \(M_{G}\)-\(M\) relation.
Small red dots represent individual observed values while the big red dots represent the median, the 16th percentile, and the 84th percentile in the bins of \(s\). Small blue dots represent an output from one Newtonian MC run. The mean values of the median, the 16th percentile, and the 84th percentile in the bins of \(s\) are obtained from 400 MC runs. The big blue dots and errorbars shown in the left panel of Figure 13 represent the distributions from MC runs. Note that the Newtonian MC predicted velocities can have tangible uncertainties when numbers are small as in the relatively larger-\(s\) bins. Histograms in the right panels of Figure 13 show the distributions of \(\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})\) in the 6 bins. Note that the widths of the distributions are entirely determined by the distributions of \(\bar{v}_{p,\rm N}\). Figure 13 shows that in the three smallest-\(s\) bins the observed median velocities agree well with the Newton-predicted values with \(\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})=-0.008\pm 0.015\), \(-0.001\pm 0.013\), and \(0.005\pm 0.016\). Moreover, the 16th and 84th percentiles of the observed velocities agree well with the Newton-predicted values for the first two bins indicating that the observed distributions are fully consistent with the Newton-predicted distributions. Thus, for binary systems with \(0.5\lesssim M_{\rm tot}/\rm M_{\odot}\lesssim 2\) Newtonian dynamics holds for \(s\lesssim 2\) kau as expected from Figures 1 and 2. This result is consistent with the results on the acceleration plane presented in Section 4.1. The Newton-predicted median velocities follow the Keplerian profile \(\propto s^{-1/2}\) as expected because the median mass varies little with \(s\) in the sample as shown in Figure 3.

Figure 10: The same as the left panel of Figure 8 but with statistical eccentricities drawn from a power-law distribution with a varying index given by Equation (18).
However, the observed median velocities do not follow the Newtonian profile over the entire probed \(s\) range. There is a jump in the \(\bar{v}_{p,\rm obs}\) profile around \(s\approx 2\) kau. The deviation from the Newtonian predictions in the 4th bin is \(\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})=0.045\pm 0.019\) at \(\log_{10}(\bar{s}/\rm au)=3.53\). However, in the last two bins the deviations are \(0.076\pm 0.027\) and \(0.085\pm 0.040\) at \(\log_{10}(\bar{s}/\rm au)=3.89\) and \(4.22\). These deviations together represent a \(\approx 5.0\sigma\) detection of a gravitational anomaly. Hereafter the statistical significance is estimated as follows. For the last three bins that deviate from the solid blue line, the probabilities \(p(x<0)\) (where \(x\equiv\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})\)) are calculated based on the estimated \(\mu\) and \(\sigma\) values and the product of the three probabilities is obtained. Then, the product value is used to estimate a Gaussian equivalent significance. In the two largest-\(s\) bins of Figure 13 the boost factor for projected velocities is \[\gamma_{v_{p}}\equiv\frac{\bar{v}_{p,\rm obs}}{\bar{v}_{p,\rm N}}=1.20\pm 0.06\ (\rm with\ individual\ \emph{e}). \tag{20}\] Because the kinematic acceleration was defined to be \(v^{2}/r\), \(\gamma_{v_{p}}\) is expected to match \(\sqrt{\gamma_{g}}\) if projection effects are averaged out in a statistical sample. This is indeed the case, as Equations (19) and (20) show. A qualitatively important aspect of the stacked velocity profile shown in Figure 13 is that the last two bins follow the Keplerian scaling \(\propto s^{-1/2}\), indicating that they are pseudo-Newtonian with a boosted gravity as exactly predicted by MOND gravity under an EFE as shown in Figure 1. Thus, Figure 13 agrees well with the unique trait of MOND gravity. Figure 14 shows the result with the \(J\)-band-based \(M_{G}\)-\(M\) relation.
This result is consistent with the result with the standard \(M_{G}\)-\(M\) relation indicating that the results are robust within the currently available \(M_{G}\)-\(M\) relations. The significance of the gravitational anomaly from the last three bins is \(4.9\sigma\). However, below I will further probe the effects of systematically varying the observed Gaia magnitudes. The above results are for a mass range \(0.5\lesssim M_{\rm tot}/\mathrm{M_{\odot}}\lesssim 2.5\). Now I consider subsamples with a limited mass range \(1.1<M_{\rm tot}/\mathrm{M_{\odot}}<1.8\). Figure 15 shows the results for the subsample with \(1.1<M_{\rm tot}/\mathrm{M_{\odot}}<1.8\) and the clean magnitude range \(4<M_{G}<14\). Figure 16 shows the results for the subsample with \(1.1<M_{\rm tot}/\mathrm{M_{\odot}}<1.8\) and a narrower magnitude range \(4<M_{G}<10\). These results agree well with the results with the main sample. The statistical significance of the deviations is \(\approx 4.0\sigma\) for both results. It is interesting to consider subsamples obtained in the limiting cases of extreme precision of PMs and distances. Figure 17 shows the result for \(1,206\) pure binaries with \(\varepsilon=0.0025\), i.e. twice better precision. The result is well consistent with the results for the main sample. The statistical significance of the deviations in the last three bins taken together is \(3.9\sigma\). Finally, I consider statistical eccentricities based on the power-law distribution with the slope systematically varying with \(s\) as given by Equation (18), rather than individual ranges of eccentricities reported by Hwang et al. (2022). The overall trend of the stacked velocity profile agrees well with the results with individual eccentricities.

Figure 11: Similar to Figure 8 but with 7 bins. The left panel is with the standard inputs including individual eccentricities while the right panel is with statistical eccentricities replacing individual eccentricities.
The statistical significance of the deviations in the last three bins taken together is \(4.1\sigma\). The boost factor estimated based on the two largest-\(s\) bins is slightly lower than the value given in Equation (20): \[\gamma_{v_{p}}\equiv\frac{\bar{v}_{\rm p,obs}}{\bar{v}_{\rm p,N}}=1.15\pm 0.06\ \text{(with statistical $e$)}. \tag{21}\] The main results are summarized in Table 1. The projected velocity boost factor is in the range \(1.15\leq\gamma_{v_{p}}\leq 1.24\). The statistical significance of the gravitational anomaly is in the range \((3.9\sigma,5.0\sigma)\).

#### 4.2.3 Auxiliary analyses

In Section 4.2.2, the Newtonian analysis with \(f_{\rm multi}=0\) showed that the observed velocities in the smaller-\(s\) bins matched the Newton-predicted values while those in the larger-\(s\) bins were higher than the Newton-predicted values. It is interesting to explore any possibility of attributing the boosted velocities in the larger-\(s\) bins somehow to additional masses from hidden close companions or a systematically shifted magnitude-mass relation. Here I consider varying \(f_{\rm multi}\) to make the observed velocities in the large-\(s\) bins agree with the Newton-predicted values.

Figure 12: The left panel shows sky-projected velocities with respect to sky-projected separation \(s\) for the general sample of 26,615 binaries with the standard \(M_{G}\)-\(M\) relation. Small red dots are the observed velocities (Equation (5)) while blue ones are Newtonian velocities (Equation (16)) in _one_ MC realization. Big dots indicate median, 16th, and 84th percentile velocities in the bins of \(s\) defined in Figure 3. The Newtonian median velocities \(\bar{v}_{\rm p,N}\) follow the Keplerian scaling of \(\bar{v}_{\rm p,N}\propto s^{-1/2}\). The observed median velocities deviate from the Newtonian predictions in the entire range and the scaling has a slope clearly different from \(-1/2\).
The right panels show the probability distributions of \(\log_{10}(\bar{v}_{\rm p,obs}/\bar{v}_{\rm p,N})\) derived from 400 MC realizations.

I use the procedure of modeling close companions described in Chae (2023). Figure 19 shows the result with \(f_{\rm multi}=0.5\). In this case, the observed median velocities in the two largest-\(s\) bins match the Newton-predicted medians well. However, the Newton-predicted distributions are broader as revealed by the 16th and 84th percentiles due to the scatters arising from added components. More importantly, the Newton-predicted velocities in the three smallest-\(s\) bins now severely deviate from the observed velocities. Thus, we would be in a Newtonian world where binaries in the three smallest-\(s\) bins require \(f_{\rm multi}\approx 0\) while those in the two largest-\(s\) bins require \(f_{\rm multi}\approx 0.5\), although the two subsamples satisfy identical photometric, astrometric, and radial velocity requirements. Since this huge difference in \(f_{\rm multi}\) between the small- and large-\(s\) bins is ad hoc and unlikely, this result reinforces the results of Section 4.2.2. I also consider a pseudo-Newtonian analysis with a rescaled Newton's constant.
This analysis is motivated because the results of Section 4.2.2 suggest that binaries with \(s\gtrsim 5\) kau follow pseudo-Newtonian dynamics. Figure 20 shows the result with \(G^{\prime}=1.44G\). The observed median velocities in the two largest-\(s\) bins agree well with the pseudo-Newtonian predictions. The observed 16th and 84th percentiles also match the pseudo-Newtonian predictions well within the statistical uncertainties. These results suggest that binaries with \(s\gtrsim 5\) kau truly obey pseudo-Newtonian dynamics. However, the velocities in the three smallest-\(s\) bins deviate severely from the pseudo-Newtonian predictions, as is well expected because they are in the Newtonian regime.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline sample & \(N_{\rm binary}\) & precision cut & eccentricity & \(\gamma_{v_{p}}\) (\(s\gtrsim 5\) kau) & statistical significance (\(s\gtrsim 2\) kau) \\ \hline \(4<M_{G}<14\), no limit on \(M_{\rm tot}\) & 2463 & \(\varepsilon=0.005\) & individual & \(1.20\pm 0.06\) & \(p(x<0)=3.1\times 10^{-7}\) (\(5.0\sigma\)) \\ \(4<M_{G}<14\), \(1.1<M_{\rm tot}/{\rm M}_{\odot}<1.8\) & 1465 & \(\varepsilon=0.005\) & individual & \(1.18\pm 0.07\) & \(p(x<0)=1.7\times 10^{-5}\) (\(4.1\sigma\)) \\ \(4<M_{G}<10\), \(1.1<M_{\rm tot}/{\rm M}_{\odot}<1.8\) & 1399 & \(\varepsilon=0.005\) & individual & \(1.17\pm 0.07\) & \(p(x<0)=3.8\times 10^{-5}\) (\(4.0\sigma\)) \\ \(4<M_{G}<14\), no limit on \(M_{\rm tot}\) & 1206 & \(\varepsilon=0.0025\) & individual & \(1.24\pm 0.09\) & \(p(x<0)=5.0\times 10^{-5}\) (\(3.9\sigma\)) \\ \(4<M_{G}<14\), no limit on \(M_{\rm tot}\) & 2463 & \(\varepsilon=0.005\) & statistical & \(1.15\pm 0.06\) & \(p(x<0)=1.7\times 10^{-5}\) (\(4.1\sigma\)) \\ \hline \end{tabular} Note. (1) The parameter \(\gamma_{v_{p}}\) is the boost factor for \(v_{p}\) (sky-projected velocity) estimated for bins with \(s\gtrsim 5\) kau. (2) \(x\equiv\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})\). \end{table} Table 1: Main results of gravitational anomaly from pure binaries

Figure 13: The left panel shows sky-projected velocities with respect to sky-projected separation \(s\) for the main sample of 2,463 pure binaries with the standard \(M_{G}\)-\(M\) relation. Small red dots are the observed velocities (Equation (5)) while blue ones are Newtonian velocities (Equation (16)) in _one_ MC realization. Big dots indicate the median, the 16th percentile, and the 84th percentile velocities in the bins. The error bars of the big blue dots are estimated from 400 MC realizations. The Newtonian median velocities \(\bar{v}_{p,\rm N}\) follow the Keplerian scaling of \(\bar{v}_{p,\rm N}\propto s^{-1/2}\). The observed median velocities \(\bar{v}_{p,\rm obs}\) in the three lowest-\(s\) bins naturally match the Newtonian predictions. However, the observed median velocities deviate from the Newtonian predictions in the larger-\(s\) bins. The dotted line indicates the boosted velocity in the two largest-\(s\) bins. The right panels show the probability distributions of \(\log_{10}(\bar{v}_{p,\rm obs}/\bar{v}_{p,\rm N})\) derived from the MC realizations.

Figure 14: The same as Figure 13 but with the \(J\)-band-based \(M_{G}\)-\(M\) relation.

Figure 15: The same as Figure 13 but for the subsample with the limited mass range \(1.1<M_{\rm tot}/{\rm M_{\odot}}<1.8\).

Figure 16: The same as Figure 13 but for the subsample with the narrower magnitude range \(4<M_{G}<10\) and the limited mass range \(1.1<M_{\rm tot}/{\rm M}_{\odot}<1.8\).

Figure 17: The same as Figure 13 but for the subsample with a more stringent astrometric requirement of \(\varepsilon=0.0025\).

Figure 18: The same as Figure 13 but with statistical eccentricities from an \(s\)-dependent power-law distribution.

Figure 19: The same as Figure 13 but with \(f_{\rm multi}=0.5\).
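The significance estimation procedure described in Section 4.2.2 (multiplying the one-sided bin probabilities \(p(x<0)\) and converting the product to a Gaussian-equivalent significance) can be sketched with the Python standard library. This is my own reconstruction of the stated recipe, not the author's code:

```python
from statistics import NormalDist

def gaussian_equivalent_sigma(mus, sigmas):
    """Product of p(x < 0) over independent Gaussian bins x ~ N(mu, sigma),
    converted to a one-sided Gaussian-equivalent significance z with p = 1 - Phi(z)."""
    nd = NormalDist()
    p = 1.0
    for mu, sigma in zip(mus, sigmas):
        p *= nd.cdf(-mu / sigma)    # p(x < 0)
    z = -nd.inv_cdf(p)              # p = Phi(-z)  =>  z = -Phi^{-1}(p)
    return p, z
```

Feeding in the three quoted deviations (\(0.045\pm 0.019\), \(0.076\pm 0.027\), \(0.085\pm 0.040\)) gives \(p\approx 3.7\times 10^{-7}\) and \(z\approx 5.0\), consistent with the \(\approx 5.0\sigma\) quoted in the text and the \(p(x<0)=3.1\times 10^{-7}\) entry in Table 1 (the small difference reflects rounding of the quoted \(\mu\) and \(\sigma\) values).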
#### 4.2.4 Physical velocity \(v=\sqrt{v_{p}^{2}+v_{r}^{2}}\)

Just 40 out of 2,463 binaries of the main sample have measured radial velocities satisfying the precision of \(<0.005\) required for proper motions in each dimension. These exceptional systems can be used to probe the profile of physical velocities in the 3D space. Figure 21 shows the measured physical velocities along with the Newtonian predictions. Only three bins of \(s\) are considered to obtain median velocities. As expected, the MC generated distributions of median velocities are quite broad. In the lowest-\(s\) bin the observed median velocity matches the Newtonian prediction well. In the middle bin the observed median velocity is consistent with the Newtonian prediction within \(1.5\sigma\). However, in the highest-\(s\) bin satisfying \(s>3.5\) kau there is an indication that \(\bar{v}_{\rm obs}\) is higher than \(\bar{v}_{\rm N}\) with \(\log_{10}(\bar{v}_{\rm obs}/\bar{v}_{\rm N})=0.249\pm 0.089\), which represents a \(2.8\sigma\) deviation. This result is consistent with the results from the analyses of sky-projected velocity profiles.

## 5 Discussion

All the above results except for those from the auxiliary analyses of Section 4.2.3 have been obtained without any free parameters. They are derived or calculated quite naturally from the Gaia measurements, the magnitude(\(M_{G}\))-stellar mass(\(M\)) relations from Chae (2023), and the individual eccentricity ranges from Hwang et al. (2022). Is there any possibility that any of the observational inputs or analyses are grossly in error? Gaia data themselves cannot be a source of systematic error because only exceptionally precise data have been used in this study (see Figure 5) and the results remain consistent as the precision increases (see Figures 9 and 17). However, one possible concern may be that the above results are based on a composite of binaries at significantly different distances ranging from 9 pc up to 200 pc.
Here I consider two subsamples in significantly narrower distance ranges of \(50<d_{M}<125\) pc and \(125<d_{M}<200\) pc as defined in Figure 4. Figures 22 and 23 show the results. Because sample sizes are smaller, these results have larger statistical uncertainties, but are consistent with each other and the main results. Note that the \(50<d_{M}<125\) pc sample has particularly large statistical uncertainties because the two largest-\(s\) bins have relatively few binaries. The \(M_{G}\)-\(M\) relation is also not likely to be a source of systematic error. As shown in Chae (2023), the two \(M_{G}\)-\(M\) relations reliably cover the likely range at least for the clean magnitude range \(4<M_{G}<14\), and the two relations give consistent results (see Figure 8 and compare Figure 13 with Figure 14). Moreover, the result for the subsample with a narrower magnitude range \(4<M_{G}<10\) (Figure 16) is consistent with those for the full sample. (Note that the \(M_{G}\)-\(M\) relation is particularly accurate in the range \(4<M_{G}<10\) as shown in figure 7 of Chae (2023).) Nevertheless, as a wild possibility, I consider systematically shifting the estimated \(G\)-band magnitudes by \(\pm 0.5\) mag. I note that masses of Pecaut and Mamajek (2013) agree well with the masses measured directly from confirmed close binaries at the same \(K\)-band magnitudes (Mann et al., 2019). Thus, it suffices to consider a systematic shift in \(G\)-band magnitudes only. Figure 24 shows the result with a shift of \(-0.5\). Even in this case, the median velocities in the two largest-\(s\) bins deviate significantly with a velocity boost factor of \(\approx 1.14\) while the two smallest-\(s\) bins deviate in the opposite direction. Kinematic contaminants such as hidden close companions cannot be a source of systematic error.
Because binaries with small and large \(s\) values have similar total masses and satisfy the same selection criteria, if additional masses should be added to the binaries, similar masses must be added regardless of separation. Then, the fractional boost in velocity is similar for all binaries regardless of \(s\) because the boosted-to-initial velocity ratio \(v^{\prime}/v=\sqrt{M^{\prime}/M}\) is independent of \(s\). As shown in Figure 19, when additional masses are controlled by \(f_{\rm multi}\), the two largest-\(s\) bins require \(f_{\rm multi}=0.5\) to match Newtonian dynamics while the three smallest-\(s\) bins require \(f_{\rm multi}=0\). This extremely unlikely difference means that it seems impossible to attribute the gravitational anomaly to differential kinematic contaminants of hidden close companions. This view is bolstered by the fitted values of \(f_{\rm multi}\) in subsamples with progressively stricter kinematic criteria as presented in Appendix B. When kinematic criteria get stricter and stricter, \(f_{\rm multi}\) gets lower and lower, approaching zero eventually. Thus, it is hardly expected to be as high as \(f_{\rm multi}=0.5\) in the statistically pure binary sample selected in this work. Because the sample of 2,463 "pure" wide binaries is made public, this claim can be directly tested with observations (see Manchanda et al., 2023). The individual eccentricity ranges from Hwang et al. (2022) are the best available empirical information on eccentricities at present. I have also considered the power-law distribution of eccentricities with an \(s\)-dependent exponent (Equation 18). However, here I consider the uniform thermal probability distribution of eccentricity \(p(e)=2e\) for all binaries to gauge a possible source of systematic error arising from eccentricities. Note that this choice is deliberately biased away from the empirical information (see figure 24 of Chae (2023)).
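For reference, drawing from the thermal distribution is a one-line inverse-CDF sampling (the CDF of \(p(e)=2e\) is \(e^{2}\)); the sketch below is my own illustration:

```python
import numpy as np

def sample_thermal_eccentricity(n, seed=0):
    """'Thermal' distribution p(e) = 2e on (0, 1): CDF is e**2, so e = sqrt(u)."""
    rng = np.random.default_rng(seed)
    return np.sqrt(rng.random(n))
```

The expected mean of this distribution is \(2/3\), independent of \(s\), which is what makes it a deliberately biased alternative to the \(s\)-dependent empirical eccentricities.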
Figure 25 shows the result with the thermal eccentricity distribution. In this case, the deviation is somewhat weakened, but the combined statistical significance of the deviations in the two largest-\(s\) bins is \(3.7\sigma\). Thus, it is not possible to do away with the deviations by reasonably modifying eccentricities.

Figure 20: The same as Figure 13 but with a boosted Newton's constant \(G^{\prime}=1.44G\).

Figure 21: The result for the physical velocity \(v=\sqrt{v_{p}^{2}+v_{r}^{2}}\) for binaries with exceptionally precise radial velocities. The relative errors of individual radial velocities are required to be smaller than 0.005 as described in the text.

Figure 22: The same as Figure 13 but for a subsample within a narrow distance range of \(50<d_{M}<125\) pc.

Figure 23: The same as Figure 13 but for a subsample within a narrow distance range of \(125<d_{M}<200\) pc.

Figure 24: The same as Figure 13 but with \(G\)-band magnitudes systematically shifted by \(-0.5\).

Figure 25: The same as Figure 13 but with eccentricities from a 'thermal' probability distribution \(p(e)=2e\) ignoring individual eccentricities. The result is deliberately biased toward a weakened boost of velocities. Nevertheless, the boost is still very significant.

## 6 Conclusion and Future Prospects

A sample of 2,463 _statistically_ pure wide binaries in the mass range \(0.5\lesssim M_{\rm tot}/{\rm M}_{\odot}\lesssim 2\) provides two crucial results on non-relativistic gravitational dynamics. First, the observed orbital motions of binaries with relatively small sky-projected separations (\(s\lesssim 2\) kau) are statistically consistent with Newtonian dynamics, naturally and without any adjustment of Gaia data or other observational inputs. This is a nontrivial result that provides direct evidence that currently available gravity theories including Newtonian and Milgromian theories hold in the non-relativistic regime of accelerations \(\gtrsim 10^{-8}\) m s\({}^{-2}\). Second, the observed orbital motions of binaries with relatively larger \(s\gtrsim 2\) kau are statistically _inconsistent_ with Newtonian dynamics. The Gaia-measured sky-projected velocities are boosted with a statistical significance of \(\approx 5.0\sigma\). In the bins of the investigated range \(5<s<30\) kau, the velocity boost factor is measured to be \(\gamma_{v_{p}}=1.20\pm 0.06\) (stat) \(\pm 0.05\) (sys) (see Table 1). When the pure binaries are analyzed in the acceleration plane defined in Chae (2023), the kinematic acceleration \(v^{2}/r\) with MC-deprojected \(v\) and \(r\) is systematically higher in the low-acceleration regime \(\lesssim 10^{-9}\) m s\({}^{-2}\) than the corresponding Newtonian prediction, with an acceleration boost factor of \(\gamma_{g}=1.49^{+0.21}_{-0.19}\) satisfying the expected relation \(\gamma_{g}=\gamma_{v_{p}}^{2}\). In a small sample of 40 pure wide binaries with exceptionally precise Gaia radial velocities, the physical velocity (\(v\)) is directly measured (i.e. without any de-projection). Despite the small-number statistics, it is seen that the mean measured velocity in the largest-\(s\) bin is boosted compared with the Newtonian prediction while that in the smallest-\(s\) bin naturally matches the Newtonian prediction. The present results from analyses of statistically pure binaries provide a robust confirmation of the results of Chae (2023) for a much larger general sample that includes hierarchical systems with undetected companions. The present results also complement Hernandez (2023), who obtained a similar boost in projected velocities but with a lower or roughly quantified statistical significance. Just recently and almost concurrently with this work, Hernandez et al. (2023) has carried out a statistical analysis of the Hernandez (2023) sample of 667 wide binaries within 125 pc.
They obtained a boost factor of \(\gamma_{g}=1.512\pm 0.199\) which is in good agreement with the results from this work and Chae (2023). Unlike this work and other recent studies (Chae, 2023; Hernandez, 2023; Hernandez et al., 2023), Banik et al. (2023) claimed an opposite conclusion and argued particularly that the gravitational anomaly obtained by Chae (2023) was largely affected by kinematic contaminants. As shown in Figure 5, the sample of pure binaries used in this work already satisfies the Banik et al. (2023) kinematic cut and yet shows essentially the same gravitational anomaly reported by Chae (2023); Hernandez (2023); Hernandez et al. (2023). Moreover, as presented in Appendix B, new results with kinematic cuts imposed confirm the gravitational anomaly. The evidence for the gravity boost in the low acceleration regime is now clear enough although the scientific community should keep gathering further evidence from future observations. What seems now more important is to precisely characterize the gravity boost to the point that the theoretical direction can be narrowed down. Given that precise radial velocities measured in just 40 binaries already show mild evidence of the gravity boost in the low-acceleration regime (Figure 21), precise measurements of radial velocities in more pure binaries in the future may turn out to be quite fruitful in characterizing the gravity in the low acceleration regime. In principle, theoretical interpretations of the gravitational anomaly obtained here and in Chae (2023) and Hernandez (2023) are wide open. However, the most straightforward interpretation at hand is that nonrelativistic gravitational dynamics is governed/described by MOND-type Lagrangian theories of gravity (Bekenstein and Milgrom, 1984; Milgrom, 2010, 2023). 
However, because MOND-type Lagrangian theories are nonrelativistic phenomenological theories, something like phenomenological quantization rules before the full development of quantum physics, it is unclear what will be the underlying fundamental theory that will explain the MOND phenomenology eventually. Because MOND breaks the strong equivalence principle (Chae et al., 2020, 2021; Chae, 2022) while keeping the Einstein equivalence principle, even non-quantum gravity must be different from Einstein's general relativity (Einstein, 1916) and reformulated encompassing the successful aspects of both MOND and general relativity perhaps in the spirit of Mach's principle. Hypothetical dark matter was introduced as a solution to gravitational anomalies in the presently investigated low-acceleration regime in galaxies and galaxy clusters assuming that standard gravity holds in that regime. Now that standard gravity breaks down in the same low-acceleration regime regardless of hypothetical dark matter and in agreement with MOND, dark matter interpretation is seriously questioned as a valid solution. Thus, no direct detection of dark matter despite intensive worldwide campaigns can now be seen as a natural outcome. Because there has been no direct detection of dark matter, all circumstantial arguments and indirect "evidence" for dark matter assuming standard gravity can now be overridden by the present direct evidence for the breakdown of standard gravity. This means that the dark matter paradigm seems now doomed to be abandoned and we are entering an era of a paradigm shift. Implications of the gravitational anomaly in the low-acceleration regime for astrophysics, cosmology, and fundamental physics are truly far-reaching. In particular, the standard cosmology based on general relativity seems no longer valid even in principle. 
Development of MOND-based cosmology and structure formation (e.g., Sanders, 1998, 2001, 2008; Wittenburg et al., 2023) is now well-motivated and much-needed in parallel with theoretical advancement of MOND (e.g. Thomas et al., 2023). The author thanks Kareem El-Badry for discussion on radial velocities of stars. The revised version was based on an insightful report for which the author thanks the referee, a plenary talk given at the Korean Astronomical Society Fall 2023 meeting, and an invited talk given at the Pacific Rim Conference on Stellar Astrophysics held at Sejong University in October 2023. In particular, the author spotted an error in the code described in Chae (2023), which is typographic in nature, while preparing for the plenary talk, and the correction was reflected in this revision. This work was supported by the National Research Foundation of Korea (grant No. NRF-2022R1A2C1092306).
2309.07178
CloudBrain-NMR: An Intelligent Cloud Computing Platform for NMR Spectroscopy Processing, Reconstruction and Analysis
Nuclear Magnetic Resonance (NMR) spectroscopy has served as a powerful analytical tool for studying molecular structure and dynamics in chemistry and biology. However, the processing of raw data acquired from NMR spectrometers and the subsequent quantitative analysis involve various specialized tools, which necessitates comprehensive knowledge of programming and NMR. In particular, emerging deep learning tools are hard to use widely in NMR because of their sophisticated computational setup. Thus, NMR processing is not an easy task for chemists and biologists. In this work, we present CloudBrain-NMR, an intelligent online cloud computing platform designed for NMR data reading, processing, reconstruction, and quantitative analysis. The platform is conveniently accessed through a web browser, eliminating the need for any program installation on the user side. CloudBrain-NMR uses parallel computing with graphics processing units and central processing units, resulting in significantly shortened computation time. Furthermore, it incorporates state-of-the-art deep learning-based algorithms offering comprehensive functionalities that allow users to complete the entire processing procedure without relying on additional software. This platform has empowered NMR applications with advanced artificial intelligence processing. CloudBrain-NMR is openly accessible for free usage at https://csrc.xmu.edu.cn/CloudBrain.html
Di Guo, Sijin Li, Jun Liu, Zhangren Tu, Tianyu Qiu, Jingjing Xu, Liubin Feng, Donghai Lin, Qing Hong, Meijin Lin, Yanqin Lin, Xiaobo Qu
2023-09-12T21:40:51Z
http://arxiv.org/abs/2309.07178v1
CloudBrain-NMR: An Intelligent Cloud Computing Platform for NMR Spectroscopy Processing, Reconstruction and Analysis ###### Abstract Nuclear Magnetic Resonance (NMR) spectroscopy has served as a powerful analytical tool for studying molecular structure and dynamics in chemistry and biology. However, the processing of raw data acquired from NMR spectrometers and the subsequent quantitative analysis involve various specialized tools, which necessitates comprehensive knowledge of programming and NMR. In particular, emerging deep learning tools are hard to use widely in NMR because of their sophisticated computational setup. Thus, NMR processing is not an easy task for chemists and biologists. In this work, we present CloudBrain-NMR, an intelligent online cloud computing platform designed for NMR data reading, processing, reconstruction, and quantitative analysis. The platform is conveniently accessed through a web browser, eliminating the need for any program installation on the user side. CloudBrain-NMR uses parallel computing with graphics processing units and central processing units, resulting in significantly shortened computation time. Furthermore, it incorporates state-of-the-art deep learning-based algorithms offering comprehensive functionalities that allow users to complete the entire processing procedure without relying on additional software. This platform has empowered NMR applications with advanced artificial intelligence processing. CloudBrain-NMR is openly accessible for free usage at [https://csrc.xmu.edu.cn/CloudBrain.html](https://csrc.xmu.edu.cn/CloudBrain.html). magnetic resonance spectroscopy; processing; cloud computing; artificial intelligence; deep learning. ## I Introduction Nuclear Magnetic Resonance (NMR) spectroscopy, an analytical technique widely used in biology [1]-[6], chemistry [7][8] and medicine [9], is a powerful tool that utilizes NMR phenomena to detect the composition and structure of molecules at the atomic level. 
The spectra are commonly presented in one or multiple dimensions. The former may suffer from signal crowding, resulting in overlapped spectral peaks and difficult spectrum analysis. The latter alleviates the crowding but at the cost of significantly prolonged data acquisition time. To reduce this time, non-uniform sampling (NUS) can acquire partial data but needs smart algorithms, e.g., Low-Rank (LR) [10] and deep learning (DL) [11][17][18], to fill in the missing data. Thus, advanced algorithms may make NMR data processing more complex. For example, the whole NMR data processing workflow under NUS is summarized in Fig. 1. First, the raw NUS data collected from the spectrometer needs to be preprocessed using NMRPipe [12]. Second, reconstruction algorithms are employed to recover missing data and obtain high-quality spectra. Third, the data undergoes post-processing procedures. Finally, analysis software is used for spectrum visualization, peak picking and quantitative analysis. The entire process requires switching between various software packages or programs, demanding deep knowledge of all programs and data conversion routines. Fig. 1: Typical processing steps of the traditional way and the proposed platform. Recently, researchers have designed integrated software or platforms for processing and analyzing NMR data, such as CCPN [13], NMRFx [14] and NMRBox [15]. The former two are offline software packages and need to be set up on a personal computer or server. NMRBox supports a remote desktop, allowing users to install and use multiple professional data processing packages such as NMRPipe. Yet, online web-based processing and advanced artificial intelligence algorithms are missing. Thus, these platforms hinder data flow integration and require more user training. NMR spectrometer vendors also provide processing and analysis software. For example, TopSpin [16] is a Bruker software package used in a wide range of workflows. 
It has comprehensive functionalities and needs only a few commands to process, display and analyze the NMR spectrum. But TopSpin requires software downloads and lacks state-of-the-art artificial intelligence methods such as deep learning spectrum reconstruction [11][17][18]. Here, we develop a one-site intelligent cloud computing NMR platform called CloudBrain-NMR. This platform is easy to operate and user-friendly, and integrates a series of spectrum reconstruction and analysis methods. It has three advantages: 1. _Multifunction:_ CloudBrain-NMR provides rich functions, including data pre-processing, spectrum reconstruction, post-processing, generation of simulation data, neural network training, and spectrum analysis. It includes deep learning spectrum reconstruction, intelligent peak searching, peak height estimation, and quantitative analysis of molecular concentration. 2. _One-site processing:_ The existing NMR analysis process requires complex tools and switching back and forth between various platforms and programs. Here, users only need to log in to the platform to complete the entire spectrum processing workflow, without programming skills or switching between multiple platforms or software. 3. _Fast and high-fidelity reconstruction:_ The platform has implemented state-of-the-art deep learning NMR approaches on one graphics processing unit and eight-core central processing units. This hardware enables fast spectrum reconstruction within one second [17]. ## II NMR Cloud Computing Platform The complete workflow of the NMR cloud platform is shown in Fig. 2 and the main steps include: 1. _Register and login:_ The URL is [https://csrc.xmu.edu.cn/CloudBrain.html](https://csrc.xmu.edu.cn/CloudBrain.html) [26] and the test **Account**: NMRTest1, **Password**: nmrtest1. 2. _Upload raw data and pre-processing:_ Select the data type, sampling method, and pre-processing operations (adding a window function, automatic phase correction, and intercepting the data range). 3. 
_Set NUS parameters:_ Fill in the NUS parameters if the data is fully sampled; skip this step if the data is already undersampled. 4. _Spectrum reconstruction:_ Choose a reconstruction algorithm (LRHM, VIP or MoDern) to reconstruct the spectrum. 5. _Post-processing:_ Perform the Fourier transform on the indirect dimension, correct the phase and remove the imaginary part. Also, users can view and download the reconstructed spectra. 6. _Peak picking:_ Call the DEEP Picker [22]-[24]. Set the minimum peak intensity (default value is 5.5) and the noise threshold (default value is 3.0) to avoid detecting noise. Also, users can view the results and download data. 7. _Generate dataset:_ Generate synthetic data (a training set) as sums of simulated exponential functions. This data will be used to train the deep learning network, since huge amounts of real NMR data are hard to acquire. This step can be skipped if the neural network has been trained. 8. _Train neural network:_ Set the sampling rate of NUS, the number of training rounds, and select the training set. Notably, the sampling rate in the training set must match that of the target reconstructed data. 9. _Quantitative analysis:_ Four steps are required for quantitative analysis: (a) filling in the delta value to control the size of the spectral peak window for peak area integration; (b) determining the integration area by filling in values for the observed region; (c) clicking the button to perform automatic integration; (d) computing the quantitative results. ## III System Framework Fig. 3 shows the system architecture of the platform. A four-tier system architecture based on the browser/server mode is adopted: the frontend user interface layer, message queue (MQ) layer, backend server layer and data access layer (DAL). This architecture is beneficial to system development, maintenance and updating. ### _Browser layer_ The browser layer consists of the user and the frontend user interface. 
Components of the user interface include the Vue framework [28], the Element component library [29], JavaScript [30], and Axios [31]. Vue is a JavaScript framework for building the CloudBrain-NMR interface, enabling dynamic interaction on the webpage. The Element components add functionality to the webpage, and Axios sends and processes requests, enabling interactions between the front and back ends and the database. With these components, one can easily access the platform through a web browser, triggering the encapsulated API to send the relevant request to the backend and returning the visualized result from the server to the browser. ### _Nginx_ Nginx (Engine x) [27] is a high-performance HTTP and reverse proxy web server, achieving low memory usage, extremely fast startup, and strong concurrency capabilities. It provides load balancing, relieving the system pressure at high concurrency when multiple users access the platform simultaneously. Nginx can reduce user waiting time, improve efficiency, and ensure the stable operation of the platform. When handling multi-user usage on CloudBrain-NMR, the platform needs to serve requests of different functional modules. Nginx can isolate module functions to prevent some requests from taking too long to process and making others unavailable. ### _Message queue layer_ A message queue is a container that stores messages and places transmitted data in a queue. This layer aims to ensure the high-concurrency performance of the platform and to control the system traffic, preventing system paralysis, and hence message loss, due to excessive traffic. Meanwhile, asynchronous processing of messages is realized to accelerate the response and stability of the system. Fig. 2: Workflow of CloudBrain-NMR. Fig. 3: System architecture of CloudBrain-NMR. The platform adopts RabbitMQ [20] to prevent system crashes caused by data transmission between multiple layers and competition for server resources in high-concurrency situations. 
It ensures the stability and availability of the platform when multiple users send requests simultaneously, and can sustain the concurrency pressure of more than ten users on a single server of the platform. For example, when multiple users call the reconstruction module simultaneously, the message queue adds the requests to the queue and processes them one by one in sequence to ensure normal operation. ### _Server layer_ The server layer receives all frontend requests, calls the corresponding algorithms and computing resources, and provides fast computing services for user requests. This layer is mainly composed of the lightweight Flask framework [32], written in Python. The framework consists of a built-in Web Server Gateway Interface (WSGI), views, models, templates and services. When users submit data parameters on the web page, CloudBrain-NMR establishes a connection through WSGI, calls the view function to receive requests and parameters, and activates the corresponding functional modules to process the relevant data and interact with the database. Finally, the result is returned to the front end using the view function, and the relevant processing content is displayed to the user using the template. The service layer deploys all the algorithms used by the platform, which are described in Section IV. ### _Data access layer_ The proposed platform adopts a data storage that integrates MySQL with Redis. MySQL is utilized to store structured data such as user registration information and spectrum processing results. Redis is applied to store email verification codes for CloudBrain-NMR user registration and password retrieval. ### _Scheduling the GPU to enable artificial intelligence_ The platform runs on a heterogeneous graphics processing unit (GPU) cloud server provided by China Mobile, and its system operating environment configuration is summarized in Table I. The entire project is managed and run through a Docker container [33]. 
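The queue-then-serialize pattern described above, in which concurrent requests are enqueued and processed one by one (and which reappears below when GPU requests are lined up), can be illustrated with a minimal sketch using only Python's standard library. The RabbitMQ broker and the actual reconstruction call are replaced by stand-ins here: `reconstruct` and all other names are illustrative, not the platform's API.

```python
import queue
import threading

def reconstruct(job_id):
    # Stand-in for an actual spectrum-reconstruction call.
    return f"spectrum-{job_id}"

job_queue = queue.Queue()
results = {}

def worker():
    # Jobs are taken off the queue and handled strictly one at a time,
    # so concurrent requests cannot compete for the same resource.
    while True:
        job_id = job_queue.get()
        if job_id is None:            # sentinel: stop the worker
            job_queue.task_done()
            break
        results[job_id] = reconstruct(job_id)
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# Several "users" submit requests at once; the queue serializes them.
for job_id in range(5):
    job_queue.put(job_id)
job_queue.put(None)
job_queue.join()                      # wait until every job is processed
```

In production the in-process `queue.Queue` would be replaced by the RabbitMQ broker, which adds persistence and inter-process delivery, but the serialization semantics are the same.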
TensorFlow [34] is adopted to schedule GPU parallel computing to accelerate deep learning NMR training, target spectrum reconstruction and spectrum peak picking. Requests for the GPU are lined up in a queue, without worrying about resource competition, enabling maximal utilization of the GPU. ## IV Deployed Machine Learning Algorithms This section introduces the core machine learning algorithms integrated on the cloud platform. ### _Spectrum reconstruction with low-rankness or deep learning_ To save data acquisition time in biological or chemical NMR experiments, only partial data are acquired from the spectrometer under NUS, at the cost of introducing spectrum artifacts. To remove these artifacts and obtain a clear spectrum, a smart reconstruction algorithm is required to fill in the missing data. CloudBrain-NMR has three reconstruction algorithms, including two traditional methods (Low Rank Hankel Matrix (LRHM) [19] and VIrtual Peak (VIP) [18]) and one state-of-the-art deep learning method (MoDern [17]). The LRHM method [10] exploits the low-rank Hankel property of the Free Induction Decay (FID). By modeling the FID as the sum of several exponentials and converting the FID into a Hankel matrix, the rank of this matrix equals the number of exponentials [19]. This rank will be small if the number of exponentials is much smaller than the data length of the FID, and this prior can be used to regularize the reconstructed spectrum [19]. However, LRHM tends to distort small spectral peaks if the acquired FID data points are very limited. To address this issue, the VIP method incorporates extra prior information on spectral peaks [18], such as the center frequency and shape of the spectral peak, into the reconstruction model through self-learning subspaces. 
This strategy achieves high-fidelity reconstruction, particularly for low-intensity peaks, and significantly improves the accuracy of quantification, including the distances between nuclear pairs and the concentrations of metabolites in mixtures [18]. Recently, deep learning has received extensive attention and has been applied to fields such as biomedicine and chemistry. MoDern is a sparsity-inspired meta-learning network that can handle the mismatch between training and target NMR data, enabling ultra-fast high-fidelity reconstruction [17]. Under the principle of meta-learning, MoDern defines an optimal threshold to generalize the network to robustly reconstruct spectra under multiple acceleration factors of fast data acquisition. Spectrum artifacts are gradually removed in the network, and finally a high-quality spectrum is output. ### _Peak identification with DEEP Picker_ To accurately identify spectral peak positions and extract other spectral information, DEEP Picker, proposed by Li _et al._, deconvolves and picks spectral peaks with deep learning [22]-[24]. The network is trained with many simulated synthetic spectra of known composition with different degrees of crowdedness. An advanced feature is its powerful capability of correctly identifying overlapping peaks, which are always challenging for existing computational methods and even for professional spectroscopists. ## V Implementation and Results This section describes all the functional modules on the cloud. ### _Raw data uploading and pre-processing module_ In this module, the raw FID data is uploaded and read by functions extracted from the open-source package nmrglue [21], which is compatible with two representative vendors (Bruker and Varian). The sampling scheme (full sampling or NUS) should be set first, and the uploaded data will be processed accordingly. 
Then, the common pre-processing steps are performed, including sine windowing (as shown in Equation (1) [12]), zero padding, Fourier transform, phase correction, imaginary part removal, and extraction of the spectral region of interest. The sine window function is defined as \[f(x_{i})=\sin(\pi\times a+\pi\times\frac{(b-a)\times i}{s-1})^{p}\, \tag{1}\] where \(a\) and \(b\) specify the starting and the ending points of the sine-bell in units of \(\pi\) radians, respectively; \(p\) indicates the exponent of the sine-bell; \(s\) represents the number of points in the window function. Default values of \(a\), \(b\) and \(p\) are 0.0, 1.0, and 1.0, respectively. Notably, \(p\) can be non-integer. The extraction function preserves the spectral region of interest and removes the rest of the spectrum. Extraction is performed on the direct dimension (the fully sampled dimension) of the 2D or 3D spectrum. This processing is not mandatory, but it helps exclude interfering signals from spectral intervals of no interest and saves computation and storage resources. The interface of the pre-processing module is shown in Fig. 4. ### _NUS module_ This module is not necessary when the NUS scheme can be automatically read from the raw FID data, i.e., when NUS has been conducted physically on the spectrometer. If the FID data is fully sampled, then this module is useful to simulate the NUS scheme. On our platform, Poisson NUS is simulated to obtain partial FID data, which will be used for the subsequent reconstruction. The Poisson distribution is chosen because it efficiently captures exponential signal characteristics, particularly in regions with rapid signal variations, and may improve the sensitivity [35]. ### _Reconstruction module_ Reconstruction recovers the missing FID data points and then obtains a clear spectrum when NUS is applied. For 2D NMR, MoDern [17], VIP [18] and LRHM [19] have been deployed on the platform. For 3D NMR, only MoDern [17] is deployed. 
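As a concrete illustration of the sine-bell apodization in Equation (1), the following sketch evaluates the window with the stated defaults (\(a=0.0\), \(b=1.0\), \(p=1.0\)); the function name is ours, not part of the platform or of NMRPipe.

```python
import math

def sine_window(s, a=0.0, b=1.0, p=1.0):
    """Sine-bell window of Equation (1): a and b are the start and end
    of the bell in units of pi radians, p the (possibly non-integer)
    exponent, and s the number of points."""
    return [math.sin(math.pi * a + math.pi * (b - a) * i / (s - 1)) ** p
            for i in range(s)]

w = sine_window(5)  # default half-sine bell: rises to 1 at the center
```

With the defaults the window starts at 0, peaks at 1 in the middle, and falls back toward 0, which smoothly attenuates the truncated tail of the FID before the Fourier transform.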
Regarding the computation time, the deep learning method (MoDern) runs much faster due to the powerful GPU and the non-iterative nature of a trained neural network. LRHM and VIP use eight-core CPU parallel computing to save a significant amount of reconstruction time. Taking the computation time for 2D NMR as an example (Table II), MoDern runs ultrafast (within 1 second) and LRHM requires 17-40 seconds. To verify the quality of a reconstructed spectrum, the Pearson correlation coefficient is adopted to measure the correlation between a reconstructed spectrum \(\hat{\mathbf{x}}\) and its fully sampled counterpart \(\mathbf{x}\) according to Equation (2): \[\mathrm{R}^{2}(\hat{\mathbf{x}},\mathbf{x})=\left(\frac{\mathrm{cov}(\hat{\mathbf{x}},\mathbf{x})}{\sigma_{\hat{\mathbf{x}}}\sigma_{\mathbf{x}}}\right)^{2}, \tag{2}\] where \(\mathrm{cov}(\bullet)\) and \(\sigma\) denote the covariance and standard deviation, respectively. Taking a 2D NMR reconstruction as an example (Fig. 5), both LRHM and MoDern provide high-fidelity reconstructions, achieving a Pearson correlation coefficient \(\mathrm{R}^{2}\) higher than 0.999. For the 3D NMR (Fig. 6), MoDern can finish the reconstruction in 12.12 seconds. Figure 4: Pre-processing interface on the CloudBrain-NMR. Figure 5: A reconstruction example of the \({}^{1}\)H-\({}^{15}\)N HSQC 2D spectrum of a protein Gb1 on our platform. (a) is the fully sampled spectrum, (b) and (c) are the reconstructed spectra from 25% NUS data by the deep learning method (MoDern) and the low rank Hankel matrix method (LRHM), respectively, (d) and (e) are the correlations of spectral peaks between the fully sampled spectrum and the reconstructed one using MoDern and LRHM, respectively. ### _Post-processing module_ The interface of the post-processing module is shown in Fig. 7 and the workflow on the cloud is summarized in Fig. 8. This module mainly processes the indirect dimension of the spectral data. The indirect dimension is a 1D vector (or 2D plane) for 2D (or 3D) NMR. 
Taking 3D as an example, two indirect dimensions need to be processed, and the post-processing operations include sine windowing according to Equation (1), Fourier transform, automatic phase correction and imaginary part removal. Among them, the Fourier transform has a total of five modes ('Default', 'Auto', 'Alternative', 'Inverse', 'Negate'); how to choose these modes depends on the specific experimental conditions and data quality, and more details can be found in nmrglue [21]. Finally, spectrum data in *.ft2 format is saved and can be downloaded. Fig. 6: A reconstruction example of the HNCO 3D spectrum of an azurin protein on our platform. (a) and (c) are projections on \({}^{1}\)H-\({}^{15}\)N and \({}^{1}\)H-\({}^{13}\)C planes of the fully sampled reference spectrum. (b) and (d) are projections on \({}^{1}\)H-\({}^{15}\)N and \({}^{1}\)H-\({}^{13}\)C planes of the reconstructed spectrum. ### _Peak identification module_ This module is designed to identify spectral peaks and output peak information. A state-of-the-art deep learning approach, DEEP Picker [22]-[24], is adopted for identification. The workflow and interface of peak identification are provided in Fig. 9 and Fig. 10, respectively. On this page, one needs to first set the minimal peak intensity scale, the noise threshold, and the chemical sample type (protein or metabolite). Then, spectrum data (*.ft2 format) is selected from the online database or local files. By clicking "Submit", parameters and data are sent from the user to the server. Next, DEEP Picker on the cloud is called to identify peaks and extract peak information. Finally, the processed results are saved on the server and sent back once the query is received. Fig. 8: The workflow of post-processing on the cloud. Fig. 10: Peak identification interface on the CloudBrain-NMR. Fig. 7: Post-processing interface on the CloudBrain-NMR. Fig. 9: The workflow of peak identification on the cloud. 
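The reconstruction-quality metric of Equation (2) above is a squared Pearson correlation and is simple to compute directly; a stdlib-only sketch follows (the function name is ours, not the platform's API).

```python
import math

def pearson_r2(x_hat, x):
    """Squared Pearson correlation of Equation (2) between a
    reconstructed spectrum x_hat and its fully sampled reference x."""
    n = len(x)
    mean_hat = sum(x_hat) / n
    mean_ref = sum(x) / n
    cov = sum((a - mean_hat) * (b - mean_ref)
              for a, b in zip(x_hat, x)) / n
    std_hat = math.sqrt(sum((a - mean_hat) ** 2 for a in x_hat) / n)
    std_ref = math.sqrt(sum((b - mean_ref) ** 2 for b in x) / n)
    return (cov / (std_hat * std_ref)) ** 2

# Any affine rescaling of the reference gives a perfect score of 1.
ref = [0.0, 1.0, 4.0, 2.0, 0.5]
rec = [2.0 * v + 3.0 for v in ref]
```

Because \(\mathrm{R}^{2}\) is invariant under scaling and offset, it measures the shape agreement of peak intensities rather than their absolute values.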
An identification example is tested on a 2D NMR spectrum (Fig. 11). Most peaks have been identified and marked with red cross symbols (Fig. 11(a)). To further measure the correctness of identification, a confidence is defined as the score according to \[\text{Softmax}\left(z_{p}\right)=\frac{\exp\left(z_{p}\right)}{\sum\limits_{q=1}^{Q}\exp\left(z_{q}\right)}\,, \tag{3}\] where \(\mathbf{z}\) represents the data points with the same chemical shift, \(p\) represents the \(p^{\text{th}}\) point in \(\mathbf{z}\), \(Q\) is the total number of data points, and \(q\) (\(q\)=1,2,..,\(Q\)) indexes the points in the sum. Fig. 11(b) shows that more than half of the peaks have a confidence greater than 0.9 while the rest have confidences between 0.6 and 0.9. ### _Generating simulation data module_ This module simulates FID signals to build a training set for deep learning spectrum reconstruction. The FID signal is modeled as a superposition of a finite number of exponential functions [25] according to \[y_{m}=\sum\limits_{j=1}^{J}\left(A_{j}e^{i\phi_{j}}\right)e^{-\frac{m\Delta t}{\tau_{j}}}e^{im\Delta t\omega_{j}}\,, \tag{4}\] where \(i\) denotes the imaginary unit, \(\Delta t\) denotes the time interval between two sampling points [25], \(y_{m}\) represents the \(m^{\text{th}}\) (\(m\)=1,2,..,\(M\)) sampled FID data point, and \(A_{j}\), \(\phi_{j}\), \(\tau_{j}\), \(\omega_{j}\) indicate the amplitude, phase, damping factor, and angular frequency of the \(j^{\text{th}}\) (\(j\)=1,2,..,\(J\)) exponential, respectively. The sampling rate of NUS is defined as the ratio \(M\)/\(N\), where \(N\) is the number of fully sampled FID data points. In simulation, the spectral parameters in Equation (4) are chosen randomly from a uniform distribution [11] and a sampling rate has to be set. For example, to train MoDern, 4000 data samples are generated under a sampling rate of 25%. 
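The exponential model of Equation (4) is easy to simulate. The sketch below generates a single damped complex exponential with \(\Delta t=1\) and its angular frequency placed on a DFT grid point, so the discrete spectrum peaks at that bin; all names and parameter values are illustrative, not the platform's.

```python
import cmath

def simulate_fid(n, amp, phase, tau, omega, dt=1.0):
    """One term of Equation (4): a damped complex exponential."""
    return [amp * cmath.exp(1j * phase)
            * cmath.exp(-m * dt / tau)          # damping e^{-m*dt/tau}
            * cmath.exp(1j * m * dt * omega)    # oscillation e^{i*m*dt*omega}
            for m in range(n)]

def dft_magnitudes(y):
    """Naive DFT magnitude spectrum (fine for a small demonstration)."""
    n = len(y)
    return [abs(sum(y[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                    for m in range(n)))
            for k in range(n)]

N, k0 = 64, 9                                   # peak placed at DFT bin 9
fid = simulate_fid(N, amp=1.0, phase=0.0, tau=30.0,
                   omega=2.0 * cmath.pi * k0 / N)
spectrum = dft_magnitudes(fid)
```

Stacking several such terms with randomly drawn \(A_j\), \(\phi_j\), \(\tau_j\), \(\omega_j\), as the module does, yields multi-peak training spectra; keeping only \(M\) of the \(N\) points then emulates NUS at rate \(M/N\).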
### _Neural network training module_ Network training is deployed on the cloud, meaning that users do not need to buy a graphics processing unit. By visiting this module on the website, users can customize parameters, e.g., the sampling rate, to match the application and obtain the best reconstructed spectrum. Even with a mismatched sampling rate between the training and target spectra, this limitation can be overcome well with the state-of-the-art deep learning spectrum reconstruction algorithm, MoDern, which achieves high-fidelity reconstruction [17]. Training usually needs 3.9 hours for 2D NMR and 15.1 hours for 3D NMR. The training process can be skipped since exponential functions have already been synthesized to train a general solution for NMR spectrum reconstruction [11][17]. ### _Quantitative module_ This module first marks each spectral peak in the intercepted region with automatically assigned numbers (e.g. peaks 1-18 in Fig. 12). Then, a parameter "Delta" should be set to determine the integration range of the spectral peaks (the size of the black dashed box in Fig. 13). To zoom in on a specific peak, fill in the spectral peak number and range for viewing. Fig. 13 shows the spectral peak regions under different parameter settings. According to the initially set values, observe the area of the spectral peak. If the important information of the spectral peak is not included in the area, or the spectral peaks are not fully displayed, the user can adjust the paddings of the X-axis and Y-axis to view the full spectral peaks. The Delta value adjusts the size of the spectral peak window for peak area integration. Fig. 11: Peak identification test on a 2D \({}^{1}\)H-\({}^{15}\)N spectrum of a protein Gb1. (a) Identified peaks are marked with red cross symbols, (b) confidence distribution of the identification. Note: the range of 5.8-10.8 ppm of the full spectrum is intercepted for the test. Fig. 12: Peak assignment of a \({}^{1}\)H-\({}^{13}\)C HSQC spectrum. 
The larger the padding values, the smaller the amplification factor of the spectral peak. Peak identities and their integration values are automatically saved online. ## VI Conclusions In this work, we have developed CloudBrain-NMR, an intelligent cloud computing platform to process, reconstruct and analyze NMR spectra. Notable deep learning functions, such as undersampled spectrum reconstruction and automatic peak picking, have been integrated. CloudBrain-NMR is an open-access platform at [https://csrc.xmu.edu.cn/CloudBrain.html](https://csrc.xmu.edu.cn/CloudBrain.html). Users only need to visit the website through a browser and do not need to install any software. Simultaneous access by multiple users is supported, which has been found very useful in biochemical training courses, such as the BioNMR Advanced Tools seminar held at the University of Gothenburg [36][37]. In the future, we plan to enhance CloudBrain-NMR by integrating other state-of-the-art artificial intelligence functions and to provide reliable services for the NMR community. ## Acknowledgments The authors thank Drs. Dawei Li and Rafael Bruschweiler (The Ohio State University) for providing the DEEP Picker code, Jonathan J. Helmus and Christopher P. Jaroniec for the nmrglue data processing code, and Prof. Vladislav Orekhov (University of Gothenburg) for valuable suggestions.
2309.16075
A review of variable-pitch propellers and their control strategies in aerospace systems
The relentless pursuit of aircraft flight efficiency has thrust variable-pitch propeller technology into the forefront of aviation innovation. This technology, rooted in the ancient power unit of propellers, has found renewed significance, particularly in the realms of unmanned aerial vehicles and urban air mobility. This underscores the profound interplay between visionary aviation concepts and the enduring utility of propellers. Variable-pitch propellers are poised to be pivotal in shaping the future of human aviation, offering benefits such as extended endurance, enhanced maneuverability, improved fuel economy, and prolonged engine life. However, with additional capabilities come new technical challenges. The development of an online adaptive control of variable-pitch propellers that does not depend on an accurate dynamic model stands as a critical imperative. Therefore, a comprehensive review and forward-looking analysis of this technology is warranted. This paper introduces the development background of variable-pitch aviation propeller technology, encompassing diverse pitch angle adjustment schemes and their integration with various engine types. It places a central focus on the latest research frontiers and emerging directions in pitch control strategies. Lastly, it delves into the research domain of constant speed pitch control, articulating the three main challenges confronting this technology: inadequacies in system modeling, the intricacies of propeller-engine compatibility, and the impact of external, time-varying factors. By shedding light on these multifaceted aspects of variable-pitch propeller technology, this paper serves as a resource for aviation professionals and researchers navigating the intricate landscape of future aircraft development.
Hanjie Jiang, Ye Zhou, Hann Woei Ho
2023-09-28T00:13:19Z
http://arxiv.org/abs/2309.16075v1
# A review of variable-pitch propellers and their control strategies in aerospace systems ###### Abstract The relentless pursuit of aircraft flight efficiency has thrust variable-pitch propeller technology into the forefront of aviation innovation. This technology, rooted in the ancient power unit of propellers, has found renewed significance, particularly in the realms of unmanned aerial vehicles and urban air mobility. This underscores the profound interplay between visionary aviation concepts and the enduring utility of propellers. Variable-pitch propellers are poised to be pivotal in shaping the future of human aviation, offering benefits such as extended endurance, enhanced maneuverability, improved fuel economy, and prolonged engine life. However, with additional capabilities come new technical challenges. The development of an online adaptive control of variable-pitch propellers that does not depend on an accurate dynamic model stands as a critical imperative. Therefore, a comprehensive review and forward-looking analysis of this technology is warranted. This paper introduces the development background of variable-pitch aviation propeller technology, encompassing diverse pitch angle adjustment schemes and their integration with various engine types. It places a central focus on the latest research frontiers and emerging directions in pitch control strategies. Lastly, it delves into the research domain of constant speed pitch control, articulating the three main challenges confronting this technology: inadequacies in system modeling, the intricacies of propeller-engine compatibility, and the impact of external, time-varying factors. By shedding light on these multifaceted aspects of variable-pitch propeller technology, this paper serves as a resource for aviation professionals and researchers navigating the intricate landscape of future aircraft development. Variable-pitch propeller; variable-pitch propeller-equipped engines; variable-pitch control strategy. 
## 1 Introduction Air propellers represent one of the oldest and enduring propulsion technologies in aviation history, with a rich and dynamic evolution. Since the pioneering days of aviation marked by the Wright brothers' historic Flyer 1 in 1903 [1], the development of aeronautical propeller technology has remained closely intertwined with the progress of aircraft design. The 1930s witnessed a significant leap in both aircraft speed and engine power, driving the concurrent advancements in propeller technology [1, 2]. However, by the mid-1950s, the relentless development and refinement of turbojet engines, which do not rely on propellers to generate thrust, began to extend from military to civilian aviation. While this transition posed some challenges to propeller technology, their continued application in domains such as short take-off and landing (STOL) and long-endurance flight showcased their enduring relevance. The global oil crisis of the early 1970s rekindled interest in propeller-powered engines due to their energy efficiency [2]. Amidst the era of fixed-pitch propellers, the concept of variable-pitch propellers emerged and garnered extensive research attention. Unlike their fixed counterparts, which are optimized for a specific airspeed range around the design point, variable-pitch propellers offer the distinct advantage of adaptability to diverse flight conditions across the entire operational envelope. Consequently, aerospace engineers and researchers found variable-pitch propellers increasingly appealing and integrated them into various aircraft types [3]. Notable examples include the groundbreaking S-97 advanced high-speed helicopter [4] (Figure 1(a)), a collaborative effort by Sikorsky Helicopter and Boeing Defense. The coaxial main rotors of S-97 adopt rigid rotors, which have no traditional flapping and lead-lag hinge, and retain the collective pitch control. 
Moreover, it incorporates a variable-pitch propulsion-propeller, enabling high-speed forward thrust as well as deceleration and even backward flight in level flight. Similarly, Bell's innovative V247 tilt-rotor Unmanned Aerial Vehicle (UAV) [5] (Figure 1(b)) features a pair of variable-pitch propellers at the wingtips, facilitating vertical take-off and landing (VTOL), seamless transition, and efficient forward flight. The cutting-edge Ma700 Advanced Turbo Propeller Branch Airliner, illustrated in Figure 1(c) [6], represents a remarkable achievement by the Aviation Industry Corporation of China. Furthermore, the MQ-9 UAV, developed by General Atomic Aviation Systems Corporation and showcased in Figure 1(d) [7], exemplifies a typical fixed-wing aircraft equipped with variable-pitch propellers. Variable-pitch propellers have also found their place in the rapidly evolving domain of electric vertical take-off and landing (eVTOL) aircraft [10], as evidenced by the Airbus Vahana Urban Air Mobility (UAM) depicted in Figure 1(e) [8]. Currently, more than 100 technology startups worldwide are fervently engaged in developing UAM solutions, envisioning them as vital modes of future transportation, potentially supplanting traditional automobiles in certain scenarios. Notably, a majority of these UAM initiatives employ variable-pitch propellers to enhance performance and versatility. In addition to these larger aircraft, variable-pitch propellers have also made their mark in the realm of smaller aviation, including multi-rotor UAVs, to improve their flight performance. Figure 1(f) showcases a high-mobility and high-altitude-capable multi-rotor aircraft developed by the China Helicopter Research and Development Institute (CHRDI) [9], underscoring the adaptability of variable-pitch technology across a spectrum of aircraft types.

Figure 1: Several types of aircraft using variable-pitch propellers.
The incorporation of variable-pitch propellers offers a host of advantages, including the expansion of flight envelopes, enhancement of flight performance, improved fuel efficiency, and extended engine life. However, fully harnessing these benefits necessitates addressing a range of intricate technical challenges. Beyond the study of propeller design and variable-pitch mechanisms, comprehensive research into the engines that drive these propellers and the intricate control systems uniting these three components is essential for maximizing their potential.

## 2 Variable-pitch propellers

### Basics of variable-pitch propellers

In 1872, Winham, a pioneer in British aviation, made a groundbreaking contribution by introducing variable-pitch propellers into fixed-wing aircraft, as documented in his annual report for the Aviation Society [3]. However, it wasn't until the 1910s that people began to grasp the immense potential of variable-pitch propellers in enhancing engine power and efficiency. Progress on variable-pitch propellers had, up to that point, been constrained by the structural limitations of wooden propellers and remained largely theoretical. The turning point came with the practical introduction of metal propellers in 1923, marking a significant milestone in aviation history. The demand for variable-pitch propellers surged, driven by the realization that traditional fixed-pitch propellers could not adequately meet the efficiency requirements across various flight phases [11]. A few years later, the US Air Force embarked on a mission to optimize propeller efficiency throughout the entire flight envelope by providing a suitable blade angle for each section of the flight envelope. This endeavor aimed to surpass the performance of fixed-pitch propellers. It became evident that maintaining precise control over engine speed at different flight speeds held the key to enhancing both propeller and engine efficiency.
Equally crucial was the ability to regulate engine speed to ensure consistent power output, spanning from takeoff to maximum flight speed. In a pivotal development in 1933, the automatic variable-pitch propeller made its debut in the Boeing 247, depicted in Figure 2. This innovation brought about substantial improvements in performance and adaptability. Implementing a variable-pitch mechanism during takeoff and cruise flight yielded remarkable benefits, including a 20% reduction in takeoff run distance, a 22% increase in climb rate, a 5% boost in cruise speed, and an impressive 1220-meter gain in ceiling altitude [12, 11]. This technology rapidly evolved into a sophisticated constant-speed propeller system, seamlessly adjusting the pitch in real time to match flight speed variations.

Figure 2: Boeing 247 airliner [12]

Variable-pitch propellers employ a mechanism that synchronously rotates their blades around each blade handle's axis. In the realm of aircraft propulsion, these variable-pitch propellers can be categorized into two fundamental types based on their blade rotation mechanisms:

1. Hydraulic variable-pitch propellers: These systems utilize hydraulic pressure to drive the blade rotation mechanism and are predominantly employed in larger and medium-sized aircraft.
2. Electric variable-pitch propellers: These systems rely on electric motors to drive the blade rotation mechanism and find frequent use in smaller aircraft, especially UAVs.

#### 2.1.1 Hydraulic variable-pitch propellers

Hydraulic variable-pitch propellers employ hydraulically operated governors to effect changes in blade angles. These governors incorporate rotating flyweights, a preloaded spring, and a driving gear, collectively constituting a critical component known as the Constant Speed Unit (CSU) [13]. The engine crankshaft drives the governor, causing the flyweights to respond to alterations in engine speed.
When the engine's revolutions per minute (RPM) increase, centrifugal forces compel the flyweights to move outward, and vice versa, as illustrated in Figure 3. Pilots exercise control over the extent of flyweight movement through a lever known as the propeller pitch control. Adjusting this lever tensions the spring connected to the flyweights. The constant speed device ensures that the propeller maintains a constant speed during flight. The governor's flyweights are intricately linked to the pilot valve, which governs the flow of oil to or from the propeller hub, thus determining the necessary propeller blade angle. When the propeller pitch control is adjusted in a rearward direction, the pilot's intention is to decrease the target rotational speed of the propeller. In response, the flyweights move outward, causing the pilot valve to open, and permitting the flow of oil into the propeller hub. Increasing blade angles will increase the air pressure on the propeller, which in turn demands more engine torque. As a result, the RPM decreases, and the flyweights return to their equilibrium position. Once the pilot valve is closed, both the blade angle and engine speed stop changing. Conversely, when the propeller pitch control is shifted forward, the pilot intends to increase the target rotational speed. In this scenario, the flyweights move inward, facilitating the outflow of oil from the propeller hub. Consequently, the blade angle decreases, causing the propeller to take a smaller "bite" of the air and necessitating less engine torque. As a result, the RPM increases, and the flyweights return to their equilibrium position. Once again, the pilot valve is closed, and both the blade angle and engine RPM are held steady [13, 14, 15].
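The closed-loop behavior described above can be sketched as a toy simulation. The first-order engine model, the proportional valve action, and every numerical value below are illustrative assumptions, not data for any real constant-speed unit:

```python
def simulate_csu(rpm_target=2400.0, t_end=20.0, dt=0.05):
    """Crude Euler simulation of a constant-speed unit holding propeller RPM.

    The engine/propeller is modelled as a first-order lag whose equilibrium
    speed drops as the pitch coarsens, and the flyweight/pilot-valve action
    as proportional feedback on the speed error (all numbers are made up).
    """
    rpm, blade_angle = 2000.0, 15.0       # initial speed (RPM) and pitch (deg)
    rpm_free, k_load, tau = 3000.0, 25.0, 0.5
    k_gov = 0.05                          # deg of pitch per (RPM * s) of speed error
    for _ in range(int(t_end / dt)):
        # Overspeed -> flyweights move out -> oil flows into the hub -> coarser
        # pitch, which loads the engine and pulls the RPM back down (and vice versa).
        blade_angle += k_gov * (rpm - rpm_target) * dt
        blade_angle = min(max(blade_angle, 10.0), 40.0)   # mechanical pitch stops
        rpm_eq = rpm_free - k_load * blade_angle          # steady speed at this pitch
        rpm += (rpm_eq - rpm) * dt / tau                  # first-order engine lag
    return rpm, blade_angle
```

Under this made-up model, the loop settles at the commanded 2400 RPM with roughly 24 degrees of pitch; the same feedback structure, implemented hydraulically rather than arithmetically, is what holds a constant-speed propeller on target.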
Figure 3: Hydraulic pitch control system [13].

#### 2.1.2 Electric variable-pitch propeller

The electrically operated pitch change mechanism incorporates a control system that passes electric power to the propeller via a sensor/brush assembly and a specially constructed slip-ring assembly. The assemblies are mounted separately on the aircraft engine and the spinner back-plate [16], as illustrated in Figure 4. The system's pitch adjustment is orchestrated by an electric servo motor working in tandem with a planetary gearbox. The pitch variation mechanism comprises a precision control screw responsible for regulating the position of the pitch variation slider. This slider, positioned along the axis of the propeller hub, in turn impels the cams affixed to the assembly base of each propeller blade. As a result, the pitch mechanism orchestrates precise adjustments to the angle of each blade. Similar to the hydraulic pitch mechanism, the electric counterpart also features a constant speed controller, harnessed through a solid-state microprocessor. This governor orchestrates adjustments to the blade or pitch angle, ensuring a consistent engine and propeller speed. Propeller speed is meticulously gauged through a solid-state magnetic sensor mounted on the aircraft engine [16, 17]. The electronic governor operates by comparing the current propeller speed with the speed specified by the pilot, employing a control loop to minimize any disparities. The nonlinear response of this control loop is finely tuned through a range of parameters designed in the controller software. These control parameters dictate how the controller responds to variations in speed error, affecting factors such as the error's magnitude and whether it is diminishing or increasing.

### Engines with variable-pitch propellers

The aviation industry currently encompasses more than ten types of aero-engines, as illustrated in Table 1.
These engines are categorized based on different thrust generation methodologies and engine structures. Irrespective of their specific design, all aero-engines achieve propulsion by exerting a propulsive force in opposition to a particular composition of fluid, commonly referred to as the propulsive working fluid [18, 19]. One classification involves the direct ejection of the fluid passing through the engine, such as turbojet engines, rocket engines, and similar variants. In contrast, the second major category entails engines that drive propellers or other power devices to transmit momentum into the surrounding ambient air. This category encompasses engines like turboshaft engines and piston engines, among others. Beyond the generation of thrust, aircraft engines require a source of energy to sustain their operation. In the case of aero-engines, nearly all of them rely on the expansion of fluids created through the combustion of fuel in the presence of an oxidizer or air. This resulting composition, referred to as the engine working fluid, serves as the vital energy source. Various types of aviation engines employ diverse methods of compressing air to facilitate its entry into the engine. Several engine types, as indicated in Table 1, are closely integrated with propellers, including those equipped with variable-pitch propellers. These engine categories encompass turboprop engines, turboshaft engines, piston engines, Wankel engines, and DC motors [18, 19]. These aero-engines act by driving propellers to expel ambient air in a manner that generates forward propulsion or lift to counteract the force of gravity.

Figure 4: Electric pitch control system [16].

#### 2.2.1 Turboprop Engine

Turboprop engines represent a subtype of gas turbine engines extensively utilized in aircraft of diverse sizes. These engines primarily allocate a significant portion of their power output to drive an external variable-pitch propeller.
Typically, turboprop engines find favor among smaller aircraft categories, including test aircraft, ultralight aircraft, and UAVs, as depicted in Figure 5.

Table 1: Aircraft Engine Types

| Engine Type | Means of Compression | Engine Working Fluid | Propulsive Working Fluid |
| --- | --- | --- | --- |
| Turbojet Engine | Turbine-driven Compressor | Fuel/Air Mixture | Fuel/Air Mixture |
| _Turboprop Engine_ | Turbine-driven Compressor | Fuel/Air Mixture | Ambient Air |
| Turbofan Engine | Turbine-driven Compressor + Fan | Fuel/Air Mixture | Fuel/Air Mixture + Ambient Air |
| _Turboshaft Engine_ | Turbine-driven Compressor | Fuel/Air Mixture | Ambient Air |
| Ramjet Engine | Ram Compression | Fuel/Air Mixture | Fuel/Air Mixture |
| Pulse-jet Engine | Compression Due to Combustion | Fuel/Air Mixture | Fuel/Air Mixture |
| _Wankel Engine_ | Rotation of Rotors | Fuel/Air Mixture | Ambient Air |
| _Piston Engine_ | Reciprocating Action of Pistons | Fuel/Air Mixture | Ambient Air |
| _DC Motor_ | – | – | Ambient Air |

Engines shown in italics drive propellers, including variable-pitch propellers.

Figure 5: Turboprop engine for test aircraft, ultralight aircraft and UAVs [20].

The anatomy of a turboprop engine encompasses key components such as an intake, a compressor, a combustion chamber, a turbine, and a nozzle. The airflow is channeled through the intake port and flows into the compressor [18]. Fuel is added to the compressed air within the combustion chamber, igniting the working fluid that proceeds into the turbine.
Here, the kinetic energy generated serves a dual purpose: it propels the propeller while concurrently sustaining the compressor's operation by supplying air to the combustion chamber. The remaining energy generates a small amount of thrust outside the propeller via the nozzle. Most turboprop engines feature a hydraulic variable-pitch propeller paired with a constant-speed mechanism.

#### 2.2.2 Turboshaft Engine

Turboshaft engines, another category of gas turbine engines, are optimized for maximizing shaft power output. These engines find extensive use in helicopters and rotor-type aircraft relying on a shaft drive, as exemplified in Figure 6. Turboshaft engines hold a distinct advantage by delivering impressive power-to-weight ratios exceeding 2.5 kW/kg [21, 18]. In terms of raw power production, turboshaft engines can reach substantial magnitudes. Presently, these engines can generate up to 6,000 or even 10,000 horsepower, a stark contrast to the relatively lower power output of piston engines. Economically, while turboshaft engines consume slightly more fuel compared to top-performing piston engines, this is partly compensated by the fact that jet fuel is cheaper than petrol. Nevertheless, it is essential to acknowledge that the manufacturing of turboshaft engines is intricate and costly, representing a significant drawback in their widespread adoption. The turboshaft engine shares fundamental functions and structural similarities with other gas turbine engines. In terms of its structural composition, the turboshaft engine adheres to the fundamental framework of a gas generator, featuring components such as an inlet, compressor, combustion chamber, and exhaust nozzle. However, a defining characteristic of the turboshaft engine is the incorporation of a free turbine. This unique turbine does not drive the compressor; instead, it serves the primary purpose of power generation.
The alteration in rotor speed within the engine induces significant variations in centrifugal forces. To ensure stable operation, the rotor needs to be designed to maintain a consistent speed, driven by the free turbine. Any fluctuations in power output are finely tuned by adjusting the blade pitch accordingly, allowing for precise control of the engine's performance.

Figure 6: Turboshaft engine designed for light helicopters and UAVs [22].

#### 2.2.3 Piston engine

The piston engine stands as the most common type of power plant found in today's world, from automobiles and boats to an array of self-powered machinery. Regardless of whether it's fueled by gasoline, alcohol, or diesel, the piston engine is the driving force behind many of mankind's prized mechanical possessions. This engine design employs pistons connected to a crankshaft through connecting rods, enabling the transmission of power. Fuel and air are drawn into the engine through the carburetor or fuel injection system, eventually entering the combustion chamber. Here, the mixture is ignited by spark plugs, initiating a downward motion of the piston. This, in turn, sets the crankshaft into motion, producing the mechanical power that drives the machinery. In the mid-1940s, piston aero-engines began to yield ground to gas turbine engines in military aircraft and large-scale civil aircraft [19]. Nevertheless, piston aero-engines continue to find widespread use in light, low-speed aircraft, helicopters, and UAVs, primarily due to their exceptional cost-effectiveness. Figure 7 showcases the logistics UAV deployed by Jingdong, which employs a Rotax piston engine to reduce manufacturing and operational costs.

#### 2.2.4 Wankel engine

The Wankel engine, also known as the rotary engine, owes its invention to the German engineer Felix Wankel (1902-1988). Drawing on insights from prior research, he tackled critical technical challenges and successfully developed the first functional Wankel engine [24].
What sets the Wankel engine apart is its unique reliance on the rotational movement of a triangular rotor to govern the processes of compression and exhaust, a stark departure from the linear motion characteristic of traditional reciprocating piston engines. In this ingenious design, as the center of the triangular rotor orbits around the center of the output shaft, the rotor itself undergoes rotation around its own center. This triangular rotor effectively partitions the rotor housing into three distinct chambers, each sequentially carrying out intake, compression, power generation, and exhaust functions. Remarkably, the triangular rotor completes three cycles during a single revolution. Compared to conventional four-stroke engines, which perform work only once every two revolutions, the Wankel engine boasts a distinct advantage with its high horsepower-to-volume ratio. Furthermore, due to the rotational running characteristics of the Wankel engine, it doesn't necessitate precise crankshaft balancing to achieve higher operational speeds. This engine features a minimal number of moving parts, just two in total, as opposed to the more than 20 moving parts found in typical four-stroke engines, including components like intake and exhaust valves. This streamlined structure not only simplifies the engine but also significantly reduces the likelihood of failure. While the use of Wankel engines in the automotive industry has been restricted due to their relatively high specific fuel consumption, susceptibility to wear, and elevated cost, they have found a niche in the aviation field. Notably, their small size, light weight, and favorable vibration characteristics make them well-suited for specialized aviation applications. A typical example is the Martin Jetpack, which employs a Wankel engine to fulfill the size and power-to-weight ratio requirements of its power plant, as depicted in Figure 8. 
Figure 7: The JD Logistics UAV and its piston engine [23].

#### 2.2.5 DC motor

DC motors harness the principles of electricity and magnetic fields to generate shaft power [27, 28, 29]. In its simplest configuration, a DC motor relies on a pair of magnets with opposing polarities and a coiled wire acting as an electromagnet. The interplay of attraction and repulsion between these magnets furnishes the torque necessary to set the motor in motion. Notably, brushless DC motors have become the powerhouses driving a myriad of UAV propulsion systems, particularly in the realm of multi-rotor UAVs and various clean energy UAV setups. The utilization of alternative energy sources such as solar power, hydrogen energy, and fuel cell technology in the aviation sector hinges on the performance of DC motors. Additionally, the influence of DC motor power systems extends to the evolution of general aviation, encompassing domains like ultralight aircraft and eVTOL aircraft, as depicted in Figure 9. Brushless motors stand out due to their commendable dynamic response, impressive power-to-weight ratio, and environmentally friendly attributes when contrasted with traditional fuel engines. Presently, the development and application of brushless motors are predominantly influenced by advancements in battery technology. In specific scenarios demanding extended endurance, piston engines continue to hold sway, with certain applications opting for a hybrid configuration that combines piston engines with DC motors.

Figure 8: The Martin Jetpack and its Wankel engine [25, 26].

Figure 9: DC motors are used in the field of UAVs and UAMs [27, 28].

## 3 Variable-pitch control and engine control

### Variable-pitch control

The linchpin in the seamless integration of variable-pitch propellers with various engine types resides in precise variable-pitch control. Numerous research endeavors have contributed a diverse array of methods for controlling variable-pitch propellers [30, 31, 32].
Among these, pitch angle controllers founded on Proportional-Integral (PI), Proportional-Derivative (PD), and Proportional-Integral-Derivative (PID) strategies enjoy widespread application [33, 34, 35, 36]. Presently, the conventional PID control method is the preferred choice in pitch control systems, well-suited to specific operational conditions. However, its adaptability falters when confronted with changing operating conditions, rendering controller parameter adjustment a challenging endeavor [37]. For enhanced robustness in addressing nonlinear challenges, Linear Quadratic Gaussian (LQG) and Sliding Mode Control (SMC) techniques have been harnessed for pitch angle control [38]. SMC, in particular, stands out as an effective approach for designing robust control methodologies tailored to the intricacies of complex variable-pitch nonlinear systems. It endeavors to resolve fundamental issues such as time delays, parameter uncertainties, and disturbances [39]. However, SMC often exhibits convergence problems when approaching the sliding surface and relies on fixed control laws that are not easily adaptable to changing system dynamics or varying operational conditions; similar problems arise in aircraft control more broadly. These limitations have spurred the exploration of adaptive control methodologies, such as Incremental Nonlinear Dynamic Inversion (INDI) [40] and Adaptive Model Predictive Control (AMPC) [41], which exhibit the capacity to adapt and fine-tune control strategies in response to evolving system behaviors, external disturbances, and uncertainties. This adaptability positions INDI and AMPC as promising candidates for further enhancing the precision and versatility of variable-pitch control systems. Intelligent control techniques have also found a niche in modeling and controlling variable-pitch propellers, particularly in the context of nonlinear dynamic systems.
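Before turning to learning-based methods, the fixed-gain PID pitch loop discussed above can be made concrete with a short sketch. The gains, the overspeed-to-coarser-pitch sign convention, and the state layout below are illustrative assumptions, not a tuned design for any particular propeller:

```python
def pid_pitch_step(state, rpm_meas, rpm_ref, dt,
                   kp=0.02, ki=0.01, kd=0.002):
    """One update of a discrete PID pitch-angle loop.

    `state` carries (integral, previous_error). Returns the pitch increment
    to command and the updated state. All gains are illustrative.
    """
    integral, prev_err = state
    err = rpm_meas - rpm_ref            # positive error = overspeed
    integral += err * dt                # accumulated speed error
    deriv = (err - prev_err) / dt       # rate of change of the error
    delta_pitch = kp * err + ki * integral + kd * deriv
    return delta_pitch, (integral, err)
```

In a real installation the command would additionally be rate-limited and the integral clamped (anti-windup) before reaching the pitch actuator, and the fixed gains are exactly where the adaptability limitation noted above shows up once operating conditions change.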
Neural network and fuzzy logic methods have emerged as potent tools when grappling with highly nonlinear systems. Artificial Neural Networks (ANN) exhibit remarkable precision in nonlinear control under specific system conditions, relying on trainable information [42, 43]. Building on this foundation, machine learning algorithms, including Reinforcement Learning (RL), have been applied to pitch control challenges, bolstering the controller's online learning capabilities, reducing model requirements, and delivering commendable control precision [44, 45]. Recent research efforts have witnessed a surge in RL studies, aiming to enhance learning efficiency and adaptability in the face of nonlinearities and uncertainties [46, 47]. These advancements have positioned RL as a promising solution, particularly when confronted with the intricacies of optimal adaptive control problems inherent in variable-pitch control systems. ### Control strategies for aircraft with variable-pitch propellers Beyond the fundamental propeller pitch control methods, the realm of aircraft equipped with variable-pitch propellers has witnessed a surge in enthusiasm for innovative control strategies. These aircraft, including helicopters, rotors, and fixed-wing platforms, harness variable pitch technology and appropriate control strategies in pursuit of specific operational advantages [48, 49, 50, 51, 52]. A noteworthy development in this field involves the development of practical online optimization algorithms aimed at minimizing power consumption within the propulsion system across a range of thrust settings [53]. This pioneering approach has resulted in a remarkable 25% enhancement in power efficiency and was effectively applied to a variable-pitch propeller actuated by a DC motor. Furthermore, the groundwork for the Variable-Pitch Propeller Drive Controller (VPPDC) has been laid [54], extending the standard drive configuration to enhance energy efficiency and prolong flight duration. 
VPPDC leverages precise knowledge of propeller and brushless motor characteristics, employing an online optimization algorithm to calculate the optimal blade angle. This calculated angle minimizes power consumption within the electric propulsion system for the given thrust value, offering an innovative solution for improved flight efficiency. In addition, researchers have delved into the domain of flight dynamics modeling and controller design for variable-pitch quadrotors [55]. Their approach incorporated three control loops, with an additional loop addressing the challenges of control allocation stemming from the non-trivial relationship between variable-pitch and rotor forces. This multi-loop controller framework enhances the maneuverability and control precision of variable-pitch quadrotors, further expanding the application possibilities of this technology. Despite the advancements in variable-pitch flight control strategies, there are still several promising control methods used in flight control that have yet to be extensively applied in the realm of variable-pitch aircraft. One such method is INDI [40, 56], which excels in handling unknown dynamics, changing conditions, and disturbances by iteratively updating control laws. When used in complex dynamic systems like aircraft with variable-pitch propellers, INDI with a cascaded control structure can be employed to handle the intricacies of these systems while providing effective control and adaptation. Another intriguing approach is Adaptive Critic Designs (ACDs) [57], which combine RL and neural networks to adapt and optimize control policies based on the aircraft's performance and environmental factors. Additionally, Hierarchical Reinforcement Learning (HRL) offers the potential to create complex control hierarchies that manage various aspects of flight simultaneously [58], providing a comprehensive approach to aircraft control.
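To make the online power-minimization idea behind approaches like VPPDC concrete, a toy perturb-and-observe search over blade angle is sketched below. It is only an illustration of the general technique, not the published VPPDC algorithm; `measure_power` stands in for a real bus-power measurement taken after the thrust loop has settled at the candidate pitch, and all numbers are invented:

```python
def seek_min_power_pitch(measure_power, pitch0=12.0, step0=0.5,
                         n_iter=40, pitch_min=5.0, pitch_max=30.0):
    """Perturb-and-observe search for the blade angle (deg) that minimizes
    power draw at a fixed thrust setting."""
    pitch = pitch0
    p_best = measure_power(pitch)
    direction, step = 1.0, step0
    for _ in range(n_iter):
        # Nudge the pitch, respecting the mechanical pitch stops.
        cand = min(max(pitch + direction * step, pitch_min), pitch_max)
        p_new = measure_power(cand)
        if p_new < p_best:        # power dropped: accept and keep moving
            pitch, p_best = cand, p_new
        else:                     # power rose: reverse direction, refine step
            direction, step = -direction, step * 0.5
    return pitch
```

Because each probe is a measurement rather than a model evaluation, a scheme of this kind can track the optimum as battery voltage, airspeed, or air density drift, which is the practical appeal of online optimization over a fixed lookup table.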
These control methods have shown promise in addressing complex and dynamic flight scenarios and may hold the key to further enhancing the capabilities of variable-pitch aircraft.

### Benefits to engineering applications

Variable-pitch propeller control has garnered increasing attention within the aerospace engineering domain, driven by its potential to deliver substantial benefits. These anticipated advantages for the aerospace sector can be succinctly outlined as follows:

1. Improve the endurance performance of the aircraft

Findings from experimental research conducted at the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics [59] have illustrated that variable-pitch quadrotors can simultaneously improve endurance performance and positioning accuracy when compared to fixed-pitch counterparts by using a steady-state identification method with minimum power consumption, as shown in Figure 10. Furthermore, the VPPDC research at Ostrava University of Technology has showcased the significant impact of variable-pitch propulsion units (VPPU) equipped with integrated algorithms, such as the Adaptive Pitch Control Algorithm (APCA) or Pitch Control Algorithm (PCA), in augmenting hover and flight times [54].

2. Enhance the aircraft maneuverability

A series of studies conducted by Cutler and his team at the Massachusetts Institute of Technology (MIT) have shed light on the notable dynamic distinctions in thrust output between fixed-pitch and variable-pitch propellers [60, 61, 62]. Variable-pitch actuation offers substantial advantages over fixed-pitch quadrotors, particularly in terms of enhancing thrust response and enabling swift and efficient force reversal. Variable-pitch propellers confer a pivotal advantage to quadrotors--the capacity to generate reverse thrust. This capability empowers the vehicle to execute maneuvers like flying upside down and enables rapid deceleration by momentarily reversing the propeller pitch.
In variable-pitch mode, the quadrotor exhibits exceptional tracking precision, with just 1% overshoot compared to the substantial 60% overshoot observed in fixed-pitch mode [62]. The improved tracking performance is attributed to the ability to achieve significant negative accelerations through pitch control. Furthermore, the Indian Institute of Technology has developed a hybrid UAV combining a variable-pitch quadrotor and fixed-wing configuration for express transportation [35]. Variable-pitch control technology bestows this platform with exceptional maneuverability in logistics applications, enhancing its capability to navigate and adapt to diverse scenarios effectively.

Figure 10: The endurance performance versus propeller pitch angles [59].

3. Improve the fuel economy of the power system

The external characteristics of the 492Q piston engine, as depicted in Figure 11, illustrate the power, torque, and power-specific fuel consumption curves of the engine under full load conditions (with the gasoline engine at full throttle) as functions of speed [63]. The symbols \(g_{c}\) and \(P_{c}\) denote the engine-specific fuel consumption and shaft power, respectively, while \(T_{c}\) represents the shaft torque of the piston engine. These parameters exhibit variations across engine speeds, spanning from 1000 RPM to approximately 3700 RPM. Notably, at around 3000 RPM, the engine exhibits nearly the minimum power-specific fuel consumption, accompanied by significant effective power and appropriate torque output. These data represent typical piston engine performance characteristics. In practice, engines should not operate at their maximum speeds for extended durations. For the 492Q engine, maintaining a constant speed of around 3000 RPM allows for sufficient power and torque output while effectively reducing fuel consumption.

4. Extend engine life

Aero-piston engines, owing to their complex structures and demanding operating conditions, are susceptible to various types of faults, with wear fault being the most common one. Engine tachometers often feature a red zone denoting speed limitations. While achieving these speeds is possible, prolonged operation within this range significantly accelerates engine wear. High engine speeds can lead to severe wear, but even at lower speeds, wear may not necessarily decrease. The engine's internal lubrication relies on an oil pump to deliver lubricant to critical areas, ensuring that the shaft is lifted with lubrication to prevent direct contact with bearings and reduce wear. The most substantial wear occurs when oil pressure is insufficient during startup. The oil pump is linked to the engine's crankshaft, and oil pressure increases with engine speed. Consequently, wear is exacerbated at low speeds and when the engine operates under heavy loads.

Figure 11: External characteristics of 492Q piston engine [63].

5. Meet the special requirements

To comprehensively study the aerodynamic layout characteristics and flight dynamics of the original aircraft, it becomes imperative to establish specific similarity criteria between the original aircraft and the sub-scale test model. For propeller-driven aircraft, the preferred similarity principle often revolves around achieving similarity in the Froude number and the advance ratio [64]. The Froude number \(Fr\) can be represented as: \[Fr=\frac{V^{2}}{gl},\] (1) where \(V\) denotes the aircraft's velocity, \(g\) stands for the gravitational coefficient, and \(l\) represents the characteristic length of the aircraft.
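As a quick numeric check of the Froude-number definition in Eq. (1), the sketch below uses made-up full-scale values and a 1/4 scale factor to verify that choosing the model speed as \(V_{2}=V_{1}\sqrt{l_{2}/l_{1}}\) leaves the Froude number unchanged:

```python
import math

def froude(V, l, g=9.81):
    """Froude number Fr = V^2 / (g l) from Eq. (1)."""
    return V ** 2 / (g * l)

# Hypothetical full-scale values and a 1/4-scale test model (illustrative only)
V1, l1 = 60.0, 12.0            # full-scale speed (m/s) and characteristic length (m)
l2 = l1 / 4.0                  # sub-scale model length
V2 = V1 * math.sqrt(l2 / l1)   # model speed that preserves the Froude number
Fr1, Fr2 = froude(V1, l1), froude(V2, l2)
```

The equality Fr1 = Fr2 is exactly the similarity condition the derivation that follows exploits to fix the speed and rotational-speed ratios of the sub-scale model.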
Given the equality of Froude numbers between the original aircraft (\(Fr_{1}\)) and the sub-scale test model (\(Fr_{2}\)), we can derive the following equations: \[\frac{V_{1}^{2}}{V_{2}^{2}}=\frac{l_{1}}{l_{2}}\] (2) \[\frac{V_{1}}{V_{2}}=\sqrt{\frac{l_{1}}{l_{2}}}\] (3) The propeller advance ratio \(\lambda\) can be represented as \[\lambda=\frac{V}{nD},\] (4) where \(n\) denotes the RPM and \(D\) stands for the diameter of the propeller. Assuming an equivalent advance ratio and based on the outcome of Eq. (3), we can derive \[\frac{n_{2}}{n_{1}}=\frac{V_{2}D_{1}}{V_{1}D_{2}}=\frac{V_{2}l_{1}}{V_{1}l_{2}}=\sqrt{\frac{l_{2}}{l_{1}}}\cdot\frac{l_{1}}{l_{2}}=\sqrt{\frac{l_{1}}{l_{2}}}.\] (5) After establishing the scale of the aircraft, it becomes essential to maintain a constant rotational speed for both turboprop and turbo-shaft engines. Eq. (5) gives the specific ratio between these rotational speeds. In the context of propeller power system similarity studies, it is often imperative to ensure similarity in Reynolds number and advance ratio [64, 65]. The Reynolds number can be defined as \[Re=\frac{\rho Vl}{\mu},\] (6) where \(\rho\) represents air density, \(V\) denotes the aircraft velocity, \(l\) stands for the characteristic length of the aircraft, and \(\mu\) is the dynamic viscosity of air. Assuming equivalence in Reynolds number and advance ratio (and equal air viscosity, \(\mu_{1}=\mu_{2}\)), we can derive \[\frac{V_{1}}{V_{2}}=\frac{\rho_{2}\mu_{1}l_{2}}{\rho_{1}\mu_{2}l_{1}}=\frac{\rho_{2}l_{2}}{\rho_{1}l_{1}},\] (7) \[\frac{n_{2}}{n_{1}}=\frac{V_{2}D_{1}}{V_{1}D_{2}}=\frac{\rho_{1}l_{1}}{\rho_{2}l_{2}}\cdot\frac{D_{1}}{D_{2}}=\frac{\rho_{1}l_{1}^{2}}{\rho_{2}l_{2}^{2}}.\] (8) In many scenarios, the air density in the operational environment of the propeller prototype and the test air density of the sub-scale test model are determined. As indicated in Eq. (8), maintaining a constant rotational speed ratio between the scaled model and the prototype becomes imperative to fulfill the similarity requirements.

## 4 New challenges in constant speed variable-pitch control

While the advantages of implementing effective variable pitch control at a constant rotational speed are evident, they also bring forth a set of new challenges. This approach allows us to harness the practical benefits mentioned earlier, whether individually or in combination. The dynamic interplay among the variable-pitch propeller, engine, and various associated mechanisms and accessories forms a highly complex propulsion system. Controlling such a system necessitates addressing a multifaceted array of factors including thermodynamics, aerodynamics, mechanics, electronics, vibration dynamics, and the influence of the atmospheric environment. Notably, this complexity introduces substantial nonlinear characteristics and uncertainties that manifest in various segments of the system and across different operational phases [66]. These challenges are persistent and omnipresent, stimulating ongoing research and the continuous development of pertinent engineering methodologies to overcome them. Within this realm of technology, boasting a legacy spanning several decades, the focal point driving intense discourse on its challenges remains twofold: the pressing need for adaptability to new scenarios and heightened efficiency. Taking a closer look at the emerging challenges posed by the endeavor to achieve variable pitch control while maintaining a constant rotational speed, we can categorize them into three main domains: system modeling, engine-propeller compatibility, and external unsteady factors, as depicted in Figure 12. Examining these fresh demands through the lens of engineering and specific scenarios, it becomes apparent that they hold the potential to usher in novel ideas and herald breakthroughs.
This inherent capacity for innovation underlines the enduring value of these endeavors.

### Insufficient system model

The system model primarily encompasses the engine and propeller models, with instances where the propeller is considered an integral part of the engine. Concerning the engine, model establishment generally adopts one of three approaches. The first method entails establishing a theoretical model for each engine component and, based on this foundation, formulating a mathematical engine model through the imposition of constraints and boundary conditions [67]. The second approach relies on empirical data, with the engine's mathematical model derived through curve fitting of experimental, flight test, and calculated data [68, 69]. The third method combines the first two and is typically employed when developing an accurate theoretical model for certain engine components proves challenging [70]. However, in numerous instances, obtaining precise theoretical and numerical engine models remains a formidable task. For example, the variable pitch control encountered in the sub-scale verification test aircraft discussed earlier is a typical technical challenge in the realm of constant speed variable pitch control. On one hand, the powerplants for simulated turboprop and turboshaft engine aircraft are usually small to medium-sized aviation piston engines or Wankel engines. For the majority of these engines, detailed test data to support an accurate numerical model is lacking. Furthermore, theoretical models often yield significant practical errors due to discrepancies between different engine types [71]. On the other hand, the use of small and medium-sized piston engines and rotary engines, which experience wear and exhibit issues like installation vibrations and exhaust pipe complications, introduces substantial variations in their operational characteristics compared to newly installed powerplants.

Figure 12: New challenges for constant speed variable-pitch control.

The challenge of inadequate system modeling is growing in scope, primarily due to the widespread adoption of small and medium-sized piston engines and rotary engines in UAVs and general aircraft. These aircraft have emerged as strong competitors across various industries, particularly in terms of operational costs. To establish a clear advantage over ground transportation vehicles and gain recognition and value within the logistics sector, unmanned aerial systems must significantly enhance their economic viability. One aspect of this challenge stems from budget constraints, preventing comprehensive testing similar to that performed on military aircraft engines, including ground and air-based tests. Additionally, to curtail operational costs, propulsion system efficiency and longevity are often achieved through fixed rotational speed variable pitch control. Consequently, addressing the model deficiency issue at a lower cost takes on broader significance. Such efforts are pivotal in providing the technical foundation required for constant speed variable pitch control, thereby exerting a profound impact on the application and advancement of the aviation industry.

### Partial propeller-engine matching

Ensuring an appropriate relationship between the propeller and the engine is a critical aspect of enhancing overall aircraft performance and optimizing design. The process of engine-propeller matching typically relies on experimental testing methods [72] and theoretical design techniques [73]. It centers on several key aspects, including aligning the power absorbed by the propeller with the engine's shaft power, matching propeller torque with engine shaft driving torque, and achieving rotational speed compatibility while adhering to limitations.
These limitations primarily involve preventing the engine speed from falling below idle speed or exceeding its maximum speed [74]. In the case of larger power turboshaft and turboprop engines, ample test and design data are often available to support successful matching. However, for UAVs employing small and medium-sized piston engines, Wankel engines, general aviation aircraft, and vertical take-off and landing fixed-wing aircraft utilizing DC motors, the lack of power system models presents unique challenges to propeller-engine matching. Consequently, matching results achieved under such conditions are frequently limited in the scope of the full flight envelope, less efficient, and may even pose safety concerns. Matching a fixed-pitch propeller with an engine typically revolves around meeting the primary requirement set by aircraft developers for the propulsion system: generating the necessary thrust at predetermined flight speeds. This type of matching centers on cruise point working conditions as the core, with other key flight conditions as secondary considerations. Such an approach simplifies the engine-propeller matching process significantly. Variable pitch propellers, on the other hand, allow for precise matching with the engine across most of the flight envelope range, thereby optimizing efficiency. However, if matching constraints are solely based on thrust requirements at predetermined flight speeds, the challenge of multiple balance points emerges. Introducing the additional requirement of fixed rotational speed resolves the issue of multiple balance points while delivering additional performance benefits. However, this places higher demands on the precision of propeller-engine matching, particularly when addressing matching and control under dynamic flight conditions. 
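The steady-state power-matching condition described above can be sketched numerically: find the rotational speed at which the power absorbed by the propeller, taken here as the standard \(P=C_{P}\rho n^{3}D^{5}\) law, equals the engine shaft power. The engine curve and every number below are hypothetical placeholders, not data for any engine discussed in the text:

```python
def prop_power(n, rho=1.225, D=1.8, C_P=0.04):
    """Power absorbed by the propeller, P = C_P * rho * n^3 * D^5 (n in rev/s)."""
    return C_P * rho * n ** 3 * D ** 5

def engine_power(n, P_max=60e3, n_max=60.0):
    """Crude full-throttle shaft-power curve peaking at n_max (hypothetical)."""
    x = n / n_max
    return max(0.0, P_max * (2.0 * x - x * x))

def match_speed(lo=1.0, hi=60.0, tol=1e-9):
    """Bisect on the power imbalance; assumes one sign change in [lo, hi]."""
    imbalance = lambda n: engine_power(n) - prop_power(n)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_star = match_speed()  # rotational speed where the two power curves meet
```

In practice the torque and the idle/maximum speed limits listed above would be checked at the same operating point; the bisection merely stands in for that matching step.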
### Non-negligible external unsteady factors

During flight through the atmosphere, the control and operational state of an aircraft's dynamic system are inevitably influenced by external factors, particularly when these factors are time-varying or change suddenly. Many high-speed and large-scale aircraft often simplify the impact of external unsteady factors, treating them as minor disturbances. However, for low-speed and small to medium-sized aircraft, these external environmental disturbances cannot be dismissed. Taking these disturbances seriously demands significant efforts, often of a comprehensive nature. For example, High Altitude Super-Long Endurance (HASLE) solar-powered UAVs prioritize cruise efficiency and aim for increasingly extended flight durations. The random meteorological factors these UAVs may encounter during flight can profoundly affect efficiency goals and even UAV safety. Traditionally, UAVs operated on predefined flight profiles crossing day and night, as illustrated in Figure 13. These profiles were based on theoretical considerations and flight test experience [75], and included the corresponding implementation of propeller variable pitch control. Looking ahead, high-altitude, long-endurance solar-powered UAVs will necessitate online learning to construct variable flight profiles and corresponding pitch angle control strategies to adapt to evolving environmental conditions in real time. The principal challenges revolve around controller design, enhancing the accuracy of engine and propeller models, real-time online control optimization, and the verification of control system tests. Initial steps involve shedding fixed power unit digital models to minimize model inaccuracies, regardless of whether they are derived from experiments or theory. Thorough consideration must be given to random environmental factors and the cumulative differences that emerge as the aircraft is in use.
Simultaneously, there is a growing expectation for UAVs to exhibit increasing intelligence, demanding that the aircraft's control system be autonomous, adaptive, and able to learn online and in real time.

## 5 Conclusion

The history of aircraft development spans over a century, yet propeller-driven aircraft remain prevalent and vital in aviation. Pitch control technology, as a paramount means to enhance the efficiency of propeller power systems, finds extensive application in helicopters, propeller-driven transporters, and ultralight aircraft. More recently, it has also come to play an important role in the burgeoning realm of multi-rotor drones, fixed-wing unmanned aircraft, and urban air mobility concepts. The continuous advancement of aircraft technology has elevated the demands placed on this traditional but critical technology. Variable-pitch technology, with its diverse mechanisms and principles, empowers aircraft and pilots to achieve desired outcomes such as thrust, pull, acceleration, rotational speed, or optimal dynamic efficiency while ensuring stable flight. The effective coordination of propellers and powerplants, with precise control systems, is pivotal in realizing these expectations. In the pursuit of these goals, certain challenges have come to the forefront:

1. The inaccuracy or even the absence of a dynamic model.
2. Considerable variations in engine states, which escalate with engine wear.
3. The reluctance or inability of pilots to participate fully in flight, while power control laws lack specificity in certain aspects.

It is evident that the advancement of intelligent technologies, such as machine learning and depth perception, will play a pivotal role in addressing these challenges and steering the future of aviation towards greater efficiency and precision.
Figure 13: Common flight profile of solar powered HASLE UAV [76].

## Acknowledgments

This work is sponsored by Universiti Sains Malaysia (USM) with the Short Term Research Grant Scheme [grant number 304/PAERO/6315297].
2309.06266
Modelling the interaction of Alfvénic fluctuations with coronal mass ejections in the low solar corona
Alfvénic fluctuations of various scales are ubiquitous in the corona; their non-linear interactions and eventual turbulent cascade result in an important heating mechanism that accelerates the solar wind. These fluctuations may be processed by large-scale, transient, and coherent heliospheric structures such as coronal mass ejections (CMEs). In this study we investigate the interactions between Alfvénic solar wind fluctuations and CMEs using magnetohydrodynamic (MHD) simulations. We study the transmission of upstream solar wind fluctuations into the CME leading to the formation of CME sheath fluctuations. Additionally, we investigate the influence of the fluctuation frequencies on the extent of the CME sheath. We used an ideal MHD model with an adiabatic equation of state. An Alfvén pump wave is injected into the quiet solar wind by perturbing the transverse magnetic field and velocity components, and a CME is injected by inserting a flux-rope modelled as a magnetic island into the quasi-steady solar wind. The upstream Alfvén waves experience a decrease in wavelength and change in the wave vector direction due to the non-radial topology of the CME shock front. The CME sheath inhibits the transmission of long-wavelength fluctuations due to the presence of non-radial flows in this region. The frequency of the solar wind fluctuations also affects the steepening of MHD fast waves causing the CME shock propagation speed to vary with the solar wind fluctuation frequencies.
Chaitanya Prasad Sishtla, Jens Pomoell, Rami Vainio, Emilia Kilpua, Simon Good
2023-09-12T14:28:33Z
http://arxiv.org/abs/2309.06266v2
Modelling the interaction of Alfvenic fluctuations with coronal mass ejections in the low solar corona

###### Abstract

Context: Alfvenic fluctuations of various scales are ubiquitous in the corona; their non-linear interactions and eventual turbulent cascade result in an important heating mechanism that accelerates the solar wind. These fluctuations may be processed by large-scale, transient, and coherent heliospheric structures such as coronal mass ejections (CMEs). In this study we investigate the interactions between Alfvenic solar wind fluctuations and CMEs using magnetohydrodynamic (MHD) simulations.

Aims: We study the transmission of upstream solar wind fluctuations into the CME leading to the formation of CME sheath fluctuations. Additionally, we investigate the influence of the fluctuation frequencies on the extent of the CME sheath.

Methods: We used an ideal MHD model with an adiabatic equation of state. An Alfven pump wave is injected into the quiet solar wind by perturbing the transverse magnetic field and velocity components, and a CME is injected by inserting a flux-rope modelled as a magnetic island into the quasi-steady solar wind.

Results: The upstream Alfven waves experience a decrease in wavelength and change in the wave vector direction due to the non-radial topology of the CME shock front. The CME sheath inhibits the transmission of long-wavelength fluctuations due to the presence of non-radial flows in this region. The frequency of the solar wind fluctuations also affects the steepening of MHD fast waves causing the CME shock propagation speed to vary with the solar wind fluctuation frequencies.

Conclusions:

## 1 Introduction

The turbulent fluctuations in velocity, magnetic field, electric field, and density are ubiquitous in the solar wind and corona (Coleman Jr, 1968; Belcher & Davis Jr, 1971; Bale et al., 2005).
The convective motions of the dense photospheric plasma, which contains the solar magnetic field, are considered to be the primary source of energy for these fluctuations (Cranmer & Van Ballegooijen, 2005; Kato et al., 2016). These fluctuations have been observed both in situ (Belcher & Davis Jr, 1971; D'Amicis & Bruno, 2015) and remotely (Tomczyk et al., 2007). In the solar wind, the power contained in Alfvenically polarised fluctuations dominates over the power in compressive fluctuations (Tu & Marsch, 1995; Chen, 2016). Additionally, the solar wind exhibits broad-band Alfvenic fluctuations, which can then non-linearly interact to initiate an energy cascade leading to dissipation via heating on smaller spatial scales. In this view of a turbulence cascade, the inertial range is the spatial scale of the fluctuations exhibiting a power-law behaviour between the energy injection and dissipation scales. This inertial-range turbulence is often studied within the framework of reduced magnetohydrodynamics (RMHD), in which Alfven waves are the linear wave modes (Zank & Matthaeus, 1992; Schekochihin et al., 2009; Perez & Chandran, 2013). Previous studies (Matthaeus et al., 1984; Gershman et al., 2019; Gonzalez et al., 2021) have also discussed the role of Alfven wave propagation and reflection-driven Alfvenic turbulence for particle acceleration in planetary radiation belts, in MHD reconnection sites, and at interplanetary discontinuities. In this study we investigate the interaction of Alfvenic perturbations with a coronal mass ejection (CME) in the low corona. A CME is a transient plasma and magnetic field eruption from the solar corona; it exhibits complex magnetic substructures. CMEs are one of the primary drivers of geomagnetic activity near Earth (Kilpua et al., 2013, 2015; Kalliokoski et al., 2020, 2022).
In coronagraph images, CMEs often exhibit a three-part structure with a bright loop of compressed coronal plasma enclosing a dark low-density cavity, corresponding to a flux rope (FR), which contains a high-density core (Gibson & Low, 2000; Kilpua et al., 2017). A spacecraft encountering a CME typically observes a shock, followed by a turbulent sheath and the ejecta. Only part of the ejecta at 1 AU shows clear FR signatures due to interaction and evolution. The internal structure of the CME is of significant interest as the FR can cause strong and sustained southward magnetic fields influencing the Earth (Kilpua et al., 2017). In addition, the turbulent and compressed CME sheath is highly geoeffective (Kilpua et al., 2017, 2019). Sheaths are known to exhibit an extensive range of inertial and kinetic range spectral indices (Kilpua et al., 2020, 2021), embed multi-scale structures (Ruohotie et al., 2022), and contribute to the acceleration of solar energetic particles (Kilpua et al., 2021). The fluctuations in the CME sheath have been seen to exhibit turbulence characteristics often observed in the slow solar wind (e.g. higher compressibility), and yet they are still dominated by non-compressible Alfvenic fluctuations (Moissard et al., 2019). Additionally, compared to the predominantly anti-sunward fluctuations in the solar wind preceding CMEs near 1 AU, sheaths are found to exhibit a more balanced distribution of sunward and anti-sunward fluctuations (Good et al., 2020; Good et al., 2022; Soljento et al., 2023).
One important aspect is the transmission of Alfvenic fluctuations from the surrounding ambient solar wind into the CME and the role it plays in forming the sheath. In this study, by using numerical simulations, we aim to enhance our understanding of the formation of sheath structures by demonstrating the effect of Alfvenic solar wind fluctuations on the large-scale structures of the CME and to analyse the transmission of these fluctuations into the sheath region. We find the CME shock speed influenced by the frequency of solar wind fluctuations, with the CME sheath exhibiting non-radial flows, along with both sunward and anti-sunward Alfvenic fluctuations. These results are obtained by performing 2.5D MHD simulations of the solar corona assuming a radial solar magnetic field, with the FR modelled using the Grad-Shafranov equation. In Section 2 we introduce the MHD equations and associated boundary conditions, the mechanism for Alfven wave injection, and the CME model used for the simulations. The influence of solar wind fluctuations on the CME and their transmission to the sheath is discussed in Section 3. Section 4 presents a statistical comparison of the shock location and sheath extent for varying solar wind and CME parameters, including a case with no solar wind fluctuations. The conclusions are summarised in Section 5. ## 2 Methodology To perform our study we developed a 2.5D magnetohydrodynamic (MHD) simulation from the low corona at 1.03 solar radii (\(R_{\odot}\)) to 30 \(R_{\odot}\). The simulation domain is 2D in space; the velocity and electromagnetic field vectors have three components. The solar wind is modelled assuming a global radial unipolar (outward) solar magnetic field, which can be considered realistic for a limited region of the Sun, such as a coronal hole. 
The MHD equations and the relevant physical processes of gravity and ad hoc coronal heating are described by the following equations: \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0, \tag{1}\] \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\nabla\cdot[\rho\mathbf{v}\mathbf{v}+(P+\frac{B^{2}}{2\mu_{0}})\mathbf{I}-\frac{\mathbf{B}\mathbf{B}}{\mu_{0}}]=-\frac{GM_{\odot}\rho}{r^{2}}\mathbf{\hat{r}}, \tag{2}\] \[\frac{\partial\mathcal{E}}{\partial t}+\nabla\cdot[(\mathcal{E}+P-\frac{B^{2}}{2\mu_{0}})\mathbf{v}+\frac{1}{\mu_{0}}\mathbf{B}\times(\mathbf{v}\times\mathbf{B})]=-\frac{GM_{\odot}\rho v_{r}}{r^{2}}+S, \tag{3}\] \[\nabla\cdot\mathbf{B}=0, \tag{4}\] \[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times(\mathbf{v}\times\mathbf{B})=0, \tag{5}\] where \[\mathcal{E}=\frac{1}{2}\rho v^{2}+\frac{P}{\gamma-1}+\frac{B^{2}}{2\mu_{0}}, \tag{6}\] \[S=S_{0}\exp\left(-\frac{r}{L}\right). \tag{7}\] Here the quantities \(\rho\), \(\mathbf{v}\), \(\mathbf{B}\), \(\mathcal{E}\), and \(P\) correspond to the mass density, bulk plasma velocity, magnetic field, total energy density, and thermal pressure. Equations 1-5 correspond to the mass continuity, momentum, energy continuity, solenoidality, and induction equations, respectively, and are subsequently referred to as such. The solar wind plasma evolves by solving these MHD equations for an adiabatic polytropic index of \(\gamma=5/3\). Thus, to obtain a steady-state solar wind that approximates a Parker-like outflow, we incorporate an additional energy source term, given in Equation 7 (Pomoell et al., 2015; Mikic et al., 2018; Sishtla et al., 2022), with \(S_{0}=0.5\times 10^{-6}\) W m\({}^{-3}\) and \(L=0.4~{}R_{\odot}\). The numerical method used in this work to solve the MHD equations was employed in previous studies of the solar corona (Pomoell and Vainio, 2012).
The method utilises a strong stability preserving (SSP) Runge-Kutta method to advance the semi-discretised equations in time, and employs the Harten-Lax-van Leer (HLL) approximate Riemann solver supplied by piece-wise linear slope-limited interface states. The equations are solved in spherical coordinates and the magnetic field is ensured to be divergence free to the floating point accuracy by utilising the constrained transport method (Kissmann and Pomoell, 2012). The MHD equations were integrated forward in time for a 2D meridional plane with a radial extent of \(r=r_{0}=1.03~{}R_{\odot}\) to \(r=30~{}R_{\odot}\), and a co-latitudinal extent of \(\theta=10^{\circ}\) to \(\theta=170^{\circ}\). The domain is therefore symmetric in the out-of-plane longitudinal \(\phi\) direction. The solar magnetic field was initialised to be radially outward with an associated vector potential \(\mathbf{A}=-B_{0}r_{0}(r_{0}/r)\cot\theta\,\hat{\phi}\), where \(B_{0}=5\) G, and the magnetic field in the simulation was then specified using \(\mathbf{B}=\nabla\times\mathbf{A}\). The simulation grid was defined by 500 cells logarithmically spaced in the radial direction, and 128 equidistant cells in the latitudinal direction. Appendix A validates this choice of the radial grid resolution by verifying the results presented in the following sections for a significantly higher resolution. At the inner radial boundary, representing the coronal base, we specified a constant mass density and temperature along the boundary with \(\rho_{0}=8.5\times 10^{-13}\) kg m\({}^{-3}\) and \(T_{0}=1.2\times 10^{6}\) K. At the outer radial and the latitudinal boundaries, we linearly extrapolated all the dynamical quantities to enforce an outflow boundary condition.

### Introducing Alfvenic perturbations

After achieving a steady-state solar wind by integrating Equations 1-7 in time, we introduced Alfvenic fluctuations.
The Alfven waves were introduced at the coronal base by utilising a time-dependent boundary condition for the Elsasser variables, defined by \[\mathbf{z}_{\perp}^{\pm}=\mathbf{v}_{\perp}\pm\frac{\mathbf{B}_{\perp}}{\sqrt{\mu_{0}\rho}}. \tag{8}\] We continuously injected the monochromatic and linearly polarised Alfvenic fluctuations in the out-of-plane \(\phi\) direction by specifying the anti-sunward (outgoing) Elsasser variable as \(\delta\mathbf{z}^{-}=\mathrm{Z}_{0}\sin\left(2\pi f_{0}t\right)\,\hat{\phi}\) at the lower boundary with \(\mathrm{Z}_{0}=32\,\sqrt{2}\) km s\({}^{-1}\) being the amplitude and \(f_{0}\) the frequency of the wave. In Figure 1 we present the quasi-steady solar wind after the injection of a 3 mHz Alfven wave. In general, the solar wind response to the injected fluctuations depends on the polarization of the waves (Goldstein, 1978; Hollweg, 1971). The propagation of the linearly polarised injected Alfven wave causes a fluctuating magnetic field strength which results in the steepening of the Alfven waves themselves (Cohen & Kulsrud, 1974), in addition to generating density fluctuations due to the ponderomotive force (Nakariakov et al., 1997). Due to this, we observed an increase in temperature from 1.2 MK at the lower boundary to 1.4 MK near 3 \(R_{\odot}\), before decreasing again (see Figure 1(a)). The generation of density fluctuations is a second-order non-linear effect, and is absent in incompressible MHD. In this simulation the density fluctuations are absent as the chosen grid resolution causes the Alfven waves to be damped due to numerical diffusion before the density fluctuations can be generated. This damping ensures that we have only a pure monochromatic Alfven wave in the simulation that has not yet experienced any reflections from large-scale density gradients in the solar wind (Verdini & Velli, 2007; Van Ballegooijen et al., 2011), and confines the waves to be present below \(\approx 10\) R\({}_{\odot}\).
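A minimal sketch of the boundary driving described above, assuming (as for an outward-directed radial field) that the outgoing mode carries \(\delta v_{\phi}=-\delta B_{\phi}/\sqrt{\mu_{0}\rho}\); the splitting below illustrates the Elsasser prescription and is not the simulation code itself:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (SI)

def injected_perturbation(t, rho, Z0=32 * math.sqrt(2) * 1e3, f0=3e-3):
    """Transverse boundary perturbation for a monochromatic Alfven wave.

    The outgoing Elsasser amplitude dz = Z0 sin(2 pi f0 t) is split into
    velocity and magnetic parts with dv = -dB / sqrt(mu0 rho), so that the
    ingoing Elsasser component vanishes.  SI units throughout.
    """
    dz = Z0 * math.sin(2.0 * math.pi * f0 * t)
    dv_phi = 0.5 * dz
    dB_phi = -dv_phi * math.sqrt(MU0 * rho)
    return dv_phi, dB_phi
```

The anti-correlation of \(v_{\phi}\) and \(B_{\phi}\) produced by this splitting is the same signature used with Figure 1 to confirm the anti-sunward sense of the injected wave.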
Thus, in this study we confined our analysis to the wind below \(\approx 10\) R\({}_{\odot}\). The Alfvenicity, steepening, and absence of density fluctuations in the simulation are illustrated by considering the radial propagation of the injected waves along a viewing angle (annotated in Figure 1(a)). In Figure 1, panels (b) and (c), we present the out-of-plane velocity \(v_{\phi}\) and magnetic field \(B_{\phi}\) components along this viewing angle. Upon comparing the two panels, we observe an anti-correlation between \(v_{\phi}\) and \(B_{\phi}\), which confirms both the Alfvenicity and the anti-sunward direction of the injected wave. Furthermore, to verify the lack of accompanying density perturbations, we plot in Figure 1(d) the fluctuating component of the mass density \(\Delta\rho/\rho=(\rho-\rho(t=0))/\rho(t=0)\), where \(\rho(t=0)\) is the mass density in the coronal volume prior to the Alfven wave injection. The panel shows a large-scale variation in the density, but the absence of any smaller-scale fluctuations.

### Introducing a coronal mass ejection

In this study, we do not model the initiation and subsequent eruption of the CME, but instead directly instantiate an erupting plasma structure mimicking an eruptive CME. To achieve this, we superimpose an appropriate plasma structure on the quasi-steady solar wind containing the Alfvenic fluctuations. The magnetic field of the CME is modelled as a force-free FR using the Soloviev solution of the Grad-Shafranov (GS) equation (Solov'ev, 1968). The solutions of the GS equation represent axisymmetric MHD equilibria of magnetised plasmas without flows such that the equilibrium condition \[\mathbf{J}\times\mathbf{B}=\nabla P \tag{9}\] is satisfied, where \(\mathbf{J}\) is the current density given by \(\mathbf{J}=\nabla\times\mathbf{B}/\mu_{0}\), and \(P\) is the thermal pressure of the plasma.
Once the magnetic structure of the CME is modelled using Equation 9 under the assumption of zero-beta (\(P=0\)) conditions, we then populate it with plasma to model a high-density ejecta. The density inside the structure is specified as \[\rho_{\mathrm{cme}}=\frac{\rho_{\mathrm{cme,0}}}{2}\left[1-\cos\left(\pi\frac{d_{\mathrm{cme}}-d}{d_{\mathrm{cme}}}\right)\right], \tag{10}\] where \(d\) is the distance from the centre of the structure, \(d_{\mathrm{cme}}\) is the radial extent, and \(\rho_{\mathrm{cme,0}}\) is the density specified at the centre. This formulation of \(\rho_{\mathrm{cme}}\) ensures a continuous transition from the high-density \(\rho_{\mathrm{cme,0}}\) CME core to the background density at the edge of the structure. We also initialise the plasma with a constant temperature of \(0.5\times 10^{6}\) K, and an ejection velocity \(\mathbf{v}_{\mathrm{ej}}\) along the radial direction inside the CME. The constructed CME (Equations 9 and 10) is then superimposed on the quasi-steady solar wind including the Alfvenic fluctuations described in Section 2.1. We note that, due to the ad hoc specification of the thermal pressure inside the CME and the superposition of the structure on the quasi-steady wind, the plasma in and immediately surrounding the CME is not in equilibrium, causing the FR to expand and propagate. In Figure 2 we present a schematic showing the magnetic field configuration and dynamic contributions acting on the CME at the onset. The poloidal field of the FRs we used for this study is oriented in the anti-clockwise direction, as seen by the direction of the magnetic field vectors around the FR. This causes them to deflect when reconnecting with the radially outward magnetic field lines. The ejection velocity is also directed in the radial direction. The plasma signatures encountered by a virtual spacecraft upon traversing such an FR are shown in Figure 3. The dashed vertical lines demarcate the upstream, CME sheath, and FR regions.
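The flux-rope density profile of Equation 10 can be sketched as follows (a minimal illustration assuming numpy; the function name is ours, not the model's):

```python
import numpy as np

def cme_density(d, d_cme, rho_cme0):
    """Flux-rope density profile of Equation 10:
    rho = (rho_cme0/2) * [1 - cos(pi*(d_cme - d)/d_cme)] for d <= d_cme.

    Gives rho_cme0 at the centre (d = 0) and a smooth fall-off to zero at the
    edge (d = d_cme), where the background wind density takes over.
    """
    d = np.asarray(d, dtype=float)
    profile = 0.5 * rho_cme0 * (1.0 - np.cos(np.pi * (d_cme - d) / d_cme))
    return np.where(d <= d_cme, profile, 0.0)

# Centre, half-radius, and edge of a structure with rho_cme0 = 4 (arbitrary units):
centre = float(cme_density(0.0, 1.0, 4.0))   # -> 4.0
half = float(cme_density(0.5, 1.0, 4.0))     # -> 2.0
edge = float(cme_density(1.0, 1.0, 4.0))     # -> 0.0
```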
The spacecraft is placed at 5 \(R_{\odot}\) and a viewing angle of 105\({}^{\circ}\), with the time axis referenced from the time of CME injection in the simulation. The CME is modelled using an initial speed \(v_{\mathrm{ej}}=500\) km s\({}^{-1}\), a peak density of \(2\rho_{0}\) (where \(\rho_{0}\) is the constant mass density at the coronal base at \(r=r_{0}\)), and \(B_{\phi}\approx 12\) G. Prior to encountering the CME, the virtual spacecraft measures the pristine upstream solar wind conditions, as seen in Figures 3(a)-(d). We observe anti-sunward Alfvenic fluctuations through the anti-correlated variations in \(B_{\phi}\) and \(v_{\phi}\). The first CME-related signature registered is the shock at \(t\approx 15\) min. The shock is followed by the CME sheath. In this sheath region we observe larger non-radial flows (compared to the upstream fluctuations) as non-zero values for \(v_{\theta}\) and \(v_{\phi}\) (Figure 3(b)). The CME sheath is also characterised by an enhanced density and temperature (Figures 3(c), (d)) due to the shock transition and plasma piling up ahead of the CME. Finally, the spacecraft encounters the FR. It features a smooth variation in \(B_{\theta}\) (Figure 3(a)), indicating rotation of the field as it crosses the magnetic island initialised in Equation 9. The FR region also has a relatively high density (Figure 3(c)).

Figure 1: Coronal quasi-steady state. Panel (a) shows a snapshot of the plasma temperature upon the injection of a 3 mHz linearly polarised Alfvén wave, with an annotation describing the viewing angle along \(105^{\circ}\). In panels (b) and (c) are shown the out-of-plane \(v_{\phi}\) velocity and \(B_{\phi}\) magnetic field components along the viewing angle. The variations in the density \(\rho\) from the quasi-steady values prior to the injection of the Alfvén wave are presented in panel (d).
## 3 Results

In this section we describe the interaction of the Alfvenic fluctuations in the quasi-steady solar wind (Section 2.1) with a CME modelled as in Section 2.2 with \(v_{\rm ej}=500\) km s\({}^{-1}\), a peak density of \(2\rho_{0}\), and \(B_{\phi}\approx 12\) G. The CME is deflected in the -X direction as it reconnects with the anti-sunward-directed radial magnetic field lines, owing to the chosen poloidal field direction of the FR. In Figure 4(a)-(c) we show the density compression ratio computed as \(\rho(t)/\rho(t=0)\) (Pomoell et al., 2015), the plasma beta \(\beta=p_{\rm thermal}/p_{\rm magnetic}\), and the out-of-plane (longitudinal) velocity component \(v_{\phi}\) at simulation time \(t=10.8\) min. The initial velocity of the FR (\(\mathbf{v}_{\rm ej}\)) and the out-of-equilibrium \(\mathbf{J}\times\mathbf{B}-\nabla P\) force allow the plasma of the CME to expand at a rate much higher than the ambient solar wind velocity. This results in the FR driving a fast-mode shock. In an ideal MHD system, the maximum density compression ratio at a shock front is \(\frac{\gamma+1}{\gamma-1}\) (e.g. Koskinen, 2011), which in our case, with \(\gamma=5/3\), gives a theoretical maximum compression of 4. At the shock front, located approximately at 2.5 \(R_{\odot}\), there is an observed density compression jump from 1 in the upstream region to \(\approx 2\) inside the sheath at the flank of the CME, and to \(\approx 3\) near the head-on region of the CME. The FR is driving a shock as a result of the large difference between the CME ejection velocity and the upstream solar wind velocity. The FR trails behind the leading shock front and is identified by the closed field lines forming the magnetic island. The sheath is the region between the shock and the FR. CME sheath regions are often characterised by non-radial flows and a build-up of density in a pile-up compression region (PUC) (Das et al., 2011).
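The compression limit and the plasma beta used above amount to the following (a small sketch in SI units; the helper names are ours):

```python
# Theoretical maximum density compression at an ideal MHD shock,
# X_max = (gamma + 1)/(gamma - 1); for gamma = 5/3 this gives 4, against which
# the simulated jumps (~2 at the flank, ~3 near the nose) can be compared.
def max_compression(gamma):
    return (gamma + 1.0) / (gamma - 1.0)

def plasma_beta(p_thermal, b, mu0=4e-7 * 3.141592653589793):
    """beta = p_thermal / p_magnetic with p_magnetic = B^2/(2*mu0)."""
    return p_thermal / (b * b / (2.0 * mu0))

x_max = max_compression(5.0 / 3.0)  # -> 4.0
```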
In our simulation the presence of non-radial flows is due to the draping of the flow around the magnetic island in the sheath (Siscoe & Odstrcil, 2008). This draping causes the formation of an oblique shock, which in turn causes large-scale flows to be generated to satisfy the non-radial Rankine-Hugoniot jump conditions. Additionally, the compression of plasma in the sheath region causes the formation of a PUC. Figure 4(a) is annotated with markings denoting the PUC, the sheath region, and the location of the reconnection site causing the CME to deflect. The reconnection at the CME flank reduces the magnetic flux at this location, while the field still drapes around the FR on the opposite flank. This drives a strong magnetic field gradient that deflects the CME. In Figure 4(b) we plot the plasma beta in the simulation to investigate whether the plasma dynamics are dominated by the magnetic field (low \(\beta\)) or gas dynamics (high \(\beta\)). We see that the whole steady-state solar wind upstream of the shock has \(\beta\ll 1\), which indicates that the frozen-in plasma condition is strongly satisfied. We observe a region of high \(\beta\) inside the sheath as we view the CME head-on and around the reconnection site. A comparison with Figure 4(a) shows that this high-\(\beta\) region occurs where we encounter a region of enhanced density inside the sheath. The FR is isolated from the surrounding sheath region and maintains a low \(\beta\). Finally, in Figures 4(c) and 4(d) we present the \(v_{\phi}\) and \(v_{\theta}\) components in the simulation, respectively. At the CME flanks, we see that the solar wind perturbations in \(v_{\phi}\) are modified by the shock. The radially directed wave vectors upstream of the shock are modified downstream to reflect the non-radial topology of the shock front. However, in the regions of enhanced \(\beta\) from Figure 4(b), there are significant flows in the \(\pm\phi\) directions, as shown by the large \(v_{\phi}\) magnitudes. These are the non-radial flows that are generated as a result of the structure of the FR, which causes the solar magnetic field to drape around it, as well as the strong guide field of the FR affecting the flow of the surrounding plasma. The spatial extent of the non-radial flows, as depicted by the dark blue region in Figure 4(c), is similar in size to the wavelength of the Alfven waves at the flanks of the CME. The \(v_{\theta}\) component does not have any perturbations upstream, as expected. However, downstream of the shock, we see large flows in \(\theta\) as the FR sweeps away the surrounding plasma as it propagates.

Figure 3: CME encounter with a virtual spacecraft located at 5 \(R_{\odot}\) and a viewing angle of 105\({}^{\circ}\). The vertical lines differentiate the upstream, sheath, and flux rope regions encountered by the spacecraft.

Figure 2: CME insertion into the solar wind. The black curves show magnetic field lines, with the black arrows indicating the direction and relative strength. The figure is annotated with the directions of the \(\nabla P\) and \(\mathbf{J}\times\mathbf{B}\) forces that comprise the Grad-Shafranov equilibrium condition. An initial ejection velocity \(\mathbf{v}_{\rm ej}\) is given to the CME along the radial direction.

### CME modified solar wind fluctuations

The cut through the CME flanks in Figure 4(c) shows that the frequency of the upstream solar wind fluctuations decreases downstream of the shock. Alfvenic fluctuations such as these are characterised by a correlation between the velocity and magnetic field (Belcher et al., 1969) and can be identified from the accompanying Elsasser variables (Equation 8), after subtracting the mean plasma flow speed. Simulation snapshots of the anti-sunward-propagating Elsasser variable \(z_{\phi}^{-}\) at various times are shown in Figure 5.
This figure is annotated with the viewing angle of \(160^{\circ}\) corresponding to the flank of the CME. Panels (a) and (b) present the anti-sunward Elsasser variable 6.25 and 8.75 minutes after the onset of the eruptive event, respectively. The significant negative value of \(z_{\phi}^{-}\) in the figure is due to the positive \(B_{\phi}\) inside the FR. The large positive \(B_{\phi}\) field compresses the plasma ahead of it, causing the large negative-valued \(z_{\phi}^{-}\). This positive \(B_{\phi}\), along with the anti-clockwise direction of the poloidal field around the FR (Figure 2), denotes a positive (right-handed) chirality for the FR. At the flanks, the initially expanding CME amplifies the imposed fluctuations as it 'drags' the solar wind at speeds higher than the ambient Alfven velocity prior to shock formation (Figures 5(a)-(d)). After the formation of a shock along the \(160^{\circ}\) viewing angle in panel (e), these CME-modified anti-sunward fluctuations are also present in the downstream region.

#### 3.1.1 Shock transmitted solar wind fluctuations

The presence of a shock modifies the upstream anti-sunward solar wind fluctuations as they are transmitted (propagating anti-sunward) and reflected (propagating sunward) downstream of the shock (e.g. Vainio & Schlickeiser, 1998, 1999). If a medium is stationary, a wave propagates conserving its frequency. In the shock frame, the fluid structure is quasi-stationary on the timescale it takes for the wave to be transported through the shock, so Alfven waves conserve their frequency in the shock frame. Another boundary condition at the shock for the wave vector comes from the conservation of the tangential wavelength. Thus, for a transmitted, outward-propagating Alfven wave, \[k_{1,n}(u_{1,n}-v_{a1,n}) = k_{2,n}(u_{2,n}-v_{a2,n}), \tag{11}\] \[k_{1,t} = k_{2,t}, \tag{12}\]

Figure 4: Snapshots of CME propagation.
The figure presents snapshots of the simulation as the CME is propagating in the low corona at \(t=10.8\) min. In panel (a) the colour intensity denotes the density compression compared to the quasi-steady solar wind, with annotations indicating the PUC, sheath, and reconnection site. The plot in panel (b) shows the plasma beta, and panels (c) and (d) present the out-of-plane velocity component \(v_{\phi}\) and the co-latitudinal (meridional) component \(v_{\theta}\).

where \(\mathbf{u}\) is the fluid velocity in the shock frame, \(\mathbf{k}\) is the wave vector, \(\mathbf{v}_{a}\) is the Alfven velocity, the subscripts 1 and 2 denote the upstream and downstream regions, and the subscripts \(n\) and \(t\) denote the normal and tangential components of the vector quantities in relation to the shock surface normal. As the normal component of the magnetic field is conserved at the shock, \(v_{a2,n}=v_{a1,n}/\sqrt{X}\), where \(X=\rho_{2}/\rho_{1}=u_{1,n}/u_{2,n}\) is the compression ratio of the shock. Thus, the downstream wave number is \[k_{2,n}=k_{1,n}X\frac{M_{A}-1}{M_{A}-\sqrt{X}}, \tag{13}\] where \(M_{A}=u_{1,n}/v_{a1,n}\) is the Alfvenic Mach number, showing that the wavelength in the shock normal direction is compressed by a factor exceeding the gas compression ratio of the shock. For a low-Mach-number (\(M_{A}\lesssim 2\)) quasi-parallel fast-mode shock propagating in a low-\(\beta\) plasma, the compression ratio is approximately \(X\lesssim M_{A}^{2}\) (Vainio & Schlickeiser, 1999), implying that the wave compression can be very significant. We note that in the limit of a switch-on shock (\(X=M_{A}^{2}\)), the wave compression becomes infinite. For a reflected wave (i.e. the case where the downstream wave is propagating towards the Sun), the wave compression is less significant, \[k_{2,n}=k_{1,n}X\frac{M_{A}-1}{M_{A}+\sqrt{X}}, \tag{14}\] in particular for a low-Mach-number shock.
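Equations 13 and 14 can be evaluated directly; the small sketch below (function names are ours) illustrates that the transmitted wave is compressed by more than the gas compression ratio, while the reflected wave is not:

```python
import math

def transmitted_k_ratio(X, M_A):
    """k2/k1 for an anti-sunward Alfven wave transmitted through the shock
    (Equation 13): k2/k1 = X*(M_A - 1)/(M_A - sqrt(X))."""
    return X * (M_A - 1.0) / (M_A - math.sqrt(X))

def reflected_k_ratio(X, M_A):
    """k2/k1 for the reflected (sunward) downstream wave (Equation 14):
    k2/k1 = X*(M_A - 1)/(M_A + sqrt(X))."""
    return X * (M_A - 1.0) / (M_A + math.sqrt(X))

# Example: gas compression X = 2 at Alfven Mach number M_A = 2.
X, M_A = 2.0, 2.0
kt = transmitted_k_ratio(X, M_A)  # ≈ 3.41, i.e. larger than X itself
kr = reflected_k_ratio(X, M_A)    # ≈ 0.59, i.e. far weaker compression
```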
Thus, we expect the upstream Alfven wave to significantly decrease in wavelength as it propagates downstream of the CME shock. Therefore, the expected composition of the downstream anti-sunward solar wind fluctuations resulting from the shock transmission consists of a long-wavelength component, as the CME flank drags the waves that were transmitted downstream through the early quasi-perpendicular phase of the shock on a given field line (\(k_{t}\) is conserved), and a short-wavelength component due to the Alfven wave transmission across the quasi-parallel part of the shock. The waves transmitted in the early quasi-perpendicular stage (Figure 5) have a longer wavelength, as they experience a higher Alfven velocity downstream of the CME shock (Figure 6).

#### 3.1.2 Fluctuations around the CME shock

In Figure 3, we observe the presence of a variety of radial (via the shock propagation) and non-radial (in the sheath plasma) enhancements in the velocity. In Figure 7 we attempt to exclude the radial flow by viewing the Elsasser variables in the frame of reference of the shock. The CME shock is detected by locating a jump in density compression that exceeds a factor of 2 and is always placed at the \(x=0\) coordinate, with the Elsasser variables shown in the neighbourhood of 5 \(R_{\odot}\). The positive x-axis values are upstream of the shock, and the negative x-axis values are downstream. In Figure 7 we present the sunward (\(z_{\phi}^{+}\)) and anti-sunward Elsasser variables for a viewing angle of 160\({}^{\circ}\) (CME flank) and the Alfven wave with frequency 3 mHz injected in the quiet solar wind. The x-axis represents the distance along the given viewing angle, and the y-axis is the simulation time. The magnitude of the Elsasser variables is described using the colour intensity. In Figure 7(a) the upstream solar wind fluctuations can be observed as they are incident onto the CME shock from \(x>0\). The upstream Alfven waves in the simulation have significant amplitudes until \(\approx 10\) R\({}_{\odot}\) and therefore disappear beyond \(t\approx 40\) min. Further into the downstream region \(x<0\) we see the long-wavelength component as the CME propagation modifies the downstream waves. The white region between these two regions is where the short-wavelength component of the shock-transmitted waves should be present, as expected based on the analysis in Section 3.1.1. This white region corresponds to a similar region in Figures 5(e)-(f) around the CME shock where the upstream waves should be compressed. Additionally, we see the generation of sunward fluctuations in Figure 7(b) due to the interaction of the solar wind fluctuations with the CME shock (Equation 14). In the \(\theta\) direction, we do not see any fluctuations upstream, as expected from Figure 4(d).

Figure 5: Snapshots of Elsässer variables. The figure presents the anti-sunward-propagating Elsässer variable \(z_{\phi}^{-}=v_{\phi}-B_{\phi}/\sqrt{\mu_{0}\rho}\) during the CME evolution, shown at various times. The figures are annotated with a viewing angle of 160\({}^{\circ}\) corresponding to the CME flank.

Figure 6: Snapshot of the Alfvén velocity, defined as \(v_{a}={\bf v_{a}}\cdot\hat{\bf b}\), at \(t=37.5\) min. The figure is annotated to show the CME flank at the viewing angle 160\({}^{\circ}\).

Figure 7: Evolution of the Elsässer variables at the CME flank. The spatio-temporal evolution of the Elsässer variables for the non-radial directions in the frame of reference of the shock (\(x=0\)) is presented. The quantities are shown for a viewing angle of 160\({}^{\circ}\), and an Alfvenic fluctuation frequency of 3 mHz. The x-axis denotes the shock neighbourhood in units of \(R_{\odot}\); positive values indicate the solar wind and negative values indicate the region downstream of the shock.
However, in the shock neighbourhood we can see the effect of non-radial flows due to it being a non-radial shock, which was also seen in Figure 7(d). The white region in Figure 7(a) is further investigated in Figure 8, where the density compression, the anti-sunward Elsasser variable, the flow speed (\(\mathbf{v}\cdot\hat{\mathbf{b}}\)), and the Alfven speed along the background field (\(\mathbf{v}_{a}\cdot\hat{\mathbf{b}}\)) are presented at \(t=25\) min. Panel (a) of the figure is an annotated version of Figure 5(f), with the location of the CME shock and the approximate beginning of the sheath, where we start observing the long-wavelength fluctuations. The white region thus corresponds to the location between these two markers. Panel (b) shows the density compression utilised in identifying the shock, and panel (c) is the anti-sunward Elsasser variable. The average shock velocity at the \(160^{\circ}\) viewing angle between \(t=6.25\) min and \(t=31.25\) min is found to be \(\approx 2078\) km s\({}^{-1}\), as annotated in panel (a). The shock is associated with a gas compression ratio of \(\approx 2\) (panel (d)), with the Alfven speed increasing from upstream to downstream (panel (e)). Then, downstream of the shock (through Equation 13), the wavelength of the upstream wave would be compressed by about three times. The absence of the anticipated compression of the upstream Alfven wave in our simulation indicates that the spatial grid does not adequately resolve this specific region. This causes the transmitted waves to be of lower amplitude in this location, as observed in panel (c), signifying numerical dissipation. Therefore, the downstream fluctuations plotted in Figure 7 do not contain the additional shock-compressed Alfven waves. However, the restricted grid resolution for this simulation is necessary to sustain a monochromatic Alfven wave before the CME injection by numerically damping the waves before their decay.
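As a consistency check (ours, not from the paper), Equation 13 can be inverted for the Alfven Mach number that would yield the quoted threefold wavelength compression at \(X\approx 2\):

```python
import math

# Solve Equation 13, X*(M - 1)/(M - sqrt(X)) = c, for the Alfven Mach number M:
# M = (c*sqrt(X) - X) / (c - X), valid for c != X.
def mach_for_compression(X, c):
    return (c * math.sqrt(X) - X) / (c - X)

M = mach_for_compression(2.0, 3.0)               # 3*sqrt(2) - 2 ≈ 2.24
check = 2.0 * (M - 1.0) / (M - math.sqrt(2.0))   # recovers the factor of 3
```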
In Appendix A we present a modified simulation where the shock-compressed waves are captured. The results in the Appendix adhere to the expected composition of the downstream waves (see Section 3.1.2), with the downstream waves becoming modified compared to Figure 7 only after \(t\approx 20\) min, due to possible wave steepening. Thus, in practice the downstream solar wind fluctuations would contain long- and short-wavelength components due to the CME passage and the upstream wave transmission, respectively, prior to the development of further non-linear interactions. In Figure 9, we show the solar wind fluctuations around the shock when viewing the CME head-on instead of at the flank. Panel (a) shows the upstream solar wind fluctuations incident onto the shock. A similar white region corresponds to the region where the incident waves would be compressed. However, in the far downstream region \(x<0\), we only observe large non-radial flows, as the large positive guide field \(B_{\phi}\) of the FR affects the surrounding plasma to generate a non-radial flow. These non-radial flows are additionally observed in the sunward component (panel (b)). In the \(\theta\) direction (panels (c) and (d)) the large flows are generated due to the non-radial topology of the CME shock. Therefore, a primary difference between the solar wind fluctuations downstream of the CME shock for a head-on encounter (Figure 9) compared to a flank encounter (Figure 7) is the absence of long-wavelength amplified fluctuations, which are comparable in size to the non-radial flows. Figure 8(a) shows that the CME shock is non-radial, as the shock velocity is greater head-on (the direction in which the FR is expanding) than at the flank. This indicates that the wavelengths of the shock-compressed upstream waves differ, as the compression depends on the Alfven Mach number in the shock frame of reference.
Furthermore, as the shock expands faster than the ambient Alfven speed, we expect different characteristics of the fluctuations closer to the shock (containing a mix of shock-transmitted and already present fluctuations) and further downstream (with the CME-amplified fluctuations).

Figure 8: Shock at the CME flank. Panel (a) is a simulation snapshot at \(t=25\) min of the anti-sunward Elsässer variable \(z_{\phi}^{-}\) with annotations describing the viewing angle along \(160^{\circ}\), the shock location, the approximate beginning of the CME sheath, and the approximate shock velocity \(v_{\rm shock}\). Panels (b) and (c) are the density compression and \(z_{\phi}^{-}\) along the viewing angle, respectively. Panels (d) and (e) present the fluid velocity and Alfvén speed along the direction of the background magnetic field.

## 4 Formation of the CME Sheath

In Section 3 we discussed the dependence of the CME sheath fluctuations on the upstream solar wind conditions and the shock properties. The interaction of the solar wind fluctuations with the CME shock gave rise to both sunward and anti-sunward Alfvenic fluctuations at the CME flank (Figure 7), along with the compression of the anti-sunward upstream waves. In addition, the CME sheath contains non-radial flows due to the magnetic structure of the FR and the non-radial CME shock (Figures 4(c)-(d)). The extent of the non-radial flows, represented by the dark blue region in Figures 4(c) and 5, suggests that their spatial extent is comparable to that of the Alfven waves at the CME flanks. This could hinder the presence of large-amplitude Alfvenic fluctuations in regions with similarly large non-radial flows. Thus, we now investigate the influence of the Alfven waves on the growth of the CME sheath region and the propagation of the CME shock.
This allows us to understand the effect of Alfven waves on the shock properties and to infer the development of non-radial flows as the CME propagates further into the solar wind. Previous studies have shown significant variations of the sheath thickness based on the physical properties of the CME, more precisely the properties of the CME FR and the shock compression ratio (Russell & Mulligan, 2002). Thus, we investigate how the large-scale structure of the sheath depends on the density and injection velocity of the FR driving it, and on the frequency of the Alfvenic fluctuations that are present in the solar wind. These different cases, studied by varying a selection of the parameter values of the simulation set-up, including the case studied in the previous sections (henceforth designated C1), are detailed in Table 1. We find that the large-scale structures of the sheath, such as a PUC, high-speed flows, and magnetic field line draping, are similar for all the cases considered in Table 1. To quantify the differences, for each model run the extent of the sheath, the location of the FR, and the shock location for a viewing angle of \(105^{\circ}\) (head-on encounter) are computed and presented in Figure 10(a)-(c) as a function of time. The computation of the shock's location relies on the density compression ratio, while the position of the FR is determined by identifying the first closed magnetic field contour encountered along the viewing angle directed towards the Sun. Subsequently, in Figures 10(d)-(i), we display snapshots of the simulation for the various cases (Table 1) at \(t=21\) min from the event onset. These snapshots are overlaid with markers displaying the viewing angle, the shock location, and the FR leading edge location.
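The shock-tracking criterion described above (a density compression jump exceeding a factor of 2, cf. Section 3.1.2) can be sketched along one viewing angle as follows (a toy illustration assuming numpy; locating the FR additionally requires the 2D magnetic field and is omitted here):

```python
import numpy as np

def shock_index(rho, rho0, threshold=2.0):
    """Outermost radial cell where the density compression rho/rho0 exceeds
    the threshold; returns None if no shock is present along the ray.

    rho and rho0 are 1D arrays sampled along one viewing angle, ordered with
    increasing radius; rho0 is the quasi-steady (pre-CME) density.
    """
    compressed = np.flatnonzero(rho / rho0 > threshold)
    return int(compressed[-1]) if compressed.size else None

# Toy profile: quiet wind with a compressed sheath in cells 40-59.
rho0 = np.ones(100)
rho = np.ones(100)
rho[40:60] = 2.5  # sheath/shock compression
# shock_index(rho, rho0) returns 59, the outermost compressed cell.
```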
Panels (b) and (c) in Figure 10 show that the lower the density of the FR, the faster the FR and its leading shock propagate through the corona; this is seen by comparing the high-density (C2; blue curve), nominal density (C1; black curve), and low-density (C3; orange curve) cases. We note that for these three cases the initial FR speed and the frequency of injected fluctuations were the same (\(500\) km s\({}^{-1}\) and \(3\) mHz, respectively). From these cases, the low-density FR (C3) that propagates fastest through the solar corona has the widest sheath (panel a). This dependency of the propagation speed on density can be understood by considering the deflection of case C3; as explained in Section 3, deflection is expected to result from magnetic reconnection at the FR boundaries. The comparison between simulation snapshots in Figures 10(d)-(f) show that the low-density case C3 deflects more than the higher-density cases C1 and C2. This deflection causes our selected viewing angle to focus on the flank of the CME for C3, while for C1 and C2 their higher inertia prevents them from deflecting and they are probed head-on, as intended. Next we explore the effect of FR injection velocity. The shock and the FR for C4 (green curve) propagate faster through the corona than for C1, which has the same density but slower speed (see Fig. 10, panels (b) and (c)). This results in a much smaller sheath thickness for C4 than for C1 (Figure 10(a)). Finally, when we increase the Alfvenic fluctuation frequency in C5, and compare it to run C1, which otherwise has identical parameters, there is no notable difference in the FR location (Figure 10(b)), but the shock propagates faster (Figure 10(c)), causing the sheath extent to increase (Figure 10(a)). Moreover, when not injecting any Alfvenic fluctuations, as is the case for run C6, we still see no difference in the FR location compared to C1, but the shock propagates more slowly and the sheath extent decreases. 
The variations in the shock evolution (Figure 10(c)) for the different cases can be understood through the steepening of the MHD waves. The initial eruption of the CME onto a quasi-steady solar wind generates a fast wave propagating ahead of the CME as it initially strongly expands in the surrounding plasma. If we assume the wave driven by the FR to be a pressure wave, then as this wave propagates it locally compresses the plasma and increases the local sound speed. This would cause the next pressure wave pulse generated by the outward-propagating FR to catch up to the preceding wave modes, thus causing a shock to be generated by the steepening of large-amplitude compressive disturbances. In the general case, the fast wave generated by the FR eruption would be an MHD wave.

| **Case** | \(\mathbf{v}_{\rm ej}\) [km s\(^{-1}\)] | \(\rho(t=0)/\rho_{0}\) | \(f_{\rm in}\) [mHz] |
|---|---|---|---|
| C1 | 500 | 1 | 3 |
| C2 | 500 | 2 | 3 |
| C3 | 500 | 0.5 | 3 |
| C4 | 1000 | 1 | 3 |
| C5 | 500 | 1 | 5 |
| C6 | 500 | 1 | 0 |

Table 1: Model parameters used in the different simulation runs. The ejection velocity \(\mathbf{v}_{\rm ej}\) is the initial velocity imparted to the CME along \(90^{\circ}\), the \(\rho(t=0)/\rho_{0}\) parameter is the density imparted to the CME as a multiple of the coronal base density \(\rho_{0}\), and \(f_{\rm in}\) denotes the frequency of the injected Alfvenic perturbations.

Figure 9: Evolution of the Elsässer variables at the CME nose, similar to Figure 7, but with a viewing angle of \(105^{\circ}\). The x-axis similarly denotes the shock neighbourhood in units of \(R_{\odot}\), with the shock centred at \(x=0\).
The rate of steepening of a fast-mode MHD wave, with no additional assumptions other than the compressibility of the medium, was previously derived (e.g. Kantrowitz et al., 1966) to be \[\gamma_{s}=\omega\frac{\delta\rho}{\rho}\left[1+\frac{1}{2}\frac{(\gamma-1)v_{A}^{2}c_{s}^{2}\sin^{2}\theta+(v_{\rm ph}^{2}-c_{s}^{2})^{2}}{v_{A}^{2}c_{s}^{2}\sin^{2}\theta+(v_{\rm ph}^{2}+c_{s}^{2})^{2}}\right]. \tag{15}\] Here \(\omega\) is the wave frequency, \(v_{A}\) the Alfven speed, \(c_{s}\) the sound speed, \(v_{\rm ph}\) the phase speed of the wave, and \(\theta\) the wave normal angle relative to the magnetic field. The steepening rate depends primarily on the compressibility of the medium (\(\delta\rho/\rho\)), where \(\rho\) corresponds to the undisturbed solar wind density, with minor contributions from the term in brackets (Kennel et al., 1985; Tsurutani et al., 1987). Among the CME runs described in Table 1, the high-density FR (case C2) corresponds to an increased \(\delta\rho/\rho\), while for the low-density FR (case C3) \(\delta\rho/\rho\) is smaller than for C1. As a consequence, for C2 the wave steepens to a shock more quickly (at a lower starting height) than for C1, while for C3 the shock forms later. The shock locations in Figure 10(c) grow linearly with time, which indicates that after the fast wave steepens the shock propagates with a constant velocity in this region. Therefore, for case C2 the fast wave steepens into a shock the fastest of the investigated cases, and the shock thus forms at the lowest height in the corona (Figure 10(c)). For C3, in turn, the wave steepens more slowly and the shock forms at a greater height. For run C4 the higher injection velocity does not have a direct influence on the steepening rate (Equation 15), and the shock starts at approximately the same height as for C1 in Figure 10(c).
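Equation 15 can be evaluated directly; the sketch below (with assumed, purely illustrative coronal values) shows that doubling \(\delta\rho/\rho\), as in case C2 relative to C1, doubles the steepening rate, since the bracketed factor is bounded between 1 and 1.5 for \(\gamma=5/3\):

```python
import math

def steepening_rate(omega, drho_over_rho, v_a, c_s, v_ph, theta, gamma=5.0 / 3.0):
    """Fast-mode steepening rate of Equation 15. The rate scales linearly with
    the compressibility drho/rho; the bracketed factor contributes only a
    bounded correction between 1 and 1.5 for gamma = 5/3."""
    s2 = math.sin(theta) ** 2
    num = (gamma - 1.0) * v_a**2 * c_s**2 * s2 + (v_ph**2 - c_s**2) ** 2
    den = v_a**2 * c_s**2 * s2 + (v_ph**2 + c_s**2) ** 2
    return omega * drho_over_rho * (1.0 + 0.5 * num / den)

# Illustrative (assumed) values: f = 3 mHz, v_A = 800 km/s, c_s = 180 km/s,
# v_ph = 850 km/s, theta = 45 degrees; doubling drho/rho doubles gamma_s.
omega = 2.0 * math.pi * 3e-3
g1 = steepening_rate(omega, 0.1, 800e3, 180e3, 850e3, math.pi / 4)
g2 = steepening_rate(omega, 0.2, 800e3, 180e3, 850e3, math.pi / 4)
```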
However, because the FR in C4 propagates much faster through the corona than for C1 (due to the higher injection velocity), it drives the shock faster at the CME nose, and so the difference in the shock location between C4 and C1 increases as the simulation progresses. Finally, the cases C5 and C6 show that the frequency of the upstream Alfvenic perturbations seems to affect the speed of the CME shock. We note that this dependence of the shock speed on the Alfven wave frequency is independent of the grid resolution (Appendix A). In Section 2.1, we showed that Alfvenic waves in the solar wind in our simulation could not steepen to form shocks themselves. Only the interaction of the Alfven waves with the shock, and with the initially propagating fast MHD wave prior to shock formation, may alter the shock speed. It should be noted that this effect of the Alfven wave frequency on the shock formation is shown for a quasi-parallel shock in this simulation. In the case of a perpendicular shock, previous studies (Lu et al., 2009) only indicated structural modifications at the shock front without an influence on the propagation speed. Thus, the propagation velocity of the CME shock depends initially on the effect of the wave steepening, followed by the FR driving it further. The FR itself propagates based on the injection velocity and momentum contributing to the resulting force imbalance at onset.

Figure 10: Formation of the CME sheath in the different simulation runs. Shown are the evolution of the radial extent of the sheath (a), the flux rope leading edge (b), and the location of the CME shock along a viewing angle of \(105^{\circ}\) (c) for the simulation runs detailed in Table 1. The individual runs at time \(t=21\) min are visualised in (d)-(i); the magnetic field lines are colour-coded according to the case number. In each panel (d)-(i), the viewing angle, the flux rope, and shock locations are also indicated.
Because different parameters affect the shock and FR locations separately, their variation causes differences in the CME sheath extent.

## 5 Conclusion

This study presents the interaction between small-amplitude Alfvenic fluctuations and a CME in the low corona using 2.5D time-dependent MHD simulations. The fluctuations in the quasi-steady solar wind are linearly polarised and monochromatic in frequency. They are injected using time-dependent boundary conditions in the low corona. In Section 2.1 we described the linear evolution of the injected Alfven waves without decaying into compressive and reflected wave modes. In this scenario we found that the CME sheath would consist of short-wavelength waves that are compressed by the shock and long-wavelength waves transmitted in the initial quasi-perpendicular phase of the CME expansion, which were modified in wavelength by the CME shock passage. The Alfven waves downstream of the CME shock were inhibited close to the FR due to non-radial flows. While this result was obtained for a 2D simulation, we can extrapolate this argument to higher dimensions. In a 3D case we would observe non-radial flows in \(\phi\) at the CME flanks as well (in the same manner as we do for \(\theta\)). Thus, we might expect the CME sheath fluctuations to consist of short-wavelength components based on the non-radial flows present in each direction. Due to the importance of the CME sheath structure in influencing the fluctuations present in this region, we investigated the formation of the sheath in Section 4. We found the CME-driven shock to be formed due to wave steepening, with an additional constraint on the frequency of the fluctuations present in the system. At the same time the FR evolution is unaffected by the frequency of the fluctuations. In the discussion presented in this study, we do not address the Alfven waves generated by magnetic reconnection (Cranmer, 2018; Lynch et al., 2014) inside the CME sheath.
We cannot capture these additional waves in this simulation as they require a much finer simulation grid. The properties of such reconnection-driven Alfven waves depend on the rate of reconnection, plasma \(\beta\), and magnetic field strength (Kigure et al., 2010). A complete discussion of the impact of these waves on the observed properties of fluctuations inside CME sheaths would require further study. The results presented in this study are thus in the context of the shock transmission of already-present solar wind fluctuations. Therefore, the applicability of these results is valid close to the CME shock when compared with spacecraft observations. A primary result of this study is the transmission of the upstream Alfven waves based on the upstream solar wind conditions in the frame of reference of the CME shock (Section 3.1.1). This transmission process naturally generates sunward-propagating Alfven waves, with the compression of the upstream anti-sunward propagating waves varying in the latitudinal direction (\(\theta\)) due to the varying shock speeds. This indicates that Alfvenic fluctuations have only anti-sunward components upstream, and both sunward and anti-sunward components downstream due to their interaction with the CME shock. This behaviour has been observed across CME shocks (Good et al., 2020). Additionally, the properties of the downstream Alfvenic fluctuations depend on their relative distance to the CME shock; locations closer to the shock contain more compressed upstream waves. This might indicate varying spectral slopes in the near-shock, mid-sheath, and near-FR regions of the CME sheath (Kilpua et al., 2020). In Section 4 we observed that the Alfven wave frequency affects the shock velocity. This requires further study, as previous studies investigating this interaction for perpendicular shocks found no appreciable differences in shock speeds. Thus, the result presented in this study might be a feature of the quasi-parallel CME shock. 
###### Acknowledgements. The work has been supported by the Finnish Centre of Excellence in Research on Sustainable Space (FORESAIL). This is a project under the Academy of Finland, and this research has been supported by the European Research Council (SolMAG; grant no. 724391) as well as Academy of Finland project SWATCH (343581). The authors are also grateful to the anonymous referee for their constructive input during the peer review process.
2309.14662
Transformer-based classification of user queries for medical consultancy with respect to expert specialization
The need for skilled medical support is growing in the era of digital healthcare. This research presents an innovative strategy, utilizing the RuBERT model, for categorizing user inquiries in the field of medical consultation with a focus on expert specialization. By harnessing the capabilities of transformers, we fine-tuned the pre-trained RuBERT model on a varied dataset, which facilitates precise correspondence between queries and particular medical specialisms. Using a comprehensive dataset, we have demonstrated our approach's superior performance with an F1-score of over 92%, calculated through both cross-validation and the traditional split of test and train datasets. Our approach has shown excellent generalization across medical domains such as cardiology, neurology and dermatology. This methodology provides practical benefits by directing users to appropriate specialists for prompt and targeted medical advice. It also enhances healthcare system efficiency, reduces practitioner burden, and improves patient care quality. In summary, our suggested strategy facilitates the attainment of specific medical knowledge, offering prompt and precise advice within the digital healthcare field.
Dmitry Lyutkin, Andrey Soloviev, Dmitry Zhukov, Denis Pozdnyakov, Muhammad Shahid Iqbal Malik, Dmitry I. Ignatov
2023-09-26T04:36:12Z
http://arxiv.org/abs/2309.14662v2
Transformer-based classification of user queries for medical consultancy with respect to expert specialization ###### Abstract The need for skilled medical support is growing in the era of digital healthcare. This research presents an innovative strategy, utilizing the RuBERT model, for categorizing user inquiries in the field of medical consultation with a focus on expert specialization. By harnessing the capabilities of transformers, we fine-tuned the pre-trained RuBERT model on a varied dataset, which facilitates precise correspondence between queries and particular medical specialisms. Using a comprehensive dataset, we have demonstrated our approach's superior performance with an F1-score of over 92%, calculated through both cross-validation and the traditional split of test and train datasets. Our approach has shown excellent generalization across medical domains such as cardiology, neurology and dermatology. This methodology provides practical benefits by directing users to appropriate specialists for prompt and targeted medical advice. It also enhances healthcare system efficiency, reduces practitioner burden, and improves patient care quality. In summary, our suggested strategy facilitates the attainment of specific medical knowledge, offering prompt and precise advice within the digital healthcare field. Keywords:Transformers, Query Matching, Medical Texts, Many-class Learning ## 1 Introduction The demand for qualified medical assistance has never been more significant, especially in the digital era. As online platforms increasingly serve as crucial sources of medical information and support [1], ensuring the provision of accurate and specialized advice becomes imperative. One such platform that has garnered attention is Babyblog.ru [2], which uniquely leverages user-generated content as a gateway and contextual backdrop for medical professionals' knowledge dissemination. 
However, the abundance of user-generated content poses challenges regarding the scientific credibility and reliability of the information shared [3]. Consequently, there is a pressing need to implement mechanisms that ensure the verification and enrichment of user-generated content through the input and recommendations of diverse professionals, including doctors, psychologists, speech therapists, and educators. This collaborative approach allows professionals to review user posts, comments, and discussions, thereby providing expert insights, correcting non-specialist advice, and ensuring the delivery of accurate and reliable medical information. Given the substantial volume of user-generated content across various platforms, encompassing a wide array of topics including medical, quasi-medical, and non-medical domains, the challenge of identifying content requiring medical or professional verification becomes increasingly significant. Furthermore, the importance of classifying this diverse content based on thematic specialization emerges as a critical factor in directing relevant user queries to the appropriate professionals for verification purposes. To address these challenges, the research and development team embarked on the development of an automatic classifier for medical texts. This classifier aims to determine the likelihood of associating a given text with a specific medical specialization. The envisioned implementation involves integrating the classifier into the platform, wherein it identifies medical content and assigns corresponding medical specializations. Subsequently, professionals in the respective specializations are notified to verify the content and provide appropriate responses. 
The successful development of the classification system offers multiple benefits, including streamlining the verification process by reducing irrelevant information presented to medical professionals, alleviating the workload involved in content verification, and accelerating the provision of professional responses to users. Moreover, the proposed system serves as a valuable tool in improving the quality, completeness, and reliability of medical information related to conception, pregnancy, and motherhood on the platform. This study aims to explore the efficacy of a transformer-based system in classifying user-generated medical content within the context of Babyblog.ru. By leveraging advanced Natural Language Processing (NLP) techniques, this research endeavors to revolutionize the ways users access specialized medical expertise, ensuring the delivery of timely and accurate guidance while upholding scientific rigor and reliability.

## 2 Related works

In the realm of medical text classification, the research paper titled "Automatic Medical Specialty Classification Based on Patients' Description of Their Symptoms" [4] presents a significant contribution to the field. The study proposes a pioneering Hybrid Model (HyM) that combines multiple deep learning techniques, including LSTM, TEXT-CNN, BERT, and TF-IDF, along with an attention mechanism to address the critical challenge of accurately directing patients to the appropriate medical specialty based on their symptom descriptions. The article "Text Classification Using Improved Bidirectional Transformer" [5] presents a significant contribution to the field of text processing, particularly in the context of handling large amounts of text data generated daily. The authors highlight the necessity for automation in text data handling and discuss recent developments in text processing, including attention mechanisms and transformers, as promising methods to address this need.
In their study, the authors introduce a novel bidirectional transformer (Bi-Transformer) model, constructed using two transformer encoder blocks that utilize bidirectional position encoding. By considering both forward and backward position information of the text data, the proposed BiTransformer aims to capture more comprehensive contextual dependencies, enhancing the model's ability to handle complex text data. To evaluate the effectiveness of attention mechanisms in the classification process, the authors compare four models, namely long short-term memory (LSTM), attention, transformer, and their proposed BiTransformer. Experiments are conducted on a large Turkish text dataset comprising 30 categories, allowing for a comprehensive assessment of the models' performance. One of the notable findings of the study is the promising results obtained from the classification models that employ transformer and attention mechanisms compared to classical deep learning models. This demonstrates the potential of attention mechanisms and transformers in text classification tasks, showcasing their ability to capture meaningful patterns and context in textual data. The authors also investigate the impact of using pretrained embeddings on the models' performance. Pretrained embeddings, which capture semantic representations of words based on large corpora, have been a popular approach to improve model performance in various NLP tasks. The study sheds light on how pretrained embeddings can further enhance the efficiency and accuracy of text classification models. Perhaps the most significant result of the study is the superior performance of their proposed BiTransformer in text classification. By effectively incorporating bidirectional position encoding and leveraging transformer-based architecture, the BiTransformer outperforms other models in accurately categorizing the text data. 
"Text Classification Using Improved Bidirectional Transformer" provides insights into the potential of attention mechanisms and transformers in text processing. The introduction of the BiTransformer and its superior performance in text classification open up new avenues for future research and application of transformer-based models in NLP tasks. The study's findings have important implications for automating text data handling, sentiment analysis, information retrieval, and other text-related applications. As the demand for efficient and accurate text processing techniques continues to grow, this research makes a significant contribution to the advancement of the field and serves as a valuable reference for researchers and practitioners in the domain of natural language processing. ## 3 Data Collection: Building a Comprehensive Dataset for Medical Text Classification In this section, we describe the process of building the dataset. It includes developing data parsers and normalizers to create a normalized dataset for the experimental setup. ### Data parsing To obtain a suitable training sample, we extensively explored various Russian-language websites that provide public access to medical questions posed by users to healthcare professionals. We employed specific criteria to select our data sources, including: 1) presence of openly accessible sections containing medical questions, 2) availability of pre-annotated questions based on medical specialization, and 3) responses provided by healthcare professionals, which verified the appropriateness of the assigned medical specialization to the responding doctor. Based on our analysis, we selected the following sources for data acquisition: **sprosivracha.com**[6], **doctu.ru**[7], **03online.com**[8] and **health.mail.ru**[9]. We developed software for parsing these data sources, allowing for the asynchronous, multi-threaded retrieval of information from public data sources. 
The software was designed to extract relevant information from the HTML structure and store it for further processing. In the subsequent step, the algorithm asynchronously processes each row of the obtained table and retrieves the HTML code of the page containing the question posed to the doctor. From each HTML code, the algorithm extracts the question text and the doctor's specialization using predefined tags and classes that enclose the relevant text. The extracted data (question text and doctor's specialization) are then added to the same table, complementing the rows with the data source (URL as the data source identifier). Once the parsing process and table population are complete, all the acquired data are exported to a CSV file for further processing.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Website** & **Number of Questions** & **Percentage of Total** \\ \hline sprosivracha.com & 550,000 & 23.2 \\ \hline doctu.ru & 83,000 & 3.5 \\ \hline 03online.com & 1,148,000 & 48.4 \\ \hline health.mail.ru & 590,000 & 24.9 \\ \hline \end{tabular} \end{table} Table 1: Comparison of Medical Question Platforms

### Data Augmentations

After analyzing the acquired dataset, we noticed that the distribution of data units across medical specializations followed a pattern similar to a Pareto distribution. This observation can be attributed to the fact that certain medical specializations are in higher demand compared to others, resulting in a significant number of data units belonging to those specific classes. However, to ensure the stability and resilience of our classifier, it was crucial to address the class imbalance issue [10]. To tackle class imbalance and enhance the model's generalization ability, we employed data augmentation methods facilitated by the Albumentations library [11]. This versatile tool enabled us to create new textual data by rearranging word positions within sentences, preserving the overall context and meaning.
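The word-shuffling transform just described can be sketched in a few lines. The function below is a hypothetical stand-alone implementation (the study applies such transforms via the Albumentations library):

```python
import random

def shuffle_words_augment(text, seed=None):
    """Create a new training example by randomly reordering the words of `text`.

    A minimal stand-in for the word-shuffling augmentation described above;
    the bag of words (and hence the class label) is preserved.
    """
    rng = random.Random(seed)  # seeded for reproducible augmentation
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)
```

Because the augmented sample keeps the same multiset of words, the medical-specialty label of the original question carries over unchanged to the new data point.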
This diversification of input data aimed to produce a more balanced and comprehensive dataset. Specifically, our data augmentation techniques involved shuffling words and reordering sentence components. Through the augmentation process, we were able to generate additional data points for the minority classes, effectively reducing the class imbalance and achieving a more uniform distribution across all medical specializations. This augmentation strategy not only helped to improve the classifier's performance for underrepresented classes but also enhanced its ability to handle unseen data during the testing phase. After applying all the necessary transformations and augmentations, we successfully obtained a dataset with a more uniform distribution of classes and expanded the original dataset to 5 million texts, with approximately 50,000 exemplars per class. This balanced dataset formed the basis for training and evaluating our proposed framework.

Figure 1: Class Distribution after transformation and augmentation (first 70 classes).

The development of the proposed dataset arises from the recognition of a crucial need in the field. While there exist analogous datasets, they exhibit certain limitations in adequately covering a comprehensive spectrum of diseases and medical conditions. Additionally, these existing datasets suffer from a paucity of records, which impedes their capacity to comprehensively represent the diverse range of health concerns. A further challenge lies in the nature of the content within these datasets; predominantly composed in technical language, they lack congruence with the narratives of individuals detailing their ailments. This discrepancy hampers the efficacy of these datasets in capturing the nuanced descriptions of health issues as articulated by individuals themselves.
In light of these deficiencies, the development of the proposed dataset emerges as a pivotal endeavor, with the intent to address these gaps and furnish a resource that aligns more closely with the authentic narratives of people regarding their health conditions. Through the proposed dataset, an avenue is created to elicit novel insights that may have remained obscured within the confines of the existing datasets, fostering a more holistic understanding of individuals' health experiences.

## 4 Proposed Methodology

This section provides details of the proposed methodology. We explore various transformer models and their training. The pipeline is presented in Figure 2.

Figure 2: Processing pipeline.

### Transformer Models

Typically, neural networks are trained using the backpropagation algorithm [12], which optimizes model parameters by computing gradients to improve generalization performance through error minimization and/or enhancing metrics on the validation set. However, this method heavily relies on the choice of optimization algorithm [13], as there is a risk of getting stuck in local minima during gradient computation, leading to the model's inability to learn and improve prediction/recognition quality (vanishing gradients). To address this, we employed the AdamW [14] optimizer, one of the state-of-the-art methods, which leverages information about the learning rate history to approximate the direction of the antigradient while incorporating momentum to expedite the convergence of our function. This optimizer significantly improves model training; however, it is sensitive to the choice of the learning rate. Hence, we employed a learning rate scheduler that suits our task best - the cosine scheduler [15]. This scheduler adjusts the learning rate for each batch of data, allowing transformers to adaptively change the learning rate.
We opted for the cross-entropy loss function, as it measures how well the model is trained for classification tasks. The utilization of the AdamW optimizer and the cosine scheduler with a warm-up for training text classifiers based on BERT has proven effective for several reasons: **AdamW Optimizer:** AdamW is a variant of the Adam optimizer that has been shown to work well for fine-tuning pre-trained models such as Transformer [16]. It addresses the weight decay issue in Adam, helping to prevent overfitting [17]. **Cosine Scheduler:** The cosine scheduler starts with a lower learning rate, gradually increases it over a specified number of warm-up steps, and then decays it along a cosine curve. The warm-up period allows the model to converge faster, preventing instability or fluctuations in the loss function during training [18]. It is worth noting that the optimal training methods may vary depending on the specific task and the data being used, requiring experimentation and fine-tuning. Furthermore, there are several reasons why transformer-based models have emerged as the preferred choice for medical text classification compared to classical machine learning methods [19]: **Pretraining:** Transformers are pre-trained on large corpora of texts, which provides them with a strong knowledge base and an understanding of language patterns and word relationships. This pretraining allows models like Transformer to perform well across various NLP tasks with limited fine-tuning. **Contextual Representation:** Transformers employ bidirectional attention mechanisms to create contextual word representations, enabling them to capture the context and meaning of words within a sentence. This is particularly crucial for text classification, where understanding sentence context is key to assigning the correct label.
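The warm-up-plus-cosine behaviour described above can be written down compactly. The sketch below mirrors the shape of schedulers such as `get_cosine_schedule_with_warmup` from the transformers library; the step counts and peak learning rate are illustrative:

```python
import math

def cosine_lr_with_warmup(step, peak_lr, warmup_steps, total_steps):
    """Learning rate at a given step: linear warm-up, then cosine decay to zero."""
    if step < warmup_steps:
        # linear ramp from 0 up to peak_lr over the warm-up phase
        return peak_lr * step / max(1, warmup_steps)
    # cosine decay from peak_lr down to 0 over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

In a training loop this function would be evaluated once per batch to set the optimizer's learning rate, matching the per-batch adjustment described above.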
**Transfer Learning:** The pretraining and fine-tuning process of BERT allows for transfer learning, where a model pre-trained on a related domain can be accurately fine-tuned for specific tasks with a limited amount of labeled data. This is a significant advantage for text classification tasks, which often have a limited amount of annotated data. **Superior Performance:** Transformers have been shown to outperform traditional machine learning methods in various NLP tasks, including text classification. This can be attributed to their ability to capture contextual representations and word relationships, which are crucial for understanding sentence semantics. **Pretraining on Russian Texts:** Models pre-trained on large Russian corpora exhibit improved performance and quality in Russian text processing tasks compared to training from scratch. Raw textual data provides models with a natural foundation for building language contextual representations. The size of the raw text corpus is crucial during the pretraining phase. It is important to note that traditional machine learning methods are still widely used and can yield good results for specific NLP tasks. However, the possibilities offered by pretraining, contextual representation, transfer learning, availability of Russian language models, and the use of raw texts for training make transformers a powerful tool for medical text classification.

### Training Process

The training algorithm makes use of the architecture and pre-trained weights of a transformer model, obtained from the transformers [20] package, and cached for subsequent utilization. During this phase, the model initialization is executed, which includes the initiation of the tokenizer via the AutoTokenizer module from the transformers library. Additionally, the output layer of the model is modified to suit the specific task at hand.
Subsequently, an optimal batch size is determined by generating an artificial dataset and conducting a grid search to identify the batch size that optimally balances computational efficiency and resource utilization. This strategic step is essential to ensure the model's efficiency during computations on the server. During the course of training, a significant aspect involves the aggregation of energy following the application of the softmax activation function [21]. This process offers insights into the model's confidence levels for each distinct class. The resulting energy accumulation, presented in the form of probability scores, functions as an indicator of the model's assurance in assigning input data to specific classes. This measure of confidence holds a central role in the model's final predictions, contributing to its ability to make well-informed decisions about the designated classes. It's important to mention that the target labels are numerical class identifiers, previously encoded using the LabelEncoder, while the input data comprises natural language questions with descriptive explanations of medical issues. ## 5 Experimental Setup In this section, we describe the detail of experimental setup including the hardware setup used for training the models. Furthermore the training time required for each transformer model is also discussed. ### Hardware Setup For the model training, we utilized a powerful hardware setup consisting of two NVIDIA V100 GPUs with 32GB of memory each. The GPUs were complemented by 250GB of RAM, ensuring efficient processing and storage of the large-scale dataset. The training process was conducted on the high-performance computing system cHARISMA [22], which provided the necessary computational resources for training deep learning models. ### Training Time The training time for each transformer model varies depending on its architecture and complexity. 
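The softmax-based confidence described above amounts to normalising the output-layer logits into class probabilities and reading off the probability of the predicted class. A minimal sketch (the logit values and class names below are illustrative, not taken from the trained model):

```python
import math

def softmax(logits):
    """Convert raw output-layer logits into class probabilities (numerically stable)."""
    m = max(logits)  # subtract the max to avoid overflow in exp
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, class_names):
    """Return the predicted class and the model's probability for it."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best], probs[best]
```

For a query whose logits favour the first class, the function maps the numeric class identifier (as produced by the LabelEncoder) back to a specialty name together with a confidence score in \((0,1)\).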
Following are the training times observed for each model: * **SBERT [23]** : The SBERT model required approximately 54 hours to complete the training process. The extensive training time can be attributed to its deep architecture and complex attention mechanisms. * **LaBSE [24]**: The LaBSE model demonstrated faster training times, with the training process taking approximately 12 hours. The model's efficient architecture and advanced pre-training techniques contribute to its reduced training time. * **RuBERT [25]** : Training the RuBERT model took around 13 hours. The model's architecture, specifically designed for the Russian language, required additional time for fine-tuning and convergence. * **BERT [26]** : Similar to LaBSE, the BERT model also completed training in approximately 12 hours. Its widely adopted architecture and availability of pre-trained weights contribute to the faster training time. * **BART [27]**: The BART model, known for its transformer-based sequence-to-sequence architecture, required a longer training time of 55 hours. The complexity of the model and the additional training required for the encoder-decoder structure contributed to the extended duration. The complete training and evaluation cycle, including cross-validation with a k-fold value of 3, ranged from 3 days to 12 days, depending on the specific model. This timeframe accounted for multiple iterations of training, hyperparameter tuning, and performance evaluation. The significant training times for certain models underscore the computational resources and time investment required for training large-scale transformer models. However, the improved performance achieved by these models justifies the efforts put into training and fine-tuning them. In the next section, we present the results of our experiments and evaluate the performance of each model. 
## 6 Experimental Results and Performance Analysis

In this section, we present the analysis of learning outcomes obtained from several experiments. We trained several models using cross-validation techniques and evaluated their performance using the F1-score metric. As depicted in Figure 3, the plot presents the learning curves of the LaBSE, SBERT, BERT, and BART models. It is evident from the graph that LaBSE demonstrates remarkable performance superiority compared to the other models. The learning curve of LaBSE displays significantly higher accuracy and faster convergence, indicating its exceptional capability to learn from the provided dataset. However, for the Russian text specifically, the RuBERT model achieves the highest quality due to its pre-training on a Russian corpus of texts. Conversely, the learning curves of SBERT, BERT, and BART models exhibit relatively lower accuracy and slower convergence, suggesting their relatively inferior performance in this specific task. The notable contrast in performance between LaBSE and RuBERT underscores their effectiveness and their potential as robust models for the given classification problem. This can be explained by the fact that LaBSE is good at distinguishing between entities, as can be seen in the UMAP plot, which projects sentence embeddings into a two-dimensional representation. UMAP also shows that RuBERT's embeddings look much like those of the other, rather poorly performing models; however, since this model is well adapted to Russian, after fine-tuning it starts to show much better quality.

Figure 3: Training Curve of Various Models across Folds.

The following models were trained and their corresponding F1-scores are reported. The results in Table 2 provide insights into the performance of different models in our experimental setup. The high F1-scores obtained by RuBERT and LaBSE suggest that they effectively captured the semantic representations and contextual information in the text data.
SBERT and BERT also demonstrated competitive performance, although slightly lower than RuBERT and LaBSE. BART exhibited a slightly lower F1-score, indicating that its performance may be influenced by the specific task and dataset.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{K-fold (F1-score, k = 3)} & \multicolumn{2}{c|}{Split (F1-score, train = 90\%)} \\ \cline{2-5} & not augmented & augmented & not augmented & augmented \\ \hline BART & 0.798 & 0.891 & 0.794 & 0.896 \\ \hline BERT & 0.743 & 0.894 & 0.760 & 0.903 \\ \hline LaBSE & 0.824 & 0.911 & 0.833 & 0.913 \\ \hline LogRegression & 0.457 & 0.552 & 0.531 & 0.564 \\ \hline Random Forest & 0.521 & 0.579 & 0.596 & 0.603 \\ \hline **RuBERT** & **0.839** & **0.918** & **0.852** & **0.918** \\ \hline SBERT & 0.782 & 0.905 & 0.761 & 0.895 \\ \hline SVM & 0.525 & 0.565 & 0.534 & 0.598 \\ \hline \end{tabular} \end{table} Table 2: Performance Comparison of Transformer Models

Figure 4: UMAP image of different models on our dataset

Overall, the analysis of learning outcomes highlights the effectiveness of various models in our experiments, with RuBERT and LaBSE demonstrating particularly promising results. These findings contribute to our understanding of the strengths and limitations of different models and can guide future research and practical applications in the field of NLP. Table 3 presents the performance evaluation results of a classification model for various categories of medical specialists. It includes the metrics precision (\(P\)), recall (\(R\)), F1-score (\(F1\)), and support for each category. These metrics provide insight into the model's ability to correctly classify instances belonging to different medical specialties and help assess the effectiveness of the classification model in distinguishing between them. 
\begin{table} \begin{tabular}{l|c c c c} **Category** & **Precision** & **Recall** & **F1-Score** & **Support** \\ \hline ENT & 0.7555 & 0.7432 & 0.7493 & 15276 \\ Ophthalmologist & 0.9403 & 0.9210 & 0.9305 & 14936 \\ Pediatric Surgeon & 0.8405 & 0.8782 & 0.8589 & 14847 \\ Gynecologist & 0.7834 & 0.7459 & 0.7642 & 14844 \\ Dentist & 0.8815 & 0.8893 & 0.8854 & 14861 \\ Sexologist-Andrologist & 0.7904 & 0.6955 & 0.7399 & 15148 \\ Therapist & 0.5066 & 0.3738 & 0.4302 & 15080 \\ Surgeon & 0.6705 & 0.5818 & 0.6230 & 14929 \\ Cardiologist & 0.8646 & 0.8567 & 0.8606 & 14836 \\ Psychologist & 0.7759 & 0.7215 & 0.7477 & 15020 \\ Orthopedic Traumatologist & 0.7981 & 0.7683 & 0.7829 & 15081 \\ Pediatrician & 0.6482 & 0.5712 & 0.6073 & 15087 \\ Dermatologist & 0.7111 & 0.6569 & 0.6829 & 14941 \\ Neurosurgeon & 0.8797 & 0.9025 & 0.8910 & 14898 \\ Endocrinologist & 0.8478 & 0.8072 & 0.8270 & 15011 \\ Venereologist & 0.7763 & 0.8112 & 0.7934 & 15140 \\ Urologist & 0.6445 & 0.6240 & 0.6341 & 15110 \\ Neuropathologist & 0.6633 & 0.5834 & 0.6206 & 15058 \\ Medical Doctor & 0.8667 & 0.8824 & 0.8745 & 14959 \\ Infectious Disease Specialist & 0.8409 & 0.7986 & 0.8192 & 14924 \\ Oncologist & 0.8796 & 0.8742 & 0.8769 & 14957 \\ Gastroenterologist & 0.7574 & 0.7339 & 0.7455 & 14839 \\ \(\ldots\) & & & & \\ **accuracy** & 0.9111 & 0.9111 & 0.9111 & 0.9031 \\ **macro avg** & 0.9177 & 0.9205 & 0.9189 & 1470000 \\ **weighted avg** & 0.9178 & 0.9201 & 0.9189 & 1470000 \\ \end{tabular} \end{table} Table 3: Performance Evaluation of RuBERT for Medical Specialties Classification

The values in the table represent the performance of the model for each category, allowing for a comparison of its accuracy and effectiveness across various medical specialties. The Confusion Matrix allowed for a detailed exploration of classification outcomes, delineating true positives, true negatives, false positives, and false negatives. 
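A minimal sketch of how such a confusion matrix can be assembled from predictions (the labels and counts below are illustrative, not the paper's data):

```python
# Confusion matrix: rows index the true specialty, columns the predicted one;
# the diagonal entries are the true positives per class.

def confusion_matrix(y_true, y_pred, labels):
    idx = {c: i for i, c in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

labels = ["Therapist", "Surgeon", "Cardiologist"]
y_true = ["Therapist", "Surgeon", "Surgeon", "Cardiologist"]
y_pred = ["Therapist", "Therapist", "Surgeon", "Cardiologist"]
m = confusion_matrix(y_true, y_pred, labels)
print(m)  # → [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
```

Off-diagonal entries directly expose which pairs of specialties the model confuses, which is what the analysis below exploits.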
This analysis unveiled a notable trend: the majority of the errors observed stemmed from the disparities present in the real-world data's structure and semantics. This observation can be attributed to the inherent diversity and complexity of genuine medical texts, where nuances in language and context lead to intricate classification challenges. Interestingly, when the model was evaluated on synthetic data, the Confusion Matrix demonstrated a contrasting pattern. Synthetic data, crafted to adhere to specific structures and semantics, presented fewer challenges for the model's classification accuracy.

Figure 5: Confusion Matrix

This stark difference suggests that the model might encounter difficulties when confronted with the heterogeneity inherent in genuine medical text, compared to the more controlled environment of synthetic data. The primary limitations observed encompass several aspects. First, the text length constraint, set at 128 words, significantly affects the model's ability to capture intricate nuances present in longer texts. When this threshold is exceeded, the data is represented as sparse vectors, potentially leading to information loss and diminished performance. Second, the distinctive writing style encountered in the test data, which differs from that seen in the training data, poses a challenge. The model's training on a particular style limits its adaptability to new, previously unseen writing patterns. This mismatch between training and test data styles can result in reduced accuracy and nuanced misclassifications. Moreover, the presence of questions addressing topics that were not covered extensively in the training data presents another constraint. Models struggle when faced with questions that delve into unfamiliar territories, as they lack the contextual familiarity to provide accurate predictions. 
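The 128-word cap can be illustrated with a small sketch (a simplification: the actual models count subword tokens rather than whitespace-separated words):

```python
# Sketch of the 128-word input cap: anything beyond the limit is dropped
# before encoding, which is one source of information loss on long questions.
MAX_LEN = 128

def truncate(text, max_len=MAX_LEN):
    words = text.split()
    return " ".join(words[:max_len]), max(0, len(words) - max_len)

kept, dropped = truncate("symptom " * 150)
print(len(kept.split()), dropped)  # → 128 22
```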
In conclusion, the discussed limitations, including text length constraints, writing style divergence, and unfamiliar thematic areas, collectively highlight the challenges faced when applying classifiers to such datasets. These limitations underscore the need for robust pre-training, data augmentation, and model fine-tuning strategies to enhance the model's performance and mitigate the observed shortcomings.

## 7 Conclusion

In this study, we collected a comprehensive dataset for text classification, augmented it with various techniques, and conducted experiments using five state-of-the-art transformer models: SBERT, BERT, LaBSE, BART, and RuBERT. We observed that RuBERT achieved the best performance, with an F1-score of 91.8%, outperforming the other models. Based on these findings, we conclude that transformer models, particularly RuBERT, are highly effective for text classification tasks. The ability of transformers to capture contextual information and learn complex patterns in textual data contributes to their superior performance compared to classical machine learning methods. Further research can be conducted to explore the performance of these transformer models on smaller datasets or specific domain-related datasets. Additionally, there is potential for developing new transformer architectures tailored specifically for text classification tasks. These architectures can incorporate domain-specific knowledge and enhance the model's ability to extract meaningful features from text, further improving classification accuracy. Investigating transfer learning techniques, fine-tuning strategies, and hyperparameter optimization for these transformer models can also be valuable directions for future work. The exploration of different augmentation techniques and their impact on model performance can provide insights into improving the robustness and generalization capabilities of text classification models. 
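One of the simplest augmentation techniques alluded to above can be sketched as random word deletion; the deletion probability and seed below are illustrative choices, not the settings used in the study:

```python
# Random word deletion: each word is independently dropped with probability p,
# producing a perturbed copy of the training sentence.
import random

def random_deletion(text, p=0.1, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    words = text.split()
    kept = [w for w in words if rng.random() > p]
    return " ".join(kept) if kept else words[0]  # never return an empty string

sample = "patient reports persistent headache and blurred vision"
print(random_deletion(sample, p=0.2, seed=1))
```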
Overall, there is ample opportunity for advancing the field of text classification using transformer models, and these future works can contribute to the development of more accurate and efficient models for various applications.
2309.04471
Multiplicative Anomaly matches Casimir Energy for GJMS Operators on Spheres
An explicit formula to compute the multiplicative anomaly or defect of $\zeta$-regularized products of linear factors is derived, by using a Feynman parametrization, generalizing Shintani-Mizuno formulas. Firstly, this is applied on $n$-spheres, reproducing known results in the literature. Then, this framework is applied to a closed Einstein universe at finite temperature, namely $S^1_{\beta}\times S^{n-1}$. In doing so, it is shown that the standard Casimir energy for GJMS operators coincides with the accumulated multiplicative anomaly for the shifted Laplacians that build them up. This equivalence between Casimir energy and multiplicative anomaly, unnoticed so far to our knowledge, brings about a new turn regarding the physical significance of the multiplicative anomaly, putting both now on equal footing. An emergent improved Casimir energy, that takes into account the multiplicative anomaly among the building Laplacians, is also discussed.
R. Aros, F. Bugini, D. E. Díaz, B. Zúñiga
2023-09-08T17:59:22Z
http://arxiv.org/abs/2309.04471v4
# Multiplicative Anomaly matches Casimir Energy for GJMS Operators on Spheres ###### Abstract An explicit formula to compute the multiplicative anomaly or defect of \(\zeta\)-regularized products of linear factors is derived, by using a Feynman parametrization, generalizing Shintani-Mizuno formulas. Firstly, this is applied on \(n\)-spheres, reproducing known results in the literature. Then, this framework is applied to a closed Einstein universe at finite temperature, namely \(S_{\beta}^{1}\times S^{n-1}\). In doing so, it is shown that the standard Casimir energy (as computed via \(\zeta\) regularization) for GJMS operators coincides with the accumulated multiplicative anomaly for the shifted Laplacians that build them up. This equivalence between Casimir energy and multiplicative anomaly within \(\zeta\) regularization, unnoticed so far to our knowledge, brings about a new turn regarding the physical significance of the multiplicative anomaly, putting both now on equal footing. An emergent _improved_ Casimir energy, that incorporates the multiplicative anomaly among the building Laplacians, is also discussed. ## 1 Introduction It has long been known that \(\zeta\)-regularized functional determinants of differential operators [1] may be afflicted by a _multiplicative anomaly_[2]. Even for commuting (elliptic) differential operators A and B, in general \(\det_{\zeta}\left(A\cdot B\right)\neq\det_{\zeta}A\cdot\det_{\zeta}B\). In the mid-80s, an explicit expression for this multiplicative anomaly was devised by Wodzicki in terms of the so-called non-commutative residue [3, 4, 5]. Interestingly, in cases where the eigenvalues factorize into linear factors the discrepancy for the resulting \(\zeta\)-regularized products had been pinpointed a decade before in Shintani's works [6, 7]. In these cases, the individual \(\zeta\) functions are Barnes multiple zetas [8] and the collective ones are the Shintani-Barnes generalizations thereof [9, 10]. 
The direct connection between the two approaches is enabled by a crucial feature of the \(\zeta\)-regularized products: the multiplicative anomaly between several factors is _pairwise accumulative_, i.e. it is enough to compute it between all possible pairings and then average the result [11, 12, 13]. Therefore, although the multiplicative anomaly between linear factors may not be captured by Wodzicki's formula, the converse holds: the multiplicative anomaly between the quadratic factors in the Laplacians, for which Wodzicki's formula applies, is equally captured by the multiplicative anomaly among the linear factors. Another remarkable feature of the multiplicative anomaly between linear factors is that, as compared to the \(\zeta\)-regularized products that involve Shintani-Barnes gammas, it is far simpler. In all known examples it reduces to an exponential of a rational function in the coefficients of the linear factors and the logarithms of these coefficients (see, _e.g._[6, 9, 10, 11]). Motivated by these results, in this note, we revisit the computation of Casimir or vacuum energy for higher-derivative operators on spheres since the standard calculation seems to overlook the possible multiplicative anomaly among the factors (see, e.g., [14] for the Paneitz operator in 4D). For concreteness, we focus on conformal powers of the Laplacian or GJMS operators \(P_{2k}\)[15] which happen to factorize into shifted Laplacians on spheres \(S^{n}\)[16, 17], as well as on the conformally flat product space \(S^{1}_{\beta}\times S^{n-1}\)[18, 19]. The partition function on the latter geometry is dominated by the Casimir energy in the low-temperature (\(\beta\to\infty\)) limit, and the presence of a multiplicative anomaly leads to an _improved_ Casimir energy. 
The improvement relies on consistency: there are two alternative factorizations in terms of shifted Laplacians, and only after the inclusion of the multiplicative anomaly can one find agreement between the partition functions and, in consequence, between the Casimir energies. In addition, on the two-torus, the universal dependence of the Casimir energy on the central charge \(E_{0}=-c/12\) is restored. The organization of this paper is as follows. In Section 2, a generalization of Shintani-Mizuno formulas for the multiplicative anomaly of linear factors is obtained by a procedure based on Feynman parametrization and the Fock-Schwinger-DeWitt proper-time representation. In Section 3, Mizuno's result in two dimensions is extended to three and four dimensions, casting the answer into Bernoulli polynomials and keeping the quasi-periods in the greatest generality. Since the explicit expressions become increasingly cluttered as the dimensions grow, we restrict attention to particular examples in what follows. In Section 4, as a preliminary exercise, the multiplicative anomaly is computed on round spheres, confirming previous results in the literature. Then, in Section 5, the background of present interest is addressed, namely the closed Einstein universe at finite temperature, and new features of the Casimir energy are reported. In Section 6, we highlight the role of the multiplicative anomaly in Shintani's derivation of the Kronecker second limit formula. In Section 7, as the main application, we examine the computation of the Casimir energy for GJMS operators in the light again of the new features that the inclusion of the multiplicative anomaly brings in. Summary and outlook are provided in Section 8. Finally, miscellaneous results are collected in two appendices. 
## 2 Derivation of Shintani-Mizuno formulas via Feynman parametrization To compute the ratio of the functional determinants, and of the corresponding \(\zeta-\)regularized products, we start with the relative zeta function \[\zeta_{AB}(s)-\zeta_{A}(s)-\zeta_{B}(s)=\sum_{\vec{m}=\vec{0}}^{\infty}\left\{ \frac{1}{[\vec{m}\cdot\vec{a}+w]^{s}\cdot[\vec{m}\cdot\vec{b}+z]^{s}}-\frac{1 }{[\vec{m}\cdot\vec{a}+w]^{s}}-\frac{1}{[\vec{m}\cdot\vec{b}+z]^{s}}\right\} \tag{1}\] for \(\Re(s)>n\), with the multi-index \(\vec{m}=(m_{1},m_{2},...,m_{n})\) being an \(n\)-tuple of non-negative integers and assuming that the quasi-periods \(a_{i}\) and \(b_{i}\), as well as the arguments \(w\) and \(z\), all have positive real parts (although this may be relaxed later, as we will see). We now combine the factors in the first term into a single denominator by using Feynman parametrization while the second and third terms come in for free \[\frac{\Gamma(2s)}{\Gamma^{2}(s)}\int_{0}^{1}dv\,[v\,(1-v)]^{s-1} \left\{\frac{1}{[\vec{m}\cdot(\vec{a}\,v+(1-v)\,\vec{b})+w\,v+(1-v)\,z]^{2s}} \right.\] \[\left.-\frac{1}{[\vec{m}\cdot\vec{a}+w]^{s}}-\frac{1}{[\vec{m} \cdot\vec{b}+z]^{s}}\right\} \tag{2}\] Next, we introduce Fock-Schwinger-DeWitt proper-time representations for the inverse powers \[\frac{1}{\Gamma(2s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{2s}\,e^{-t[ \vec{m}\cdot(\vec{a}\,v+(1-v)\,\vec{b})+w\,v+(1-v)\,z]} \tag{3}\] \[- \frac{1}{\Gamma(s)}\left(\int_{0}^{\infty}\frac{dt}{t}\,t^{s}\,e^ {-t[\vec{m}\cdot\vec{a}+w]}+\int_{0}^{\infty}\frac{dt}{t}\,t^{s}\,e^{-t[\vec{m} \cdot\vec{b}+z]}\right)\] The geometric series summation in the multi-index \(\vec{m}\) produces the following Bose factors \[\frac{1}{\Gamma(2s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{2s}\,\frac{ e^{-t[w\,v+(1-v)\,z]}}{\prod_{i=1}^{n}\left\{1-e^{-t(a_{i}\,v+(1-v)\,b_{i})} \right\}} \tag{4}\] \[- \frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{s}\,\frac{ e^{-t\,w}}{\prod_{i=1}^{n}\left\{1-e^{-t\,a_{i}}\right\}}-\frac{1}{\Gamma(s)} 
\int_{0}^{\infty}\frac{dt}{t}\,t^{s}\,\frac{e^{-t\,z}}{\prod_{i=1}^{n}\left\{1-e^{-t\,b_{i}}\right\}}\] The Bose factors are now expressed as a Taylor series in \(t\) with Bernoulli numbers1 as coefficients Footnote 1: We use the convention \(B_{k}^{+}=B_{k}(1)\), as opposed to \(B_{k}^{-}=B_{k}(0)\), in terms of the Bernoulli polynomials. \[\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,(a_{i}\,v+(1-v)\,b_{i})^{l_{i}-1}\right\}\left\{\frac{1}{\Gamma(2s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{2s-n+\sum_{i=1}^{n}l_{i}}e^{-t[w\,v+(1-v)\,z]}\right\}\] \[-\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,a_{i}^{l_{i}-1}\right\}\left\{\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{s-n+\sum_{i=1}^{n}l_{i}}\,e^{-t\,w}\right\}\] \[-\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\,\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,b_{i}^{l_{i}-1}\right\}\left\{\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{dt}{t}\,t^{s-n+\sum_{i=1}^{n}l_{i}}\,e^{-t\,z}\right\}\] The proper-time integrals, taken in terms of Euler gamma functions, enable the analytic continuation in the spectral parameter \(s\), and yield \[\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\,\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,(a_{i}\,v+(1-v)\,b_{i})^{l_{i}-1}\right\}\frac{\Gamma(2s-n+\sum_{i=1}^{n}l_{i})}{\Gamma(2s)}\,\left[w\,v+(1-v)\,z\right]^{-2s+n-\sum_{i=1}^{n}l_{i}}\] \[-\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\,\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,a_{i}^{l_{i}-1}\right\}\frac{\Gamma(s-n+\sum_{i=1}^{n}l_{i})}{\Gamma(s)}\,\left[w\right]^{-s+n-\sum_{i=1}^{n}l_{i}}\] \[-\sum_{\vec{l}=\vec{0}}^{\vec{\infty}}\,\left\{\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\,b_{i}^{l_{i}-1}\right\}\frac{\Gamma(s-n+\sum_{i=1}^{n}l_{i})}{\Gamma(s)}\,\left[z\right]^{-s+n-\sum_{i=1}^{n}l_{i}}\] The \(\zeta\)-regularized products are obtained from the derivative of the \(\zeta\) with respect to the spectral 
parameter \(s\) at \(s=0\). By careful examination of the behavior as \(s\to 0\) one can realize \[\zeta_{AB}(s)-\zeta_{A}(s)-\zeta_{B}(s)=\frac{s}{2}\times{\rm Regular}+\frac{1} {2}\zeta_{A}(2s)+\frac{1}{2}\zeta_{B}(2s)-\zeta_{A}(s)-\zeta_{B}(s)+O(s^{3}). \tag{7}\] As a consistency check, direct evaluation at \(s=0\) results in the additive property of the zeta function weighted by the order, 2 for AB and 1 for A and for B, of the corresponding differential operators \[\zeta_{AB}(0)=\frac{1}{2}\,\zeta_{A}(0)+\frac{1}{2}\,\zeta_{B}(0). \tag{8}\] Back to the regularized products, the overall prefactor \(\Gamma(2s)/\Gamma^{2}(s)\) goes as \(s/2\) and the rest is regular at \(s=0\), after symmetrization with respect to \(v\leftrightarrow 1-v\), so it is enough to consider the limit of the latter as \(s\to 0\) to compute the derivative at zero. In addition, the factors \(\Gamma(2s-n+\sum_{i=1}^{n}l_{i})/\Gamma(2s)\) and \(\Gamma(s-n+\sum_{i=1}^{n}l_{i})/\Gamma(s)\) in the limit \(s\to 0\) produce a vanishing result unless the numerators also hit a pole, say \(-p\) with \(p=0,1,2,...,n\), and gives a finite answer \((-1)^{p}/p!\). 
Therefore the sums over \(l_{i}\geq 0\) are truncated by the condition \(p+\sum_{i=1}^{n}l_{i}=n\) \[-\frac{1}{2}\sum_{l_{i},p\geq 0}\,\left\{\frac{(-1)^{p}}{p!}\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\right\}\,\int_{0}^{1}\frac{dv}{v(1-v)}\] \[\times \frac{1}{2}\left\{\left\{\prod_{i=1}^{n}(a_{i}\,v+(1-v)\,b_{i})^{l_{i}-1}\right\}\,[w\,v+(1-v)\,z]^{p}+\left\{(\vec{a},w)\leftrightarrow(\vec{b},z)\right\}\right.\] \[\left.-\left\{\prod_{i=1}^{n}a_{i}^{l_{i}-1}\right\}\,[w]^{p}-\left\{\prod_{i=1}^{n}b_{i}^{l_{i}-1}\right\}\,[z]^{p}\right\}\] One last change of variables in the Feynman parameter \(1/v-1=u\), and realizing that the inversion \(u\to 1/u\) merely interchanges \((\vec{a},w)\leftrightarrow(\vec{b},z)\), casts the final result in Shintani-Mizuno [6, 10] form \[{\rm MA}(A,B) = -\zeta_{AB}^{\prime}(0)+\zeta_{A}^{\prime}(0)+\zeta_{B}^{\prime}(0)\] \[= -\frac{1}{2}\sum_{l_{i},p\geq 0}\,\left\{\frac{(-1)^{p}}{p!}\prod_{i=1}^{n}\frac{B_{l_{i}}^{+}}{(l_{i})!}\right\}C(\vec{a},w;\vec{b},z\,|\,\vec{l},p)\Bigg{|}_{p+\sum_{i=1}^{n}l_{i}=n}\,,\] with \[C(\vec{a},w;\vec{b},z\,|\,\vec{l},p) = \int_{0}^{1}\,\frac{du}{u}\left\{\left\{\prod_{i=1}^{n}(a_{i}+u\,b_{i})^{l_{i}-1}\right\}\,[w+u\,z]^{p}-\left\{\prod_{i=1}^{n}a_{i}^{l_{i}-1}\right\}\,[w]^{p}\right\} \tag{11}\] \[+ \left\{(\vec{a},w)\leftrightarrow(\vec{b},z)\right\}.\] This formula, which computes the multiplicative anomaly for a pair of linear factors, suffices to deal with a generic number of linear factors because, as already mentioned, the multiplicative anomaly turns out to be pairwise accumulative [11, 12, 13]. The \(n=2\) case corresponds to the formula put forward by Mizuno (cf. proof of Lemma 4 in [10]), whereas the original Shintani formula (cf. Proposition 1 in [6]) applies to the particular choice of the arguments \(w=\vec{a}\cdot\vec{x}\) and \(z=\vec{b}\cdot\vec{x}\). 
In that case, the polynomial dependence on \(x_{i}\) can be rearranged by trading the Bernoulli numbers for Bernoulli polynomials in \(x_{i}\) after summing over \(p\). This is easily seen by going back to the Bose factors and expanding each of them in terms of Bernoulli polynomials in \(x_{i}\). One can alternatively choose to expand the whole product of Bose factors in terms of Bernoulli polynomials of higher order2, Footnote 2: Our convention differs slightly from the definition in [20] or [21]. \[\frac{t^{n}\,e^{-w\,t}}{\prod_{i=1}^{n}\left\{1-e^{-a_{i}\,t}\right\}}=\sum_{l=0}^{\infty}B_{n,l}(w|\vec{a})\,\frac{t^{l}}{l!}, \tag{12}\] in which case the integral formula for the multiplicative anomaly can be written in the following neat and compact form \[{\rm MA}(A,B)=-\frac{1}{2\,n!}\int_{0}^{1}\,\frac{du}{u}\left\{B_{n,n}(w+u\,z\,|\,\vec{a}+u\,\vec{b})-B_{n,n}(w\,|\,\vec{a})\right\}+\left\{(\vec{a},w)\leftrightarrow(\vec{b},z)\right\}. \tag{13}\] Moreover, in our convention, the Bernoulli polynomial of higher order coincides essentially with Barnes zeta at \(s=0\) \[\frac{B_{n,n}(w,\vec{a})}{n!}=\zeta_{n}(0,w|\vec{a}). \tag{14}\] Consequently, the multiplicative anomaly becomes an average of Barnes \(\zeta\)'s with respect to the Feynman parameter \(u\) \[\boxed{\mbox{MA}(A,B)=-\frac{1}{2}\int_{0}^{1}\,\frac{du}{u}\left\{\zeta_{n}(0,w+u\,z\,|\,\vec{a}+u\,\vec{b})-\zeta_{n}(0,w\,|\,\vec{a})\right\}+\left\{(\vec{a},w)\leftrightarrow(\vec{b},z)\right\}.} \tag{15}\] ## 3 Previous and new results Let us now put the formula to work, keeping the quasi-periods in the greatest generality. 
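As a quick numerical sanity check of the boxed average of Barnes \(\zeta\)'s, one can specialize to \(n=1\), where \(\zeta_{1}(0,w|a)=1/2-w/a\) (the Hurwitz value), and compare against the closed form \(\frac{1}{2}(w/a-z/b)\log(a/b)\) recalled in the next subsection; a minimal sketch with pure-Python Simpson quadrature:

```python
# Numerical check of the boxed formula for n = 1: the Feynman-parameter
# average of zeta_1(0, w | a) = 1/2 - w/a versus the closed form
# (1/2)(w/a - z/b) log(a/b). The u -> 0 limit of the integrand is finite
# and supplied analytically.
import math

def zeta1_at_0(w, a):
    return 0.5 - w / a  # Barnes zeta for n = 1 at s = 0

def integrand(u, a, w, b, z):
    # [zeta_1(0, w + u z | a + u b) - zeta_1(0, w | a)] / u
    if u == 0.0:
        return (w * b - a * z) / a**2  # finite limit as u -> 0
    return (zeta1_at_0(w + u * z, a + u * b) - zeta1_at_0(w, a)) / u

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

def ma_numeric(a, w, b, z):
    return -0.5 * (simpson(lambda u: integrand(u, a, w, b, z), 0.0, 1.0)
                   + simpson(lambda u: integrand(u, b, z, a, w), 0.0, 1.0))

a, w, b, z = 1.0, 0.7, 2.0, 1.3
closed = 0.5 * (w / a - z / b) * math.log(a / b)
print(abs(ma_numeric(a, w, b, z) - closed) < 1e-9)  # → True
```

By construction the numerical anomaly is symmetric under \((\vec{a},w)\leftrightarrow(\vec{b},z)\), as it must be.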
### 1D: Friedman-Ruijsenaars formula The \(n=1\) case was worked out by Friedman and Ruijsenaars [9] some time ago by exploiting a recurrence relation and, as expected, their result matches the outcome of the Shintani-Mizuno integral representation above \[\mbox{MA}(A,B)=\frac{1}{2}\cdot\left(\frac{w}{a}-\frac{z}{b}\right)\cdot\log \frac{a}{b} \tag{16}\] ### 2D: Shintani-Mizuno formula The \(n=2\) case was addressed by Mizuno [10], following Shintani's approach, and his result was concisely written in terms of the Bernoulli polynomial of order 2 as follows \[\mbox{MA}(A,B)=\frac{a_{1}\,b_{2}\,-\,a_{2}\,b_{1}}{4\,a_{1}\,b_{1}}\cdot B_{ 2}\left(\frac{a_{1}\,z\,-\,b_{1}\,w}{a_{1}\,b_{2}\,-\,b_{1}\,a_{2}}\right) \cdot\log\frac{a_{1}}{b_{1}}\;+\;\{1\leftrightarrow 2\} \tag{17}\] ### 3D: generalized Shintani-Mizuno formula We report here the \(n=3\) case for the first time, to our knowledge, obtained with MAPLE help to compute the integrals and to concisely express the result in terms of Bernoulli polynomials \[\mbox{MA}(A,B) = -\left\{\frac{[a_{1}\,b_{2}\,-\,a_{2}\,b_{1}\,+\,a_{1}\,b_{3}\,- \,a_{3}\,b_{1}]^{3}}{12\,a_{1}\,b_{1}\,(a_{1}\,b_{2}\,-\,a_{2}\,b_{1})(a_{1}\, b_{3}\,-\,a_{3}\,b_{1})}\cdot B_{3}\left(\frac{a_{1}\,z\,-\,b_{1}\,w}{a_{1}\,b_{2} \,-\,a_{2}\,b_{1}\,+\,a_{1}\,b_{3}\,-\,a_{3}\,b_{1}}\right)\right. \tag{18}\] \[+ \left.\frac{[a_{1}\,b_{2}\,-\,a_{2}\,b_{1}\,+\,a_{1}\,b_{3}\,-\, a_{3}\,b_{1}]}{24\,a_{1}\,b_{1}}\cdot B_{1}\left(\frac{a_{1}\,z\,-\,b_{1}\,w}{a_{1 }\,b_{2}\,-\,a_{2}\,b_{1}\,+\,a_{1}\,b_{3}\,-\,a_{3}\,b_{1}}\right)\right\} \cdot\log\frac{a_{1}}{b_{1}}\] \[- \left.\{1\leftrightarrow 2\}-\{1\leftrightarrow 3\}.\] ### 4D: generalized Shintani-Mizuno formula For \(n=4\) the answer, also new to our knowledge, becomes more involved. 
We introduce some notation to write it down more compactly: \[D_{ij} = a_{i}b_{j}-b_{i}a_{j}\] \[D = D_{12}+D_{13}+D_{14} \tag{19}\] \[{\rm MA}(A,B) = \left\{\frac{D^{4}}{48\,a_{1}\,b_{1}\,D_{12}\,D_{13}\,D_{14}}\cdot B _{4}\left(\frac{a_{1}\,z\,-\,b_{1}\,w}{D}\right)\right. \tag{20}\] \[- \left.\frac{D^{2}\,(D_{12}^{2}+D_{13}^{2}+D_{14}^{2}-D^{2})}{96\,a _{1}\,b_{1}\,D_{13}\,D_{14}\,D_{12}}\cdot B_{2}\left(\frac{a_{1}\,z\,-\,b_{1} \,w}{D}\right)\right.\] \[- \left.\frac{D^{4}-D^{2}\,(D_{12}^{2}+D_{13}^{2}+D_{14}^{2})-2DD_{ 12}D_{13}D_{14}}{2880\,a_{1}\,b_{1}\,D_{12}\,D_{13}\,D_{14}}\right.\] \[+ \left.\frac{3(D_{12}D_{13}+D_{12}D_{14}+D_{13}D_{14})^{2}}{1440\, a_{1}\,b_{1}\,D_{12}\,D_{13}\,D_{14}}\right\}\cdot\log\frac{a_{1}}{b_{1}}\] \[+ \left.\left\{1\leftrightarrow 2\right\}+\left\{1\leftrightarrow 3 \right\}+\left\{1\leftrightarrow 4\right\}.\] Since the explicit answer becomes more complicated as we increase the dimension, we refrain from displaying it for higher dimensions, and, in what follows, we focus on particular choices for the quasi-periods. ## 4 Examples: shifted Laplacian on round spheres \(S^{n}\) Let us consider the factorization of the eigenvalues of the Laplacian, on (unit) spheres, shifted by a constant. First, recall the eigenvalues and multiplicities for the (negative) Laplacian \(-\nabla^{2}\) on the \(n\)-sphere \(S^{n}\) \[\lambda_{l}=l(l+n-1)\hskip 56.905512pt\deg(l)=(2l+n-1)\frac{(l+n-2)!}{l!\,(n-1)!}. \tag{21}\] Notice that a shift by \(\frac{(n-1)^{2}}{4}-a^{2}\) factorizes the eigenvalues \(\Lambda_{l}\) of the shifted Laplacian \(L_{a}=-\nabla^{2}+\frac{(n-1)^{2}}{4}-a^{2}=D_{a}\,D_{-a}\) into linear factors on \(l\) \[\Lambda_{l}=(l+\frac{n-1}{2}+a)(l+\frac{n-1}{2}-a). \tag{22}\] There is now a gracious way to connect with the regularized product of the previous sections: we follow Dowker's 'central tactic' (see, e.g. 
[13, 22]) in the spectral treatment of the Laplacian on spheres which consists in taking the full sphere as the union of the Neumann and Dirichlet problems on the hemisphere. We trade then the 'orbital' quantum number \(l\) by the sum of non-negative integers \(m_{1}+m_{2}+...+m_{n}\) for Neumann boundary condition and \(1+m_{1}+m_{2}+...+m_{n}\) for Dirichlet. At fixed \(l\) the combinatorics produce the correct multiplicity: for Neumann, the counting consists of the different ways to distribute \(l\) balls in \(n\) boxes, whereas for Dirichlet there are only \(l-1\) balls to sort (the constant mode corresponding to \(l=0\) belongs exclusively to the Neumann case) \[\deg_{N}(l)=\frac{(l+n-1)!}{l!\,(n-1)!},\hskip 56.905512pt\deg_{D}(l)=\frac{(l+n- 2)!}{(l-1)!\,(n-1)!}. \tag{23}\] The degeneracy for the full sphere is then the sum of both degeneracies. We can address now the multiplicative anomaly on spheres between two generic linear factors by setting all quasi-periods to one and considering arguments \(\frac{n-1}{2}+a\) and \(\frac{n-1}{2}+b\) for Neumann boundary condition and \(1+\frac{n-1}{2}+a\) and \(1+\frac{n-1}{2}+b\), for Dirichlet. With these building blocks, we can compute first the multiplicative anomaly to build up the shifted Laplacian \(L_{a}\) by setting \(b=-a\), i.e. \({\rm MA}(D_{a},D_{-a})\), and then the multiplicative anomaly among shifted Laplacians \(L_{a}\) and \(L_{b}\) based on the accumulative and associative properties of the defects: \[2\cdot{\rm MA}(L_{a},L_{b}) = {\rm MA}(D_{a},D_{b})+{\rm MA}(D_{a},D_{-b})+{\rm MA}(D_{-a},D_{b })+{\rm MA}(D_{-a},D_{-b}) \tag{24}\] \[- {\rm MA}(D_{a},D_{-a})-{\rm MA}(D_{b},D_{-b})\.\] Let us mention in advance that the explicit results we will find follow the general rule that the multiplicative anomaly for Neumann and Dirichlet boundary conditions happen to be opposite in sign in odd dimensions, adding up to zero, whereas in even dimensions they are identical. 
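The accumulation relation above can also be checked numerically; a small sketch using the two-sphere kernel \({\rm MA}(A,B)=(w-z)^{2}/4\) derived in the next subsection, summed over the Neumann (arguments offset by \(1/2\)) and Dirichlet (offset by \(3/2\)) hemisphere problems:

```python
# Check of the accumulation rule (24) on S^2: with the quadratic kernel
# MA(A, B) = (w - z)^2 / 4 per boundary condition, one recovers
# MA(D_a, D_{-a}) = 2 a^2 and MA(L_a, L_b) = 0.

def ma_pair(x, y):
    """Combined (Neumann + Dirichlet) anomaly between D_x and D_y on S^2."""
    ma2 = lambda w, z: (w - z) ** 2 / 4
    return ma2(0.5 + x, 0.5 + y) + ma2(1.5 + x, 1.5 + y)

def ma_laplacians(a, b):
    """MA(L_a, L_b) from eq. (24), with L_x = D_x D_{-x}."""
    acc = (ma_pair(a, b) + ma_pair(a, -b) + ma_pair(-a, b) + ma_pair(-a, -b)
           - ma_pair(a, -a) - ma_pair(b, -b))
    return acc / 2

a, b = 0.37, 1.21
print(abs(ma_pair(a, -a) - 2 * a**2) < 1e-12,  # MA(D_a, D_{-a}) = 2 a^2
      abs(ma_laplacians(a, b)) < 1e-12)        # MA(L_a, L_b) = 0
# → True True
```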
The values for the multiplicative anomaly between linear factors that build up the shifted Laplacians coincide with those reported by Dowker using spectral techniques and expanding the zeta functions in terms of the shift (cf. [13], eqn.313), and the same holds between shifted Laplacian (cf. [13], eqn.15). Interestingly, for even spheres the same multiplicative anomaly between the linear factors in the shifted Laplacian had been previously obtained in [23] via Wodzicki residue and, as noticed in [24], also obtained in [25] while computing the partition function for a massive scalar in (Euclidean) de Sitter space as a regularized product of quasinormal frequencies. Footnote 3: We are grateful to J.S. Dowker for his help in fixing few numerical coefficients and signs in a previous version of this paper. ### Two-sphere: All quasi-periods set to one result in a quadratic polynomial in the arguments \((w,z)\) \[{\rm MA}(A,B)=\frac{(w-z)^{2}}{4}. \tag{25}\] The eigenvalues of the shifted Laplacian \(L_{a}=-\nabla^{2}+1/4-a^{2}\) on the two-sphere are then obtained with \(w=1/2+a,\,z=1/2-a\) and with \(w=3/2+a,\,z=3/2-a\) for Neumann and Dirichlet boundary conditions on the Equator, respectively. Both multiplicative anomalies turn out to be equal and the combined multiplicative anomaly between the linear factors \(D_{\pm a}=\sqrt{-\nabla^{2}+1/4}\pm a\) that build up the shifted Laplacian is obtained \[{\rm MA}(D_{a},D_{-a})=2\,a^{2}. \tag{26}\] By contrast, the multiplicative anomaly between a pair of shifted Laplacians \(L_{a}\) and \(L_{b}\) vanishes \[{\rm MA}(L_{a},L_{b})=0. \tag{27}\] ### Three-sphere: The generalized Shintani-Mizuno formula with all quasi-periods set to one yields now a cubic polynomial in the arguments \((w,z)\) \[{\rm MA}(A,B)=-\frac{(w-z)^{2}\cdot(w+z-3)}{8}. 
\tag{28}\] The eigenvalues of the shifted Laplacian \(L_{a}=-\nabla^{2}+1-a^{2}\) on the three-sphere are obtained with \(w=1+a,\,z=1-a\) and with \(w=2+a,\,z=2-a\) for Neumann and Dirichlet boundary conditions on the Equator, respectively. The multiplicative anomaly for Neumann b.c. turns out to be \(a^{2}/2\) opposite to that for Dirichlet b.c. \[{\rm MA}(D_{a},D_{-a})|_{{}_{Neu}}=-{\rm MA}(D_{a},D_{-a})|_{{}_{Dir}}=\frac {1}{2}a^{2}, \tag{29}\] so that the combined multiplicative anomaly between the linear factors \(D_{\pm a}=\sqrt{-\nabla^{2}+1}\pm a\) that build up the shifted Laplacian vanishes. For the multiplicative anomaly between a pair of shifted Laplacians \(L_{a}\) and \(L_{b}\) we again obtain vanishing results \[{\rm MA}(L_{a},L_{b})|_{{}_{\rm Neu}}={\rm MA}(L_{a},L_{b})|_{{}_{\rm Dir}}=0. \tag{30}\] ### Four-sphere: With all quasi-periods set to one, the generalized Shintani-Mizuno formula produces a quartic polynomial in the arguments \((w,z)\) \[{\rm MA}(A,B)=\frac{(w-z)^{2}\cdot(11w^{2}+14wz+11z^{2}-72w-72z+132)}{288}. \tag{31}\] The eigenvalues of the shifted Laplacian \(L_{a}=-\nabla^{2}+9/4-a^{2}\) on the four-sphere are then obtained with \(w=3/2+a,\,z=3/2-a\) and with \(w=5/2+a,\,z=5/2-a\) for Neumann and Dirichlet boundary conditions on the Equator, respectively. Both multiplicative anomalies turn out to be equal and the combined multiplicative anomaly between the linear factors \(D_{\pm a}=\sqrt{-\nabla^{2}+9/4}\pm a\) that build up the shifted Laplacian is obtained \[{\rm MA}(D_{a},D_{-a})=\frac{2}{9}a^{4}\,-\,\frac{1}{12}a^{2}. \tag{32}\] The multiplicative anomaly between a pair of shifted Laplacians \(L_{a}\) and \(L_{b}\) is nontrivial now \[{\rm MA}(L_{a},L_{b})=\frac{(a^{2}-b^{2})^{2}}{24}. 
\tag{33}\] ### Five-sphere: With all quasi-periods set to one, the generalized Shintani-Mizuno formula results now in a quintic polynomial in the arguments \((w,z)\) \[{\rm MA}(A,B)=-\frac{(w-z)^{2}\cdot(w+z-5)\cdot(5w^{2}+2wz+5z^{2}-30w-30z+60)}{576}. \tag{34}\] The eigenvalues of the shifted Laplacian \(L_{a}=-\nabla^{2}+4-a^{2}\) on the five-sphere are obtained with \(w=2+a,\,z=2-a\) and with \(w=3+a,\,z=3-a\) for Neumann and Dirichlet boundary conditions on the Equator, respectively. The multiplicative anomaly for Neumann boundary conditions turns out to be the opposite of that for Dirichlet, namely \[{\rm MA}(D_{a},D_{-a})|_{{}_{\rm Neu}}=-{\rm MA}(D_{a},D_{-a})|_{{}_{\rm Dir}}=\frac{1}{18}a^{4}-\frac{1}{12}a^{2}. \tag{35}\] Therefore, the combined multiplicative anomaly between the linear factors \(D_{\pm a}=\sqrt{-\nabla^{2}+4}\pm a\), that build up the shifted Laplacian, vanishes. For the multiplicative anomaly between a pair of shifted Laplacians \(L_{a}\) and \(L_{b}\) we obtain \[{\rm MA}(L_{a},L_{b})|_{{}_{\rm Neu}}=-{\rm MA}(L_{a},L_{b})|_{{}_{\rm Dir}}=\frac{(a^{2}-b^{2})^{2}}{96}. \tag{36}\] ### Six-sphere: With all quasi-periods set to one, the generalized Shintani-Mizuno formula produces here a sextic polynomial in the arguments \((w,z)\) \[{\rm MA}(A,B) = \frac{(w-z)^{2}}{86400}\left(137w^{4}+202w^{3}z+222w^{2}z^{2}+202wz^{3}+137z^{4}\right. \tag{37}\] \[- \left.2250w^{3}-3150w^{2}z-3150wz^{2}-2250z^{3}+14025w^{2}+17850wz\right.\] \[+ \left.14025z^{2}-40500w-40500z+49320\right)\] The eigenvalues of the shifted Laplacian \(L_{a}=-\nabla^{2}+25/4-a^{2}\) on the six-sphere are then obtained with \(w=5/2+a,\,z=5/2-a\) and with \(w=7/2+a,\,z=7/2-a\) for Neumann and Dirichlet boundary conditions on the Equator, respectively.
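As a consistency check, the polynomial expressions of eqns. (25), (28), (31) and (34) can be evaluated at the Neumann and Dirichlet arguments in exact rational arithmetic and compared with the closed forms (26), (29), (32) and (35). A minimal sketch (the helper names and the sampled values of \(a\) are ours, chosen arbitrarily):

```python
from fractions import Fraction as F

def ma2(w, z):   # eqn (25), two-sphere
    return (w - z)**2 / 4

def ma3(w, z):   # eqn (28), three-sphere
    return -(w - z)**2 * (w + z - 3) / 8

def ma4(w, z):   # eqn (31), four-sphere
    return (w - z)**2 * (11*w**2 + 14*w*z + 11*z**2 - 72*w - 72*z + 132) / 288

def ma5(w, z):   # eqn (34), five-sphere
    return -(w - z)**2 * (w + z - 5) * (5*w**2 + 2*w*z + 5*z**2 - 30*w - 30*z + 60) / 576

for a in [F(1, 3), F(2, 7), F(5, 2)]:
    # S^2: Neumann (1/2+a, 1/2-a) plus Dirichlet (3/2+a, 3/2-a) reproduce eqn (26)
    assert ma2(F(1, 2) + a, F(1, 2) - a) + ma2(F(3, 2) + a, F(3, 2) - a) == 2*a**2
    # S^3: Neumann and Dirichlet contributions are opposite, eqn (29)
    assert ma3(1 + a, 1 - a) == -ma3(2 + a, 2 - a) == a**2/2
    # S^4: combined anomaly of both boundary conditions, eqn (32)
    assert ma4(F(3, 2) + a, F(3, 2) - a) + ma4(F(5, 2) + a, F(5, 2) - a) == 2*a**4/9 - a**2/12
    # S^5: Neumann contribution, eqn (35)
    assert ma5(2 + a, 2 - a) == a**4/18 - a**2/12
```

Exact `Fraction` arithmetic makes the equalities strict rather than up to floating-point error.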
Both multiplicative anomalies turn out to be equal and the combined multiplicative anomaly between the linear factors \(D_{\pm a}=\sqrt{-\nabla^{2}+25/4}\pm a\) that build up the shifted Laplacian is obtained \[{\rm MA}(D_{a},D_{-a})=\frac{23}{2700}a^{6}\,-\,\frac{1}{36}a^{4}\,+\,\frac{3 }{320}a^{2}. \tag{38}\] The multiplicative anomaly between a pair of shifted Laplacians \(L_{a}\) and \(L_{b}\) is then \[{\rm MA}(L_{a},L_{b})=\frac{(a^{2}-b^{2})^{2}\cdot(2a^{2}+2b^{2}-5)}{960}. \tag{39}\] ## 5 Examples: shifted conformal Laplacian on \(S^{1}_{\beta}\times S^{n-1}\) Let us now consider a temperature circle times the round sphere. This time, the conformal Laplacian \(Y=-\nabla^{2}+\frac{n-2}{4(n-1)}\,R=-\partial_{0}^{2}-\vec{\nabla}^{2}+\frac{( n-2)^{2}}{4}\equiv-\partial_{0}^{2}+\Delta_{0}\) factorizes into linear factors provided one of the quasi-periods is purely imaginary, say \(\tau=\frac{2\pi\,i}{\beta}\). Again, considering the Neumann and Dirichlet problems on the \((n-1)\)-sphere one can trade the orbital number \(l\) by \(n-1\) non-negative integers \(m_{1},m_{2},...,m_{n-1}\). The same can be done with the winding number on the temperature circle, introducing an additional counting number \(m_{n}\) and Neumann and Dirichlet boundary conditions on the circle. We compute first for generic arguments \((w,z)\) and then restrict them to get the four combinations of boundary conditions Neumann-Neumann, Neumann-Dirichlet, Dirichlet-Neumann and Dirichlet-Dirichlet. A quite surprising fact will be evident from the examples below: The standard Casimir energy for the Laplacian and for shifted Laplacians turns out to be exactly equal to the multiplicative anomaly among the linear factors that build them up! ### Two-torus The linear factors in this case are \((m_{1}+m_{2}\tau+w)\) and \((m_{1}+m_{2}\overline{\tau}+z)\), with \(\Im(\tau)>0\). 
The generalized Shintani-Mizuno formula for the multiplicative anomaly produces an overall factor of \(i\pi\) from the logarithm of the ratio of \(\tau\) and \(\overline{\tau}=-\tau\), accompanied by a quadratic polynomial in the arguments \((w,z)\). The remarkable feature of the outcome is that it turns out to be linear in the inverse temperature \(\beta\), just as the vacuum or Casimir energy: \[{\rm MA}(A,B)=-\beta\cdot\frac{(z+w)\cdot(z+w-2)}{16}-\beta\cdot\frac{1}{24}. \tag{40}\] The multiplicative anomaly among the linear factors that build up the conformal Laplacian \(Y=D\cdot\overline{D}\) can then be worked out as the sum of the four contributions with \((w,z)\) equal to \((0,0),(1,1),(\tau,\overline{\tau})\) and \((1+\tau,1+\overline{\tau})\) for N-N, N-D, D-N and D-D boundary conditions, respectively, \[{\rm MA}(D,\overline{D})=-\beta\cdot\frac{1}{6}. \tag{41}\] More generally, let us allow for a shift in each of the linear factors \(D+a\) and \(\overline{D}+b\) \[{\rm MA}(D+a,\overline{D}+b)=-\beta\cdot\frac{(a+b)^{2}}{4}-\beta\cdot\frac{1 }{6}. \tag{42}\] There are now two alternative ways to build up shifted conformal Laplacians depending on where the shift is located, on the spatial (sphere) part or the temperature (circle) part. For the spatial shift \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}=(D+a)\cdot(\overline{D}+a)\) \[{\rm MA}(D+a,\overline{D}+a)=-\beta\cdot a^{2}-\beta\cdot\frac{1}{6}, \tag{43}\] whereas for the temperature shift \(K_{a}=-(i\sqrt{-\partial_{0}^{2}}+a)^{2}+\Delta_{0}=(D+a)\cdot(\overline{D}-a)\) \[{\rm MA}(D+a,\overline{D}-a)=-\beta\cdot\frac{1}{6}. \tag{44}\] One can also compute the multiplicative anomaly among shifted conformal Laplacians by exploiting the accumulative and associative properties (eqn.24) \[{\rm MA}(Y_{a},Y_{b})=\beta\cdot\frac{(a-b)^{2}}{4}\, \tag{45}\] and \[{\rm MA}(K_{a},K_{b})=-\beta\cdot\frac{(a-b)^{2}}{4}. 
\tag{46}\] Interestingly, the multiplicative anomaly among shifted Laplacians for the particular choice \(b=-a\) \[{\rm MA}(Y_{a},Y_{-a})=-{\rm MA}(K_{a},K_{-a})=\beta\cdot a^{2} \tag{47}\] coincides with the multiplicative anomaly computed by Elizalde et al. via Wodzicki residue for free massless scalars, provided one identifies the combination charge times chemical potential with the shift \(e\mu=a\) and compactifies the spatial direction to a circle \(V_{1}=2\pi\)(cf. eqns. 86 and 91 in [26]). But let us return to the multiplicative anomaly for \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}\). This linear term in \(\beta\) will enter the partition function and contribute to the large-\(\beta\) asymptotics determining the Casimir energy. The multiplicative anomaly turns up then in the exponential with an additional factor of \(-\frac{1}{2}\) and should be compared with the leading behavior dominated by the vacuum or Casimir energy \(-\beta\,E_{0}\). It is immediately apparent that both are exactly equal \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}+a)=-\frac{a^{2}}{2}-\frac {1}{12}. \tag{48}\] The standard value for \(E_{0}\) is well known (see, e.g. [27]). It can easily be computed in terms of Hurwitz zetas and their relation with Bernoulli polynomials \[E_{0} = \frac{1}{2}\left(\sum_{l=0}^{\infty}(l+a)+\sum_{l=1}^{\infty}(l+a )\right)=\zeta_{H}(-1,a)-\frac{a}{2} \tag{49}\] \[= -\frac{B_{2}(a)}{2}-\frac{a}{2}\,=\,-\frac{a^{2}}{2}-\frac{1}{12}\.\] The same happens for the temperature-shifted Laplacian \(K_{a}\). The partition functions turn out to be dominated by the vacuum energy \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}-a)=-\frac{1}{12}. \tag{50}\] ### \(S^{1}_{\beta}\times S^{3}\) The linear factors now are \((m_{1}+m_{2}+m_{3}+m_{4}\tau+w)\) and \((m_{1}+m_{2}+m_{3}+m_{4}\overline{\tau}+z)\). 
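Before repeating the computation in this product geometry, the two-torus identities just established (eqns. 40-49) admit a quick cross-check. The sketch below (helper names are ours) strips the overall factor of \(\beta\) and uses the fact that, with \(\tau\) purely imaginary, eqn. (40) depends only on \(z+w\):

```python
from fractions import Fraction as F

def ma40(s):
    # eqn (40) with the overall beta stripped; tau purely imaginary, so only s = z + w enters
    return -s*(s - 2)*F(1, 16) - F(1, 24)

def ma_shift(a, b):
    # MA(D+a, Dbar+b)/beta: four boundary-condition sectors with z+w in {a+b, a+b+2}, eqn (42)
    return 2*ma40(a + b) + 2*ma40(a + b + 2)

# eqn (41): sectors (0,0), (1,1), (tau,taubar), (1+tau,1+taubar) give z+w in {0, 2, 0, 2}
assert 2*ma40(0) + 2*ma40(2) == F(-1, 6)

for a in [F(1, 3), F(5, 7)]:
    for b in [F(0), F(2, 5)]:
        assert ma_shift(a, b) == -(a + b)**2/4 - F(1, 6)        # eqn (42)
    # eqns (48)-(49): E0 = MA(D+a, Dbar+a)/2 equals -B2(a)/2 - a/2
    B2 = a**2 - a + F(1, 6)
    assert ma_shift(a, a)/2 == -B2/2 - a/2 == -a**2/2 - F(1, 12)
```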
The formula for the multiplicative anomaly produces a quartic polynomial in the arguments \((w,z)\): \[{\rm MA}(A,B)=-\beta\cdot\frac{\left(z+w\right)^{2}\left(z+w-6\right)^{2}}{768 }-\beta\cdot\frac{\left(z+w\right)\left(z+w-6\right)}{64}-\beta\cdot\frac{19}{ 480} \tag{51}\] The multiplicative anomaly among the linear factors that build up the conformal Laplacian \(Y=D\cdot\overline{D}\) can now be worked out as the sum of the four contributions with \((w,z)\) equal to \((1,1),(2,2),(1+\tau,1+\overline{\tau})\) and \((2+\tau,2+\overline{\tau})\) for N-N, N-D, D-N, and D-D boundary conditions, respectively, \[{\rm MA}(D,\overline{D})=\beta\cdot\frac{1}{120}. \tag{52}\] Allowing for a shift in each of the linear factors \(D+a\) and \(\overline{D}+b\), we get \[{\rm MA}(D+a,\overline{D}+b)=-\beta\cdot\frac{\left(a+b\right)^{4}}{192}+\beta \cdot\frac{1}{120}. \tag{53}\] For the spatial shift in the conformal Laplacian \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}=(D+a)\cdot(\overline{D}+a)\) \[{\rm MA}(D+a,\overline{D}+a)=-\beta\cdot\frac{a^{4}}{12}+\beta\cdot\frac{1}{1 20}\, \tag{54}\] whereas for the temperature shift \(K_{a}=-(i\sqrt{-\partial_{0}^{2}}+a)^{2}+\Delta_{0}=(D+a)\cdot(\overline{D}-a)\) \[{\rm MA}(D+a,\overline{D}-a)=\beta\cdot\frac{1}{120}. \tag{55}\] Exploiting the accumulative properties (eqn.24), the multiplicative anomaly among shifted conformal Laplacians turns out to be \[{\rm MA}(Y_{a},Y_{b})=\beta\cdot\frac{(a-b)^{2}\left(7a^{2}+10ab+7b^{2}\right) }{192}\, \tag{56}\] and \[{\rm MA}(K_{a},K_{b})=-\beta\cdot\frac{(a-b)^{4}}{192}. \tag{57}\] Here, we again notice that the multiplicative anomaly among shifted Laplacians for the particular choice \(b=-a\) \[{\rm MA}(Y_{a},Y_{-a})=-{\rm MA}(K_{a},K_{-a})=\beta\cdot\frac{a^{4}}{12} \tag{58}\] coincides with the multiplicative anomaly computed by Elizalde et al. 
via Wodzicki residue for free massless scalars, provided the spatial directions are compactified to the three-sphere \(V_{3}=2\pi^{2}\) (cf. eqns. 88 and 94 in [26]). Going back to the multiplicative anomaly for \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}\), we verify again the equality with the Casimir energy \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}+a)=-\frac{a^{4}}{24}+\frac{ 1}{240}. \tag{59}\] The standard value for \(E_{0}\) (see, e.g. [43]) can again be computed in terms of Hurwitz zetas and their relation with Bernoulli polynomials. The degeneracy \((l+1)^{2}\) needs to be expanded in powers of \((l+1+a)\) \[2\,E_{0} = \sum_{l=0}^{\infty}(l+1)^{2}\,(l+1+a)=\zeta_{H}(-3,1+a)-2a\,\zeta _{H}(-2,1+a)+a^{2}\,\zeta_{H}(-1,1+a) \tag{60}\] \[= -\frac{B_{4}(-a)}{4}-2a\,\frac{B_{3}(-a)}{3}-a^{2}\,\frac{B_{2}(- a)}{2}\,=\,-\frac{a^{4}}{12}+\frac{1}{120}\.\] The same happens for the temperature-shifted Laplacian \(K_{a}\). The partition functions turn out to be dominated by the vacuum energy \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}-a)=\frac{1}{240}. 
\tag{61}\] ### \(S^{1}_{\beta}\times S^{5}\) In this case, we need to add two more counters on the sphere to the linear factors, \((m_{1}+m_{2}+m_{3}+m_{4}+m_{5}+m_{6}\tau+w)\) and \((m_{1}+m_{2}+m_{3}+m_{4}+m_{5}+m_{6}\overline{\tau}+z)\), and we end up with a sextic polynomial in the arguments \((w,z)\) for the multiplicative anomaly: \[{\rm MA}(A,B) = -\beta\cdot\frac{(z+w)^{3}\,(z-10+w)^{3}}{92160} \tag{62}\] \[- \beta\cdot\frac{5\,(z+w)^{2}\,(z-10+w)^{2}}{9216}\] \[- \beta\cdot\frac{19\,(z+w)\,(z-10+w)}{2304}\] \[- \beta\cdot\frac{863}{24192}\] To compute the multiplicative anomaly among the linear factors that build up the conformal Laplacian \(Y=D\cdot\overline{D}\) there are four contributions with \((w,z)\) equal to \((2,2),(3,3),(2+\tau,2+\overline{\tau})\) and \((3+\tau,3+\overline{\tau})\) coming from N-N, N-D, D-N, and D-D boundary conditions, respectively, \[{\rm MA}(D,\overline{D})=-\beta\cdot\frac{31}{30240}. \tag{63}\] Allowing again for a shift in each of the linear factors \(D+a\) and \(\overline{D}+b\), we get \[{\rm MA}(D+a,\overline{D}+b)=-\frac{\beta\,(a+b)^{6}}{23040}+\frac{\beta\,(a +b)^{4}}{2304}-\frac{31\beta}{30240}. \tag{64}\] Now, for the spatial shift in the conformal Laplacian \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}=(D+a)\cdot(\overline{D}+a)\) \[{\rm MA}(D+a,\overline{D}+a)=-\beta\cdot\frac{84a^{6}-210a^{4}+31}{30240}\, \tag{65}\] whereas for the temperature shift \(K_{a}=-(i\sqrt{-\partial_{0}^{2}}+a)^{2}+\Delta_{0}=(D+a)\cdot(\overline{D}-a)\) \[{\rm MA}(D+a,\overline{D}-a)=-\beta\cdot\frac{31}{30240}. \tag{66}\] Exploiting the accumulative and associative properties (eqn.24), the multiplicative anomaly among shifted conformal Laplacians turns out to be \[{\rm MA}(Y_{a},Y_{b})=\frac{\beta\left(a-b\right)^{2}\left(31a^{4}+56a^{3}b+66a ^{2}b^{2}+56a\,b^{3}+31b^{4}-70a^{2}-100ab-70b^{2}\right)}{23040}, \tag{67}\] and \[{\rm MA}(K_{a},K_{b})=-\frac{\beta\left(a^{2}-2ab+b^{2}-10\right)\left(a-b \right)^{4}}{23040}. 
\tag{68}\] As in lower dimensions, for the particular choice \(b=-a\) the multiplicative anomaly among shifted Laplacians \[{\rm MA}(Y_{a},Y_{-a})=-{\rm MA}(K_{a},K_{-a})=\frac{\beta\left(2a^{2}-5\right) a^{4}}{720} \tag{69}\] may be compared with the multiplicative anomaly computed by Elizalde et al. via Wodzicki residue for free massless scalars, provided the spatial directions are compactified to the five-sphere \(V_{5}=\pi^{3}\) (cf. eqns. 96 and 97 in [26]). Curiously, the agreement is now only achieved for the leading power. Going back to the multiplicative anomaly for \(Y_{a}=-\partial_{0}^{2}+(\sqrt{\Delta_{0}}+a)^{2}\), we can verify again the equality with the Casimir energy \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}+a)=-\frac{84a^{6}-210a^{4 }+31}{60480}. \tag{70}\] The standard value for \(E_{0}\) (see, e.g. [43]) can again be computed in terms of Hurwitz zetas and their relation with Bernoulli polynomials, this time through a lengthier calculation. The degeneracy \(\frac{(n+1)(n+2)^{2}(n+3)}{12}\) needs to be expanded in powers of \((n+2+a)\) \[2\,E_{0} = \sum_{n=0}^{\infty}\frac{(n+1)(n+2)^{2}\,(n+3)}{12}\,(n+2+a) \tag{71}\] \[= \frac{1}{12}\,\zeta_{H}(-5,2+a)-\frac{1}{3}a\cdot\zeta_{H}(-4,2+a )+\frac{6a^{2}-1}{12}\cdot\zeta_{H}(-3,2+a)\] \[-\frac{a(2a^{2}-1)}{6}\cdot\zeta_{H}(-2,2+a)+\frac{a^{2}(a^{2}-1 )}{12}\cdot\zeta_{H}(-1,2+a)\] \[= -\frac{84a^{6}-210a^{4}+31}{30240}\.\] The same happens again for the temperature-shifted Laplacian \(K_{a}\). The partition functions turn out to be dominated by the vacuum energy \[E_{0}=\frac{1}{2\,\beta}\,{\rm MA}(D+a,\overline{D}-a)=-\frac{31}{60480}. \tag{72}\] ## 6 On Shintani's proof of the Kronecker limit formula Let us consider the zeta function \[\xi(s,w,\tau)=\sum_{m,n\in{\bf Z}}|m+n\,\tau+w|^{-2s}\, \tag{73}\] with \(\tau\) and \(w\) complex, assuming \(Im(\tau)>0\) and \(m+n\,\tau+w\neq 0\) to avoid a null term4. 
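As an aside before developing Shintani's argument: the Bernoulli-polynomial evaluations behind the Casimir sums of eqns. (60) and (71) can be checked mechanically, using \(\zeta_{H}(-k,x)=-B_{k+1}(x)/(k+1)\). A sketch in exact rational arithmetic (the helper names are ours):

```python
from fractions import Fraction as F

BERNOULLI = {  # explicit Bernoulli polynomials B_2(x) .. B_6(x), highest power first
    2: [1, -1, F(1, 6)],
    3: [1, F(-3, 2), F(1, 2), 0],
    4: [1, -2, 1, 0, F(-1, 30)],
    5: [1, F(-5, 2), F(5, 3), 0, F(-1, 6), 0],
    6: [1, -3, F(5, 2), 0, F(-1, 2), 0, F(1, 42)],
}

def zH(k, x):
    # Hurwitz zeta at negative integers: zeta_H(-k, x) = -B_{k+1}(x)/(k+1)
    coeffs = BERNOULLI[k + 1]
    return -sum(c * x**(k + 1 - i) for i, c in enumerate(coeffs)) / (k + 1)

for a in [F(0), F(1, 2), F(3, 7)]:
    # eqn (60): S^1 x S^3 Casimir sum
    assert (zH(3, 1 + a) - 2*a*zH(2, 1 + a) + a**2*zH(1, 1 + a)
            == -a**4/12 + F(1, 120))
    # eqn (71): S^1 x S^5 Casimir sum
    x = 2 + a
    assert (F(1, 12)*zH(5, x) - a/3*zH(4, x) + (6*a**2 - 1)/F(12)*zH(3, x)
            - a*(2*a**2 - 1)/F(6)*zH(2, x) + a**2*(a**2 - 1)/F(12)*zH(1, x)
            == -(84*a**6 - 210*a**4 + 31)/F(30240))
```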
The version of the Kronecker (second) limit formula for the derivative with respect to \(s\) at \(s=0\), rather than for the value at \(s=1\) (cf. [29]), as worked out by Shintani [7] consists in the following closed expression for the \(\zeta\)-regularized product Footnote 4: The case \(w\in{\bf Z}+{\bf Z}\,\tau\) can be readily obtained by carefully suppressing the zero factor in the final expression. \[\xi^{\prime}(0,w,\tau)=-\log\left|\frac{\vartheta(w,\tau)}{\eta(\tau)}e^{\frac{i\pi w(w-\overline{w})}{\tau-\overline{\tau}}}\right|^{2} \tag{74}\] where Dedekind's eta and Jacobi's theta functions are given by \[\eta(\tau) = e^{i\pi\frac{\tau}{12}}\prod_{n=1}^{\infty}(1-e^{2\pi in\tau}), \tag{75}\] \[\vartheta(w,\tau) = 2e^{i\pi\frac{\tau}{6}}\sin(\pi w)\eta(\tau)\prod_{n=1}^{\infty}(1-e^{2\pi i(n\tau+w)})(1-e^{2\pi i(n\tau-w)}). \tag{76}\] Shintani's approach proceeded by first splitting up the double sum \[\xi(s,w,\tau) = \sum_{m,n\geq 0}\left\{\,|m+n\,\tau+w|^{-2s}+|m-n\,\tau+1-w|^{-2s}\right. \tag{77}\] \[\left.+|m-n\,\tau+w-\tau|^{-2s}+|m+n\,\tau+1-w+\tau|^{-2s}\right\}\,\] followed by splitting the regularized products on each term 'liberating' the Barnes' gamma factors and paying the price of the multiplicative anomaly5 Footnote 5: The relevant anomaly is \({\rm MA}(D_{(w,\tau)},D_{(z,\overline{\tau})})=\frac{\tau-\overline{\tau}}{\tau\,\overline{\tau}}\cdot B_{2}(\frac{\tau\,z-\overline{\tau}\,w}{\tau-\overline{\tau}})\cdot(\log\tau-\log\overline{\tau})\). The appropriate log-branch for opposite quasi-periods \(-\tau\) and \(-\overline{\tau}\) requires \(\log(-\tau)=-i\pi+\log(\tau)\) and \(\log(-\overline{\tau})=i\pi+\log(\overline{\tau})\), respectively.
\[-\xi^{\prime}(0,w,\tau) = -\log|\Gamma_{2}(w|1,\tau)\,\Gamma_{2}(1-w|1,-\tau)\,\Gamma_{2}( w-\tau|1,-\tau)\,\Gamma_{2}(1-w+\tau|1,\tau)|^{2} \tag{78}\] \[+{\rm MA}(D_{(w,\tau)},D_{(\overline{w},\overline{\tau})})+{\rm MA }(D_{(1-w,-\tau)},D_{(1-\overline{w},-\overline{\tau})})\] \[+{\rm MA}(D_{(w-\tau,-\tau)},D_{(\overline{w}-\overline{\tau},- \overline{\tau})})+{\rm MA}(D_{(1-w+\tau,\tau)},D_{(1-\overline{w}+\overline {\tau},\overline{\tau})})\] \[= -\log|\Gamma_{2}(w|1,\tau)\,\Gamma_{2}(1-w|1,-\tau)\,\Gamma_{2}( w-\tau|1,-\tau)\,\Gamma_{2}(1-w+\tau|1,\tau)|^{2}\] \[+\,i\pi\frac{\tau-\overline{\tau}}{\tau\,\overline{\tau}}\cdot B _{2}(\frac{\tau\,\overline{w}-\overline{\tau}\,w}{\tau-\overline{\tau}})\.\] The reflection formula for Barnes double gamma (see, e.g. proposition 6.1 in [9]) came into play here to further reduce to infinite convergent products (further recast in terms of Jacobi theta and Dedekind eta functions) \[-\log|\Gamma_{2}(w|1,\tau)\,\Gamma_{2}(1-w|1,-\tau)\,\Gamma_{2}(w- \tau|1,-\tau)\,\Gamma_{2}(1-w+\tau|1,\tau)|^{2}\] \[=\log\prod_{m\geq 0}|(1-e^{2\pi\,i(w+m\tau)})(1-e^{2\pi\,i(\tau-w+ m\tau)})|^{2}\] \[+\left\{i\pi\zeta_{2}(0,w|1,\tau)+i\pi\zeta_{2}(0,1-w+\tau|1,\tau )+c.c.\right\}. \tag{79}\] Let us now examine the consequence of having chosen a different splitting, locating the \(m=0\) term of the initial sum in the second and fourth terms \[\xi(s,w,\tau) = \sum_{m,n\geq 0}\left\{\,|m+n\,\tau+1+w|^{-2s}+|m-n\,\tau-w|^{-2s}\right. 
\tag{80}\] \[\left.+|m-n\,\tau+1+w-\tau|^{-2s}+|m+n\,\tau-w+\tau|^{-2s}\right\}\.\] For the \(\zeta\)-regularized product, after 'liberating' the Barnes' gamma factors and paying the price of the multiplicative anomaly, we now obtain \[-\xi^{\prime}(0,w,\tau) = -\log|\Gamma_{2}(1+w|1,\tau)\,\Gamma_{2}(-w|1,-\tau)\,\Gamma_{2} (1+w-\tau|1,-\tau)\,\Gamma_{2}(-w+\tau|1,\tau)|^{2} \tag{81}\] \[+{\rm MA}(D_{(1+w,\tau)},D_{(1+\overline{w},\overline{\tau})})+ {\rm MA}(D_{(-w,-\tau)},D_{(-\overline{w},-\overline{\tau})})\] \[+{\rm MA}(D_{(1+w-\tau,-\tau)},D_{(1+\overline{w}-\overline{\tau },-\overline{\tau})})+{\rm MA}(D_{(-w+\tau,\tau)},D_{(-\overline{w}+\overline {\tau},\overline{\tau})})\] \[= -\log|\Gamma_{2}(1+w|1,\tau)\,\Gamma_{2}(-w|1,-\tau)\,\Gamma_{2} (1+w-\tau|1,-\tau)\,\Gamma_{2}(-w+\tau|1,\tau)|^{2}\] \[+\,i\pi\frac{\tau-\overline{\tau}}{\tau\,\overline{\tau}}\cdot B_ {2}(-\frac{\tau\,\overline{w}-\overline{\tau}\,w}{\tau-\overline{\tau}})\.\] Notice the subtle difference in the multiplicative anomaly, the argument of the Bernoulli polynomial comes out with the opposite sign. The reflection formula for Barnes double gamma produces the very same infinite convergent products (further recast in terms of Jacobi theta and Dedekind eta functions) but different zeta prefactors \[-\log|\Gamma_{2}(1+w|1,\tau)\,\Gamma_{2}(-w|1,-\tau)\,\Gamma_{2}(1+w-\tau|1,- \tau)\,\Gamma_{2}(-w+\tau|1,\tau)|^{2}\] \[=\log\prod_{m\geq 0}|(1-e^{2\pi\,i(w+m\tau)})(1-e^{2\pi\,i(\tau-w+m\tau)})|^{2}\] \[+\left\{i\pi\zeta_{2}(0,1+w|1,\tau)+i\pi\zeta_{2}(0,-w+\tau|1,\tau)+c.c.\right\}. \tag{82}\] Had we been a little cavalier concerning the multiplicative anomaly and not included it in the first place, we would then have had a discrepancy for the \(\zeta\)-regularized products depending on the initial splitting6. 
The apparent discrepancy would be the difference between the Barnes zeta terms7 in the exponentials: Footnote 6: This is very reminiscent of the two different prescriptions in the ‘one-step’ regularization of the infinite products with two complex quasi-periods of [30]. In our particular case, the possible discrepancy in the final answer for any choice of the splitting is cured by the multiplicative anomaly. \[\{i\pi\zeta_{2}(0,w|1,\tau)+i\pi\zeta_{2}(0,1-w+\tau|1,\tau)+c.c.\}\] \[- \{i\pi\zeta_{2}(0,1+w|1,\tau)+i\pi\zeta_{2}(0,-w+\tau|1,\tau)+c.c.\}\] \[= -2i\pi\left\{\frac{w}{\tau}-\frac{\overline{w}}{\overline{\tau}} \right\}. \tag{83}\] However, by taking into consideration the additional term given by the multiplicative anomaly we have an additional contribution \[i\pi\frac{\tau-\overline{\tau}}{\tau\,\overline{\tau}}\cdot B_{ 2}(\frac{\tau\,\overline{w}-\overline{\tau}\,w}{\tau-\overline{\tau}})-\,i\pi \frac{\tau-\overline{\tau}}{\tau\,\overline{\tau}}\cdot B_{2}(-\frac{\tau\, \overline{w}-\overline{\tau}\,w}{\tau-\overline{\tau}}) \tag{84}\] \[= 2i\pi\left\{\frac{w}{\tau}-\frac{\overline{w}}{\overline{\tau}} \right\}\,\] that exactly cancels the mismatch and yields a unique answer for the \(\zeta\)-regularized product, i.e., the Kronecker second limit formula. In all, one can say that what saves the day is precisely the role of the multiplicative anomaly. 
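The cancellation between eqns. (83) and (84) rests only on the antisymmetric part of the Bernoulli polynomial, \(B_{2}(x)-B_{2}(-x)=-2x\), and can be confirmed numerically at an arbitrary point. A sketch (the sample values of \(\tau\) and \(w\) are ours, any point with \(\Im(\tau)>0\) works):

```python
import cmath

def B2(x):  # Bernoulli polynomial B_2
    return x*x - x + 1/6

tau, w = 0.3 + 1.7j, 0.45 - 0.25j          # arbitrary sample with Im(tau) > 0
tb, wb = tau.conjugate(), w.conjugate()
x = (tau*wb - tb*w) / (tau - tb)           # argument of B_2 in eqns (78) and (81)
pref = 1j * cmath.pi * (tau - tb) / (tau * tb)

anomaly_diff = pref * (B2(x) - B2(-x))      # eqn (84): reduces to 2*pi*i*(w/tau - wb/tb)
mismatch = -2j * cmath.pi * (w/tau - wb/tb) # eqn (83): difference of the Barnes zeta terms

assert abs(anomaly_diff + mismatch) < 1e-12  # exact cancellation, up to rounding
```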
## 7 Application: Casimir energy for GJMS operators It is known that there are two alternative factorizations of the GJMS operators on \(S^{1}_{\beta}\times S^{n-1}\) in terms of shifted conformal Laplacians (see, e.g., [18, 19, 31]), given by \[P_{2k}=\prod_{j=1}^{k}\left\{-\partial_{o}^{\,2}\,+\,(\sqrt{\Delta_{0}}+2j-k-1)^{2}\right\}=\prod_{j=1}^{k}\left\{-(i\sqrt{-\partial_{o}^{2}}+2j-k-1)^{\,2}\,+\,\Delta_{0}\right\} \tag{85}\] It is worth noticing that the factorization of the eigenvalues into linear factors is unique, but two different pairings lead to the two alternative quadratic factorizations into shifted conformal Laplacians. The conventional computation of the one-loop partition function, or functional determinant, yields different results for the Casimir energy under \(\zeta\)-regularization. However, in this section, we will show their equivalence once the multiplicative anomaly is properly taken into account. Let us first compute the accumulated Casimir energy for the shifted conformal Laplacian factors and then add up the corresponding multiplicative anomaly among them. The latter is given by the averaged multiplicative anomaly between all possible pairings, by the pairwise-accumulative property. ### Two-torus Factorization with spatial shift: The standard Casimir energy for the GJMS operator is simply the sum of the individual ones (eqn.48) \[E_{0}^{(k)}=\sum_{j=1}^{k}-\frac{6(2j-k-1)^{2}+1}{12}=-\frac{1}{6}k^{3}+\frac{1}{12}k. \tag{86}\] Notice the conflict for \(k>1\) with the universal relation for a two-dimensional CFT where \(E_{0}=-\frac{c}{12}\), since the central charge for the GJMS operators in 2D is \(k^{3}\) (see, e.g. [32, 33, 34]). Let us include now the correction to the Casimir energy coming from the multiplicative anomaly between the shifted conformal Laplacians. The multiplicative anomaly, being pairwise accumulative, equals the average among all pairs \[\frac{1}{k}\sum_{1\leq j,l\leq k}{\rm MA}(Y_{(2j-k-1)},Y_{(2l-k-1)}).
\tag{87}\] Plugging in eqn.45, we get \[{\rm MA}=\beta\cdot\left(\frac{1}{6}k^{3}-\frac{1}{6}k\right). \tag{88}\] For the GJMS operator, the improved Casimir energy becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}{\rm MA}=-\frac{k^{3}}{12}\, \tag{89}\] restoring the universality. Factorization with temperature shift: For the alternative factorization, the standard Casimir energy for the GJMS operator is again the sum of the individual ones (eqn.50) \[E_{0}^{(k)}=\sum_{j=1}^{k}\left(-\frac{1}{12}\right)=-\frac{1}{12}k. \tag{90}\] Again the result is in conflict for \(k>1\) with the expectation for a \(CFT_{2}\). Including now the corrections to the Casimir energy coming from the multiplicative anomaly between the shifted conformal Laplacians \[\frac{1}{k}\sum_{1\leq j,l\leq k}{\rm MA}(K_{(2j-k-1)},K_{(2l-k-1)})\, \tag{91}\] and plugging in eqn.46, we get instead \[{\rm MA}=-\beta\cdot\left(\frac{1}{6}k^{3}-\frac{1}{6}k\right). \tag{92}\] For the GJMS operator, the improved Casimir energy becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}{\rm MA}=-\frac{k^{3}}{12}, \tag{93}\] restoring the universality and the agreement between the two factorizations. We find out another remarkable fact, readily verified in this case by using eqn.42, \[\tilde{E}_{0}^{(k)}=\frac{1}{2\beta}\frac{1}{k}\sum_{1\leq j,l\leq k}{\rm MA}(D_{(2j-k-1)},\overline{D}_{(2l-k-1)}). \tag{94}\] The common value for the improved Casimir energy can also be obtained as the multiplicative anomaly among all linear factors that build up the GJMS operator, this decomposition being unique. ### \(S^{1}_{\beta}\times S^{3}\) Factorization with spatial shift: The standard Casimir energy for the GJMS operator (cf. [31]) is simply given by the sum of the individual ones (eqn.59) \[E_{0}^{(k)}=\sum_{j=1}^{k}-\frac{10\left(2j-k-1\right)^{4}-1}{240}=-\frac{k\left(6k^{4}-20k^{2}+11\right)}{720}.
\tag{95}\] We now include the corrections to the Casimir energy from the multiplicative anomaly between the shifted conformal Laplacians. The multiplicative anomaly, being the average among all pairs, equals \[\frac{1}{k}\sum_{1\leq j,l\leq k}\mathrm{MA}(Y_{(2j-k-1)},Y_{(2l-k-1)}). \tag{96}\] Plugging in eqn.56, we obtain \[\mathrm{MA}=\beta\cdot\frac{k\left(k^{2}-1\right)\left(4k^{2}-11\right)}{360}. \tag{97}\] For the GJMS operator, the improved Casimir energy becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}\mathrm{MA}=-\frac{k^{3}\left(2k^{2}-5\right)}{720}. \tag{98}\] Factorization with temperature shift: For the alternative factorization, the standard Casimir energy for the GJMS operator is again the sum of the individual ones (eqn.61) \[E_{0}^{(k)}=\sum_{j=1}^{k}\frac{1}{240}=\frac{k}{240}. \tag{99}\] We now include the corrections to the Casimir energy coming from the multiplicative anomaly between the shifted conformal Laplacians \[\frac{1}{k}\sum_{1\leq j,l\leq k}\mathrm{MA}(K_{(2j-k-1)},K_{(2l-k-1)}). \tag{100}\] Plugging in eqn.57, we obtain instead \[\mathrm{MA}=-\beta\frac{k\left(k^{2}-1\right)\left(2k^{2}-3\right)}{360}. \tag{101}\] For the GJMS operator, the improved Casimir energy then becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}\mathrm{MA}=-\frac{k^{3}\left(2k^{2}-5\right)}{720}\, \tag{102}\] attaining agreement between the two factorizations. Again this common value for the improved Casimir energy can also be obtained as the multiplicative anomaly (eqn.53) among all linear factors that build up the GJMS operator. To discuss yet another feature of this improved Casimir energy, we make a brief digression here.
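Before that digression, the two-torus and \(S^{1}_{\beta}\times S^{3}\) bookkeeping above (eqns. 86-102) can be verified by brute-force summation over the shifts \(2j-k-1\). A sketch with \(\beta\) set to 1 (the helper names are ours):

```python
from fractions import Fraction as F

def shifts(k):
    # the k shifts 2j - k - 1, j = 1..k, entering the factorization of P_2k
    return [2*j - k - 1 for j in range(1, k + 1)]

def avg_ma(k, pair_ma):
    # pairwise-accumulative anomaly: average of MA over all ordered pairs of shifts
    s = shifts(k)
    return sum(pair_ma(a, b) for a in s for b in s) / F(k)

for k in range(1, 8):
    sh = shifts(k)
    # two-torus, spatial shift: eqns (86) and (45) combine into the improved value (89)
    E0 = sum(-(6*a*a + 1) for a in sh) / F(12)
    MA = avg_ma(k, lambda a, b: F(a - b)**2 / 4)
    assert E0 + MA/2 == -F(k**3, 12)
    # two-torus, temperature shift: eqns (90) and (46) give the same improved value (93)
    MA = avg_ma(k, lambda a, b: -F(a - b)**2 / 4)
    assert -F(k, 12) + MA/2 == -F(k**3, 12)
    # S^1 x S^3, spatial shift: eqns (95) and (56) combine into eqn (98)
    E0 = sum(-(10*a**4 - 1) for a in sh) / F(240)
    MA = avg_ma(k, lambda a, b: F(a - b)**2 * (7*a*a + 10*a*b + 7*b*b) / 192)
    assert E0 + MA/2 == -F(k**3 * (2*k*k - 5), 720)
    # S^1 x S^3, temperature shift: eqns (99) and (57) combine into eqn (102)
    MA = avg_ma(k, lambda a, b: -F(a - b)**4 / 192)
    assert F(k, 240) + MA/2 == -F(k**3 * (2*k*k - 5), 720)
```

For \(k=2\) the spatial-shift branch reproduces the Paneitz values quoted below, \(E_{0}^{(2)}=-3/40\) and \(\tilde{E}_{0}^{(2)}=-1/30\).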
In a four-dimensional CFT, according to Cappelli and Coste [27], the Casimir energy is related to the coefficients of the trace anomaly \[E_{o}=\frac{3}{4}\left(a+\frac{1}{2}g\right)\, \tag{103}\] where \(a\) is the type-A central charge and \(g\) is the coefficient of the total derivative in \[(4\pi)^{2}\langle T\rangle=-a\,E_{4}+c\,W^{2}+g\,\nabla^{2}R. \tag{104}\] The total derivative term is what makes the Casimir energy scheme dependent. For free conformal fields, the trace anomaly can be read off from the heat kernel coefficients. For example, sticking to zeta-regularization, for the conformal Laplacian one finds \[[\,a\,,\,c\,,\,g\,]=[\,\frac{1}{360}\,,\,\frac{1}{120}\,,\,\frac{1}{180}\,] \tag{105}\] and the Casimir energy \(E_{0}=\frac{1}{240}\). For the Paneitz operator, in turn, \[[\,a\,,\,c\,,\,g\,]=[\,-\frac{7}{90}\,,\,-\frac{1}{15}\,,\,\frac{1}{15}\,] \tag{106}\] the standard Casimir energy \(E_{o}^{(2)}=-3/40\) fails to comply with the Cappelli-Coste relation, whereas it surprisingly holds for the improved Casimir energy \(\tilde{E}_{o}^{(2)}=-1/30\). Unfortunately, even the first few heat coefficients for higher-derivative operators remain largely unknown, and total derivative terms are usually discarded. One notable exception is Branson's computation for the Paneitz operator [35]8, from where we extracted the value \(g=1/15\) above9. Our prediction then is that the coefficient of the total derivative term in the heat kernel coefficient for GJMS operators is the one related to the improved Casimir energy via the Cappelli-Coste relation. Footnote 8: Actually, he reported for the heat coefficient \([\,(c-a)/2\,,\,-2a\,,\,-3g+2a\,]=[\,1/4\,,\,7\,,\,-16\,]/45\) in a basis where he traded the Euler density by his Q-curvature, which contains itself also a total derivative (cf. Lemma 2 and the subsequent evaluation at \(m=4\) in [35]).
To compare with the trace anomaly, the heat coefficient must be multiplied by two because of the quartic nature of the Paneitz operator. Footnote 9: The same value can also be worked out from the expression found by Gusynin [36] for quartic operators. We are grateful to L. Casarin for bringing this paper to our attention. ### \(S^{1}_{\beta}\times S^{5}\) Factorization with spatial shift: The standard Casimir energy for the GJMS operator is given by the sum of the individual ones (eqn.70) \[E_{0}^{(k)}=\sum_{j=1}^{k}-\frac{84(2j-k-1)^{6}-210(2j-k-1)^{4}+31}{60480}=-\frac{k\left(12k^{6}-126k^{4}+336k^{2}-191\right)}{60480}. \tag{107}\] Let us include now the corrections to the Casimir energy coming from the multiplicative anomaly between the shifted conformal Laplacians. The multiplicative anomaly, being pairwise accumulative, equals the average among all pairs \[\frac{1}{k}\sum_{1\leq j,l\leq k}{\rm MA}(Y_{(2j-k-1)},Y_{(2l-k-1)}). \tag{108}\] Plugging in eqn.67, we get \[{\rm MA}=\beta\cdot\frac{k\left(k^{2}-1\right)\left(9k^{4}-89k^{2}+191\right)}{30240}. \tag{109}\] The improved Casimir energy for the GJMS operator then becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}{\rm MA}=-\frac{k^{3}\left(3k^{4}-28k^{2}+56\right)}{60480}. \tag{110}\] Factorization with temperature shift: For the alternative factorization, the standard Casimir energy for the GJMS operator is again the sum of the individual ones (eqn.72) \[E_{0}^{(k)}=\sum_{j=1}^{k}-\frac{31}{60480}=-\frac{31k}{60480}. \tag{111}\] We now include the corrections to the Casimir energy coming from the multiplicative anomaly between the shifted conformal Laplacians \[\frac{1}{k}\sum_{1\leq j,l\leq k}{\rm MA}(K_{(2j-k-1)},K_{(2l-k-1)}). \tag{112}\] Plugging in eqn.68, we get instead \[{\rm MA}=-\frac{k\left(k^{2}-1\right)\left(3k^{4}-25k^{2}+31\right)\beta}{30240}.
\tag{113}\] For the GJMS operator, the improved Casimir energy becomes \[\tilde{E}_{0}^{(k)}=E_{0}^{(k)}+\frac{1}{2\beta}{\rm MA}=-\frac{k^{3}\left(3k^ {4}-28k^{2}+56\right)}{60480}\, \tag{114}\] achieving the agreement between the two factorizations. We stress again that this common value for the improved Casimir energy can also be obtained as the multiplicative anomaly (eqn.64) among all linear factors that build up the GJMS operator. ## 8 Summary and outlook We have succeeded in extending the Shintani-Mizuno expression for the multiplicative anomaly of linear factors and used it to reproduce known results for Laplacians on spheres. Regarding thermal partition functions for different factorizations of higher-derivative operators, we have shown they agree once the multiplicative anomaly is properly included. This yields a modified (_improved_) Casimir energy that dominates the zero temperature limit. In addition, we have found out that the standard Casimir energy for (shifted) Laplacians precisely coincides with the multiplicative anomaly among the linear factors 10. For GJMS operators, the improved Casimir energy restores the universal relation with the central charge in two dimensions, whereas in four dimensions it reconciles with the Cappelli-Coste relation for the Paneitz operator. Although established for the case of scalar Laplacians and their conformal powers (GJMS operators), this may well hold for Laplacians and higher-derivative operators on vector, tensor, and even higher-spin fields. Regarding the ambiguity of the Casimir energy in four (and higher even) dimensions, it can be traced back to local finite counterterms which are the conformal primitives of the trivial total derivatives or trivial anomalies in the trace anomaly. 
On the conformally flat \(S^{1}_{\beta}\times S^{n-1}\) backgrounds, the universal part of the Casimir energy that depends on the type-A central charge is already known [40], but this is only valid in a particular regularization scheme where all trivial divergences in the trace anomaly are discarded. This scheme certainly differs from \(\zeta\) regularization, which produces a particular combination of trivial total derivatives. In 4D the ambiguity is controlled by the coefficient \(g\) of \(\nabla^{2}R\) in the trace anomaly, as shown by Cappelli and Coste [27] \[E_{o}=\frac{3}{4}a+\frac{3}{8}g. \tag{115}\] In 6D things are more complicated: there is a basis of six independent trivial anomalies [41], and the universal part obtained by Herzog and Huang [40] must be supplemented by the coefficients of these trivial total derivatives. Prompted by the result of Cappelli and Coste in 4D, we have obtained the following extension to 6D (further details11 will be given elsewhere [44]) Footnote 11: We have verified the validity of this expression in all 6D cases considered in [42], where the coefficients \(g\)'s were computed via heat kernel, against the Casimir energies computed in [43]. \[E_{o}=-\frac{15}{8}a-\frac{5}{12}\left(g_{5}+\frac{1}{4}g_{7}+\frac{1}{2}g_{8}-10g_{9}+g_{10}\right)\, \tag{116}\] where \(a\) is the 6D type-A trace anomaly coefficient and the \(g\)'s are the coefficients of the six independent trivial anomalies \(M_{5},M_{6},M_{7},M_{8},M_{9}\) and \(M_{10}\) of [41]. Alternatively, in a 6D conformally flat background, the above basis is redundant and one can simplify further to get, in terms of the Schouten scalar \(J\) and the Schouten tensor \(V\), Branson's basis (see, e.g. [16]) for trivial total derivatives \(\nabla^{2}\nabla^{2}J\), \(\nabla^{2}J^{2}\) and \(\nabla^{2}|V|^{2}\) with coefficients \(\gamma_{1},\gamma_{2}\) and \(\gamma_{3}\), respectively, \[E_{o}=-\frac{15}{8}a-\frac{1}{192}\left(8\gamma_{1}-8\gamma_{2}+11\gamma_{3}\right).
\tag{117}\] The matching we have found between the multiplicative anomaly and Casimir energy in 4D and 6D holds whenever the Casimir energy is computed in \(\zeta\) regularization and the trivial total derivative coefficients are obtained as well via heat kernel in \(\zeta\) regularization. For GJMS operators, in particular, the heat kernel computation should produce coefficients \(g\)'s that match the improved Casimir energy. This claim remains a prediction for operators other than the conformal Laplacian or Yamabe operator, except for the Paneitz operator in 4D where the explicit coefficients have been worked out and the matching, via the Cappelli-Coste relation, was successfully verified. As for the physical interpretation, the Casimir energy in 4D and 6D remains ambiguous due to the above-mentioned trivial total derivative terms in the trace anomaly. We emphasize that the equivalence we found applies to a particular regularization scheme (\(\zeta\) regularization), so that the inclusion of the multiplicative anomaly can be traced back to the addition of a precise combination of finite local counterterms. There is certainly no new physics in the inclusion of the multiplicative anomaly; however, if one sticks to \(\zeta\) regularization then consistency and conformity with trivial total derivatives, regardless of factorization choices, demands a proper account of the multiplicative anomaly. There are several instances where the role of the multiplicative anomaly seems worth revisiting. A prominent example is the supersymmetric version of the Casimir energy [38] that ought to be _physical_ and connected with the central charges of the CFT. A multiplicative anomaly might turn up in the traditional manipulation of one-loop functional determinants, as shown in Shintani's derivation of the Kronecker limit formula (Section 6) and the example in Appendix B, as well as with the inclusion of higher-derivative multiplets [39].
Finally, it seems natural to ask whether the multiplicative anomaly and its connection with the CFT Casimir energy may find its place in a dual holographic counterpart. **Acknowledgments** We thank F. Bastianelli, L. Casarin, J.S. Dowker, A. Monin, and especially E. Friedman for valuable conversations and comments. We are also grateful to the anonymous referee for helpful suggestions and clarifications. This work was partially funded through FONDECYT-Chile 1220335. D.E.D. wishes to salute Harald Dorn and Hans-Jorg Otto on the occasion of the 30th anniversary of the DOZZ formula. ## Appendix A Bernoulli polynomials of higher degree Let us write down the explicit form of the first few Bernoulli polynomials of higher degree that enter the integral formula. The generating function \[\frac{t^{n}\,e^{-w\,t}}{\prod_{i=1}^{n}\left\{1-e^{-a_{i}\,t}\right\}}=\sum_{l= 0}^{\infty}B_{n,l}(w|\vec{a})\,\frac{t^{l}}{l!}, \tag{10}\] determines the polynomial \(B_{n,l}(w|\vec{a})\). The explicit expressions up to order five are the following \[B_{1,1}(w|a) = \frac{1}{2}-\frac{w}{\sigma_{1}}\, \tag{11}\] \[B_{2,2}(w|a_{1},a_{2}) = \frac{\sigma_{1}^{2}+\sigma_{2}}{6\sigma_{2}}-\frac{\sigma_{1}}{ \sigma_{2}}\,w+\frac{w^{2}}{\sigma_{2}}\,\] (12) \[B_{3,3}(w|a_{1},a_{2},a_{3}) = \frac{\sigma_{1}\sigma_{2}}{4\sigma_{3}}-\frac{\sigma_{1}^{2}+ \sigma_{2}}{2\sigma_{3}}\,w+\frac{3\sigma_{1}}{2\sigma_{3}}\,w^{2}-\frac{w^{ 3}}{\sigma_{3}},\] (13) \[B_{4,4}(w|a_{1},a_{2},a_{3},a_{4}) = \frac{4\sigma_{1}^{2}\sigma_{2}+3\sigma_{2}^{2}-\sigma_{1}^{4}+ \sigma_{1}\sigma_{3}-\sigma_{4}}{30\sigma_{4}}-\frac{\sigma_{1}\sigma_{2}}{ \sigma_{4}}\,w+\frac{\sigma_{1}^{2}+\sigma_{2}}{\sigma_{4}}\,w^{2}\] (14) \[- \frac{2\sigma_{1}}{\sigma_{4}}\,w^{3}+\frac{w^{4}}{\sigma_{4}}\,\] \[B_{5,5}(w|a_{1},a_{2},a_{3},a_{4},a_{5}) = -\frac{\sigma_{1}(\sigma_{1}^{2}\sigma_{2}-3\sigma_{2}^{2}+ \sigma_{1}\sigma_{3}-\sigma_{4})}{12\sigma_{5}}\] (15) \[- \frac{4\sigma_{1}^{2}\sigma_{2}+3\sigma_{2}^{2}-\sigma_{1}^{4}+ 
\sigma_{1}\sigma_{3}-\sigma_{4}}{6\sigma_{5}}\,w\] \[+ \frac{5\sigma_{1}\sigma_{2}}{2\sigma_{5}}\,w^{2}-5\frac{\sigma_{1}^{2}+\sigma_{2}}{3\sigma_{5}}\,w^{3}+\frac{5\sigma_{1}}{2\sigma_{5}}\,w^{4}-\frac{w^{5}}{\sigma_{5}}\.\] where the \(\sigma\)'s are the elementary symmetric functions \[\sigma_{k}=\sum_{1\leq r_{1}<r_{2}<\ldots r_{k}\leq n}a_{r_{1}}a_{r_{2}}\ldots a_{r_{k}}. \tag{16}\] ## Appendix B Barnes multiple gammas and reflection formulas Let us try to emulate Shintani's derivation of the Kronecker limit formula in the case of two quasi-complex periods and examine the role of the multiplicative anomaly. Consider the zeta function \[\xi(s,w|\tau,\sigma)=\sum_{m\in{\bf Z};n,l\geq 0}|m+n\,\tau+l\,\sigma+w|^{-2s}\, \tag{17}\] with \(\tau,\sigma\) and \(w\) complex, assuming \(\Im(\tau)>0,\Im(\sigma)>0\) and \(m+n\,\tau+l\,\sigma+w\neq 0\) to avoid a null term. Splitting up the sum over the integer \(m\) as \[\xi(s,w|\tau,\sigma)=\sum_{m,n,l\geq 0}\left\{\,|m+n\,\tau+l\,\sigma+w|^{-2s}+|m-n\,\tau-l\,\sigma+1-w|^{-2s}\right\}, \tag{18}\] the regularized products can then be written in terms of Barnes gamma factors by paying the price of the multiplicative anomaly12 Footnote 12: The multiplicative anomaly \({\rm MA}(D_{(w,\tau,\sigma)},D_{(\overline{w},\overline{\tau},\overline{\sigma})})\), as computed with the generalized Shintani-Mizuno formula, is given by \(\frac{\tau(\overline{w}-\frac{\overline{\sigma}+1}{2})-(w-\frac{\sigma+1}{2})\overline{\tau}}{6\tau\overline{\tau}(\tau-\overline{\tau})(\tau\overline{\sigma}-\sigma\overline{\tau})}\left\{[\tau(\overline{w}-\frac{\overline{\sigma}+1}{2})-(w-\frac{\sigma+1}{2})\overline{\tau}]^{2}-\frac{(\tau\overline{\sigma}-\sigma\overline{\tau})^{2}}{4}-\frac{(\tau-\overline{\tau})^{2}}{4}\right\}(\log\tau\,-\log\overline{\tau})\,+\,\left\{\left(\frac{\tau}{\overline{\tau}}\right)\leftrightarrow\left(\frac{\sigma}{\overline{\sigma}}\right)\right\}\) \[-\xi^{\prime}(0,w|\tau,\sigma) = 
-\log|\Gamma_{3}(w|1,\tau,\sigma)\,\Gamma_{3}(1-w|1,-\tau,-\sigma)|^{2}\] (B.3) \[+{\rm MA}(D_{(w,\tau,\sigma)},D_{(\overline{w},\overline{\tau},\overline{\sigma})})+{\rm MA}(D_{(1-w,-\tau,-\sigma)},D_{(1-\overline{w},-\overline{\tau},-\overline{\sigma})})\] \[= -\log|\Gamma_{3}(w|1,\tau,\sigma)\,\Gamma_{3}(1-w|1,-\tau,-\sigma)|^{2}\] \[-2\pi i\,\frac{\tau(\overline{w}-\frac{\overline{\sigma}+1}{2})-(w-\frac{\sigma+1}{2})\overline{\tau}}{6\tau\overline{\tau}(\tau-\overline{\tau})(\tau\overline{\sigma}-\sigma\overline{\tau})}\left\{\left[\tau(\overline{w}-\frac{\overline{\sigma}+1}{2})-(w-\frac{\sigma+1}{2})\overline{\tau}\right]^{2}\right.\] \[\left.-\frac{(\tau\overline{\sigma}-\sigma\overline{\tau})^{2}}{4}-\frac{(\tau-\overline{\tau})^{2}}{4}\right\}+\,\left\{\left(\frac{\tau}{\overline{\tau}}\right)\leftrightarrow\left(\frac{\sigma}{\overline{\sigma}}\right)\right\}.\] The reflection formula for Barnes double gamma (see, e.g. proposition 6.1 in [9]) comes into play here to further reduce to infinite convergent products \[-\log|\Gamma_{3}(w|1,\tau,\sigma)\,\Gamma_{3}(1-w|1,-\tau,-\sigma)|^{2}\] (B.4) \[=\log\prod_{m,n\geq 0}|1-e^{2\pi\,i(w+m\tau+n\sigma)}|^{2}+\left\{i\pi\zeta_{3}(0,w|1,\tau,\sigma)+c.c.\right\}\.\] Let us now examine the consequence of having chosen a different splitting, locating the \(m=0\) term of the initial sum in the second term \[\xi(s,w|\tau,\sigma) = \sum_{m,n,l\geq 0}\left\{\,|m+n\,\tau+l\,\sigma+1+w|^{-2s}+|m-n\,\tau-l\,\sigma-w|^{-2s}\right\}\.\] (B.5) For the \(\zeta\)-regularized product, after 'liberating' the Barnes' gamma factors and paying the price of the multiplicative anomaly, we obtain now \[-\xi^{\prime}(0,w|\tau,\sigma) = -\log|\Gamma_{3}(1+w|1,\tau,\sigma)\,\Gamma_{3}(-w|1,-\tau,-\sigma)|^{2}\] (B.6) \[+{\rm MA}(D_{(1+w,\tau,\sigma)},D_{(1+\overline{w},\overline{\tau},\overline{\sigma})})+{\rm MA}(D_{(-w,-\tau,-\sigma)},D_{(-\overline{w},-\overline{\tau},-\overline{\sigma})})\] \[= 
-\log|\Gamma_{3}(1+w|1,\tau,\sigma)\,\Gamma_{3}(-w|1,-\tau,-\sigma)|^{2}\] \[-2\pi i\,\frac{\tau(\overline{w}-\frac{\overline{\sigma}-1}{2})-(w-\frac{\sigma-1}{2})\overline{\tau}}{6\tau\overline{\tau}(\tau-\overline{\tau})(\tau\overline{\sigma}-\sigma\overline{\tau})}\left\{\left[\tau(\overline{w}-\frac{\overline{\sigma}-1}{2})-(w-\frac{\sigma-1}{2})\overline{\tau}\right]^{2}\right.\] \[- \left.\frac{(\tau\overline{\sigma}-\sigma\overline{\tau})^{2}}{4}-\frac{(\tau-\overline{\tau})^{2}}{4}\right\}+\,\left\{\left(\frac{\tau}{\overline{\tau}}\right)\leftrightarrow\left(\frac{\sigma}{\overline{\sigma}}\right)\right\}\] Notice the subtle difference in the multiplicative anomaly. The reflection formula for Barnes' double-\(\Gamma\) produces the very same infinite convergent products, but different exponential prefactors \[-\log|\Gamma_{3}(1+w|1,\tau,\sigma)\,\Gamma_{3}(-w|1,-\tau,-\sigma)|^{2}\] (B.7) \[=\log\prod_{m,n\geq 0}|1-e^{2\pi\,i(w+m\tau+n\sigma)}|^{2}+\left\{i\pi\zeta_{3}(0,1+w|1,\tau,\sigma)+c.c.\right\}\.\] The apparent discrepancy would be the difference between the Barnes zeta terms13 in the exponentials: Footnote 13: The difference is again easily computed due to the recurrence relation for Barnes multiple \(\zeta\)'s (cf. eqn.1.2 in [21]) \(\zeta_{3}(0,w|1,\tau,\sigma)-\zeta_{3}(0,1+w|1,\tau,\sigma)=\zeta_{2}(0,w|\tau,\sigma)=\frac{w^{2}}{2\tau\sigma}-\frac{\tau+\sigma}{2\tau\sigma}w+\frac{\tau^{2}+\sigma^{2}+3\tau\sigma}{12\tau\sigma}\). 
\[\left\{i\pi\zeta_{3}(0,w|1,\tau,\sigma)+c.c.\right\}-\left\{i\pi\zeta_{3}(0,1+w|1,\tau,\sigma)+c.c.\right\}\] (B.8) \[= i\pi\left\{\frac{w^{2}}{2\tau\sigma}-\frac{\overline{w}^{2}}{2\overline{\tau}\overline{\sigma}}-\frac{\tau+\sigma}{2\tau\sigma}w+\frac{\overline{\tau}+\overline{\sigma}}{2\overline{\tau}\overline{\sigma}}\overline{w}+\frac{\tau^{2}+\sigma^{2}}{12\tau\sigma}-\frac{\overline{\tau}^{2}+\overline{\sigma}^{2}}{12\overline{\tau}\overline{\sigma}}\right\}\.\] Nonetheless, taking into consideration the multiplicative anomaly yields an additional contribution that exactly cancels the mismatch and gives a unique answer for the \(\zeta\)-regularized product.
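As a numerical cross-check of the explicit expressions listed in Appendix A, the stdlib-only sketch below (truncation order and test values are arbitrary choices of ours) expands the generating function defining \(B_{n,l}(w|\vec{a})\) in exact rational arithmetic and recovers the quoted \(B_{2,2}\).

```python
from fractions import Fraction as F
from math import comb, factorial

N = 6  # truncation order in t

def mul(p, q):
    # product of two truncated power series (lists of coefficients)
    r = [F(0)] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

def bern1(k):
    # Bernoulli numbers via sum_{j<=m} C(m+1, j) B_j = 0; B_k(1) = B_k (+1 if k == 1)
    B = [F(1)]
    for m in range(1, k + 1):
        B.append(-sum(F(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1))
    return B[k] + (1 if k == 1 else 0)

def core(a):
    # a*t / (1 - e^{-a t}) = sum_k B_k(1) (a t)^k / k!
    return [bern1(k) * a**k / factorial(k) for k in range(N)]

def exp_series(c):
    return [c**k / F(factorial(k)) for k in range(N)]

w, a1, a2 = F(1, 3), F(2), F(5, 7)
# t^2 e^{-w t} / prod_i (1 - e^{-a_i t}) = (1/(a1 a2)) e^{-w t} prod_i [a_i t/(1 - e^{-a_i t})]
lhs = mul(mul(core(a1), core(a2)), exp_series(-w))
B22 = 2 * lhs[2] / (a1 * a2)   # 2! times the coefficient of t^2

s1, s2 = a1 + a2, a1 * a2
expected = (s1**2 + s2) / (6 * s2) - s1 / s2 * w + w**2 / s2
assert B22 == expected
```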
2309.10363
Quantum information spreading and scrambling in a distributed quantum network: A Hasse/Lamport diagrammatic approach
Large-scale quantum networks, known as quantum internet, hold great promises for advanced distributed quantum computing and long-distance quantum communication. It is essential to have a proper theoretical analysis of the quantum network and explore new applications and protocols that justify building such an extensive network. We propose a novel diagrammatic way of visualizing information flow dynamics within the quantum network, which preserves the causal relationship between different events at different nodes. This facilitates synchronization among network nodes, studies the error propagation, and allows for tracking valuable quantum resources. Additionally, we propose a quantum information scrambling protocol, where a specific node scrambles secret quantum information across the entire network. This protocol ensures that a malicious party would need access to a significant subset of the network to retrieve the information.
Kiran Adhikari, Christian Deppe
2023-09-19T06:48:42Z
http://arxiv.org/abs/2309.10363v1
Quantum information spreading and scrambling in a distributed quantum network: A Hasse/Lamport diagrammatic approach ###### Abstract Large-scale quantum networks, known as quantum internet, hold great promises for advanced distributed quantum computing and long-distance quantum communication. It is essential to have a proper theoretical analysis of the quantum network and explore new applications and protocols that justify building such an extensive network. We propose a novel diagrammatic way of visualizing information flow dynamics within the quantum network, which preserves the causal relationship between different events at different nodes. This facilitates synchronization among network nodes, studies the error propagation, and allows for tracking valuable quantum resources. Additionally, we propose a quantum information scrambling protocol, where a specific node scrambles secret quantum information across the entire network. This protocol ensures that a malicious party would need access to a significant subset of the network to retrieve the information. Hasse diagram, Lamport diagram, quantum information scrambling, quantum secret sharing, synchronization ## I Introduction Recently, there has been a growing interest from both experimental and theoretical perspectives in building large-scale quantum networks called quantum internet [1, 2, 3, 4, 5]. Quantum internet offers exciting opportunities for advanced quantum information processing, distributed quantum computing, quantum metrology, and long-distance quantum communication [6, 7, 8, 9, 10, 11, 12, 13, 14]. Because of technological advancement [15, 16, 17, 18, 19], it becomes crucial to explore these networks from the theoretical perspective as well [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. By studying information flows within these networks, we can effectively track the information dynamics and error propagation and ensure seamless synchronization among network nodes. 
At the same time, finding new protocols and applications of such distributed quantum networks justifies building a quantum internet in the first place. Therefore, the theoretical analysis of quantum network design and the discovery of novel applications of such networks are of fundamental importance in large-scale quantum networks. Quantum information scrambling has been studied extensively in the context of quantum many-body systems [32, 33, 34]. However, we are unaware of any connections between quantum information scrambling and distributed quantum networks. In this context, we propose a novel protocol called the quantum information scrambling protocol, where the information from one node gets scrambled across the entire network such that an external malicious party cannot reconstruct it from having control of only a fraction of the nodes in the network. In Section II, we propose a novel way of diagrammatically showing the flow of quantum information in the network. In Section III, we present an application of a quantum network called the quantum information scrambling protocol, which utilizes the diagrammatic representation of Section II. ### _Definition and conventions_ A quantum network has two fundamental elements: nodes processing quantum information and edges representing quantum channels. These nodes can be any entity able to perform local operations (LO) as dictated by the principles of quantum mechanics, for example, a client, a quantum repeater, a universal quantum computer, or an extremely powerful quantum data center (QDC) with multiple universal quantum computers. Conversely, a quantum channel is an edge between two arbitrary nodes, allowing the exchange of a quantum system between them. We can represent the structure of a quantum network by a graph \(G=(V,E)\), as shown in Figure 1, with a set \(V\) of vertices and a set \(E\) of directed edges. 
Vertices \(V=\{P_{1},P_{2},\ldots,P_{N}\}\) indicate \(N\) nodes, with each vertex \(P_{i}\) representing a classical or quantum information processing node. A directed edge \(e=v\to v^{\prime}\) represents a quantum channel \(\mathcal{N}^{v\to v^{\prime}}\). A set of cuts \(\partial V\) among the edges, at a certain time \(t\), can divide the quantum network into \(L\) different non-overlapping subnetworks \(V_{1}^{t}=\{P_{1},\ldots,P_{j}\},\ldots,V_{i}^{t}=\{P_{j+1},\ldots,P_{k}\},\ldots,V_{L}^{t}=\{P_{l+1},\ldots,P_{N}\}\), where each \(V_{i}^{t}\) is a subset of the graph at time \(t\). Often, we will drop the superscript \(t\) for convenience. Furthermore, a quantum network \(G\) is denoted by \(G_{\phi}\) if every edge \(e\in G_{\phi}\) represents a maximally entangled state. If the entanglement in a \(G_{\phi}\) network is a free resource, we denote it by \(G_{\phi}^{\infty}\). Mathematically, a function of an arbitrary node \(X\) can be represented by a local operation which, after receiving a quantum state \(\rho\) as input, returns a quantum state \(\sigma_{k}\) with probability \(p_{k}\) as output, where \(\sigma_{k}:=M_{k}^{X}\rho(M_{k}^{X})^{\dagger}/p_{k}\) with Kraus operators \(M_{k}^{X}\) satisfying \(\sum_{k}(M_{k}^{X})^{\dagger}M_{k}^{X}=1^{X}\) and \(p_{k}=\mathrm{Tr}[(M_{k}^{X})^{\dagger}M_{k}^{X}\rho]\). Suppose there are \(N\) nodes in a quantum network, and we label the \(i\)th node by \(P_{i}\). A quantum state in such a network lives in a Hilbert space: \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\cdots\otimes\mathcal{H}_{N}\) where \(\mathcal{H}_{i}\simeq\mathcal{C}^{d_{i}}\), \(d_{i}<\infty\), corresponds to the Hilbert space of node \(P_{i}\). 
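To make the graph bookkeeping concrete, here is a minimal, hypothetical sketch (node names and the choice of cut are ours) of a network \(G=(V,E)\) and of how a cut splits it into non-overlapping subnetworks:

```python
# a toy quantum network G = (V, E): vertices are nodes, directed edges are channels
V = ["P1", "P2", "P3", "P4", "P5"]
E = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P5")]

# a cut removes the channel P2 -> P3
cut = {("P2", "P3")}
remaining = [e for e in E if e not in cut]

def component(start, edges):
    # undirected reachability over the remaining channels
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in edges:
            for u, x in ((a, b), (b, a)):
                if u == v and x not in seen:
                    seen.add(x)
                    stack.append(x)
    return seen

# the cut yields two non-overlapping subnetworks
assert component("P1", remaining) == {"P1", "P2"}
assert component("P3", remaining) == {"P3", "P4", "P5"}
```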
Furthermore, a quantum channel \(\mathcal{N}^{X\to Y}\) from a node \(X\) to a node \(Y\) is a completely-positive trace-preserving (CPTP) map \(\mathcal{N}^{X\to Y}(\rho):=\mathrm{Tr}_{E^{\prime}}[U^{XE\to YE^{\prime}}(\rho\otimes|0\rangle\,\langle 0|)(U^{XE\to YE^{\prime}})^{\dagger}]\), where \(U^{XE\to YE^{\prime}}\) is a unitary operator from the Hilbert space \(\mathcal{H}^{X}\otimes\mathcal{H}^{E}\) to \(\mathcal{H}^{Y}\otimes\mathcal{H}^{E^{\prime}}\) and \(\left|0\right\rangle^{E}\) is a state of the auxiliary system \(E\). We will also need the concept of system and subsystem sizes to use results such as the decoupling theorem [35] of quantum information theory. The size of the node \(P_{i}\) is denoted by \(|P_{i}|\), and it is related to the dimension by \(d_{P_{i}}=2^{|P_{i}|}\). If, in an arbitrary protocol, each node \(P_{i}\) dedicates \(n_{P_{i}}\) qubits, then the size of that node would be \(|P_{i}|=n_{P_{i}}\) and the dimension would be \(d_{P_{i}}=2^{|P_{i}|}=2^{n_{P_{i}}}\). Then, the size of subnetwork \(V_{i}\) is \(|V_{i}|=\sum_{P_{j}\in V_{i}}|P_{j}|\), and its dimension is \(d_{V_{i}}=2^{|V_{i}|}\). The size of the entire network \(G=(V,E)\) with \(L\) different subnetworks would be \(|V|=\sum_{i=1}^{L}|V_{i}|=\sum_{i=1}^{N}|P_{i}|\), with the network's dimension being \(d=2^{|V|}\). For example, if there are \(n\) qubits in total participating in some arbitrary protocol in a quantum network, then the network size would be just \(n\), and the dimension \(2^{n}\). One can also generalize it to the case of qudits. One primary goal of introducing the diagrammatic approach is to track various resources spent in the quantum network while running some arbitrary protocol. In resource-theoretic language [36], \([c\to c]\) would denote the communication resource of a noiseless classical bit channel, \([q\to q]\) a noiseless qubit channel, \([cc]\) a shared, non-local bit of shared randomness and \([qq]\) a shared, noiseless EPR pair. 
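The size and dimension bookkeeping can be written down directly; a toy sketch with invented qubit counts:

```python
# qubit budgets per node: |P_i| = n_{P_i}, so d_{P_i} = 2^{|P_i|}
node_qubits = {"P1": 2, "P2": 3, "P3": 1, "P4": 4}
subnetworks = {"V1": ["P1", "P2"], "V2": ["P3", "P4"]}

def size(nodes):
    # |V_i| = sum over P_j in V_i of |P_j|
    return sum(node_qubits[p] for p in nodes)

assert size(subnetworks["V1"]) == 5          # |V_1|
assert 2 ** size(subnetworks["V1"]) == 32    # d_{V_1} = 2^{|V_1|}
total = size(node_qubits)                    # |V|: iterating a dict yields its keys
assert total == size(subnetworks["V1"]) + size(subnetworks["V2"]) == 10
assert 2 ** total == 1024                    # network dimension d = 2^{|V|}
```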
Furthermore, other than having two-party quantum resources such as an EPR pair, it is possible to have multi-party quantum resources in a network. For this, we introduce the notation \(((x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{i},y_{i}),\ldots,(x_{R},y_{R}))\), where \(R\) indicates the number of species of resources and \(x_{i}\) indicates the amount of the \(y_{i}\) species. Of course, the definition of a species depends upon the problem at hand, and a resource of any species can be shared through the network. For example, \(((3,GHZ),(4,W))\) would imply that we have 3 GHZ states and 4 W states shared among three nodes, with \(R=2\). When the participating nodes need to be specified, we add one more parameter to the notation, \((x_{i},y_{i},P_{i}P_{j}\ldots P_{k})\), where \(P_{i}P_{j}\ldots P_{k}\) denotes the nodes on which the resource \(y_{i}\) is shared. _Quantum Information Scrambling:_ The phenomenon of scrambling of quantum information has recently emerged as a fundamental concept across multiple disciplines, including many-body physics, quantum chaos, complexity theory, and black holes [37, 38, 32, 33, 39]. Here, we present a concise overview of information scrambling; more extensive insights can be found in the literature [32, 40]. Fig. 1: Quantum network with subnetworks: A network is represented by a graph \(G=(V,E)\) where vertices represent quantum information processing nodes and edges represent quantum channels. A subnetwork \(V_{A}=\{P_{0},P_{1},P_{2},P_{3},P_{4},P_{5}\}\) is shown with a red shade while subnetwork \(V_{B}=\{P_{6},P_{7},P_{8},P_{9},P_{10}\}\) is represented with green shade. We can roughly understand quantum information scrambling as the thermalization of quantum information; it is thus crucial in exploring how quantum information spreads in quantum systems. To grasp the essence of it, it is easier to express it using the framework of quantum circuits. 
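The resource notation above suggests a simple ledger; a hypothetical sketch of ours (the API names are not from the paper) that tracks multi-party resources keyed by species and participating nodes:

```python
# a tiny ledger for shared resources, keyed by (species, participating nodes)
ledger = {}

def add(species, nodes, amount=1):
    key = (species, frozenset(nodes))
    ledger[key] = ledger.get(key, 0) + amount

def consume(species, nodes, amount=1):
    key = (species, frozenset(nodes))
    assert ledger.get(key, 0) >= amount, "resource not available"
    ledger[key] -= amount

# ((3, GHZ), (4, W)) shared among three nodes, plus one [qq] between P1 and P2
add("GHZ", ["P1", "P2", "P3"], 3)
add("W", ["P1", "P2", "P3"], 4)
add("EPR", ["P1", "P2"])

consume("EPR", ["P2", "P1"])  # e.g. a teleportation burns the EPR pair
assert ledger[("EPR", frozenset({"P1", "P2"}))] == 0
assert ledger[("GHZ", frozenset({"P1", "P2", "P3"}))] == 3
```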
Suppose we have a unitary \(U_{AB}:A\otimes B\to C\otimes D\), and an external system \(R^{\prime}\) purifying \(A\). The unitary is of scrambling nature if \(R^{\prime}\) and \(C\) are almost independent for all or most \(C\) of size smaller than some parameter \(l\). How one selects a subset \(C\) from the composite system \(CD\) becomes arbitrary as long as it remains below \(l\). Mathematically, this implies that an arbitrary subsystem \(C\) of size less than \(l\) approximately decouples from \(R^{\prime}\): \(\rho_{R^{\prime}C}\approx\rho_{R^{\prime}}\otimes\rho_{C}\). Hence, the outcome of any measurement on \(C\) is statistically independent of any measurement on \(R^{\prime}\) and can provide no information about \(R^{\prime}\). The parameter \(l\) depends upon the purity of \(B\). It is possible to change the purity of \(B\) by entangling \(B\) with some external system \(B^{\prime}\). The entire setup is depicted in Figure 2. ## II Diagrammatic representation of Quantum information spreading In this section, we propose a novel way of diagrammatically representing the information flow within a quantum network motivated by the formalism of Hasse diagrams [41] and Lamport diagrams [42]. Causality in a single node is trivial, as one event is the cause for the next. But it quickly gets complicated for cooperating nodes that exchange information to solve a common task. In the realm of quantum mechanics, preserving causal relationships can be even trickier due to the presence of entanglement. Nonetheless, despite this intricacy, a causal framework should persist due to the no-signaling theorem [36], which states the impossibility of sending information using entanglement alone. Furthermore, if we have sufficient non-local quantum resources like entanglement, all non-local quantum computations can be shifted to local quantum computations and classical communication. 
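This causal bookkeeping — which events at which nodes can influence which — can be mechanized; a minimal sketch (event labels (node, step) and the edge set are an invented example) that computes causal futures by transitive closure and checks that the relation has no loops:

```python
# events are (node, step); an edge x -> y means "x can causally affect y"
edges = {
    ("S", 1): [("S", 2), ("R", 1)],   # a message from S to R
    ("R", 1): [("R", 2)],             # local succession at R
    ("S", 2): [("S", 3)],
}

def causal_future(x):
    # transitive closure: everything reachable from x
    seen, stack = set(), [x]
    while stack:
        for y in edges.get(stack.pop(), []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def acyclic():
    # no event may lie in its own causal future (no loops in time)
    events = set(edges) | {y for ys in edges.values() for y in ys}
    return all(x not in causal_future(x) for x in events)

assert acyclic()
assert ("R", 2) in causal_future(("S", 1))       # S's message reaches R's future
assert ("S", 3) not in causal_future(("R", 1))   # R cannot affect S without a message
```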
With this, most multi-party quantum protocols can be phrased as a synchronization problem among different nodes. Considering the large scale of quantum networks, which may comprise numerous nodes with many qubits each, creating a quantum circuit diagram for the entire network is impractical. Instead, this diagrammatic approach focuses on maintaining a causal relationship across events in different nodes, thus providing a robust tracking mechanism for information dynamics and error propagation within the network. This approach also facilitates the seamless synchronization of different nodes within the network while tracking resource expenses, thereby enabling the successful implementation of quantum protocols. ### _Space-time diagram_ The concept of the Hasse/Lamport diagram has found prior applications within the realm of classical computing (see [43, 44, 45, 46]). Furthermore, it has been employed in physics, particularly in specific quantum gravity models (see [41, 47]). In this work, we use the Hasse/Lamport diagrammatic approach to visually depict quantum protocols. The evolution of distributed execution is given by a Hasse space-time diagram where each node \(P_{i}\) has a corresponding horizontal line that tracks the progress of that node. The binary relationship arrow indicates the causal relationship from one event to another. A solid dot can mark a local event. A line at the end of the arrow can indicate the message received event. Messages can be sent via quantum or classical channels. Because of the Hasse-like structure, the diagram has interesting properties. Suppose \(x\), \(y\), and \(z\) indicate three events, and if there is an arrow connecting two events, we indicate it by a binary relation \(\prec\) with properties: 1. Acyclic: x \(\prec\) y and y \(\prec\) x \(\Rightarrow\) x = y 2. 
Transitive: x \(\prec\) y and y \(\prec\) z \(\Rightarrow\) x \(\prec\) z The transitive property guarantees that we don't have a loop in the diagram, which otherwise would imply a future affecting the past and violate causality. Space-time diagram 3 shows an example of a space-time diagram with four nodes. In the following sections, we will only use the arrow in the message send event for convenience and drop the line at the message received event. ### _Local event_ Defining an event in a quantum network can be trickier because of non-local resources such as entanglement and the requirement to track both quantum and classical states. In this section, we propose a novel definition of a local event based on the notion of the density matrix and its change. Fig. 2: The quantum information of \(A\) interacts with system \(B\) via a scrambling unitary \(U_{AB}\). System \(B\) is purified by external system \(B^{\prime}\) while \(A\) is purified by reference system \(R\). Thus, the total initial state is \(\left|RA\right\rangle\left|BB^{\prime}\right\rangle\) while the final state after the scrambling unitary \(U_{AB}\) is \(\left|\psi\right\rangle_{BB^{\prime}CD}=U_{AB}\otimes I_{RB^{\prime}}\left|RA\right\rangle\left|BB^{\prime}\right\rangle\). The quantum state of a node \(P_{i}\) can be mathematically described by a local density matrix \(\rho_{i}\) obtained by tracing out the rest of the network, \(\rho_{i}=\mathrm{Tr}_{\{V/P_{i}\}}(\rho)\), where \(\rho\) is the quantum state of the entire network. This characterizes both the classical and quantum state of the node \(P_{i}\). We say a local event occurred at node \(P_{i}\) whenever this local (reduced) density matrix \(\rho_{i}\) changes. The change in the density matrix can be computed using the \(L_{1}\) norm, which for any operator \(M\) is defined as \(||M||_{1}=\text{Tr}\sqrt{M^{\dagger}M}\). With this, we propose the following definition of a local event. 
**Definition 1** (Local event): _A local event is recorded at node \(P_{i}\) at time \(t^{\prime}\) if \(||\rho_{i}(t^{\prime})-\rho_{i}(t^{\prime\prime})||_{1}\geq\epsilon\), where \(t^{\prime\prime}\) is the time of the immediate previous event at the same node \(P_{i}\). The threshold parameter \(\epsilon\) depends upon the protocol and its requirements._ This definition is also physically motivated because for \(||\rho_{i}(t^{\prime})-\rho_{i}(t^{\prime\prime})||_{1}<\epsilon\), \(\mathrm{Tr}(\Pi(\rho_{i}(t^{\prime})-\rho_{i}(t^{\prime\prime})))<\epsilon\) for any projection operator \(\Pi\). This implies that the probability outcome of any experiment between two density matrices \(\rho_{i}(t^{\prime})\) and \(\rho_{i}(t^{\prime\prime})\) differs by at most \(\epsilon\) when \(||\rho_{i}(t^{\prime})-\rho_{i}(t^{\prime\prime})||_{1}<\epsilon\). Therefore, if the density matrix doesn't change up to a certain threshold, operationally, it is unnecessary to call it an event as no new information can be obtained from it. This way of defining a local event is also ideal from the perspective of causality. Suppose two nodes \(P_{i}\) and \(P_{j}\) share a bipartite system and are even allowed to be entangled. The no-signaling theorem or, more generally, the no-communication theorem [36] of quantum mechanics says that nothing \(P_{i}\) chooses to do using local computations on its side will have any effect on the local density matrix of \(P_{j}\). With this definition of a local event, we can then guarantee that the event at one node cannot affect other nodes unless the information is transferred via a quantum or classical channel. This serves as a foundation for maintaining causality in the space-time diagram. A future of an event \(x\) denotes all events causally affected by \(x\), and the past of an event \(x\) is the set of all events that could have affected \(x\). In the space-time diagram 4, a dot represents a local event at that node. 
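Definition 1 can be checked directly on small examples; below is a stdlib-only sketch of ours (the restriction to \(2\times 2\) density matrices and all names are our choices) that computes the trace norm of the difference of two single-qubit reduced states:

```python
import cmath

def trace_norm_2x2(M):
    # ||M||_1 = sum of |eigenvalues| for a Hermitian 2x2 matrix,
    # with eigenvalues from the characteristic polynomial
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return abs((tr + disc) / 2) + abs((tr - disc) / 2)

def local_event(rho_prev, rho_now, eps=1e-6):
    # Definition 1: an event is recorded iff ||rho(t') - rho(t'')||_1 >= eps
    diff = [[rho_now[i][j] - rho_prev[i][j] for j in range(2)] for i in range(2)]
    return trace_norm_2x2(diff) >= eps

rho0 = [[1, 0], [0, 0]]   # node P_i in |0><0|
rho1 = [[0, 0], [0, 1]]   # after a local bit flip: |1><1|
assert local_event(rho0, rho1)       # trace distance 2 -> an event is recorded
assert not local_event(rho0, rho0)   # unchanged reduced state -> no event
```

By the no-communication theorem, a remote party's local unitary leaves \(\rho_{i}\) unchanged, so under this definition no event is recorded at \(P_{i}\).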
### _Quantum and Classical Communication_ Based on the definition of a local event, for one node to causally affect another node, it must use a communication channel. The communication between two nodes in a quantum network can be via a quantum or a classical channel [48]. In the space-time diagram, quantum processes are typically represented with a solid line, while classical processes are typically represented with a dotted line. Furthermore, usually, non-local computations are not allowed. In the space-time diagram depicted in Figure 5, we represent a message transfer through a classical channel using a dotted slant arrow, while a message transfer through a quantum channel uses a solid slant arrow. The labels indicate the unit message, which could, for example, be a bit or a qubit. This allows one to track the resource consumption and generation in the network. ### _Shared resources_ A fundamental way a quantum network differs from a classical one is by introducing new resources, such as quantum entanglement, which cannot be achieved via classical resources alone. Fig. 4: A local event denoted by a dot. Fig. 3: Example of a space-time diagram with four nodes. The dots represent local events while arrows represent message send events. We represent classical and quantum resources using dotted and solid curves rather than arrows. The shared classical resource between two nodes, R and S, is denoted by a dotted curved shape
### _Resource generation_ Using space-time diagrams, we will now show how to generate resources between two nodes. Because it is a shared resource, local processes cannot generate it alone. Therefore, it requires a communication channel, albeit quantum or classical. Furthermore, quantum resources cannot be generated via classical channel communication alone. As an example, let us consider two ways entanglement can be generated between nodes R and S. The first approach entails R preparing a maximally entangled pair and transmitting one of the qubits from the pair to node S through a quantum channel, as shown in the space-time diagram 6a. In the second approach, a third node Q generates a maximally entangled pair and sends one qubit from the pair to node R and another to S, which results in the establishment of a shared quantum resource between nodes R and S. This approach is depicted in space-time diagram 6b. ### _Resource consumption_ The exciting thing about quantum resources is that, by consuming them, one can do nontrivial tasks that would have otherwise been impossible. Therefore, it is necessary to distinguish between usual communication and communication after resource consumption. In a space-time diagram, we can represent a message sent after consuming quantum or classical resources using a classical channel by a dotted double-line arrow. We can represent the usage of the quantum channel after consuming these resources by a solid double-line arrow. as shown in the space-time diagram 5. The double line here doesn't imply that we are sending two units of messages. Instead, the label in the space-time diagram will tell how many units of messages are transferred. For example, we will show two fundamental protocols in a space-time diagram, which consumes quantum entanglement to achieve specific nontrivial tasks. 
#### Iii-F1 Superdense coding The superdense coding protocol uses a pre-shared entangled pair to encode two classical bits of information in the transmission of a single qubit: \[[qq]+[q\to q]\geq 2[c\to c]\] In the space-time diagram 6c, R and S share an EPR pair. S performs an encoding on her half of the EPR pair and then sends it to R via a quantum channel, denoted by a double-line solid arrow. Note that even though S sends a single qubit, it is represented by a double-line arrow in a space-time diagram. #### Iii-F2 Teleportation The quantum teleportation protocol uses pre-shared entanglement to transfer a quantum state from one node to another via classical communication only: \[[qq]+2[c\to c]\geq[q\to q].\] Fig. 5: Space-time diagram with the following sequence: local event at node S represented by a solid dot, classical communication from S to R via a dotted arrow, local event at S, quantum communication from S to R, shared quantum resources between R and S, shared classical resources between R and S, local event at R, classical communication from R to S following resource consumption, local event at S, and then a quantum message from S to R after the resource consumption. The labels \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) indicate the amounts of unit resources. For example, \(a\) and \(e\) could indicate numbers of bits, while \(b\) and \(f\) numbers of qubits. Similarly, \(c\) indicates the amount of maximally entangled qubits between \(R\) and \(S\), and \(d\) indicates the amount of shared classical resources. The protocol is shown in the space-time diagram 6d, where the dotted double-line arrow indicates usage of the classical channel after consuming quantum resources. In the space-time diagram 6d, the label \(2\) indicates sending two classical bits. ### _Multiparty case_ The true potential of these space-time diagrams arises in the multiparty case. 
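Before moving on, the teleportation inequality can be checked end-to-end with a small state-vector simulation: one ebit plus two classical bits suffice to move an arbitrary qubit. This is a minimal numpy sketch; the qubit ordering, gate conventions, and helper names are our own assumptions, not part of the diagrammatic formalism:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-qubit building blocks.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def embed(gates, n=3):
    """Kron together 1-qubit operators; qubit 0 is the leftmost factor."""
    mats = [I2] * n
    for k, g in gates.items():
        mats[k] = g
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cgate(U, c, t, n=3):
    """Controlled-U with control qubit c and target qubit t."""
    P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
    return embed({c: P0}, n) + embed({c: P1, t: U}, n)

# S holds a random message qubit (qubit 0) and half of a Bell pair
# (qubit 1); R holds the other half (qubit 2).
amp = rng.normal(size=2) + 1j * rng.normal(size=2)
msg = amp / np.linalg.norm(amp)
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
psi = np.kron(msg, bell)

# S's local operations: CNOT(0 -> 1), then H on qubit 0.
psi = embed({0: H}) @ cgate(X, 0, 1) @ psi

# For each of the four measurement outcomes (the two classical bits sent
# to R), R's Pauli correction leaves the message state on qubit 2.
for m0 in (0, 1):
    for m1 in (0, 1):
        proj = embed({0: np.diag([1. - m0, float(m0)]),
                      1: np.diag([1. - m1, float(m1)])})
        branch = proj @ psi
        branch = branch / np.linalg.norm(branch)
        if m1:
            branch = embed({2: X}) @ branch
        if m0:
            branch = embed({2: Z}) @ branch
        assert np.allclose(branch.reshape(2, 2, 2)[m0, m1, :], msg)
```

Every branch of the measurement delivers the same message state at R, which is exactly what the solid curve (the ebit) plus the dotted double-line arrow (the two post-consumption classical bits) encode in space-time diagram 6d.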
For two-party cases or a network with relatively few nodes, quantum circuits could suffice and might even offer an advantage. However, for the multiparty scenario, with a large number of nodes and a large number of processes, the quantum circuit description becomes highly impractical. Nonetheless, space-time diagrams offer a means to extract many essential features of such complicated network protocols. In the space-time diagram 7a, we've illustrated a crucial three-party protocol used in quantum repeaters, called entanglement swapping, involving three nodes Q, R, and S. At the end of the protocol, one can observe that R and S are entangled by checking the dots at the ends of a curved line. Moving to the space-time diagram depicted in Figure 7b, we've expanded the scenario to involve more nodes, and in Figure 7c we've presented a commonly used illustration from the literature depicting the same concept. For the multiparty case, one can have multiparty classical and quantum resources, which we denote by a repeated curved \(\mathcal{W}\) shape with a hinge touching each participating node. Again, the shape is dotted for classical resources and solid for quantum ones. We can use the following notation to label the resources: \(((x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{i},y_{i}),\ldots,(x_{N},y_{N}))\), where \(N\) indicates the number of species of resources and \(x_{i}\) indicates the amount of species \(y_{i}\). We can also use the notation \((x_{i},y_{i},P_{i}P_{j}\ldots P_{k})\) to indicate which nodes are participating; however, this information can also be read off the space-time diagram itself. For example, in the space-time diagram for a three-party case, it is possible to have two different families of shared quantum resources, such as GHZ or W entangled states. If we label them as \(((3,GHZ),(4,W))\), then we have 3 GHZ states and 4 W states shared among the three nodes. 
Here, we refer to GHZ and W states as two different families, as they are not equivalent under Local Operations and Classical Communication (LOCC); however, how one categorizes different types of resources depends on their definition. The three-party controlled teleportation, a variation of the teleportation protocol, is shown in the space-time diagram 9 to demonstrate this concept of multiparty resources and their consumption. There are two important classes of protocols, local unitary (LU) equivalence and equivalence under local operations and classical communication (LOCC), which do not require the use of a quantum channel. Two state vectors are considered equivalently entangled under LU if they differ only by local unitaries: \[\ket{\psi}\equiv_{\text{LU}}\ket{\phi}\rightarrow\ket{\psi}=(U_{1}\otimes\cdots\otimes U_{N})\ket{\phi} \tag{1}\] Figure 8a shows the space-time diagram for such a protocol. Similarly, we say that two states are LOCC equivalent if they can be transformed into each other through a protocol that only involves the usage of a classical channel, as shown in space-time diagram 8b. Fig. 6: Resource generation and consumption. (**a**) and (**b**) show resource generation between R and S. (**c**) and (**d**) show two fundamental protocols, superdense coding and quantum teleportation respectively, that can be implemented by consuming these resources. Figure 8: (**a**) LU protocol where nodes perform local operations but no communication among nodes is allowed. (**b**) LOCC protocol where nodes can perform local operations but only classical communication is allowed among nodes. In a subsequent round, another node may perform an operation depending on measurements from other nodes. The labels \(|\psi\rangle\) and \(|\phi\rangle\) denote the initial and final multiparty quantum states, indicated by a curved line touching multiple nodes. Figure 7: Entanglement swapping protocol used for building quantum repeaters. (**a**) Node R and Node Q are maximally entangled, while Node Q and Node S are maximally entangled. The goal is to get Node R and Node S maximally entangled. Node Q performs a local computation and sends classical information to Node S. This results in Node R and Node S being maximally entangled. (**b**) Space-time diagram showing the entanglement swapping protocol among 5 nodes in three time steps. (**c**) Entanglement swapping protocol illustrated in a conventional literature style. ### _Noise and types of operations_ One crucial advantage of a space-time diagram is that it helps to analyze error propagation in a quantum network. We can denote a noisy local operation or communication with a cross, as shown in Figure 10. So far, we have assumed that all local operations are equally powerful. However, this is unrealistic, especially for distributed quantum networks, where some nodes might be more powerful than others. One can introduce different types of local operations based on the protocol. One way of classifying local operations would be by the complexity the node can handle. For example, one can have type I and type II operations, where type I can only implement stabilizer protocols [49] while type II could perform fully universal quantum computation. In the space-time diagram, as shown in Figure 10, a dot inside a Ket notation represents a local process with a Type II operation, and a plain dot represents a Type I operation. ### _Examples_ In this section, we will show some examples of how space-time diagrams can be used to represent quantum protocols. The first example is a distributed CNOT gate, as shown in the space-time diagram 11a. The second example is entanglement distillation (see Figure 11b), a LOCC protocol where, out of \(n\) copies of a bipartite pure state \(\psi\), one can generate copies of the EPR pair at a rate \(r\). 
The third example (see figure 11c) is a protocol that demonstrates Nielsen's theorem, a simple criterion for the equivalence of bipartite pure states under LOCC. ## III Quantum information scrambling protocol in a quantum network In this section, we propose a novel protocol for distributed quantum networks inspired by the phenomenon of quantum information scrambling. Additionally, this protocol demonstrates the diagrammatic approach introduced in the earlier section to investigate the quantum information spreading in a network setting. The central concept of the protocol revolves around scrambling quantum information of one particular node \(R\) across the quantum network to achieve the following two objectives. Firstly, to reconstruct the quantum information of node \(R\), any external malicious entity denoted as \(E\) would need to gain access to a substantial subset of the network since the information is not within \(R\) anymore. In the most adverse scenario, accomplishing this would necessitate \(E\) to breach the entire quantum network. Secondly, even if \(E\) were to somehow access the requisite network size, the protocol's construction allows for implementing measures that render the decoding process exponentially complex for \(E\), thus making it hard to reconstruct \(R\)'s information. Therefore, this protocol plays with both information-theoretic and complexity-theoretic arguments to improve the security of a quantum network. To a certain extent, one could perceive quantum information scrambling as the strongest manifestation of information spreading in a network. All nodes at the end of the protocol are causally related to the node \(R\). This feature, that the future of the node \(R\) is the entire network, is a necessary condition for scrambling quantum information. We will now formulate the protocol mathematically. 
At the start of the protocol, we divide the network \(V\) into two subsets, \(V_{R}=\{R\}\) and \(V_{A}=V\setminus V_{R}\), the latter described by the quantum state \(\rho_{V_{A}}\). In the next subsection, we will go into the protocol details. For now, suppose the scrambling protocol is implemented successfully. After some time following the protocol, suppose the malicious party \(E\) gets access to some subset of nodes, which we denote by \(V_{E}\), while the rest of the network is then given by \(V_{B}=V\setminus V_{E}\). Thus, we have a Hilbert space of \(n=|R|+|V_{A}|=|V_{E}|+|V_{B}|\) qubits partitioned as: \[\mathcal{H}=R\otimes V_{A}=V_{E}\otimes V_{B} \tag{2}\] and a unitary map transforming them as: \[U_{RV_{A}}:R\otimes V_{A}\to V_{E}\otimes V_{B} \tag{3}\] If the initial state is \(\rho_{R}\otimes\rho_{V_{A}}\), then after the information scrambling protocol \(U\), the final quantum state is \(\rho^{\prime}=U\rho U^{\dagger}\). The quantum state of the malicious party, \(\rho_{V_{E}}\), is obtained by tracing out the part of the network it does not have access to: \(\rho_{V_{E}}=\mathrm{Tr}_{V_{B}}\rho^{\prime}\). Having done this, we can now precisely formulate the problem as follows: * What is the minimum size of \(|V_{E}|\) that the malicious party E must access in order to reconstruct R's quantum information? Fig. 10: A noisy event or communication can be indicated by a cross, while a type II operation can be indicated by a dot inside a Ket notation. Fig. 9: Space-time diagram showing third-party controlled teleportation from node Q to node S using GHZ states and only classical communication. The resource inequality is: \([qqq]_{QRS}+2[c\to c]_{Q\to R}+2[c\to c]_{R\to S}\geq[q\to q]_{Q\to S}\) where \([qqq]_{QRS}\) indicates the GHZ state shared between Q, R and S. * Suppose the malicious party E has access to this minimum necessary size but doesn't know the unitary \(U\), i.e., how the information was scrambled. 
How many queries does \(E\) need to make on different nodes to learn how the information was scrambled? Interestingly, it turns out that the minimum size \(|V_{E}|\) depends upon the purity, a measure of entanglement entropy, of the system \(V_{A}\). It is possible to change the purity of \(V_{A}\) by entangling it with some other node \(D\), which we refer to as a quantum data center. Quantum data centers could serve various purposes in a network. In our context, it primarily functions as a node that, at the protocol's beginning, helps change the purity of \(V_{A}\) by entangling with it. The entire setup of the quantum information scrambling protocol, with quantum network graph representation, quantum circuit representation, and space-time diagrammatic representation, is shown in Figure 12. ### _Description of the protocol_ In this subsection, we describe how a scrambling unitary \(U(t)\) of size \(2^{|V|}\times 2^{|V|}\) can be implemented in a quantum network setting. A detailed analysis will be done in a subsequent section. A trivial way to implement this would be if the node \(R\), which wants to scramble the quantum information, had \(|V|\) qubits. \(R\) performs \(U(t)\) locally, then selects \(|P_{i}|\) qubits randomly and teleports them to each node \(P_{i}\) in the network \(V\). However, adopting such a strategy could compromise security since it centralizes all computational activities within a single node. A better method is to implement the scrambling unitary \(U(t)\) in a distributed fashion. A scrambling unitary \(U(t)\) does not necessarily have any inherent local structure. To account for locality, for our protocol we restrict it to be local in neighboring nodes, i.e., nodes connected by edges. In a quantum network setting, it is reasonable to require all quantum operations to be local in nodes. A unitary local in neighboring nodes can be made local in nodes by consuming entanglement under LOCC [52]. For example, if a node \(A(\in V)\) wants to send an unknown quantum state of dimension \(d\) to another node \(B(\in V)\), all they need is to share a bipartite maximally entangled state \(\left|\phi_{d}\right\rangle^{AB}:=\sum_{i=1}^{d}\left|ii\right\rangle^{AB}/\sqrt{d}\). Thus, we assume that the network is of type \(G_{\phi}^{\infty}\), meaning all the nodes connected by an edge have an infinite supply of maximal EPR pairs at the beginning of the protocol. Let \(\left|P_{i}\right|\) be the number of qubits allocated by node \(P_{i}\) for the protocol. Fig. 11: (**a**) Space-time diagram for the distributed CNOT gate as proposed in [50]. (**b**) Entanglement distillation protocol using only LOCC \(\ket{\psi}^{\otimes n}\rightarrow\ket{\text{EPR}}^{\otimes|r..n|}\). (**c**) Space-time diagram showing the diagrammatic representation of Nielsen's theorem [51]. Nielsen's theorem gives a simple criterion for the equivalence of bipartite pure states under LOCC. In the diagram, the network of 5 nodes \(V=\{P_{1},P_{2},P_{3},P_{4},P_{5}\}\) is divided into two subnetworks \(V_{1}=\{P_{1},P_{2},P_{3}\},V_{2}=\{P_{4},P_{5}\}\), making it a bipartite state. Within subnetworks \(V_{1}\) and \(V_{2}\), they are allowed to do quantum communications, but across \(V_{1}\) and \(V_{2}\), they are only allowed to do classical communication. Nielsen's theorem gives a criterion for how the final state \(\ket{\phi}\) can be reached starting from the initial state \(\ket{\psi}_{V_{1}V_{2}}\). Figure 12: Quantum information scrambling protocol in different representations. To get information scrambling, the causal future of the node \(R\) must be the entire network. \(V_{E}\) indicates the nodes accessed by a malicious party, while \(V_{B}\) indicates the rest of the network. \(D\) denotes the quantum data center node. 
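The maximally entangled resource state \(\left|\phi_{d}\right\rangle\) introduced above can be constructed and checked directly. As a small numpy sketch (the choice \(d=4\), i.e. two qubits per side, is ours for illustration):

```python
import numpy as np

# |phi_d> = sum_i |ii>/sqrt(d) for d = 4: write it as a d x d coefficient
# matrix (identity/sqrt(d)) and flatten to a state vector on A x B.
d = 4
phi = np.eye(d).reshape(d * d) / np.sqrt(d)

# Tracing out node B leaves node A maximally mixed.
rho_A = np.outer(phi, phi).reshape(d, d, d, d).trace(axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(d) / d)

# Entanglement entropy of either half: log2(d) ebits, enough to move an
# unknown d-dimensional state between neighbors by teleportation.
evals = np.linalg.eigvalsh(rho_A)
entropy = -np.sum(evals * np.log2(evals))
assert np.isclose(entropy, np.log2(d))
```

This is exactly the accounting behind the \(G_{\phi}^{\infty}\) assumption: with an unlimited stock of such pairs on every edge, any neighbor-local unitary can be pushed down to node-local operations plus classical communication.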
It is reasonable to assume that the node \(P_{i}\) can have many more qubits than just \(\left|P_{i}\right|\), say at least \(\left|P_{i}\right|+\)max\(\left(\left|P_{i+1}\right|,\left|P_{i-1}\right|\right)\). This condition guarantees that node \(P_{i}\) can perform the non-local computation via LOCC, making the entire protocol local in individual nodes. As a simple example, let's consider a network \(V\in G_{\phi}^{\infty}\) with three nodes \(P_{1}\), \(P_{2}\), and \(P_{3}\), see Figure 13, with sizes satisfying \(\left|P_{1}\right|<\left|P_{2}\right|<\left|P_{3}\right|\). This configuration implies that the nodes \(P_{1}\), \(P_{2}\), and \(P_{3}\) have at least \(\left|P_{1}\right|+\left|P_{2}\right|\), \(\left|P_{2}\right|+\left|P_{3}\right|\), and \(\left|P_{2}\right|+\left|P_{3}\right|\) qubits respectively. We consider a scrambling circuit with two Haar random unitaries \(U_{P_{1}P_{2}}\) and \(U_{P_{2}P_{3}}\), of sizes \(2^{\left|P_{1}\right|+\left|P_{2}\right|}\) and \(2^{\left|P_{2}\right|+\left|P_{3}\right|}\) respectively, applied sequentially as depicted in Figure 13. Clearly, in the circuit representation, \(U_{P_{1}P_{2}}\) and \(U_{P_{2}P_{3}}\) are local in neighboring nodes. We now map this circuit representation onto a quantum network using only LOCC protocols. The initial setup involves a \(G_{\phi}^{\infty}\) network. At time \(t_{0}\), the objective is to implement \(U_{P_{1}P_{2}}\) at the local node level, either on node \(P_{1}\) or \(P_{2}\). The first step is to flip a coin between \(P_{1}\) and \(P_{2}\), which can be done without any communication cost by consuming entanglement. Furthermore, the coin toss introduces randomness in the protocol, making it challenging for potential attackers to breach it. Assuming \(P_{1}\) is chosen, \(P_{2}\) teleports its qubits to \(P_{1}\) through a classical channel by sending \(2|P_{2}|\) classical bits. 
Since \(P_{1}\) possesses at least \(\left|P_{1}\right|+\left|P_{2}\right|\) qubits, \(P_{1}\) can then execute the \(U_{P_{1}P_{2}}\) unitary of size \(2^{\left|P_{1}\right|+\left|P_{2}\right|}\). After the unitary operation, \(P_{1}\) randomly selects \(\left|P_{2}\right|\) qubits to teleport them back to \(P_{2}\). This process ensures the local implementation of \(U_{P_{1}P_{2}}\) at the level of individual nodes. At time \(t_{1}\), the goal is to implement the \(U_{P_{2}P_{3}}\) unitary. Following a similar approach as with \(U_{P_{1}P_{2}}\), this can also be achieved locally. The entire protocol is diagrammatically represented in Figure 13 where the type II notation is used to represent non-trivial operations such as \(U_{P_{1}P_{2}}\) and \(U_{P_{2}P_{3}}\). Figure 14 shows a more complicated setup. ### _Analysis of the protocol_ In this subsection, we primarily consider the scrambling unitary of Haar scrambling nature, which refers to a unitary \(U\) chosen randomly from the Haar measure. We will consider other kinds of scrambling in Section III-C, where we focus on efficiency issues. However, important insights can be drawn from Haar scrambling as we expect the late-time values of entropy and mutual information for any scrambling unitary \(U(t)\) to match those of Haar random unitaries. One can implement Haar scrambling unitary \(U(t)\) in the quantum network following the framework outlined in the subsection III-A. Once again, we have a unitary map: \[U_{RV_{A}}:R\otimes V_{A}\to V_{E}\otimes V_{B} \tag{4}\] where \(U_{RV_{A}}\) is a Haar scrambling unitary local in nodes connected by edges and \(R\), \(V_{A}\), \(V_{B}\), and \(V_{E}\) are subnetworks as described in relation 2. For the quantum information theoretic analysis, it is often easier to track the quantum information of \(R\) in the network by considering an external reference state \(R^{\prime}\) which is perfectly entangled with \(R\). 
Furthermore, we can have a quantum data center \(D\) which purifies the subnetwork \(V_{A}\). Thus, the total initial state is \(\left|R^{\prime}R\right\rangle\left|V_{A}D\right\rangle\), and the state after the information scrambling protocol is: \[\left|\psi\right\rangle_{R^{\prime}DV_{E}V_{B}}=U_{RV_{A}}\otimes I_{R^{\prime }D}\left|R^{\prime}R\right\rangle\left|V_{A}D\right\rangle \tag{5}\] #### Iii-B1 Without quantum data center \(D\) We first consider the case without a quantum data center, i.e., \(V_{A}\) is in a pure state. After applying the Haar scrambling unitary, it can be shown (see [53]) that the output pure state in \(\mathcal{H}_{V_{E}V_{B}}\) is likely close to a maximally entangled state if \(\frac{\left|V_{E}\right|}{\left|V_{B}\right|}\ll 1\). More precisely, for the output bipartite Hilbert space \(\mathcal{H}_{V_{E}}\otimes\mathcal{H}_{V_{B}}\), we get \[\int dU\left|\left|\rho_{V_{E}}-\frac{Id}{\left|V_{E}\right|}\right|\right|_{1 }\leq\sqrt{\frac{\left|V_{E}\right|^{2}-1}{\left|V_{E}\right||V_{B}|+1}} \tag{6}\] where \(||.||_{1}\) is the \(L_{1}\) norm and \(\rho_{V_{E}}\) describes the quantum state accessed by the malicious party \(E\) in the network. When \(\left|V_{B}\right|\) is significantly larger than \(\left|V_{E}\right|\), the typical deviation of \(\rho_{V_{E}}\) from the maximally mixed state is extremely small. For example, when the size of \(V_{B}\) exceeds the system in the control of \(V_{E}\) by just ten qubits, the typical deviation of \(V_{E}\)'s quantum state from the maximally mixed state is upper bounded by \(2^{-5}\), and \(V_{E}\) cannot carry any significant information about the scrambled information of \(R\). Therefore, as a consequence of equation 6, the malicious party \(E\) will need access to at least half the network's size to get any information about \(R\). This result can be intuitively understood as follows. 
For the pure case, making any arbitrary bipartition of the network \(V\) after information scrambling, if one part can reconstruct the information of \(R\), then the other part cannot. This follows from the quantum no-cloning theorem, which states that no operation can make copies of an arbitrary unknown quantum state: if both parts were able to reconstruct the state \(\rho_{R}\), the no-cloning theorem would be violated. This immediately gives the bound that for \(E\) to reconstruct \(\rho_{R}\), \(E\) will need access to at least half the size of the entire network, even though the nodes that form the set can be completely arbitrary. More concretely, for \(V_{E}\) to have any chance of hacking the network, the lower bound on the size of \(V_{E}\) is \(\left|V_{E}\right|=\frac{\left|V\right|+1}{2}\), where \(\left|V\right|\) is the size of the network. It is possible to provide a formal security analysis using the language of quantum secret sharing. This follows from the decoupling theorem [35], which provides a unifying framework for the theory of quantum error correction and many other significant results of quantum Shannon theory. For the decoupling theorem to work, the Haar random unitary encoding must be scrambling, implying the possibility of quantum error correction [35, 54]. On the other hand, the recoverability requirement of a quantum secret-sharing scheme is a sufficient and necessary condition for quantum error correction. Thus, all quantum secret-sharing schemes are also quantum error-correcting codes. A quantum threshold secret sharing scheme is denoted by \(((k,n))\), where an arbitrary quantum secret is divided among \(n\) parties such that any \(k\) or more parties can perfectly reconstruct the secret, while any \(k-1\) or fewer parties have no information about the secret. Thus, for this pure case, we get a quantum secret sharing scheme similar to \(\left(\left(\frac{|V|+1}{2},|V|\right)\right)\). We plan to further explore this relationship between Haar scrambling and quantum secret sharing in the future. Because Haar scrambling has inherent randomness, it might not be possible to get an exact threshold scheme; instead, the scheme might be of a ramp secret sharing nature. Figure 13: (**a**) A graph \(G_{\phi}^{\infty}\) with three nodes where \(P_{1}\)\(P_{2}\) and \(P_{2}\)\(P_{3}\) share unlimited EPR pairs. (**b**) A circuit with three nodes where unitary \(U_{P_{1}P_{2}}\) is local in nodes \(P_{1}\) and \(P_{2}\) while unitary \(U_{P_{2}P_{3}}\) is local in nodes \(P_{2}\) and \(P_{3}\). (**c**) Space-time diagram showing a protocol where the unitaries \(U_{P_{1}P_{2}}\) and \(U_{P_{2}P_{3}}\) are converted into being local in individual nodes using quantum teleportation. Fig. 14: (**a**), (**b**) and (**c**) show the information flow from a node \(R\) in a quantum network \(G_{\phi}^{\infty}\). \(D\) denotes a quantum data center. The dotted line with \(D\) indicates the possibility of sharing multiparty quantum resources among the network and the data center. (**d**) shows an example of a possible quantum circuit. \(U_{P_{2}P_{4}}\) is local in nodes \(P_{2}\) and \(P_{4}\) as they are connected by an edge as shown in (**a**). (**e**) shows a space-time diagram where all non-local computations of the circuit (**d**) are made local in nodes using the technique of Figure 13. #### Iii-B2 With quantum data center \(D\) We will now discuss the case with a quantum data center \(D\), which, by entangling with \(V_{A}\), can purify it. Therefore, \(V_{A}\) is in a mixed quantum state as it is part of the pure state \(\left|V_{A}D\right\rangle\). The final quantum state after the scrambling protocol is \(\left|\psi\right\rangle_{R^{\prime}DV_{E}V_{B}}=U_{RV_{A}}\otimes I_{R^{\prime }D}\left|R^{\prime}R\right\rangle\left|V_{A}D\right\rangle\). 
Then, from the conservation of information under unitary evolution, we get: \[I(R^{\prime}:R)=I(R^{\prime}:(V\backslash\{D\}=V_{B}V_{E}))+I(R^{\prime}:D) \tag{7}\] where \(I(X:Y)\) is the quantum mutual information between \(X\) and \(Y\). When \(V_{B}V_{E}\) is maximally mixed, \(I(R^{\prime}:V_{B}V_{E})=0\), and all the information about \(R\) must then be in \(D\), as \(I(R^{\prime}:R)=I(R^{\prime}:D)\). Therefore, when \(V_{B}V_{E}\) is maximally mixed, even if the malicious party \(E\) has access to all the nodes of the network \(V\) except for the quantum data center \(D\), \(E\) can have no information about \(R\). From equation 6, this condition is achieved as soon as \(|D|>\frac{|V|}{2}\). If the size of \(D\) is equal to \(\frac{|V|-1}{2}\), \(E\) will need to get access to the entire subnetwork \(V\backslash\{D\}\) to get the information of \(R\). If \(0<|D|<\frac{|V|-1}{2}\), then, in contrast to the pure case of \(V_{A}\), the necessary size of \(V_{E}\) to hack the information of \(R\) is greater than \(\frac{|V|+1}{2}\). Therefore, by changing the purity (entanglement entropy) of \(V_{A}\) and the size of the quantum data center \(D\), it is possible to obtain security protocols other than the one obtained for the pure case. Furthermore, it is also possible to construct other interesting protocols using a powerful quantum data center. Suppose \(V_{A}\) is maximally entangled with the data center \(D\). Then, after the information scrambling protocol, \(D\) can reconstruct the information of \(R\) by having access to just a subnetwork of size \(|R|\) from \(V\backslash\{D\}\). One can use these features to error-correct quantum nodes in a network or to improve security by quickly hiding information if a malicious party \(E\) tries to access it. We plan to explore these features in the future. #### Iii-B3 Achieving the two criteria Now, we will discuss the two criteria we wanted to achieve with the Haar scrambling protocol. 
The first criterion was that the minimum size of \(|V_{E}|\) necessary for the malicious party E to reconstruct R's quantum information should be large. In the previous two subsections, we showed that changing the purity of \(V_{A}\) makes it possible to achieve this. Even when \(V_{A}\) is pure, \(E\) will need access to half the network to hack the information of \(R\). If \(V_{A}\) is maximally entangled with the quantum data center \(D\), then \(E\) must access the entire subnetwork \(V\backslash\{D\}\) to get any information out. We may also start with \(V_{A}\) being maximally mixed from the outset; in this case, \(E\) will need to access the entire network \(V\) to get the information of \(R\). Therefore, by changing the purity of the initial party \(V_{A}\), the information scrambling protocol achieves the first feature we wanted. The second criterion is related to decoding the information of \(R\) that was scrambled across the network. Suppose the malicious party E does have access to this minimum necessary size but doesn't know how the information was scrambled. In this case, \(E\) must query quantum nodes to learn the scrambling unitary \(U(t)\). If \(E\) needs to make many queries, it provides a security advantage in that \(E\) cannot decode the information of \(R\). [55] showed that if \(U_{t}\) is a \(t\)-doped Clifford circuit, it is possible to learn the circuit using only \(\mathcal{O}(\text{poly}(n)\exp(t))\) queries, where \(n\) is the depth of the circuit. The scrambling circuit is a fully chaotic quantum circuit in which all local events contain non-Clifford \(T\) gates. Hence, \(E\) must make exponentially many, \(\exp(M)\), queries to the network, where \(M\) is the total number of local events in the protocol. Therefore, the Haar scrambling protocol achieves the second criterion, offering the advantage that even if the information of \(R\) is within the grasp of the malicious party E, they cannot decode it. 
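The information-theoretic half of this argument rests on the decoupling estimate of equation 6, and it can be probed numerically on a small toy network. In the sketch below (numpy; the sizes \(n\) and \(k\), the sample count, and the Gaussian sampling of Haar-random pure states are our illustrative choices), the empirical mean trace distance between the eavesdropper's state and the maximally mixed state lands below the Haar-average bound:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy check of eq. 6 for a pure V_A: a Haar-random pure state on n qubits
# (sampled as a normalized complex Gaussian vector) is split into the
# |V_E| = k qubits seen by the eavesdropper and the |V_B| = n - k others.
n, k = 12, 3
dE, dB = 2**k, 2**(n - k)

dists = []
for _ in range(20):
    v = rng.normal(size=dE * dB) + 1j * rng.normal(size=dE * dB)
    v /= np.linalg.norm(v)
    M = v.reshape(dE, dB)           # rho_E = Tr_B |v><v| = M M^dagger
    rho_E = M @ M.conj().T
    delta = rho_E - np.eye(dE) / dE
    dists.append(np.abs(np.linalg.eigvalsh(delta)).sum())  # L1 distance

bound = np.sqrt((dE**2 - 1) / (dE * dB + 1))  # right-hand side of eq. 6
print(f"mean trace distance {np.mean(dists):.4f} vs bound {bound:.4f}")
assert np.mean(dists) < bound
```

With only three qubits in \(V_{E}\) out of twelve, the eavesdropper's state is already nearly maximally mixed; growing \(|V_B|\) relative to \(|V_E|\) shrinks both the distance and the bound, matching the qualitative claim that \(E\) learns essentially nothing below the half-network threshold.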
### _Efficient construction_ Scrambling unitaries do not necessarily have to be perfect Haar unitaries; even simple operators can exhibit excellent scrambling properties. Haar-random and unitary-design states demonstrate nearly maximal entanglement, specifically for \(t\geq 2\). This implies that the states obtained from \(t\)-design circuits [56] are information-theoretically indistinguishable from Haar scrambling circuits up to the \(t\)-th moment [55]. Since we employed an information-theoretic argument to derive ramp secret sharing schemes from Haar scrambling, \(t\)-design circuits are already sufficient and come with the benefit that the whole protocol can be constructed in polynomial time. Nevertheless, caution is necessary; otherwise, we may fail to attain the desired second feature. For example, if the scrambling protocol is composed of only Clifford gates, then \(E\) can learn the scrambling unitary \(U(t)\) with only \(\mathcal{O}(\text{poly}(n))\) queries, where \(n\) is the depth of the protocol. Therefore, it is necessary to have at least \(n\) non-Clifford \(T\) gates in an \(n\)-depth protocol, so that it will take exponentially many, \(\mathcal{O}(\exp(n))\), queries by \(E\) to decipher the information of \(R\). Scrambling in a quantum network also depends on the network topology of the underlying graph \(G=(V,E)\). Information scrambling can be considered a quantum counterpart of a broadcast algorithm. The time complexity of the scrambling protocol is then lower bounded by the time complexity of the broadcast algorithm, which is of order \(\mathcal{O}(\text{Diam}(G))\) for dirty topology, and of order \(\mathcal{O}(|E|)\) for clean topology, where \(\text{Diam}(G)\) is the diameter of the graph. One suitable choice is a hyperbolic network design. In Euclidean networks, the number of nodes grows only polynomially with the distance from the center. 
However, in a hyperbolic network, the number of nodes grows exponentially with the distance from the center, which allows information to spread quickly through the network. ## IV Conclusion By presenting the diagrammatic visualization approach, motivated by Hasse diagrams and Lamport diagrams, we contribute to the understanding and practical implementation of large-scale quantum networks. These techniques shed light on information flow dynamics, synchronization, error monitoring, resource allocation, and security considerations within the network. In addition, we have proposed a novel quantum information scrambling protocol in which malicious parties would need to access a significant fraction of the network to retrieve the information scrambled by a node. Several intriguing open questions remain to be addressed. Firstly, a deeper investigation into the mathematical structure of space-time diagrams is necessary, with potential insights to be drawn from quantum tensor networks [57] and classical distributed computing [42]. Equally important is exploring the practical utility of space-time diagrams across various domains, including synchronization, the construction of fault-tolerant quantum networks, quantum error propagation and correction, and quantum resource tracking. For example, one can explore what else a quantum data center can achieve in terms of constructing fault-tolerant quantum networks and enhancing security in the network. A quantum network's topology shapes its information dynamics, necessitating an in-depth analysis of its relationship with quantum information scrambling protocols. Lastly, the ongoing achievements of quantum information theory in many-body systems [58, 59] present a promising avenue for the development of novel protocols in distributed quantum networks, making it a captivating area for future exploration. 
## Acknowledgment The research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. C. Deppe additionally acknowledges the financial support by the Federal Ministry of Education and Research (BMBF) in the programs with the identification numbers: 16KISK002, 16KISQ028, 16KISQ038, 16K1S1598K, 16KISQ077, 16KISQ093, 16KISR027K.
2309.09632
Crossover equation of state based on color-molecular-dynamics
The equation of State for dense matter is studied with color molecular dynamics, in which hadron matter and quark matter are automatically distinguished only from quark color state. The quark-quark interactions are optimized to be consistent with saturation properties: symmetric energy, $L-$parameter, and incompressibility around nuclear density. In the calculations, the degrees of freedom of colors are solved at each numerical step, although the flavors are fixed as up or down quarks. The resultant mass-radius relations also satisfy the observational constraints such as the gravitational wave observations, NICER, and ``{\it the two-solar mass observations}" of neutron stars. In this model with the allowed parameter range, deconfined quark matter appears in the core of neutron stars via crossover. Although the current constraints from the observations are not enough to conclude whether quark matter appears at high-density region, our method would help to understand high-density material properties inside neutron stars in the future.
Nobutoshi Yasutake, Toshiki Maruyama
2023-09-18T10:08:39Z
http://arxiv.org/abs/2309.09632v2
# Crossover Equation of State Based on Color-Molecular-Dynamics ###### Abstract Equation of State for dense matter is studied with color molecular dynamics, in which hadron matter and quark matter are automatically distinguished only from quark color state. The quark-quark interactions are optimized to be consistent with saturation properties: symmetric energy, \(L-\)parameter, and incompressibility around nuclear density. In the calculations, the degrees of freedom of colors are solved at each numerical step, although the flavors are fixed as up or down quarks. The resultant mass-radius relations also satisfy the observational constraints such as the gravitational wave observations, NICER, and "_the two-solar mass observations_" of neutron stars. In this model with the allowed parameter range, deconfined quark matter appears in the core of neutron stars via crossover. Although the current constraints from the observations are not enough to conclude whether quark matter appears at high-density region, our method would help to understand high-density material properties inside neutron stars in the future. ## I Introduction The equation of state (EOS) is a key to understanding neutron star (NS) physics. In particular, it is an interesting and important question how hadron matter composed of baryons deconfines to quark matter at high density. However, since lattice QCD simulations are not yet feasible at finite density and low temperature \(T\simeq 0\), there remains significant theoretical uncertainty regarding the behavior of matter inside NSs. It is a fundamental issue to clarify deconfinement behaviors. Instead of solely relying on a theoretical and direct understanding of the confinement-deconfinement phase changes, the investigation of astronomical observations and nuclear experiments has emerged as a promising approach. In this context, reproducing the mass-radius (MR) relations of NSs is the first step. 
One of the well known constraints on the MR relations is "_two solar mass constraints_", which was derived from the observation of PSR J0348+0432 [1]. Gravitational wave observations also provide strong constraints on EOS. The observation GW170817 has yielded an upper limit on the radius of neutron stars assuming a binary system with canonical masses [2]. The electromagnetic counterpart observations of the gravitational wave event have further constrained the MR relations of NSs [48]. On the other hand, the Neutron star Interior Composition Explorer (NICER) has gradually narrowed the MR regions of NSs, e.g. PSR J0030+0451 [4; 5] and PSR J0740+6620 [6; 7]. Consequently, it is expected that astrophysical observations will significantly constrain the EOSs for the high-density regions of NSs. Thanks to these developments in astronomical observation techniques, it is worthwhile to consider the possibility of the quark-hadron phase change in NSs. This possibility has been pointed out by past studies, and many of them have assumed a first-order phase transition [8; 9; 10; 11; 12; 13]. The phase transitions in this case are those of multi-component systems, where two phases are in equilibrium with different chemical compositions. In such phase transitions, nonuniform structures, namely "_pasta structures_", may appear, resulting from the balance between the Coulomb interactions and surface tensions. Not only first-order phase transitions but also the possibility of a crossover has been discussed [14; 15; 16; 17; 18]. One of the points that we should keep in mind is that the model should describe nuclear matter at lower densities as a hadron phase and at higher densities as a quark phase; about the latter phase we do not have enough information. Furthermore, large uncertainties lie in the mechanisms of confinement and deconfinement, and the EOS in between. 
At lower densities the properties of hadronic matter are rather well known: the binding energy per baryon of symmetric nuclear matter, \(S_{0}\), has the minimum value of -16 MeV at the saturation density \(n_{0}\), around 0.16 fm\({}^{-3}\). The symmetry energy \(J\), which is the energy difference per baryon between symmetric nuclear matter and pure neutron matter, is relatively narrowly restricted: a plausible range for \(J\) is 29-33 MeV [19; 20]. However, the values of the \(L-\)parameter (the slope of the energy for neutron matter) and the incompressibility \(K\) (the second derivative of the symmetric-matter energy with respect to density) are strongly model dependent. The fiducial range of the \(L\)-parameter is reported as \(L=60\pm 20\) MeV, which is constrained by analyses of terrestrial experiments, as summarized well in Refs.[21; 22]. Compared with them, there are also possibilities for larger \(L\)-values: Danielewicz et al. have derived the range \(70<L<101\) MeV from the isobaric analog states and iso-vector skin results [23]. The Radioactive Isotope Beam Factory (RIBF) at RIKEN in Japan has reported \(42<L<117\) MeV [24]. The Pb radius experiment (PREX)-II project has suggested the range of \(L\) as \(L=106\pm 37\) MeV [25], while recent analyses after the Ca radius experiment (CREX) result have suggested ordinary \(L\)-values. Hence, the current terrestrial experiments alone have not pinned down the value of \(L\). It is well known that the \(L-\)parameter described above is strongly correlated with the neutron skin thickness. It was theoretically predicted that the neutron skin thickness would change by considering alpha-clustering at the surface of neutron-rich heavy nuclei. This point is mentioned in the paper on the EOS by Typel et al., which is often used in numerical simulations in theoretical astronomy and widely known as "DD2" [26]. Recently, this prediction was confirmed experimentally by Tanaka et al.[27]. 
In this study, the DD2 EOS is compared to our EOS as one benchmark. As for the incompressibility \(K\), Danielewicz et al.[28] pointed out 167 MeV \(<K<300\) MeV, where they analyzed the flow of matter to extract pressures in nuclear collisions. Piekarewicz has given the range \(K=248\pm 8\) MeV from the iso-scalar giant monopole resonance (IS-GMR) in \({}^{208}\)Pb, adopting the relativistic mean-field model with a random-phase approximation [29]. G. Colo et al. have also analyzed the measurements of the ISGMR in medium-heavy nuclei introducing some types of Skyrme forces, then predicted \(K\) around 230 MeV [30]. As for the prediction of a small value of \(K\), Sturm et al. and Hartnack et al. have concluded that \(K\) is around 200 MeV from a comparison of the results of transport theories with the experimental data of heavy-ion collisions with the production of \(K^{+}\) mesons [31; 32]. The EOS of the deconfined quark phase should be consistent with all constraints as described above. For this purpose, Kojo et al. constructed EOSs describing quark deconfinement with crossover [18]. These EOSs are based on hadron EOSs derived from the chiral effective field theory nuclear equation of state [33; 34; 35] at low densities. By interpolating this EOS with the quark EOS at high density, they construct a crossover EOS. Such an interpolative method is practical as a first step to obtain an EOS, but one should be careful when obtaining transport quantities such as the thermal conductivity, since one cannot clearly distinguish hadron matter and quark matter in the crossover case. One possible approach to obtaining such physical quantities is to discuss the deconfinement-confinement mechanism only within the framework of the quark model. Keeping this in mind, we employ a color molecular dynamics (CMD) simulation which deals with constituent quarks with color degrees of freedom [36]. 
In this method we solve the time evolutions of the positions, momenta, and color coordinates of quarks, which are governed by the Hamiltonian with a potential term consisting of the color confining potential, the perturbative gluon-exchange potential, and the meson-exchange potentials. At low density, quarks are clusterized into baryons, in which the color-singlet state is favored. As density increases, the baryons start to overlap with each other, and then they start to deconfine into quark matter. Our CMD simulations show such percolative behavior during the deconfinement. Compared with our previous studies [37], we improved the scheme to include relativistic kinetic energy and solve the time-dependence of the color [36]. We also include color-independent non-linear quark-quark repulsions to keep consistency with the mass-radius relations of compact stars constrained by the astrophysical observations. The non-linear interaction can be understood as quark many-body effects. This paper is organized as follows. In Sec. II, we outline our framework for CMD. Sec. III contains numerical results consistent with the constraints on mass-radius relations from astronomical observations, and with saturation properties around nuclear density. Sec. IV is devoted to the conclusion and the discussion of our results. ## II Color molecular dynamics Our formulation is based on our previous papers [36; 37]. Throughout this paper, we assume quark isospin symmetry with the constituent quark mass, \(m\). 
We start with the total wave function defined by a direct product of single-particle wave packets of quarks, the position and the momentum of which are centered at the time-dependent variables \({\bf R}_{i}\) and \({\bf P}_{i}\), respectively, and \(\chi_{i}\) is the internal degree of freedom given by a direct product of the fixed flavor \({\chi_{i}}^{f}\), the time-dependent color \({\chi_{i}}^{c}\), and the fixed spin orientation \({\chi_{i}}^{s}\), \[\Psi=\prod_{i=1}^{N}\frac{1}{(\pi{L_{q}}^{2})^{3/4}}\exp\left[-\frac{({\bf r}_ {i}-{\bf R}_{i})^{2}}{2{L_{q}}^{2}}+\frac{i}{\hbar}{\bf P}_{i}\cdot{\bf r}_{i} \right]\chi_{i}, \tag{1}\] here \(N\) and \(L_{q}\) denote the total number of quarks and the fixed width of the wave packets, respectively. We employ the width \(L_{q}=0.37\) fm in this work. In this paper, we fix the flavors and spins, hence \({\chi_{i}}^{f}\) and \({\chi_{i}}^{s}\) are treated as constants. The explicit form of the time-dependent degree of freedom for color is shown as \[{\chi_{i}}^{c}=\left(\begin{array}{c}\cos\alpha_{i}\,e^{-i\beta_{i}}\cos\theta_{i}\\ \sin\alpha_{i}\,e^{+i\beta_{i}}\cos\theta_{i}\\ \sin\theta_{i}\,e^{i\varphi_{i}}\end{array}\right), \tag{2}\] where \(\alpha_{i}\), \(\beta_{i}\), \(\theta_{i}\), \(\varphi_{i}\) are the color variables of each particle. The system follows the Hamiltonian, \[H=H_{0}+V_{\rm Pauli}-T_{\rm spur}, \tag{3}\] where \(H_{0}\) is the conventional Hamiltonian expressed as \[H_{0}=\left\langle\Psi\left|\sum_{i=1}^{N}\sqrt{m^{2}+\vec{\bf p}_{i}^{2}}+\frac{ 1}{2}\sum_{i,j\neq i}^{N}\hat{V}_{ij}\right|\Psi\right\rangle. \tag{4}\] The first term is the kinetic term with relativistic kinematics. 
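Note that the color parametrization of Eq. (2) is automatically normalized for any angles, since \(|\chi^{c}|^{2}=\cos^{2}\alpha\cos^{2}\theta+\sin^{2}\alpha\cos^{2}\theta+\sin^{2}\theta=1\). A quick numerical check in plain Python (function names are ours, not from the paper):

```python
import cmath
import math

def color_spinor(alpha, beta, theta, phi):
    """Three-component color state of Eq. (2):
    (cos(alpha) e^{-i beta} cos(theta),
     sin(alpha) e^{+i beta} cos(theta),
     sin(theta) e^{i phi})."""
    return [
        math.cos(alpha) * cmath.exp(-1j * beta) * math.cos(theta),
        math.sin(alpha) * cmath.exp(+1j * beta) * math.cos(theta),
        math.sin(theta) * cmath.exp(1j * phi),
    ]

def norm2(chi):
    """Squared norm sum_k |chi_k|^2 of a color state."""
    return sum(abs(c) ** 2 for c in chi)

chi = color_spinor(0.3, 1.1, 0.7, -2.0)
print(norm2(chi))  # -> 1.0 up to floating-point rounding
```

This is why the four angles per quark are unconstrained dynamical variables: no separate normalization condition needs to be enforced during the time evolution.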
The second term is the interaction term expressed as \[\hat{V}_{i,j} = -\sum_{a=1}^{8}\tau_{i}^{a}\tau_{j}^{a}\hat{V}_{\rm C}+\hat{V}_{\rm M}, \tag{5}\] \[\hat{V}_{\rm C} = \kappa\hat{r}_{ij}-\frac{\alpha_{s}}{\hat{r}_{ij}}, \tag{6}\] \[\hat{V}_{\rm M} = \frac{g_{\omega}^{2}C_{\omega}}{4\pi}\left(\sum_{j\neq i}^{N} \frac{e^{-\mu_{\omega}\hat{r}_{ij}}}{\hat{r}_{ij}}\right)^{1+\epsilon_{\omega} }-\frac{g_{\sigma}^{2}C_{\sigma}}{4\pi}\left(\sum_{j\neq i}^{N}\frac{e^{-\mu_ {\sigma}\hat{r}_{ij}}}{\hat{r}_{ij}}\right)^{1+\epsilon_{\sigma}} \tag{7}\] \[+ \frac{\sigma_{i}^{3}\sigma_{j}^{3}}{4}\frac{g_{\rho}^{2}}{4\pi} \frac{e^{-\mu_{\rho}\hat{r}_{ij}}}{\hat{r}_{ij}}\] where \(\hat{r}_{ij}\equiv|\hat{\bf r}_{i}-\hat{\bf r}_{j}|\) is the distance between the \(i\)-th and \(j\)-th quarks, and \(\tau_{i}^{a}=\lambda_{i}^{a}/2\) with \(\lambda_{i}^{a}\) being the Gell-Mann matrices. The color-dependent interaction \(\hat{V}_{\rm C}\) consists of the linear confining potential (the first term) and the one-gluon exchange potential (the second term) as shown in Eq.(6). The string tension of confinement \(\kappa\) and the QCD fine structure constant \(\alpha_{s}\) are set as \(\kappa=0.75\) GeV fm\({}^{-1}\), and \(\alpha_{s}=1.25\), as shown in Ref. [36]. Note that these two interactions are the main components determining the nucleon mass \(M\), and these parameter sets are typical values [38]. In this method, we do not conduct the antisymmetrization of the total wave function, whose numerical cost scales as \(N^{4}\) [39; 40]. Hence, the interaction is underestimated by a factor of 4 when we take the matrix element of \(\tau_{i}^{a}\tau_{j}^{a}\), and we therefore introduce the effective coupling constants \(\kappa_{\rm eff}=4\kappa\) and \(\alpha_{s}^{\rm eff}=4\alpha_{s}\) to get consistency with the color SU(3) algebra. 
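As a numerical illustration of the radial part of Eq. (6) and the factor-of-4 rescaling, consider the following sketch (the function name is ours):

```python
KAPPA = 0.75    # string tension, GeV / fm (Sec. II)
ALPHA_S = 1.25  # QCD fine-structure constant (Sec. II)

def v_color(r, effective=True):
    """Radial part of the color pair potential, Eq. (6): linear confinement
    minus one-gluon exchange, in GeV for r in fm.  With effective=True both
    couplings are multiplied by 4 (kappa_eff, alpha_s_eff) to compensate for
    the omitted antisymmetrization, as described in the text."""
    factor = 4.0 if effective else 1.0
    return factor * (KAPPA * r - ALPHA_S / r)

# one-gluon exchange dominates at short range, confinement at long range:
print(v_color(0.5) < 0.0, v_color(3.0) > 0.0)  # -> True True
```

The sign change around \(r=\sqrt{\alpha_{s}/\kappa}\approx 1.3\) fm reflects the Cornell-type shape of the potential; the color factor \(-\sum_{a}\tau_{i}^{a}\tau_{j}^{a}\) then decides whether a given pair feels it as attractive or repulsive.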
The constituent quark mass is then obtained as \(m=361.8\) MeV, which is determined to reproduce the nucleon mass, \(M=938\) MeV, with the above parameter sets for the confinement and the one-gluon exchange. As for the nuclear force, the non-perturbative gluon exchanges in the color singlet channels would be the essential part. Hence, we introduce the \(\sigma+\omega+\rho\) quark-meson couplings acting between quarks [41]. These coupling constants are set as \(g_{\omega}=5.46\), \(g_{\sigma}=3.23\), and \(g_{\rho}=8.19\). The meson-quark coupling constants \(g_{\omega}\), \(g_{\sigma}\), and \(g_{\rho}\) are estimated from the meson-nucleon couplings as \(g_{\omega}=g_{\omega N}/3=4.98\), \(g_{\sigma}=g_{\sigma N}/3=3.09\), and \(g_{\rho}=g_{\rho N}/3=9\), in previous works [37]. These values of \(g_{\omega}\), \(g_{\sigma}\), and \(g_{\rho}\) in this paper differ by \(\sim\)10 % from our previous works. We also introduce the small non-linearity parameters \(\epsilon_{\omega}\) and \(\epsilon_{\sigma}\) for the \(\omega\)- and \(\sigma\)-exchange potentials. This non-linearity provides the density dependence of the two-body interaction, which enables us to control the stiffness of matter in the present framework. We also introduce \(C_{\omega}\) and \(C_{\sigma}\) to make the coupling constants \(g_{\omega}\) and \(g_{\sigma}\) dimensionless, set as \(C_{\omega}=1/(1+\epsilon_{\omega})\), \(C_{\sigma}=1/(1+\epsilon_{\sigma})\), which are the same as in Ref.[37]. We have chosen \(\epsilon_{\omega}=0.20\), \(\epsilon_{\sigma}=-0.13\) to make our EOS consistent with the constraints from astrophysical observations and experimental nuclear physics described later. We have introduced the effective widths \(L_{\omega}\), \(L_{\sigma}\), and \(L_{\rho}\) as \(L_{\omega}=0.75\) fm, \(L_{\sigma}=1.35\) fm, and \(L_{\rho}=1.30\) fm for each meson exchange term. 
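One non-linear meson-exchange channel of Eq. (7) can be sketched as follows. The inverse screening lengths passed in below are placeholder values of ours, since the meson masses \(\mu_{\omega}\), \(\mu_{\sigma}\), \(\mu_{\rho}\) are not quoted in the text:

```python
import math

def yukawa_term(g, mu, eps, r_list):
    """One meson-exchange channel of Eq. (7):
    (g^2 C / 4 pi) * (sum_j exp(-mu * r_ij) / r_ij)^(1 + eps),
    with C = 1/(1 + eps) as in the text (g dimensionless, mu in fm^-1, r in fm)."""
    C = 1.0 / (1.0 + eps)
    s = sum(math.exp(-mu * r) / r for r in r_list)
    return (g * g * C / (4.0 * math.pi)) * s ** (1.0 + eps)

# For eps = 0 this reduces to an ordinary Yukawa sum; the small non-linearity
# (eps_omega = 0.20, eps_sigma = -0.13) makes each channel density dependent.
# The inverse ranges 3.9 and 2.5 fm^-1 below are illustrative placeholders.
v_pair = yukawa_term(5.46, 3.9, 0.20, [1.0]) - yukawa_term(3.23, 2.5, -0.13, [1.0])
```

Because the sum over neighbors is raised to the power \(1+\epsilon\), a positive \(\epsilon_{\omega}\) strengthens the repulsion as the local density grows, which is how the stiffness of the EOS is tuned.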
Also, these values are almost the same as in the previous work: \(L_{\omega}=0.70\) fm, and \(L_{\sigma}=L_{\rho}=1.30\) fm [37]. The values \(L_{\omega}\), \(L_{\sigma}\), and \(L_{\rho}\) are set to larger values than \(L_{q}\) so that the quark-meson interactions replicate the nucleon-meson interactions. Instead of the anti-symmetrization of the wave function, we introduce an effective Pauli potential, \(V_{\rm Pauli}\). It acts as a repulsive force between quarks with the same intrinsic degrees of freedom such as flavor, color, and spin, collectively denoted as \(\chi_{i}\). According to our previous studies, we employ the following form of the Pauli potential, \[V_{\rm Pauli}=\sum_{i<j}^{N}\frac{C_{p}}{(q_{0}p_{0})^{3}} \exp\left[-\frac{({\bf R}_{i}-{\bf R}_{j})^{2}}{2q_{0}^{2}}\right] \tag{8}\] \[\times\exp\left[-\frac{({\bf P}_{i}-{\bf P}_{j})^{2}}{2p_{0}^{2}} \right]\delta_{\chi_{i},\chi_{j}}.\] The parameters are set as \(q_{0}=2.46\) fm, \(p_{0}=240\) MeV, and \(C_{p}=131\) MeV, to reproduce the relativistic kinetic energy of free fermions at zero temperature. In this paper, we do not take into account the spurious zero-point energy, \(T_{\rm spur}\), which comes from the center-of-mass motion of clusters as shown in Ref.[37]. These effects remain as future problems. The time evolution of the system is given by the Euler-Lagrange equations for \(\{{\bf R}_{i}\), \({\bf P}_{i}\), \(\alpha_{i}\), \(\beta_{i}\), \(\theta_{i}\), \(\varphi_{i}\}\) with the classical Lagrangian [36]. 
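Before turning to the explicit equations of motion, the contribution of one quark pair to the Pauli potential of Eq. (8) can be sketched as a Gaussian in phase space (a toy illustration; the function name is ours and the prefactor is taken literally from Eq. (8)):

```python
import math

Q0 = 2.46   # fm,  phase-space width in position (Sec. II)
P0 = 240.0  # MeV, phase-space width in momentum (Sec. II)
CP = 131.0  # MeV, overall strength (Sec. II)

def v_pauli_pair(dR, dP, same_state):
    """Contribution of one quark pair to Eq. (8); dR = |R_i - R_j| in fm,
    dP = |P_i - P_j| in MeV.  same_state encodes the Kronecker delta on the
    intrinsic state (flavor, color, spin)."""
    if not same_state:
        return 0.0
    return (CP / (Q0 * P0) ** 3) * math.exp(-dR ** 2 / (2.0 * Q0 ** 2)) \
                                 * math.exp(-dP ** 2 / (2.0 * P0 ** 2))

# repulsion is maximal for identical phase-space points and decays smoothly:
assert v_pauli_pair(1.0, 0.0, True) < v_pauli_pair(0.0, 0.0, True)
```

The Kronecker delta is what makes this an effective Pauli principle: only quarks that would otherwise occupy the same intrinsic state feel the phase-space repulsion.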
Accordingly, the explicit form of the equations of motion reads: \[\dot{\bf R}_{i} = \frac{\partial H}{\partial{\bf P}_{i}},\ \ \ \ \dot{\bf P}_{i}=-\frac{\partial H}{\partial{\bf R}_{i}}, \tag{9}\] \[\dot{\beta}_{i} = -\frac{1}{2\hbar\sin 2\alpha_{i}}\frac{\partial H}{\partial\alpha_{i}},\] (10) \[\dot{\theta}_{i} = \frac{1}{\hbar\sin 2\theta_{i}}\frac{\partial H}{\partial\varphi_{i}},\] (11) \[\dot{\alpha}_{i} = \frac{1}{2\hbar\sin 2\alpha_{i}}\frac{\partial H}{\partial\beta_{i}}-\frac{\cos 2\alpha_{i}}{2\hbar\sin 2\alpha_{i}} \frac{\partial H}{\partial\varphi_{i}},\] (12) \[\dot{\varphi}_{i} = -\frac{1}{\hbar\sin 2\theta_{i}}\frac{\partial H}{\partial\theta_{i}}+ \frac{\cos 2\alpha_{i}}{2\hbar\sin 2\alpha_{i}}\frac{\partial H}{\partial\alpha_{i}}. \tag{13}\] In the calculations, all quarks are initially distributed randomly, with zero momenta, in a box with the periodic boundary condition. The ground state (matter at zero temperature) is obtained by frictional cooling [36; 37]. For this purpose, we solve a damping equation of motion instead of Eq.(9), \[\dot{\bf R}_{i} = \frac{\partial H}{\partial{\bf P}_{i}}+\mu_{R}\frac{\partial H}{ \partial{\bf R}_{i}}, \tag{14}\] \[\dot{\bf P}_{i} = -\frac{\partial H}{\partial{\bf R}_{i}}+\mu_{P}\frac{\partial H}{ \partial{\bf P}_{i}}, \tag{15}\] where \(\mu_{R}\) and \(\mu_{P}\) are damping coefficients set as \(\mu_{R}=-0.00002\) and \(\mu_{P}=-0.02\). Based on the above formulation, we search for parameter sets consistent with the constraints from astronomical observations and nuclear experiments. Once the parameters are given, the energies per nucleon for symmetric nuclear matter and neutron matter are obtained. Then, we fit the ground-state energies by regressions. Using the regressions, we calculate the EOS of charge-neutral matter in beta-equilibrium including the contribution of electrons. 
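The frictional cooling of Eqs. (14)-(15) necessarily lowers the energy, since \(dH/dt=\mu_{R}|\partial H/\partial{\bf R}|^{2}+\mu_{P}|\partial H/\partial{\bf P}|^{2}\leq 0\) for negative damping coefficients. A toy one-dimensional illustration with a harmonic Hamiltonian (the damping values here are exaggerated stand-ins of ours, not the paper's):

```python
def cool(q, p, mu_r=-0.05, mu_p=-0.05, dt=1e-3, steps=20000):
    """Frictional cooling, Eqs. (14)-(15), for the toy Hamiltonian
    H = p^2/2 + q^2/2 (so dH/dq = q and dH/dp = p), with simple Euler steps."""
    for _ in range(steps):
        dq = p + mu_r * q   # Eq. (14):  dH/dp + mu_R * dH/dq
        dp = -q + mu_p * p  # Eq. (15): -dH/dq + mu_P * dH/dp
        q, p = q + dt * dq, p + dt * dp
    return q, p

q, p = cool(1.0, 0.0)
energy = 0.5 * (q * q + p * p)  # decays from 0.5 toward the minimum at 0
```

In the full simulation the same damping drives the randomly initialized quarks into the zero-temperature configuration at fixed box density.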
Thanks to GPU parallel computing, it takes just one day to check one parameter set. In molecular dynamics, the heaviest numerical cost lies in the particle-correlation part. There are some well-known techniques to speed up calculations in molecular dynamics, such as the tree method and the Fast Multipole Method (FMM) [42], but, to our knowledge, no such method exists for a confining potential, which does not decrease at long distance; hence we do not use such coarse-graining techniques. ## III Results ### Equation of State By Color Molecular Dynamics Let us first show the energy components per nucleon for \(ud\) and \(udd\) matter in FIG.1. In symmetric nuclear matter, \(u\) and \(d\) quarks are equally present, hence we call it \(ud\) matter. In the same way, neutron matter is called \(udd\) matter in this paper. The total energy consists of the potential energies of the quark-meson couplings \(q\)-\(\omega\), \(q\)-\(\sigma\), and \(q\)-\(\rho\), the confining potential, the one-gluon exchange potential, the Pauli potential, and the kinetic energy, expressed as \(E_{\omega}\), \(E_{\sigma}\), \(E_{\rho}\), \(E_{\rm conf}\), \(E_{\rm OGE}\), \(E_{\rm Pauli}\), and \(E_{\rm kin}\), respectively. As the density increases, the main components of the energy change to the \(q\)-\(\omega\) and \(q\)-\(\sigma\) couplings, which depend non-linearly on the density, while the color-dependent potentials are rather moderate since they have a roughly linear dependence on the density. The total energies of matter are shown in FIG.2. Note that these quantities are obtained based on the quark model, namely CMD, only. 
In this figure, the regression curves are obtained assuming a sigmoid function with regularization terms as \[(E/A)_{ud}(x)=3\Big{(}a_{1}x+a_{2}x^{2}\] \[+\frac{a_{3}}{1+\exp(-a_{4}x+a_{5})}-\frac{a_{3}}{1+\exp(a_{5})} \Big{)}, \tag{16}\] \[(E/A)_{udd}(x)=3\Big{(}b_{1}x^{2}+b_{2}x^{4}\] \[+\frac{b_{3}}{1+\exp(-b_{4}x+b_{5})}-\frac{b_{3}}{1+\exp(b_{5})} \Big{)}, \tag{17}\] where \(x\) is the normalized baryon density, \(x=n/\bar{n}\). Note that \(\bar{n}\) is not necessarily the saturation density, but just a normalization factor set as \(\bar{n}=0.16\) fm\({}^{-3}\). The factor 3 at the beginning represents "three quarks", i.e. per nucleon. In other words, by removing this factor, we can obtain the energy per quark, \(E/Q\). The first two terms in both equations are introduced for the regularization of the regression. The last terms are introduced to make the regression curves pass through the origin. The obtained parameters are shown in Table 1. The resultant fitting curves of the energy per baryon for \(ud\) matter and \(udd\) matter are also shown in Fig.2. From these curves, we can obtain the characteristic values of the EOS around the saturation density \(n_{0}\): the saturation energy \(S_{0}\), the symmetry energy \(J\), the density gradient of the neutron-matter energy \(L\), and the curvature of the symmetric-matter energy \(K\). They are summarized in Tab.2. On the other hand, when we focus on high-density matter inside NSs, the charge-neutral condition and the beta-equilibrium are considered to be realized. However, even hadron matter is described by quark dynamics in this model. Figure 1: Energy components per nucleon for \(ud\) matter (upper panel) and \(udd\) matter (lower panel). Each dot represents the numerical results by CMD. The energies, which come from the quark-meson couplings of \(\omega\), \(\sigma\), \(\rho\), are shown as \(E_{\omega}\), \(E_{\sigma}\), \(E_{\rho}\). 
The captions, \(E_{\rm kin}\) and \(E_{\rm Pauli}\), are the quark kinetic energy and the quark Pauli energy. The energies originating from the confinement and the one-gluon exchange are expressed as \(E_{\rm conf}\) and \(E_{\rm OGE}\). Hence, the charge-neutral condition is evaluated as \[0=\sum_{i=u,d,e}Q_{i}n_{i}, \tag{18}\] where \(Q_{i}\), \(n_{i}\) are the particle charge and density. We do not take into account muons for simplicity. In this study, we also neglect anti-particles, since we obtain the ground state at zero temperature by conducting the frictional cooling shown in Eq.(14) and Eq.(15). In Fig.2, we show both cases of \(ud\) matter with/without the contribution of the Coulomb energy. The former is obtained under the charge-neutral condition, for which the role of electrons is necessarily important, and is shown as the dashed curve in Fig.2; the latter does not include electrons. In addition, the beta-equilibrium must also be fulfilled between \(u,d\) quarks and electrons in any phase: \[\mu_{u}+\mu_{e}=\mu_{d}, \tag{19}\] where \(\mu_{u}\), \(\mu_{d}\), and \(\mu_{e}\) are the chemical potentials for \(u\), \(d\) quarks, and electrons, respectively. Once we obtain the \((E/A)_{ud}\) and \((E/A)_{udd}\) curves, the energy per nucleon \(E/A\) under the charge-neutral condition and beta-equilibrium is calculated as a mixture of \(ud\) matter and \(udd\) matter. \[E/A\ =\ (1-\beta^{2})(E/A)_{ud}+\beta^{2}(E/A)_{udd}, \tag{20}\] where \(\beta\) is defined to denote the asymmetry of the \(u,d\) quark densities as \[\beta\ =\ 3\frac{n_{d}-n_{u}}{n_{d}+n_{u}}. \tag{21}\] In our calculation, we have to find the optimal value of \(\beta\) consistent with both conditions at each density. This result is also shown as the thick line in FIG.2, including the energy contribution from electrons. 
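Equations (20)-(21) amount to a simple quadratic interpolation between the two reference compositions; a sketch in plain Python (function names are ours):

```python
def asymmetry(n_u, n_d):
    """Eq. (21): beta = 3 (n_d - n_u) / (n_d + n_u).
    beta = 0 for symmetric (ud) matter, beta = 1 for pure neutron (udd)
    matter, where n_d = 2 n_u."""
    return 3.0 * (n_d - n_u) / (n_d + n_u)

def energy_per_nucleon(beta, e_ud, e_udd):
    """Eq. (20): quadratic interpolation between the two reference curves."""
    return (1.0 - beta ** 2) * e_ud + beta ** 2 * e_udd

# limits: symmetric matter recovers (E/A)_ud, neutron matter recovers (E/A)_udd
print(energy_per_nucleon(asymmetry(1.0, 1.0), -16.0, 15.0))  # -> -16.0
print(energy_per_nucleon(asymmetry(1.0, 2.0), -16.0, 15.0))  # -> 15.0
```

At each density the physical \(\beta\) is the one that simultaneously satisfies Eq. (18) and Eq. (19); the quadratic \(\beta\) dependence is what defines the symmetry energy in this parametrization.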
In the upper panel of FIG.3, we show the relationship between pressure and baryon density corresponding to \(E/A\) under the charge-neutral and beta-equilibrium conditions shown in Eq.(20). However, below the subnuclear density \(n_{B}<0.06\) fm\({}^{-3}\), we use the EOS by Baym et al.(BPS) [43]. There is no density jump, which would be a characteristic of a first-order phase transition. This suggests that the deconfinement of quarks appears as a crossover. The EOSs by Akmal et al.(APR) [44], Kojo et al.(QHC21A-D) [18] and S. Typel et al.(DD2) [26], available on the CompOSE archive [45], are included in the figure for comparison. The EOSs by Akmal et al.(APR) and S. Typel et al.(DD2) have often been adopted as benchmarks, but the EOSs by Kojo et al. are also chosen for comparison in this paper since they are the state-of-the-art crossover EOSs. The corresponding sound speeds normalized by the light speed are shown in the lower panel of FIG.3. The band (painted region) shows the constraint from "PSR+GW+J0030+J0740" traced from Ref. [18], which was originally deduced by Legred et al.[46]. Since the EOSs except for APR are designed to be within this range, it is obvious that they satisfy the constraint. It can be seen that the sound velocity of our result is monotonically increasing with density, but not for the QHC A-D models. This difference is due to the quark deconfinement occurring over a wide density range in our crossover EOS, compared with the QHC A-D models. Our results suggest that pure quark matter does not appear even in the core of neutron stars with the maximum mass. However, we confirm that the peak of the sound velocity appears in our CMD calculations at a high-density region, which is above the maximum density inside stable NSs: \(n_{B}=1.08\) fm\({}^{-3}\). 
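The sound speed in the lower panel of FIG.3 follows from the EOS as \(c_{s}^{2}=dP/d\varepsilon\) (with \(c=1\)); a finite-difference sketch over a tabulated EOS (an illustrative helper of ours, not the paper's code):

```python
def sound_speed_sq(eps, pres):
    """c_s^2 = dP/d(energy density), estimated by central differences on
    tabulated (energy density, pressure) points.  Returns values for the
    interior grid points, aligned with eps[1:-1]."""
    cs2 = []
    for i in range(1, len(eps) - 1):
        cs2.append((pres[i + 1] - pres[i - 1]) / (eps[i + 1] - eps[i - 1]))
    return cs2

# sanity check on an analytic EOS P = eps/3 (ultra-relativistic gas): c_s^2 = 1/3
eps = [float(i) for i in range(1, 11)]
pres = [e / 3.0 for e in eps]
print(sound_speed_sq(eps, pres)[0])  # -> 0.333...
```

Causality requires \(c_{s}^{2}\leq 1\) everywhere inside a stable star, which is the check applied to the CMD EOS in the text.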
Additionally, our model does not violate the causality law with respect to the speed of sound in the stable interior of neutron stars, but it does at higher density, \(n_{B}=1.42\) fm\({}^{-3}\). When we extend our code to be fully relativistic in the future, it might no longer violate the causality law at such high densities. \begin{table} \begin{tabular}{|c c c c c|} \hline \(n_{0}\) & \(J\) & \(L\) & \(K\) & \(S_{0}\) \\ \([\)fm\({}^{-3}]\) & [MeV] & [MeV] & [MeV] & [MeV] \\ \hline \hline 0.167 & 31.0 & 74.2 & 260 & -15.8 \\ \hline \end{tabular} \end{table} Table 2: The characteristic physical values around the saturation density obtained by CMD calculations. \begin{table} \begin{tabular}{|c c c c c|} \hline \(a_{1}\) & \(a_{2}\) & \(a_{3}\) & \(a_{4}\) & \(a_{5}\) \\ \(b_{1}\) & \(b_{2}\) & \(b_{3}\) & \(b_{4}\) & \(b_{5}\) \\ \([\)MeV\(]\) & [MeV] & [MeV] & & \\ \hline \hline 92.13 & 4.52325 & 1636.17 & 0.202968 & 0.263397 \\ 2.70704 & -0.00203469 & 38.0054 & 0.728765 & 2.83639 \\ \hline \end{tabular} \end{table} Table 1: The optimized parameters for Eq.(16) and Eq.(17). Figure 2: Density dependence of energy per baryon \(E/A\) for \(ud\) matter and \(udd\) matter. Each dot represents the numerical results by CMD. Thin dotted lines are their regression curves. The thick line, labeled "\(\beta\)-eq.+e", is the \(E/A\) obtained under the charge-neutral and beta-equilibrium conditions including the contribution of electrons. The dashed curve represents \(E/A\) for \(ud\) matter with electrons under the charge-neutral condition. The conditions for confinement/deconfinement are set as follows: If three quarks are within a certain distance \(d_{\rm cluster}\) and the total color of the three quarks is white within an accuracy \(\varepsilon\), these quarks are considered confined. 
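The color-singlet ("white") part of this criterion, formalized in the equation that follows, can be checked directly with hardcoded Gell-Mann matrices; a plain-Python sketch (function names are ours):

```python
import math

S3 = 1.0 / math.sqrt(3.0)
GELL_MANN = [  # lambda^1 ... lambda^8 as 3x3 nested lists
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[S3, 0, 0], [0, S3, 0], [0, 0, -2 * S3]],
]

def expval(chi, lam):
    """<chi| lambda^a |chi> for a 3-component color state chi."""
    return sum(chi[m].conjugate() * lam[m][n] * chi[n]
               for m in range(3) for n in range(3)).real

def whiteness(colors):
    """sum_a [ sum_i <chi_i|lambda^a|chi_i> ]^2 -- zero when the cluster
    carries no net color charge, the second condition of the criterion."""
    return sum(sum(expval(c, lam) for c in colors) ** 2 for lam in GELL_MANN)

red, green, blue = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(whiteness([red, green, blue]))   # -> 0.0 (white: a confined candidate)
print(whiteness([red, red, red]) > 1)  # -> True (not white)
```

An equal mixture of the three basis colors has vanishing expectation value for every generator (the traces of the Gell-Mann matrices are zero), so such a triplet passes the whiteness test exactly.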
It is formulated as \[\left\{\begin{array}{l}|{\bf R}_{i}-{\bf R}_{j}|<d_{\rm cluster}\\ \sum_{a=1}^{8}\left[\sum_{i=1}^{3}\langle\chi_{i}|\lambda^{a}|\chi_{i}\rangle \right]^{2}<\varepsilon.\end{array}\right.\] The values of the criteria are set as \(d_{\rm cluster}=0.33\) fm and \(\varepsilon=0.01\) in this study. The perspectives of \(ud\) matter and \(udd\) matter depending on the density are shown in FIG.4 and FIG.5. In these figures, the colors of the quarks are represented using the discontinuous colors red, blue, and green (RGB) for visibility; however, in the actual calculations, the quarks are in a mixed state of RGB reflecting the internal degrees of freedom in Eq. (2). Namely, the color depicted for each particle is the one among RGB that is closest to its mixed color state. Particles with white colors indicate quarks in the confined state, while red, blue, and green ones are in the deconfined quark matter. In these figures, it can be seen that the fraction of deconfined quarks gradually increases with density. This behavior is similar to the percolative depiction [47]. The fractions of confined quarks to total quarks in \(ud\) and \(udd\) matter are shown in FIG.6. For both matters, deconfined quarks appear around 0.6 fm\({}^{-3}\). As we will show later, the maximum density in stable neutron stars is 1.08 fm\({}^{-3}\); hence our calculations suggest that the interior of neutron stars is mostly filled with confined quark matter, i.e., baryons. The fraction of deconfined quark matter is roughly estimated to be less than 0.3 in neutron stars. We next compare the resultant mass-radius relations with the observational constraints such as GW170817, and show them in FIG. 7. First, the gravitational wave observation, GW170817, itself gave us valuable information on the tidal deformability of the neutron stars, which could be a constraint for the radii [2]. The observation suggests that the radii should be less than 13.6 km for neutron stars with a canonical mass, 1.4 \(M_{\odot}\). 
The dimensionless tidal deformability from our CMD calculation is \(\Lambda_{1.4M_{\odot}}=458\), and the corresponding radius is less than the constraint by GW170817. The electromagnetic (EM) observations accompanying GW170817 have also provided complementary information: the merger was suggested not to be a prompt collapse to a black hole (BH) because of the large quantity of ejecta and its high electron fraction. Given that the threshold for prompt collapse depends on the NS compactness, Bauswein et al. placed a lower limit on the NS radius of \(R_{1.6M_{\odot}}\)=10.3-10.7 km for the mass \(M=1.6M_{\odot}\)[48]. Moreover, comparing numerical relativistic simulations of a supermassive neutron star remnant with the measured binary mass \(M_{\rm tot}=2.74^{+0.04}_{-0.01}M_{\odot}\)[2], Shibata et al. have given an upper limit on the NS mass of \(M<2.3M_{\odot}\)[49]. For more details on EM counterparts, see also the review by Metzger [50] and the references therein. Although not included in FIG. 7 to avoid complications, it should be mentioned that Margalit and Metzger gave a stricter constraint, \(M<2.17M_{\odot}\)[51]. On the other hand, Keck-telescope spectrophotometry and imaging of the companion of the "black widow" pulsar PSR J0952-0607 suggests \(2.35\pm 0.17M_{\odot}\)[52]. Therefore, careful consideration is still needed regarding the maximum mass of neutron stars. With this background in mind, this paper adopts the more conservative constraint by Shibata et al. [49] rather than the one by Margalit and Metzger [51]. Figure 4: The perspectives for \(ud\) matter depending on density. Each color corresponds to the color’s internal degree of freedom for each quark: the white balls represent the quarks in the baryon state, while red, blue, and green ones are in the deconfined quark matter. Figure 5: Same as Fig. 4, but for \(udd\) matter. Figure 6: Fractions of quark number within the baryon state against total quark number for \(ud\) matter and \(udd\) matter, shown in Fig. 4 and Fig. 5. The painted area indicates the region below the maximum density 1.08 fm\({}^{-3}\), i.e., at the density allowed in the interior of stable NSs. ## IV Discussion In this study, CMD calculations were performed to obtain the EOS, which is consistent with the constraints from the astrophysical observations. In our current framework, the deconfinement of quarks is a crossover, and the insides of NSs are occupied mainly by confined hadron matter. At low densities, our results deviate slightly from the values suggested by various nuclear experiments: our model just barely satisfies the range of \(K\) by Danielewicz et al.[28] but not the others [29; 30; 31; 32], while it satisfies the fiducial values of \(n_{0}\), \(J\), and \(L\)[19; 20]. However, we still miss a lot of physics: we adopt simple Newtonian interactions without strangeness, quark spin-spin interactions (color-magnetic interactions), etc. One can easily anticipate that the EOS would be softened when we further take into account strangeness, but on the other hand it is unclear how relativistic interactions and color-magnetic interactions would change the EOS. Note that our model is based on the constituent quark models, and thus we do not take into account the chiral condensates. It is a future challenge to incorporate such physics into our CMD code. ###### Acknowledgements. We are grateful to N. Hoshi, Y. Mukobara, H. Tanihisa, S. Ozaki, C. J. Xia, P. Gubler, K. Kyutoku, T. Muto, A. Park, S-H. Lee, M. Oka, and T. Hatsuda for fruitful discussions. This work was supported by JSPS KAKENHI Grant Numbers 20H04742, 20K03951. The calculations were performed by the supercomputing system HPE SGI 8600 at the Japan Atomic Energy Agency.
2309.04316
Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models
Natural-language dialog is key for intuitive human-robot interaction. It can be used not only to express humans' intents, but also to communicate instructions for improvement if a robot does not understand a command correctly. Of great importance is to endow robots with the ability to learn from such interaction experience in an incremental way to allow them to improve their behaviors or avoid mistakes in the future. In this paper, we propose a system to achieve incremental learning of complex behavior from natural interaction, and demonstrate its implementation on a humanoid robot. Building on recent advances, we present a system that deploys Large Language Models (LLMs) for high-level orchestration of the robot's behavior, based on the idea of enabling the LLM to generate Python statements in an interactive console to invoke both robot perception and action. The interaction loop is closed by feeding back human instructions, environment observations, and execution results to the LLM, thus informing the generation of the next statement. Specifically, we introduce incremental prompt learning, which enables the system to interactively learn from its mistakes. For that purpose, the LLM can call another LLM responsible for code-level improvements of the current interaction based on human feedback. The improved interaction is then saved in the robot's memory, and thus retrieved on similar requests. We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6 and evaluate our methods both quantitatively (in simulation) and qualitatively (in simulation and real-world) by demonstrating generalized incrementally-learned knowledge.
Leonard Bärmann, Rainer Kartmann, Fabian Peller-Konrad, Jan Niehues, Alex Waibel, Tamim Asfour
2023-09-08T13:29:05Z
http://arxiv.org/abs/2309.04316v3
# Incremental Learning of Humanoid Robot Behavior from Natural Interaction and Large Language Models ###### Abstract Natural-language dialog is key for intuitive human-robot interaction. It can be used not only to express humans' intents, but also to communicate instructions for improvement if a robot does not understand a command correctly. Of great importance is to endow robots with the ability to learn from such interaction experience in an incremental way to allow them to improve their behaviors or avoid mistakes in the future. In this paper, we propose a system to achieve incremental learning of complex behavior from natural interaction, and demonstrate its implementation on a humanoid robot. Building on recent advances, we present a system that deploys Large Language Models (LLMs) for high-level orchestration of the robot's behavior, based on the idea of enabling the LLM to generate Python statements in an interactive console to invoke both robot perception and action. The interaction loop is closed by feeding back human instructions, environment observations, and execution results to the LLM, thus informing the generation of the next statement. Specifically, we introduce incremental prompt learning, which enables the system to interactively learn from its mistakes. For that purpose, the LLM can call another LLM responsible for code-level improvements of the current interaction based on human feedback. The improved interaction is then saved in the robot's memory, and thus retrieved on similar requests. We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6 and evaluate our methods both quantitatively (in simulation) and qualitatively (in simulation and real-world) by demonstrating generalized incrementally-learned knowledge.
## I Introduction
For achieving truly intuitive human-robot interaction (HRI), a natural language interface is key for a humanoid robot. Via language, humans can easily communicate tasks and goals to the robot. 
However, the robot's interpretation of such commands, and thus the resulting execution, might be sub-optimal, incomplete or wrong. In such cases, it is desirable for the human to give further instructions to correct or improve the behavior. In particular, such cases should be memorized to incrementally learn from them and thus avoid the same mistake in the future. For instance, consider an interaction as depicted in Fig. 1. First, the user gives an instruction to the robot (1). The robot executes some (potentially incomplete or wrong) actions (2). The user observes the result and gives instructions for improvement (3), whereupon the robot performs corrective actions (4). If the desired goal is achieved, the user can reconfirm the correction (5), which leads to the robot updating its memory appropriately (6), thus incrementally learning new behavior based on language instructions. In this paper, we present a system to achieve such behavior and describe its implementation on the humanoid robot ARMAR-6 [1]. We build on the capabilities of Large Language Models (LLMs) [2, 3, 4] emerging solely from massive-scale next token prediction pretraining, and aim to transfer their success to HRI. The goal is to utilize the rich world knowledge contained in these LLMs to allow for embodied natural-language dialog, thus enhancing the capabilities of the LLM by integrating robot perception and action. In the cognitive architecture of our humanoid robot [5], this means the LLM will be in charge of the high-level planning and decision-making. Recent works like SayCan [6] and Code as Policies (CaP) [7] already demonstrate the usefulness of applying LLMs to orchestrate robot abilities, enabling high-level task understanding, planning and generalization. Going a step further, inner monologue [8] feeds back execution results and observations into the LLM, thus involving the LLM in a closed-loop interaction. 
Inspired by these works, we propose to utilize the code-writing capabilities of an LLM to directly integrate it into closed-loop orchestration of a humanoid robot. This is achieved by simulating an interactive (Python) console in the prompt, and letting the LLM produce the next statement given the previous execution history, including results returned or exceptions thrown by previous function calls. Thus, the LLM can dynamically respond to unexpected situations such as execution errors or wrong assumptions, while still leveraging the power of code-based interaction such as storing results in intermediate variables or defining new functions. For utilizing the few- and zero-shot capabilities of LLMs, it is crucial to design a (set of) prompts to properly bias the LLM towards the desired output. All of the above works use a predefined, manually written set of prompts tuned for their respective use case. In contrast, we propose a novel, self-extending prompting method to allow incremental learning of new behaviors. To this end, we move away from a single, predefined prompt, and instead dynamically construct it based on a set of interaction examples, populated from prior knowledge and previously learned behavior. Given a user instruction, we rank all such interaction examples by semantic similarity to the input, and select the top-\(k\) entries to construct the actual prompt to the LLM. Crucially, the robot's prior knowledge contains specific examples involving the user complaining about mistakes and correcting the robot, or instructing it on how to improve its behavior. When the system fails to correctly execute a task and the user gives such instructions, the LLM is thus biased to invoke code that inspects the current execution history and forwards it to another, few-shot-prompted LLM. Fig. 1: ARMAR-6 incrementally learns behavior from natural interaction. Demonstration video at [https://youtu.be/y5O2mRGtsLM](https://youtu.be/y5O2mRGtsLM) 
This LLM spots the mistakes and produces an improved interaction using chain-of-thought (CoT) prompting [9]. Finally, this will be added to the interaction examples, thus enabling the system to perform better the next time a similar command is called. We first evaluate our system quantitatively on the scenarios defined in SayCan [6] to show the effectiveness of our proposed prompting method. Furthermore, we perform experiments both in simulation as well as on a humanoid robot, demonstrating the effect of our incremental prompt learning strategy. ## II Related Work We start with reviewing works on understanding and learning from natural language in robotics. Subsequently, we present works using LLMs for high-level orchestration of robot abilities, underlining the novelties in our method. Finally, we focus on dynamic creation of prompts for LLMs, thus comparing our incremental learning strategy to the related work. ### _Understanding and Learning from Natural Language_ Understanding and performing tasks specified in natural language has been a long-standing challenge in robotics [10]. A main problem is _grounding_ the words of natural language sentences in the sensorimotor perception and action capabilities of a robot, which is known as _signal-to-symbol gap_[11]. Many works have focused on the grounding of expressions referring to objects, places and robot actions based on graphical models [12, 13], language generation [14], or spatial relations [15], especially for ambiguity resolution [16, 17]. Pramanick et al. [18] focus on resolving task dependencies to generate execution plans from complex instructions. However, in these works the robot does not explicitly learn from language-based interactions. In contrast, Walter et al. [19] enrich the robot's semantic environment map from language, and Bao et al. [20] syntactically parse daily human instructions to learn attributes of new objects. 
In [21], the robot asks for a demonstration if its current understanding of a spatial relation is insufficient to perform a given instruction. Other works go further by learning on the task level. Mohan et al. [22] learn symbolic task representations from language interaction using Explanation-based learning. Nicolescu et al. [23] learn executable task representations encoding sequential, non-ordering or alternative paths of execution from verbal instructions for interactive teaching by demonstration. Weigelt et al. [24] consider the general problem of programming new functions on code level via natural language. While our goal is similar to these works, we leverage LLMs for task-level reasoning and learning. ### _Orchestrating Robot Behavior with LLMs_ Recently, many works extend the capabilities of LLMs by giving them access to external models, tools and APIs [25, 26, 27, 28]. Tool usage can also be combined with reasoning techniques such as CoT prompting [9] to significantly improve planning [29]. In particular, orchestrating robot behavior and thus interacting with the physical environment can be seen as an embodied special case of LLM tool usage. Huang et al. [30] initially proposed the idea to utilize world knowledge from LLM pretraining to map high-level tasks to executable mid-level action sequences. SayCan [6] fuses LLM output probabilities with pretrained affordance functions to choose a feasible plan given a natural language command. Socratic Models [31] combine visual and textual LLMs to generate instructions in the form of API calls, which are then executed by a pretrained language-conditioned robot policy. Both Code as Policies (CaP) [7] and ProgPrompt [32] demonstrate the usefulness of a code-generating LLM for robot orchestration, as they convert user commands to (optionally, recursively defined) policy code grounded in predefined atomic API calls. 
While the generated policies can react to the robot's perception, these approaches do not directly involve the LLM in the online execution of a multi-step task after the policy has been generated. In contrast, inner monologue [8] feeds back execution results and observations into the LLM, but does not rely on code-writing, thus missing its combinatorial power. Recent technical reports [33, 34] provide guidance on utilizing ChatGPT [4] for robot orchestration. While TidyBot [35] uses GPT-3 [2] in a similar way to generate high-level plans for tidying up a cluttered real-world environment, the authors focus on personalization by summarizing and thereby generalizing individual object placement rules. With our proposed emulated Python console prompting, we differ from these existing works by _(i)_ formatting and interpreting all interaction with the LLM as Python code, in contrast to [6, 8], _(ii)_ closing the interaction loop by enabling the LLM to reason about each perception and action outcome, in contrast to [32, 34, 31, 6], _(iii)_ allowing the LLM to decide itself when and which perception primitives to invoke, instead of providing a predefined list of observations (usually a list of objects in the scene) as part of the prompt as in [31, 8, 32, 7, 35], and _(iv)_ simplifying the task for the LLM by allowing it to generate one statement at a time, in contrast to [7, 32, 33]. ### _Dynamic Prompt Creation_ When prompting an LLM to perform a task, quality and relevance of the provided few-shot examples are key to the performance of the system. Thus, several works propose to dynamically select these examples (e. g., from a larger training set) for constructing a useful prompt. Liu et al. [36] improve performance in a downstream question-answering (QA) task by selecting relevant few-shot samples via \(k\)-Nearest-Neighbor search in a latent space of pretrained sentence embeddings [37] representing the questions. Ye et al. 
[38] select not only the most similar, but also a diverse set of samples. Luo et al. [39] show that this dynamic prompt construction is also applicable for instruction-fine-tuned language models (LMs) [40] and in combination with CoT prompting. Similar to that approach, we apply vector embeddings of human utterances to find the top-\(k\) examples which are most similar to the current situation. Other works go further by proposing to update the database of examples by user interaction. In [41], GPT-3 is tasked with solving lexical and semantic natural language processing questions few-shot by generating both an understanding of the question as well as the answer. A user can then correct an erroneous understanding to improve the answer, and such correction is stored in a lookup table for later retrieval on similar queries. Similarly, user feedback can be used to improve open-ended QA by generating an entailment chain along with the answer, and allowing the user to then correct false model beliefs in that entailment chain [42]. Corrections are stored in memory and later retrieved based on their distance to a novel question. In our work, we also propose to construct a database based on user feedback. However, we go even further by _(i)_ letting the LM decide itself when such feedback is relevant (by invoking a certain function), _(ii)_ generating new examples of improved behavior from the human's feedback and thus _(iii)_ treating prior knowledge and instructed behavior in a uniform way by treating both as interaction examples in the robot's memory. The authors of [33] mention that ChatGPT can be used to change code based on high-level user feedback. However, they do not combine this with incremental learning to persist the improved behavior. ## III Approach In this section, we more precisely formulate the considered problem and explain our approach to intuitive HRI and incremental learning of humanoid robot behavior using LLMs. 
### _Problem Formulation and Concept_ In this work, we consider the problem of enabling a robot to interact with a human in natural language as depicted in Fig. 2. First, the human gives a natural language instruction to the robot. Then, the robot interprets the instruction and performs a sequence of actions. However, the performed actions might be sub-optimal, incomplete or wrong. In that case, the human instructs the robot how to improve or correct its behavior. The robot executes further actions accordingly, and if the human is satisfied with the result, they can confirm that the robot should memorize this behavior. Finally, the robot must incrementally learn from the corrective instructions and avoid similar mistakes in the future. We formulate this problem as follows. Consider a robot with a set of functions \(\mathcal{F}=\{F_{1},\dots,F_{n}\}\). A function can be invoked to query the robot's perception or execute certain actions. Further, let \(\mathcal{M}\) denote knowledge of interactions and behaviors as part of the episodic memory of the robot which is initialized by prior knowledge. Based on the initial instruction \(I_{0}\) and \(\mathcal{M}\), the robot must perform a sequence of function invocations \((f_{1},\dots,f_{m})\), where each invocation \(f_{i}\) consists of the invoked function \(F_{i}\) with its corresponding parameters. Executing these invocations yields a sequence of results \((r_{1},\dots,r_{m})\). Overall, performing the task indicated by \(I_{0}\) results in an _interaction history_\(\mathcal{H}\) of the form \[\mathcal{H}=((f_{1},r_{1})\,,\dots,(f_{m},r_{m}))\leftarrow\mathrm{perform}\, (I_{0},\mathcal{M}) \tag{1}\] Note that we explicitly allow executing a generated invocation right away (potentially modifying the world state \(W\)) and using the result to inform the generation of the subsequent invocation. 
Therefore, the current history \(\mathcal{H}_{t}=((f_{1},r_{1})\,,\dots,(f_{t},r_{t}))\) is available when generating the next invocation \(f_{t+1}\), i. e., for \(t\in\{0,\dots,m-1\}\), \[f_{t+1} \leftarrow\mathrm{generate}\,(I_{0},\mathcal{H}_{t},\mathcal{M})\,, \tag{2}\] \[(r_{t+1},W_{t+1}) \leftarrow\mathrm{execute}\,(f_{t+1},W_{t})\,, \tag{3}\] \[\mathcal{H}_{t+1} \leftarrow\mathcal{H}_{t}\circ((f_{t+1},r_{t+1}))\,, \tag{4}\] where \(\circ\) denotes sequence concatenation. In other words, invocations are generated auto-regressively by reasoning over the memory, the instruction as well as the previous actions and their execution results. To unify the subsequent notation, we define the human's instructions as a special case of perception, i. e., the system perceives them as a result of invoking the function \(F_{\mathrm{wait}}\in\mathcal{F}\). Using that terminology, \(\mathcal{H}_{0}=((f_{\mathrm{wait}},I_{0}))\), and we can drop \(I_{0}\) as explicit parameter of \(\mathrm{generate}\). Similarly, further instructions are handled as part of the interaction history. If the human gives an instruction to correct the robot's behavior, the robot must be able to learn from this instruction to improve its behavior in the future. We model this capability as another function \(F_{\mathrm{learn}}\in\mathcal{F}\). Its purpose is to update the robot's interaction knowledge \(\mathcal{M}\) to learn from the corrective instructions and avoid the mistake in the future \[\mathcal{M}\leftarrow\mathrm{learn\_from\_interaction}\,(\mathcal{M},\mathcal{H} _{t}) \tag{5}\] where \(\mathcal{H}_{t}\) is the interaction history when \(F_{\mathrm{learn}}\) is called. Fig. 2: Incremental learning of robot behavior. To address this problem, we propose a system as depicted in Fig. 3. A humanoid robot is interacting with a human and the scene. The robot is equipped with a multimodal memory system containing the following information: First, subsymbolic scene knowledge, containing information about objects, locations and agents in the world, and symbolic information about them including their relations in the current scene as part of the semantic memory of the system. It is populated by the perception modules of the robot. Second, the procedural memory of the robot, containing executable skills (in our case implemented through scripted policies). An execution request sent to the procedural memory triggers physical robot actions. The set of available functions \(\mathcal{F}\) contains functions to query the semantic and procedural memory. Third, we implement \(\mathcal{M}\) as part of the episodic memory of the robot containing interaction histories \(\mathcal{H}\), i. e., short episodes of interactions between the human and the robot, including the natural language inputs, the actions executed by the robot, and their results. The _interaction manager_ is responsible for the high-level orchestration of the robot's abilities. It has access to two instances of LLMs, an _interaction LLM_ \(L_{\mathrm{interact}}\) and an _improvement LLM_ \(L_{\mathrm{improve}}\), as well as a Python console environment \(E\) to execute generated function invocations. \(L_{\mathrm{interact}}\) is prompted by the interaction manager with the available functions \(\mathcal{F}\), the current interaction history \(\mathcal{H}_{t}\), as well as relevant few-shot examples retrieved from \(\mathcal{M}\), and generates function invocations \(f\). Following the notation of Eqs. (2) and (3), the function \(\mathrm{generate}\) is implemented through \(L_{\mathrm{interact}}\), while the function \(\mathrm{execute}\) is provided by \(E\). By generating an invocation of \(F_{\mathrm{learn}}\in\mathcal{F}\), \(L_{\mathrm{interact}}\) can trigger Eq. (5). We implement the function \(\mathrm{learn\_from\_interaction}\) by few-shot prompting \(L_{\mathrm{improve}}\). 
It reasons over \(\mathcal{H}_{t}\) and generates an improved version of the interaction, which is then saved to the memory \(\mathcal{M}\). ### _Procedure Overview_ To start, we populate the memory \(\mathcal{M}\) with both prior knowledge (i. e., predefined interaction examples) and previously learned interaction examples. The interaction manager sets up \(E\) including \(\mathcal{F}\), and then invokes an initial \(F_{\mathrm{wait}}=\) "wait_for_trigger()" inside that environment. This call waits for dialog input and returns when the human gives an initial instruction. The interaction manager handles any function return value by inserting its textual representation into the current interaction history, thus extending \(\mathcal{H}_{t}\). Thereby, it emulates the look of a Python console (Section III-C). In the following, a prompt is constructed (Section III-D) based on \(\mathcal{F}\), the most relevant examples from \(\mathcal{M}\), and \(\mathcal{H}_{t}\). This prompt is passed to \(L_{\mathrm{interact}}\) to produce the next command(s). The generated code is executed within \(E\), and both the code and its return values are again inserted into \(\mathcal{H}_{t}\). The interaction manager repeats this process as the high-level behavior-driving loop of the robot (see Fig. 4). Note that \(L_{\mathrm{interact}}\) can listen to further user utterances by generating "wait_for_trigger()" again. Our proposed prompt-based incremental learning strategy (Section III-E) is also invoked by \(L_{\mathrm{interact}}\) itself when it calls \(F_{\mathrm{learn}}=\) "learn_from_interaction()". ### _LLM interacting with an Emulated Python Console_ The left of Fig. 4 shows an interaction example using our proposed prompting scheme emulating a Python console. All commands entered into the emulated console (lines starting with ">>" or "...") are to be generated by the LLM, while the function return values are inserted below each invocation. 
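The behavior-driving loop of Eqs. (2)-(4), together with the console emulation just described, can be sketched as follows. This is a minimal sketch: the rendering format, the stop convention (`generate` returning `None`), and the toy stand-ins for the interaction LLM and the console environment are illustrative assumptions, not the authors' implementation.

```python
def render_console(history):
    """Render the interaction history H_t as an emulated Python console
    transcript: commands on ">>" lines, results printed below them."""
    lines = []
    for code, result in history:
        first, *rest = code.splitlines()
        lines.append(">> " + first)
        # Continuation lines (control flow, function bodies) start with "...".
        lines.extend("... " + line for line in rest)
        if result is not None:
            lines.append(str(result))
    return "\n".join(lines) + "\n>> "   # trailing prompt triggers the next command

def perform(instruction, generate, execute, max_steps=20):
    """Closed loop of Eqs. (2)-(4): generate an invocation from the rendered
    history, execute it, append (invocation, result), repeat."""
    # Eq. (1): the initial instruction is the result of wait_for_trigger().
    history = [("wait_for_trigger()", repr(instruction))]
    for _ in range(max_steps):
        f_next = generate(render_console(history))   # Eq. (2)
        if f_next is None:                           # illustrative stop convention
            break
        result = execute(f_next)                     # Eq. (3)
        history.append((f_next, result))             # Eq. (4)
    return history

# Toy stand-ins: an "LLM" that replays a fixed plan, and a trivial console.
def make_toy_generate(plan):
    steps = iter(plan)
    return lambda prompt: next(steps, None)

history = perform("bring me the cup",
                  make_toy_generate(["detect_objects()", "grab('cup')"]),
                  lambda code: "ok: " + code)
```

In the real system, `generate` would call \(L_{\mathrm{interact}}\) with the dynamically constructed prompt and `execute` would run the statement inside \(E\).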
The proposed syntax enables a closed interaction loop so that the LLM can dynamically react to unexpected situations and errors, while also keeping the flexibility of coding non-trivial statements. We achieve this by setting ">>" to be the stop token when prompting the LLM. This means that the LLM can generate continuation statements (including control flow and function definitions) by starting a new line with "...". Since generation stops at the beginning of the next statement, the LLM's output will also include the expected outcome of its own command, which we discard for the scope of this work. During our experiments, we observed that it is important for functions to provide semantically rich error messages, including hints on how to improve. This leads to self-correcting behavior [43]. For instance, when calling "move_to" with an invalid or underspecified location such as "counter," we pass the error message "Invalid location. Use one of the locations returned by list_locations()" to the LLM. In this example, the error message guides the LLM to query a list of possible locations which are then used to correctly ground the natural language request to the name "inFrontOf_mobile-kitchen-counter_0" that the "move_to" function understands. ### _Dynamic Prompt Construction_ We dynamically construct the prompt for \(L_{\mathrm{interact}}\) depending on the current interaction history \(\mathcal{H}_{t}\) (i. e., the code statements, execution results and user inputs observed so far). We start with some predefined base prompt, stating the general task and "importing" all defined names and functions. These imports are generated dynamically given the symbols defined in \(E\), i. e., the available functions \(\mathcal{F}\). The second part of the prompt consists of few-shot examples. For this, we make use of a memory \(\mathcal{M}\) of coding interaction examples, where each entry follows the Python console syntax defined in Section III-C. \(\mathcal{M}\) is initialized with hand-written prompts, but later extended dynamically as explained in Section III-E. Given the current interaction history \(\mathcal{H}_{t}\), we define a similarity measure \(S(\mathcal{H},\mathcal{H}_{t})\), see below, for each \(\mathcal{H}\in\mathcal{M}\) and choose the top-\(k\) \(\mathcal{H}\) to become part of the actual prompt. Afterwards, \(\mathcal{H}_{t}\) itself is inserted into the prompt to provide the LLM with the current context. Finally, the prompt is completed by inserting a syntax trigger for the LLM to correctly generate the next command, i. e., ">>". An example can be seen on the left of Fig. 4. To implement the similarity function \(S(\mathcal{H},\mathcal{H}_{t})\), we assume that examples with comparable natural language instructions are helpful. Therefore, we extract all such instructions from \(\mathcal{H}_{t}\) and each \(\mathcal{H}\in\mathcal{M}\). Let \(I_{t}^{i}\) with \(i=1,\ldots,N\) denote the \(N\) most recent instructions in \(\mathcal{H}_{t}\) (where \(I_{t}^{1}\) is the most recent one), and \(I_{\mathcal{H}}^{j}\) with \(j=1,\ldots,M_{\mathcal{H}}\) all the \(M_{\mathcal{H}}\) instructions found in each \(\mathcal{H}\in\mathcal{M}\). Fig. 3: Conceptual view of our system. The robot's memory system [5] works as a mediator between the interaction manager and the robot system. The interaction LLM acts in a Python console environment. It can invoke functions to fetch the content of the current scene (as given by perception modules) or invoke skills and thus perform robot actions. Relevant interaction examples are queried from the memory for few-shot prompting of the LLM. Incremental learning is performed by an improvement LLM updating the interaction examples memory with new content learned from instruction. 
We make use of a pretrained sentence embedding model [37] to measure the semantic similarity \(\operatorname{sim}\left(a,b\right)=\operatorname{E}\left(a\right)\cdot \operatorname{E}\left(b\right)\) between two natural language sentences \(a,b\) by the dot product of their latent space embeddings \(\operatorname{E}\left(\cdot\right)\). First, we compute a latent representation of \(\mathcal{H}_{t}\) as \[e_{t}=\sum_{i=1}^{N}\gamma^{i-1}\operatorname{E}\left(I_{t}^{i}\right) \tag{6}\] where \(\gamma=0.6\) is an empirically chosen decay factor. Then, we determine a score \(\alpha_{\mathcal{H}}^{j}\) for each instruction \(I_{\mathcal{H}}^{j}\) of each history \(\mathcal{H}\in\mathcal{M}\) as given by \[\alpha_{\mathcal{H}}^{j}=e_{t}\cdot\operatorname{E}\left(I_{\mathcal{H}}^{j}\right) \tag{7}\] The final similarity score is given by \(S(\mathcal{H},\mathcal{H}_{t})=\max_{j}\alpha_{\mathcal{H}}^{j}\), and we pick the top-\(k\) such \(\mathcal{H}\) as the few-shot examples for the prompt. ### _Incremental Prompt Learning_ To enable our system to learn new or improved behavior from user interaction, we propose to make \(\mathcal{M}\) itself dynamic. For this purpose, we introduce a special function "learn_from_interaction()". This function is always "imported" in the base prompt, and there are predefined code interaction examples \(\mathcal{H}_{\text{learn}}\in\mathcal{M}\) involving this call. These \(\mathcal{H}_{\text{learn}}\) will be selected by dynamic prompt construction if semantically similar situations occur. They involve failure situations, where the user has to tell the robot what and how to improve, and that it should do better next time. Thus, when a mistake occurs and the user complains, these examples will be selected for the prompt and \(L_{\text{interact}}\) is biased towards invoking the "learn_from_interaction()" call. 
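The retrieval scheme of Eqs. (6)-(7) can be sketched with plain numpy, treating the embedding function \(\operatorname{E}\) as any map from sentences to vectors. The keyword-based `toy_embed` is only a stand-in for the pretrained sentence-embedding model; all names here are illustrative.

```python
import numpy as np

def select_examples(recent_instructions, memory, embed, k=2, gamma=0.6):
    """Pick the top-k memory entries by the similarity of Eqs. (6)-(7).

    recent_instructions: [I_t^1, ..., I_t^N], most recent first.
    memory: list of (instructions, example_text) pairs, one per stored history H.
    embed: sentence -> vector (stand-in for a sentence-embedding model).
    """
    # Eq. (6): decayed sum over recent instructions; weight gamma^(i-1) for i>=1.
    e_t = sum(gamma ** i * embed(inst) for i, inst in enumerate(recent_instructions))
    # Eq. (7): score each stored instruction by dot product with e_t;
    # a history scores by its best-matching instruction (the max over j).
    scored = [
        (max(float(e_t @ embed(inst)) for inst in instructions), example)
        for instructions, example in memory
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [example for _, example in scored[:k]]

# Toy embedding: one dimension per keyword.
vocab = {"cup": 0, "table": 1, "clean": 2}
def toy_embed(sentence):
    v = np.zeros(3)
    for word, idx in vocab.items():
        if word in sentence:
            v[idx] = 1.0
    return v

memory = [
    (["bring the cup"], "cup-example"),
    (["wipe the table"], "table-example"),
    (["clean up"], "clean-example"),
]
chosen = select_examples(["hand me the cup"], memory, toy_embed, k=1)
```

With a real model, `embed` would be the pretrained sentence encoder of [37] and the dot product its latent-space similarity.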
To implement learning from an erroneous interaction \(\mathcal{H}_{t}\), we query \(L_{\text{improve}}\) in a CoT-manner to identify and fix the problem. Specifically, we provide \(\mathcal{H}_{t}\) and first ask for a natural language description of the problem in this interaction. Subsequently, we request \(L_{\text{improve}}\) to explain what should be improved next time. Finally, \(L_{\text{improve}}\) is asked for an improved version \(\mathcal{H}_{t}^{*}\) of the interaction (in the given Python console syntax), and \(\mathcal{H}_{t}^{*}\) is added to the memory \(\mathcal{M}\). That way, the next time a similar request occurs, \(\mathcal{H}_{t}^{*}\) will be selected by dynamic prompt construction, and \(L_{\text{interact}}\) is biased towards not making the same mistake again. An example LLM transcript of such "learn_from_interaction()" implementation can be found in Listing 1. For robustness, there are two cases where we discard the generated \(\mathcal{H}_{t}^{*}\): First, we abort the learning if the response to the first CoT request is that there is no problem. Second, if \(\mathcal{H}_{t}^{*}\) is equal to the input interaction \(\mathcal{H}_{t}\), we discard it. ## IV Experimental Demonstration We first present quantitative results to evaluate our code-based LLM prompting, and then focus on qualitative demonstrations of our incremental learning strategy. All experiments were performed with the "gpt-3.5-turbo-0301" model of the OpenAI API [4] as both \(L_{\text{interact}}\) and \(L_{\text{improve}}\). ### _Quantitative Evaluation_ To evaluate the performance of our proposed approach quantitatively, we apply our method on the scenarios defined in SayCan [6]. We pick these scenarios as they involve robot commands that work on a similar abstraction level as our system implementation, in contrast to e. g. the lower-level RoboCodeGen benchmark defined in Code As Policies [7]. 
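The CoT improvement routine of Section III-E, including its two discard rules, can be sketched as below. The three query strings and the `llm`/`stub_llm` callables are illustrative stand-ins, not the authors' actual prompts or Listing 1.

```python
def learn_from_interaction(history_text, memory, llm):
    """Ask an improvement LLM to critique and rewrite a failed interaction,
    then store the improved version in memory (Eq. (5))."""
    problem = llm(f"Interaction:\n{history_text}\nWhat is the problem in this interaction?")
    if "no problem" in problem.lower():
        return memory                       # discard rule 1: nothing to fix
    lesson = llm(f"The problem was: {problem}\nWhat should be improved next time?")
    improved = llm(
        f"Rewrite the interaction in Python-console syntax, applying: {lesson}\n"
        f"Original:\n{history_text}"
    )
    if improved.strip() == history_text.strip():
        return memory                       # discard rule 2: no actual change
    return memory + [improved]

# Stub LLM that always proposes one fixed correction:
def stub_llm(prompt):
    if "What is the problem" in prompt:
        return "The robot put the cup on the wrong table."
    if "improved next time" in prompt:
        return "Place the cup on the kitchen table."
    return ">> move_to('kitchen_table_0')\n>> put_down('cup')"

new_memory = learn_from_interaction(">> put_down('cup')", [], stub_llm)
```

The stored, improved interaction is then available to the dynamic prompt construction of Section III-D on similar future requests.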
Moreover, while the method of SayCan is very different from ours, we are interested in the resulting robot behavior given a user command, thus making this dataset suitable for evaluation of our system.

Fig. 4: Overview of our method for incremental learning of robot behavior. We use an LLM (in our experiments, ChatGPT [4]) to control robot perception and action given a prompt of few-shot examples (bottom, Section III-C). Prompts are constructed dynamically based on the similarity to the current user request (top left, Section III-D). The interaction examples memory is initialized with prior knowledge, and then incrementally enriched by LLM-improved problematic interactions to learn from mistakes (top right, Section III-E).

For a comparable setup, we first translate their prompt into our coding syntax, using a similar set of actions (grab, move_to, put_down,...). However, our method acts in a closed interaction loop setting instead of forward planning of actions, thus we also allow the LLM to make use of return values of functions (especially for perception, e. g., detect_objects,...) and allow it to ask clarification questions. Note that SayCan also makes use of perception by co-optimizing LLM and image-conditioned value function probabilities. Interactive incremental learning is not used in this type of scenario to remain comparable to SayCan, and as it requires a human in the loop. For dynamic prompt construction, \(k\) is set to \(8\). The evaluation is performed in a predicate world, where symbolic states of the agent, objects and locations are simulated. After performing the task, symbolic goal conditions are checked automatically for each scenario if possible, otherwise the interaction is evaluated by hand. Table I shows the results of these experiments. Each row represents an instruction family as defined in [6], for instance "Natural Language (NL) queries focused on abstract nouns."
SayCan reports _plan_ success rate, which measures whether the generated plan can solve the instruction regardless of execution, as well as _execution_ success rate (where we compare to their results of execution in the real kitchen). As explained above, our method is evaluated in a simulated world, which means that the difficulty of our task roughly lies between their _plan_ and _execute_ settings. Our method cannot be evaluated in a "plan-only" manner, as it reasons step-by-step, observing the previous action's execution results. Overall, we reach equal or better performance compared to SayCan. Only for their long-horizon tasks such as "bring me a coke, an apple and a banana," our method suffers as the generated code interaction becomes lengthy, and the LLM loses track of its task. This problem is not as pronounced for SayCan, since they only generate plain text, thus resulting in much shorter token sequences. However, this issue could be circumvented by giving the LLM access to hierarchical functions (e. g., bring_object_to_location instead of grab, move_to, put_down) and thereby reducing code complexity. Similarly, introducing more explicit CoT reasoning (e. g., using comments) in the prompts might help. We did not apply these techniques to keep the experiments comparable to those used in SayCan. Nevertheless, note that the results of our simulated symbolic-world evaluation are not completely comparable, since we need to define initial world state details (such as object placement) which are part of the scene but not explicitly mentioned in [6], and sometimes need to define answers to clarification questions the LLM asked.

### _Qualitative Evaluation_

To demonstrate the utility of our proposed prompt-based incremental learning technique, we perform experiments on the humanoid robot ARMAR-6 [1] both in simulation and in the real world. We first provide challenging commands which the LLM initially solves incompletely or incorrectly.
Then, the human interactively provides feedback and tells the robot how to improve. Afterwards, we not only provide the same command again to check for improved behavior, but - in order to study generalization capability - also try similar commands that initially (i. e., without interactive learning) led to similar mistakes. We evaluated three scenarios, _Room Tour_, _Drink & Cup_, and _Ladder_, which are explained in the following. Quantitative evaluation of our incremental learning method, e. g. via a user study, is left for future work.

#### Iv-D1 Room Tour

In this scenario, we instruct the robot to give a commented tour of all the locations it knows. There is no comparable instruction in the prior knowledge prompt database. The initial interaction

```
>>> wait_for_trigger()
{'type': 'dialog', 'text': 'give me a commented tour of all the locations you know.'}
>>> locations = get_all_locations()
>>> for location in locations:
...     say(f"This is {location}. It is located at "
...         f"{get_location_coordinates(location)}.")
>>> wait_for_trigger()
```

lets the robot list all locations and their subsymbolic positions (e. g., \((1.0,2.5,0.0)\)) in global coordinates verbally. Since this answer does not reflect the intention of the initial command, we give the instruction "you should actually move there and then say the name, not tell me the coordinates," which leads to:

```
>>> for location in locations:
...     move_to(get_location_coordinates(location))
...     say(f"This is {location}.")
>>> wait_for_trigger()
```

Finally, we ask the robot "can you learn that for next time?" which triggers the "learn_from_interaction()" function. The improved code is equal to the second part of the interaction, i. e., the erroneous for-loop of the first interaction is removed and no further correction is necessary. Afterward, asking "Can you give me a room tour?" leads to the correct behavior.

#### Iv-D2 Drink & Cup

In this scenario, we ask the robot to bring juice to the table.
The task of bringing the juice to the table is executed successfully, however, since the user needs a cup to drink the juice, we further instruct the robot "thanks, but I will also need a cup to drink," which causes the robot to additionally bring a cup to the table. Afterwards, we ask the robot to remember this behavior using "Learn to always bring a cup with a drink, without me having to ask for it." The LLM generates the following improved interaction example:

```
>>> wait_for_trigger()
{'type': 'dialog', 'text': 'bring the juice to the table'}
>>> say('Ok, I will bring the juice to the table.')
>>> list_object_locations(affordance='grasp')
[{...}, ...]
>>> bring_object_to('multivitamin-juice_0', 'square-table-80x80_0')
'succeeded'
>>> bring_object_to('cup_0', 'square-table-80x80_0')  # Bringing the cup with the drink without the user having to ask for it
'succeeded'
>>> say('Here is your juice and cup. Can I help you with anything else?')
>>> wait_for_trigger()
{'type': 'dialog', 'text': 'No, that will be all'}
```

When giving the same initial command again, the cup is brought to the table without further asking. However, we do not observe good generalization in this scenario, although we tried to enrich the prompt of \(L_{\mathrm{improve}}\) with comments for generalization, as shown above. For instance, when asking for milk, the LLM does not generate the code to bring a cup. This indicates that it is still challenging to robustly generalize from specific examples (i. e., juice) to categories (i. e., drinks). Improving this generalization capability should be a focus of future work.

#### Iv-D3 Ladder

As shown in Fig. 1, in this scenario we ask the robot to assist in cleaning the top of the fridge. The memory \(\mathcal{M}\) contains predefined comparable examples for cleaning the table and kitchen counter, which guide the LLM to only hand over the sponge to the human.
However, since the top of the fridge is higher than the table or the kitchen counter, we require a ladder to reach it so we additionally ask for it. The robot then successfully places the ladder in front of the fridge. Eventually, we again instruct the robot to always bring the ladder when working on high surfaces. Listing 1 shows the transcript of the "learn_from_interaction()" call, including the resulting improved interaction. Afterwards, when we perform a similar request (e. g., "clean on top of the dishwasher"), the robot brings both the sponge and the ladder successfully, while for lower surfaces (e. g., kitchen counter) the robot still brings only the sponge. ## V Conclusion & Discussion We present a system for integrating an LLM as the central part of high-level orchestration of a robot's behavior in a closed interaction loop. Memorizing interaction examples from experience and retrieving them based on the similarity to the current user request allows for dynamic construction of prompts and enables the robot to incrementally learn from mistakes by extending its episodic memory with interactively improved code snippets. We describe our implementation of the system in the robot software framework ArmarX [44] as well as on the humanoid robot ARMAR-6 [1]. The usefulness of our approach is evaluated both quantitatively on the SayCan [6] scenarios and qualitatively in simulation and in the real world. While the proposed method, in particular the incremental prompt learning strategy, shows promising results, there are still many open questions for real-world deployment. First of all, the performance of LLMs is quite sensitive to wording in the prompt, thus sometimes leading to unpredictable behavior despite only slight variations of the input (e. g., adding "please" in the user command). This might be addressed by using more advanced models like GPT-4 [45] and further investigating the effect and performance of example retrieval in dynamic prompt construction. 
Furthermore, our incremental prompt learning strategy should be expanded to involve additional human feedback before saving (potentially wrong) interaction examples to the episodic memory. However, it is unclear how this can be accomplished if the user is not familiar with robotics or programming languages. Further work should also focus on abstracting similar learned behaviors and forgetting irrelevant ones. Moreover, although we provide the LLM with access to perception functions and examples of how to use them, it sometimes comes up with non-grounded behavior (e. g., referring to non-existing objects or locations). This may be improved by adding further levels of feedback to the LLM, or using strategies like Grounded Decoding [46]. Finally, our system inherits biases and other flaws from its LLM [47], which may lead to problematic utterances and behaviors. In future work, we will try to address some of these challenging questions to further push the boundaries of natural, real-world interactions with humanoid robots.
2309.14663
Learning NEAT Emergent Behaviors in Robot Swarms
When researching robot swarms, many studies observe complex group behavior emerging from the individual agents' simple local actions. However, the task of learning an individual policy to produce a desired group behavior remains a challenging problem. We present a method of training distributed robotic swarm algorithms to produce emergent behavior. Inspired by the biological evolution of emergent behavior in animals, we use an evolutionary algorithm to train a population of individual behaviors to produce a desired group behavior. We perform experiments using simulations of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics platforms conducted in the CoppeliaSim simulator. Additionally, we test on simulations of Anki Vector robots to display our algorithm's effectiveness on various modes of actuation. We evaluate our algorithm on various tasks where a somewhat complex group behavior is required for success. These tasks include an Area Coverage task and a Wall Climb task. We compare behaviors evolved using our algorithm against designed policies, which we create in order to exhibit the emergent behaviors we desire.
Pranav Rajbhandari, Donald Sofge
2023-09-26T04:40:52Z
http://arxiv.org/abs/2309.14663v2
# Learning Emergent Behavior in Robot Swarms with NEAT

###### Abstract

When researching robot swarms, many studies observe complex group behavior emerging from the individual agents' simple local actions. However, the task of learning an individual policy to produce a desired emergent behavior remains a challenging and largely unsolved problem. We present a method of training distributed robotic swarm algorithms to produce emergent behavior. Inspired by the biological evolution of emergent behavior in animals, we use an evolutionary algorithm to train a 'population' of individual behaviors to approximate a desired group behavior. We perform experiments using simulations of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics platforms conducted in the CoppeliaSim simulator. Additionally, we test on simulations of Anki Vector robots to display our algorithm's effectiveness on various modes of actuation. We evaluate our algorithm on various tasks where a somewhat complex group behavior is required for success. These tasks include an Area Coverage task, a Surround Target task, and a Wall Climb task. We compare behaviors evolved using our algorithm against _designed policies_, which we create in order to exhibit the emergent behaviors we desire.

Swarm robotics, Emergent behavior, Evolutionary algorithm, Neuroevolution

## I Introduction

### _Emergent Behavior_

Emergent behavior is a phenomenon observed in swarms of agents. It is generally defined as a complex swarm behavior which occurs as a consequence of each individual agent following a relatively simple control scheme [13, 15]. Examples of this can be found in the behavior of groups of animals, such as how some species of fire ants have been observed to create rafts out of their bodies to survive flooding [11]. In this case, the individual behavior of 'grab onto neighbors' creates the emergent behavior of 'build a raft'.
Emergent behavior is also exhibited in migrating swarms of some species of caterpillars, which walk on top of each other in order to create a 'rolling swarm' [19]. This allows the caterpillars to migrate quicker than simply walking. Similar instances of complex emergent behavior have been observed in the field of swarm robotics. ### _Swarm Robotics_ Though there are various definitions of swarm robotics, it is generally agreed that robot swarms are multi-agent systems where each agent is autonomous and has low complexity [3, 8, 12]. They are used for various tasks, including exploration and surveillance. They are characterized by using locally communicating distributed systems as opposed to a central controller. Due to this, they remain functional as the swarm size is increased. However, they do not have a central controller, so careful fine tuning of the individual policies is required to achieve a desired swarm behavior. It is well known that emergent behavior can arise in robot swarms, and past studies have explored this. Pagello et al. [16] (2001) study how to make a robot swarm perform a cooperative task by creating emergent behaviors. They implement this by dynamically assigning predefined roles to each robot. A function \(Q\) is learned for each agent, which takes in local information and outputs the best role for the agent to adopt. After refining their \(Q\) functions, they observe cooperative emergent behavior in their chosen task, robot soccer. In a more recent study, Oliveri et al. [14] use a Monte Carlo scheme to continuously refine the behavior of individual agents in a swarm. The robots they choose to use are arranged in a line, and connected with soft tubes. Their actuation is in the form of a pump, which increases the distance between a robot and the neighbor behind it. In training, each robot attempts to learn its optimal phase time (the time that it takes for a cycle of the pump filling then unfilling the tube). 
The robots randomly sample nearby phase times, and update depending on the change in velocity they experience. The study concludes that the result of each agent attempting to optimize its velocity is an emergent behavior where the group of connected robots crawls forward. Similar to the research of Oliveri and Pagello, we focus on optimizing an individual policy to best create a desired emergent swarm behavior. Inspired by how biological evolution was effective in creating various emergent behaviors in animals, we propose using an evolutionary algorithm to solve this task. ### _Evolutionary Algorithms/NEAT_ Evolutionary algorithms are algorithms inspired by biological evolution, where a fitness function is optimized by repeatedly mutating and evaluating members of a 'population' of solutions. Importantly, these methods are generic, as they do not make any assumptions about the format of the solutions or the topography of the fitness function. Since we want to solve a problem with general inputs/outputs, we use neuroevolution (NE), a type of evolutionary algorithm that evolves neural networks. In early NE algorithms, the topology1 of the neural network is initially set, and the mutations take place in the weights of the connections [22]. However, using the Neuroevolution of Augmenting Topologies (NEAT) algorithm, we are able to evolve the structure of the network as well [23]. This is useful since it allows us to begin with a simple neural network that evolves in complexity according to the requirements of the task. Footnote 1: The topology of a neural network refers to the graph structure of the nodes and the connections between them Thus, we evaluate NEAT as a candidate for evolving emergent behavior in robot swarms. The method we propose is using a homogeneous swarm (where each agent is using the same neural network) to act in an experiment. 
At the end of the experiment, the 'performance' of the swarm is evaluated with some user-defined metric, and we use this as the fitness function for the neural network. This allows us to use NEAT to evolve a population of neural networks to better exhibit a desired emergent behavior. A trade off with this is that after running a simulation, we are left with a single fitness score for the network, in contrast to the many times we actually call the network. However, this seems to be an issue inherent to our problem, since the performance of a swarm can only be evaluated when considering the behavior of the swarm as a whole. Thus, the actions taken by each individual agent in a swarm are difficult to assign a reward to. This makes alternate learning algorithms like Reinforcement Learning (RL) difficult to use, since they require rewards to be assigned to every action taken. In contrast, this is coherent with the evolutionary structure we use in our algorithm, since each 'evaluation' only needs to yield one fitness value. ### _Organization_ The rest of the paper is organized as follows: Section II discusses related work. Section III details our proposed learning algorithm, as well as common sensing and output schemes we use. We present our experiments in Section IV, and discuss the results in Section V. The conclusions we draw are in Section VI. ## II Related Work Much research has focused on the goal of learning individual behaviors for a robot swarm [2, 3, 5, 6, 14, 16]. Behjat et al. [2] explore how to learn tactical swarm behavior through a combination of various techniques. These techniques include learning neural network based robot policies, dynamically organizing the swarm into groups, and Pareto filtering of points of interest to reduce the problem dimensionality. Fan et al. [5] compare different swarm intelligence algorithms against each other to evaluate their relative performance in obstacle performance under different circumstances. 
The algorithms they evaluate are the bat algorithm (BA), particle swarm optimization (PSO), and the grey wolf optimizer (GWO). They find that PSO outperforms BA, which outperforms GWO in general. However, GWO performs better than the other two algorithms in the case of large swarms and large communication ranges. Dorigo et al. [6] explore how to create communication protocols between agents in a swarm that best help methods such as deep reinforcement learning create good decentralized control policies. In addition to the studies by Oliveri and Pagello which focus on the learning aspect (described in Section I-B), there has also been research on helping the stability of emergent behaviors in swarms. In 2022, Chen and Ng explore secure communications between robots in a swarm [3]. They model these communications as a series of random graphs, and use a method involving hash chains to identify 'rogue robots' with high probability. This allows them to ensure that the emergent behavior exhibited by the swarm is protected. In their paper, they also create a system to distinguish different classes of robot swarms by identifying differences in the robot homogeneity, the interactions between robots, and the interactions with a central control. Our work is closely related to research done by Trianni et al. in 2003 [26]. They use an evolutionary algorithm to evolve an 'aggregation' behavior in a swarm of robots, inspired by the self-organized aggregation behavior observed in the cellular slime mold _Dictyostelium discoideum_[27]. Trianni's evolutionary algorithm uses a neural network with no hidden layers, and evolves by mutating the weights. They use a fitness function that seeks to minimize the mean distance of each agent from the swarm's center of mass. Later research by Bahceci explores this further by varying the parameters of the evolutionary algorithm to see how it affects the evolution of aggregation behaviors [1]. 
We expand on Trianni's idea by generalizing the algorithm to allow user-defined fitness functions. Additionally, we use the NEAT algorithm as opposed to simply evolving the weights of a set network. These extensions allow our algorithm to theoretically evolve a network to arbitrary complexity in order to create a desired emergent behavior. We use the GT-MABs as the robots for our experiments [25]. The control system for their actuation was designed based on the dynamic model created in [4, 24]. ## III Methodology ### _Evolutionary Algorithm_ In Algorithm 1, we apply the NEAT algorithm to evolving a swarm of robots to exhibit some desired emergent behavior. The experiment setup \(E\) includes the objects/agents involved, as well as the definition and implementation of the network inputs and outputs. In our case, we define \(E\) through a CoppeliaSim simulation [18]. The evaluation function \(F\) evaluates the overall behavior of the swarm throughout the experiment and returns a real number representing the 'fitness' of the swarm. \(F\) can either directly evaluate the desired emergent behavior, or evaluate some aspect of the swarm correlated with it. To implement this, we use the NEAT-Python package [10]. ### _Network Inputs_ The inputs into the neural network vary for each task, but usually include the following method of sensing neighbors. We consider the polar coordinates of each neighbor and split the angular (\(\theta\)) parameter into \(k\) regions. We allow each agent to sense the inverse distance to the nearest robot on each k-tant, as shown in Fig. 1. We use the inverse distance since having the sensor read 0 when not detecting neighbors made more sense with this mapping. It is continuous in the sense that the limit of the sensor output as the closest neighbor gets arbitrarily far away is the same as the value when not sensing a neighbor. 
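The \(k\)-tant inverse-distance sensing described above can be sketched as follows in 2D. This is our reading of Fig. 1; the conventions (sector 0 starting at angle 0, Cartesian positions) and all names are assumptions for illustration.

```python
import math

def k_tant_inverse_distances(agent_pos, neighbor_positions, k=8):
    # Split the angular coordinate around the agent into k equal sectors and
    # report the inverse distance to the nearest neighbor in each sector.
    # An empty sector reads 0, which is the continuous limit as the nearest
    # neighbor moves arbitrarily far away.
    readings = [0.0] * k
    ax, ay = agent_pos
    for nx, ny in neighbor_positions:
        dx, dy = nx - ax, ny - ay
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue  # ignore a neighbor exactly at the agent's position
        sector = int((math.atan2(dy, dx) % (2 * math.pi)) / (2 * math.pi / k))
        # Nearest neighbor in the sector has the largest inverse distance.
        readings[sector] = max(readings[sector], 1.0 / dist)
    return readings
```

For example, with \(k=8\), a neighbor 2 m away along the positive \(x\) axis yields a reading of 0.5 in sector 0, while all sectors without neighbors read 0.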
We also experiment with allowing each agent to sense the number of neighbors within a range on each \(k\)-tant instead, as shown in Fig. 2. We introduce this variant in certain tasks to explore if it is more helpful for an agent to know the proximity or the count of its neighbors in a given direction. For 3D cases, we can extend this sensing method into '\(\ell\)-\(k\)-tant' sensing. We consider the spherical coordinates of each neighbor and split the \(\phi\) and \(\theta\) parameters into \(\ell\) parts and \(k\) parts respectively, for a total input dimension of \(\ell\cdot k\). However, for our experiments we only use \(k\)-tant sensing. ### _Network Outputs/Agent Actuation_ To actuate the GT-MABs, we use a velocity controller that takes input in \(\mathbb{R}^{3}\). Thus, we must transform the output of the neural network into a velocity vector, as shown in Fig. 3. For our experiments, we want the GT-MABs to keep a certain height, so we directly code the \(z\) component to keep the blimp at 1 meter. We set the output dimension of the evolved neural networks to 2, and assign one output to the \(x\) component and one to the \(y\) component. Additionally, since the neural network uses a sigmoid activation, the output is on the interval \([0,1]\). We transform this to allow a command vector on \([-1,1]^{3}\). We also run experiments using a simulation of the Anki Vector robots, which use a wheel velocity controller. We believe this is a significant variation from a global velocity controller since the motion of the agent is dependent on the heading angle as well as the network output. To go in a particular direction, an agent must rotate to face that direction before moving forward. Thus, we use the Anki Vector experiments to evaluate if our algorithm is robust to variations in the actuation scheme. As shown in Fig. 3, we use similar transformations to the GT-MABs to result in a command vector in \([-1,1]^{2}\). 
We use this to actuate the Anki Vectors by assigning one dimension to the left wheel velocity and the other to the right wheel velocity.

Fig. 1: \(k\)-tant distance sensing with \(k=8\)

Fig. 2: \(k\)-tant neighbor sensing with \(k=8\)

Fig. 3: GT-MAB (left) and Anki Vector (right) actuation scheme

## IV Experiments

To test our method, we create various tasks for the robot swarms to complete. We aim to make the tasks achievable through a swarm behavior. Since each agent's neural network is only given local information, the algorithm must evolve an individual policy that produces emergent behavior for the swarm to perform well. For each experiment, we train our evolutionary algorithm for 50 generations in a CoppeliaSim physics simulation. The agents we use are simulated versions of the GT-MAB and the Anki Vector robots (Fig. 4). We compare the performance of the evolved policy in each experiment to the performance of a _designed policy_ that we hard-code to solve the task. We use ROS2 Foxy to handle transferring sensing and actuation messages between CoppeliaSim and Python [9]. Our full implementation is available on Github [17].

### _Task: Area Coverage_

In this task, we simulate a 'search and rescue' scenario. The environment we use is an empty square arena with the agents spawned randomly near the center (Fig. 6 & Fig. 7). For the fitness function, we adapt deployment entropy, a measure of how well distributed the agents become in the environment [7]. Deployment entropy is defined by discretizing the environment into a grid, and measuring the entropy of the distribution of agents in that grid, as shown in Fig. 5. Since entropy is maximized by a uniform distribution, we believe using this as a fitness function would encourage the agents to spread out, which is the desired behavior for a 'search and rescue' mission. We implement this task for both the Anki Vector robots and the GT-MABs.
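The deployment-entropy fitness can be sketched in a few lines. This is a sketch under our own assumptions: a square arena, and Shannon entropy with the natural logarithm, which is consistent with the theoretic maxima quoted in Section V.

```python
import math
from collections import Counter

def deployment_entropy(agent_positions, arena_size, grid_div):
    # Discretize the square arena into grid_div x grid_div cells and compute
    # the Shannon entropy of the distribution of agents over occupied cells.
    # A uniform spread of agents maximizes the entropy.
    cell = arena_size / grid_div
    counts = Counter(
        # Clamp so agents exactly on the far boundary fall in the last cell.
        (min(int(x / cell), grid_div - 1), min(int(y / cell), grid_div - 1))
        for x, y in agent_positions
    )
    n = len(agent_positions)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

As a sanity check against the paper's footnote values: 20 agents on a 16-cell grid, with one agent in 12 cells and two agents in 4 cells, give an entropy of about 2.718, matching the reported theoretic maximum.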
#### Iv-A1 GT-MABs

For the GT-MABs, we use 20 agents and divide the environment into a grid with 16 units for calculating deployment entropy. We terminate each experiment after 60 seconds. We use a variation of Octant Distance Sensing2 shown in Fig. 1 as the input. We additionally allow the agents the ability to sense the border walls, implemented by counting the closest wall point on each octant as an additional agent. We add this capability since the agents would otherwise have no way of sensing the boundaries. For output, we use our GT-MAB actuation scheme (Fig. 3).

Footnote 2: \(k\)-tant distance sensing with \(k=8\)

We define a _designed policy_ where each agent moves away from the closest neighbors. We expect this to cause the agents to distribute themselves evenly, producing the desired swarm behavior.

#### Iv-A2 Anki Vectors

For the Anki Vectors, we use 10 agents and divide the environment into a grid with 9 units for calculating deployment entropy. As before, we terminate each experiment after 60 seconds. We use Octant Distance Sensing (Fig. 1) along with the Anki's proximity sensor as the input. In this case, since the proximity sensor is able to sense the boundary walls, we do not allow the agent to sense the border with Octant Distance Sensing. For output, we use the Anki's wheel velocity actuation scheme (Fig. 3).

Fig. 4: CoppeliaSim model of Anki Vector and GT-MAB

Fig. 5: Calculation of deployment entropy

Fig. 6: GT-MAB Area Coverage experiment initial position

Fig. 7: Anki Area Coverage experiment initial position

We implement a similar _designed policy_ for each Anki Vector to move away from its closest neighbors. We notice that this policy is more difficult to design, since we must include logic to turn the agent to face a more desirable direction. This supports our assumption that the control scheme of the Anki Vector robots is different enough from the GT-MAB's to evaluate our algorithm's robustness to control schemes.
### _Task: Wall Climb_

In previous research, we established that a swarm of GT-MABs was able to climb a wall that was much taller than the altitude they were assigned to hold [20]. Upon closer inspection, we realized that collisions were resulting in the agents stacking on top of one another. Since they read their altitude using ultrasonic sensors, the top GT-MAB would register the other one as a floor, effectively adding their desired heights. A stack of three agents would allow the top one to pass over the wall. We also note that the 'wall climbing' behavior is susceptible to the sensing angle of the GT-MAB's ultrasound-based range sensor. With a large sensing angle, the GT-MABs are able to sense the wall itself when they are next to it, which results in a single agent being able to climb the wall on its own. To avoid this, we use a narrow ultrasound angle so that the agents must stack in order to climb the wall.

#### Iv-C1 Octant Distance Sensing

We design the Wall Climb task to replicate this scenario. The environment we use is an arena with a 3 meter tall wall on the \(y\) axis. The GT-MABs spawn randomly on the right side, as shown in Fig. 8. For the fitness function, we use the number of agents that end the experiment on the other side of the wall. If no agents succeed, we subtract a penalty of the distance of the closest blimp to the wall. We believe using this as a fitness function will encourage the "aggregation" behavior, since this is the only way the agents can climb the wall. We use 20 agents, and terminate each experiment after 60 seconds. We use Octant Distance Sensing as our input (Fig. 1) and our GT-MAB actuation scheme as output (Fig. 3). We define a _designed policy_ where each agent moves towards their closest neighbor, as well as slightly towards the direction of the wall. We expect this to cause the GT-MABs to flock and stack on top of one another.
Once they are stacked, we predict they will slowly make their way over the wall, producing the desired swarm behavior. #### Iv-C2 Octant Neighbor Sensing We also compare this with using the Octant Neighbor Sensing scheme as our input, as shown in Fig. 2. Our _designed policy_ for this scheme is similar, where we move towards the direction with the highest neighbor count. We predict that sensing the number of neighbors is more useful to create the 'flocking' behavior we desire. Thus, we expect that this sensing scheme will be more successful. ### _Task: Surround Target_ In this task, we simulate a goal of securing a perimeter around a target [21]. The environment we use is an arena containing a cylinder target spawned randomly, with the GT-MABs also spawned randomly in the arena. For the fitness function, we consider all the agents close to the target. Using polar coordinates centered at the target, we find the measure of the largest uncovered arc \(a\). We set the fitness to \(-a\) since we want to minimize this value. We also subtract a penalty of the distance of the closest agent to the target for experiments where no agents are near the target. We use 6 agents and terminate each experiment after 60 seconds. We include Octant Distance Sensing in our input (Fig. 1), as well as the agent's displacement vector from the target. For output, we use our GT-MAB actuation scheme (Fig. 3). We define a _designed policy_ where each agent moves towards the target, as well as slightly away from their closest neighbor. We expect this to cause the GT-MABs to head for the target, then slowly spread out, producing the desired swarm behavior of surrounding the target. ## V Results For each experiment, after training for 50 generations, we compare the fitness of the best genome in the final generation with the fitness of our _designed policy_. We run each for 60 trials in their respective experiments and collect the fitness mean and standard deviation. 
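The largest-uncovered-arc fitness for the Surround Target task can be sketched as follows. This is a simplified illustration of our own: it works in degrees, takes only the angles of agents already near the target, and omits the distance penalty applied when no agents are near the target.

```python
def surround_fitness(agent_angles_deg):
    # Fitness for the Surround Target task: given the polar angles (degrees,
    # measured around the target) of the agents close to the target, return
    # the negative of the largest uncovered arc a, so that better angular
    # coverage of the perimeter yields a higher fitness.
    if not agent_angles_deg:
        return -360.0  # simplification: the paper penalizes by distance here
    angles = sorted(a % 360 for a in agent_angles_deg)
    # Gaps between consecutive agents, plus the wrap-around gap.
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360 - angles[-1] + angles[0])
    return -max(gaps)
```

For instance, four agents evenly spaced at 90-degree intervals leave a largest gap of 90 degrees, giving a fitness of -90, while a single agent leaves the full 360-degree arc uncovered.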
We report our results in Table I. Fig. 8: Wall Climb experiment initial position Fig. 9: Surround Target experiment initial position ### _Area Coverage_ #### V-A1 GT-MABs After training the GT-MAB models in CoppeliaSim, we observe that the behavior they evolved seems to be for each agent to move in a direction away from its close neighbors. Fig. 10 shows an experiment run with the network with the best genome in the last generation of training. The result of this behavior seems to be a well distributed swarm, with the agents arranged very close to a lattice. From Fig. 11, we can see that after around the 15th generation, the best genome of each generation achieved close to the theoretic maximal entropy3. Footnote 3: 2.718 from placing one agent in 12 of the grid units, and two agents in 4 of the units. The result of the _designed policy_, shown in Fig. 10, also had the swarm distribute in the environment, forming a similar lattice. Comparing these results, we can see that the _designed policy_ outperforms the evolved behavior by about one standard deviation (Table I). Overall, we show that in this task, our algorithm learns a local behavior that closely approximates the desired 'search and rescue' emergent swarm behavior. From the slight upwards trend in the mean of Fig. 11, we expect that the evolved results may perform closer to the _designed policy_ with more training. #### V-A2 Anki Vectors After training the Anki Vector models in CoppeliaSim, we observe the behavior appears to be for each agent to spin in place, and move away from close neighbors directly in front of or behind it. Fig. 12 shows an experiment run with the best genome in the last generation of training. The result of this behavior appears to be a well distributed swarm. From Fig. 13, we can see that after a few generations, the best genome of each generation achieved the theoretic maximal entropy4.
Footnote 4: 2.16 from placing one agent in 8 of the grid units, and two agents in 1 of the units. The result of the _designed policy_, shown in Fig. 12, had the swarm distribute reasonably well in the environment. Comparing these results, we can see that the evolved behavior outperforms the _designed policy_, since the mean is about one standard deviation higher (Table I). This seems to be due to the _designed policy_ favoring sending agents to the edges of the environment. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Experiment** & **Behavior** & **Mean** & **Stdev.** \\ \hline \hline \multirow{2}{*}{_GT-MAB Area Coverage_} & Evolved & 2.53 & 0.10 \\ \cline{2-4} & Designed & 2.62 & 0.12 \\ \hline \hline \multirow{2}{*}{_Anki Area Coverage_} & Evolved & 2.09 & 0.083 \\ \cline{2-4} & Designed & 1.87 & 0.20 \\ \hline \hline \multirow{2}{*}{_Wall Climb_} & Evolved & 16.70 & 1.12 \\ \cline{2-4} & Designed & 12.13 & 3.78 \\ \hline \hline \multirow{2}{*}{_Wall Climb_} & Evolved & 16.72 & 1.32 \\ \cline{2-4} & Designed & 15.33 & 1.58 \\ \hline \hline \multirow{2}{*}{_Surround Target_} & Evolved & -2.21 & 0.70 \\ \cline{2-4} & Designed & -1.24 & 0.14 \\ \hline \end{tabular} \end{table} TABLE I: Comparison of Fitness in Evolved and Designed Behaviors Fig. 11: GT-MAB Area Coverage population fitness across generations Fig. 12: Anki Area Coverage evolved behavior (left) and _designed policy_ behavior (right) Fig. 10: GT-MAB Area Coverage evolved behavior (left) and _designed policy_ behavior (right) Overall, we show that in this task, our algorithm learns a local behavior that closely approximates the desired 'search and rescue' emergent swarm behavior. The evolved behavior greatly outperforms the _designed policy_, which shows that our evolutionary algorithm performs comparably to designing the behavior with knowledge of the task.
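The entropy maxima quoted in footnotes 3 and 4 can be reproduced by treating the fraction of agents in each grid cell as a probability distribution (the natural logarithm is assumed here, since it matches the quoted values):

```python
import math

# Coverage entropy implied by footnotes 3 and 4: Shannon entropy of the
# per-cell agent fractions. Empty cells contribute nothing.

def coverage_entropy(cell_counts):
    total = sum(cell_counts)
    return -sum((c / total) * math.log(c / total) for c in cell_counts if c > 0)

# Footnote 3: 20 GT-MABs, one agent in 12 cells and two agents in 4 cells.
h_gtmab = coverage_entropy([1] * 12 + [2] * 4)   # ~2.718
# Footnote 4: 10 Anki Vectors, one agent in 8 cells and two agents in 1 cell.
h_anki = coverage_entropy([1] * 8 + [2])          # ~2.16
```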
### _Wall Climb_ #### V-B1 Octant Distance Sensing After training the GT-MAB models in CoppeliaSim, we observe that the behavior they evolved seems to be for each agent to move in a direction towards its closest neighbors, in addition to moving in the direction to climb the wall. Fig. 14 shows an experiment run with the network with the best genome in the last generation of training. The result of this behavior does seem to be a flock of agents stacked on top of each other, climbing the wall. From Fig. 15, we can see that after a few generations, the best genome of each generation got about 18 GT-MABs over the wall. #### V-B2 Octant Neighbor Sensing As for the Neighbor Sensing experiment, we noticed both sensing methods achieved similar population fitness values across generations (Fig. 15). The flocking behavior evolved was also similar in both experiments (Fig. 14, Fig. 16). From Table I, we see there is no statistically significant difference between the two fitness results5. Footnote 5: We perform a two sample \(t\)-test with unequal variance on the fitnesses obtained from the evolved Neighbor Sensing and Distance Sensing experiments. Using their respective means and standard deviations of (16.70,1.12) and (16.72,1.32) with a sample size of 60 for both, we arrive at a \(p\)-value of \(p>.9\), which is much larger than \(.05\), the accepted value for statistical significance. This shows that with our sample size, there is no significant difference between the fitnesses obtained from the different sensing methods. The result of the _designed policy_ in both of these experiments similarly causes the agents to flock together and climb the wall. However, in this case we do notice a difference in the two modes of sensing. In the Distance Sensing experiments, the _designed policy_ seems to be more likely to cause the agents to form several smaller groups as opposed to one large one.
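The test in footnote 5 can be reproduced from the summary statistics alone; this sketch uses Welch's \(t\) statistic with a normal approximation for the two-sided \(p\)-value (accurate here, since the degrees of freedom exceed 100):

```python
import math

# Reproducing footnote 5's two-sample t-test with unequal variance (Welch's
# test) from the reported summary statistics. The two-sided p-value uses the
# normal approximation to the t distribution.

def welch_p(mean1, sd1, n1, mean2, sd2, n2):
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    t = (mean1 - mean2) / se
    return math.erfc(abs(t) / math.sqrt(2))  # two-sided p-value

p = welch_p(16.70, 1.12, 60, 16.72, 1.32, 60)  # ~0.93, far above .05
```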
This results in a lower fitness due to more agents being left behind, as shown in Table I. Comparing this with our evolved behavior, we can see in both experiments that the evolved behavior outperforms the best _designed policy_ by about one standard deviation. Overall, we show that in this task, our algorithm learns a local behavior that closely approximates the desired 'flocking' emergent swarm behavior. We can further see that although the different sensing modes have an effect on our _designed policy_, the evolutionary algorithm is robust to these variations. Fig. 13: Anki Area Coverage population fitness across generations Fig. 14: Distance Sensing Wall Climb evolved behavior (left) and _designed policy_ behavior (right) Fig. 15: Distance Sensing Wall Climb (left) and Neighbor Sensing Wall Climb (right) population fitness across generations Fig. 16: Neighbor Sensing Wall Climb evolved behavior (left) and _designed policy_ behavior (right) ### _Surrounding a Target_ After training the GT-MAB models in CoppeliaSim, we observe that the behavior they evolved seems to be for each agent to spiral around the target, in addition to moving away from its closest neighbor. Fig. 17 shows an experiment run with the network with the best genome in the last generation of training. The result of this behavior does seem to be a flock of agents surrounding the target. From Fig. 18, we can see that after a few generations, the best genome of each generation achieves a largest uncovered arc of about 2 radians. The result of the _designed policy_, shown in Fig. 17, also caused the agents to surround the target. Comparing these results, we can see that our _designed policy_ outperforms the evolved behavior by at least one standard deviation (Table I). Overall, we show that in this task, our algorithm learns a local behavior that closely approximates the desired emergent swarm behavior.
However, our _designed policy_ still does outperform the evolved behavior, which suggests that for some tasks it is more difficult to evolve the optimal behavior. We believe fine tuning of the evolution parameters may help this issue, but that is outside the scope of our research. ## VI Conclusion In this paper, we present a novel extension of the NEAT algorithm designed to learn emergent behaviors in robot swarms. The algorithm we present can be applied to robot swarms with various modes of sensing and actuation. Results from simulations show that individual agent behaviors evolved using this method are comparable to hand designed policies at producing desired complex emergent behaviors. In future research, we plan to test our evolved policies on the physical GT-MABs and Anki Vector robots. We also plan to evaluate our algorithm on a more complex set of tasks. We may also explore the fine tuning of NEAT parameters to try improving our results. Additionally, the accuracy of the physics evaluation seems to be dependent on the performance of the computer, as this determines the speed that each agent can respond to stimuli. In this paper, we choose to speed up our experiments by running multiple simulations in parallel. However, using less parallelization or using a more powerful computer could improve the results of our algorithm.
2309.16074
Infer and Adapt: Bipedal Locomotion Reward Learning from Demonstrations via Inverse Reinforcement Learning
Enabling bipedal walking robots to learn how to maneuver over highly uneven, dynamically changing terrains is challenging due to the complexity of robot dynamics and interacted environments. Recent advancements in learning from demonstrations have shown promising results for robot learning in complex environments. While imitation learning of expert policies has been well-explored, the study of learning expert reward functions is largely under-explored in legged locomotion. This paper brings state-of-the-art Inverse Reinforcement Learning (IRL) techniques to solving bipedal locomotion problems over complex terrains. We propose algorithms for learning expert reward functions, and we subsequently analyze the learned functions. Through nonlinear function approximation, we uncover meaningful insights into the expert's locomotion strategies. Furthermore, we empirically demonstrate that training a bipedal locomotion policy with the inferred reward functions enhances its walking performance on unseen terrains, highlighting the adaptability offered by reward learning.
Feiyang Wu, Zhaoyuan Gu, Hanran Wu, Anqi Wu, Ye Zhao
2023-09-28T00:11:06Z
http://arxiv.org/abs/2309.16074v1
Infer and Adapt: Bipedal Locomotion Reward Learning from Demonstrations via Inverse Reinforcement Learning ###### Abstract Enabling bipedal walking robots to learn how to maneuver over highly uneven, dynamically changing terrains is challenging due to the complexity of robot dynamics and interacted environments. Recent advancements in learning from demonstrations have shown promising results for robot learning in complex environments. While imitation learning of expert policies has been well-explored, the study of learning expert reward functions is largely under-explored in legged locomotion. This paper brings state-of-the-art Inverse Reinforcement Learning (IRL) techniques to solving bipedal locomotion problems over complex terrains. We propose algorithms for learning expert reward functions, and we subsequently analyze the learned functions. Through nonlinear function approximation, we uncover meaningful insights into the expert's locomotion strategies. Furthermore, we empirically demonstrate that training a bipedal locomotion policy with the inferred reward functions enhances its walking performance on unseen terrains, highlighting the adaptability offered by reward learning. ## I Introduction Humans exhibit a remarkable ability to achieve and generalize locomotion strategies from expert demonstrations. This inference ability enables the knowledge transfer from simple tasks to novel tasks and the efficient acquisition of new locomotion skills [1, 2, 3, 4]. Despite this amazing ability inherent in the human brain, our understanding remains limited regarding the internal representation of a locomotion skill and more importantly, the mechanism for applying acquired skills to novel tasks. Inspired by human's ability to learn from expert demonstrations, this study takes an initial step to mimic this learning ability in the context of bipedal robot locomotion. 
Moreover, we seek the explainability of the learned skills and demonstrate their generalizability by subjecting the robot to maneuver over various rough terrains. Imitation learning has been extensively explored as a methodology for learning from demonstration [5, 6, 7, 8]. Although unable to infer the true intention behind the demonstrations, imitation learning often adopts Reinforcement Learning (RL) formulations to sidestep the problem of lacking an accurate reward function. This RL-based approach requires only designing a reward for tracking the demonstrated actions. The development of efficient RL algorithms facilitated a wide range of successful applications of imitation learning for agile bipedal locomotion, such as running [9], jumping [10], climbing stairs [11], playing soccer [12], carrying loads [13], and walking over diverse terrains [14]. However, a majority of these works still adopt handcrafted reward functions that heavily rely on domain knowledge and experience. Such reward functions are often tailored for specific environments and have a combination of specific features from the robot's state. Consequently, agents learned under such rewards often lack generalizability and struggle to adapt to new environments. Inverse Reinforcement Learning (IRL) [15, 16], on the other hand, subsumes the aforementioned imitation learning problem. IRL not only recovers the expert's policy but also the underlying reward function, which captures the essence of the expert's intention and enables adaptations of the robot's motion to unseen tasks. Therefore, IRL has gained considerable interest within the robotics community [17, 18, 19, 20], with some studies employing IRL to gain a deep understanding of the reward function. However, prior IRL works often presuppose a predetermined feature space and reward structure [19, 21]. This constrains the expressiveness of reward modeling and leads to limited performance in estimating the true reward functions. 
Furthermore, the existing robotics IRL works do not analyze the learned reward functions for further usage in practice such as adapting the learned reward for RL during challenging unseen tasks. It remains unclear how one can leverage and transfer the information learned from the reward functions in new environments. Moreover, computational complexity has been a hurdle for IRL methods to be widely adopted in the robotics learning community. Recent advances focus on accelerating algorithm efficiency of IRL [22, 23, 24, 25]. In this paper, we develop a novel framework of reward learning, interpretation, and adaptation (Fig. 1) to address the aforementioned issues of the existing robotics IRL works. During the learning phase, we employ the Inverse Policy Mirror Descent (IPMD) method [25] to infer the reward from demonstrations. IPMD has been shown to be computationally efficient. It solves the IRL problem with a novel average-reward criterion under a Maximum Entropy framework [26, 27]. The Maximum Entropy framework can discern the most accurate reward estimation by guiding the policy search with the maximum entropy principle. The average-reward criterion also helps to accurately identify reward by dropping the discount factor that is often used under the classic discounted-reward setting. Since demonstrations often lack an explicit discount factor, using a discount factor that mismatches the ground truth will lead to drastically erroneous reward function estimations under the discounted setting [25]. Moreover, the average-reward criterion has been thoroughly investigated in the literature and has also been adopted in robotics learning tasks [28, 29, 30, 31, 32, 33]. It has become a common practice for RL benchmarks to use an average-reward metric for evaluation, which further motivates the adoption of the average-reward criterion for solving locomotion tasks.
To gain an in-depth understanding of the learned reward, we employ a Value Decomposition Network (VDN) [34] and utilize Integrated Gradients (IG) [35] to obtain meaningful knowledge of locomotion features leading to high rewards. We will then incorporate such important features into reward design for locomotion in challenging unseen environments, which we refer to as reward adaptation. Note that it is not a new topic to adapt motor and locomotion skills learned from human demonstrations to robots [36, 37, 38] or from simulated environments to real-life environments [7, 21, 39]. However, these works require a sophisticated design and learning of policies or controllers to achieve robust adaptation. Instead, we investigate the possibility of adapting reward functions. Related methods in adapting reward [40, 41, 42] require crafting intricate, domain-specific reward functions and learning those reward functions under diverse environments to promote the robustness of the policy. In this work, we use IRL to learn a free-form reward function parametrized by a neural network with inputs directly from the robot's state and action space. We show that the learned reward functions contain transferable information about robot locomotion behaviors and verify such properties by training agents using the learned rewards in diverse challenging environments that are not previously seen. We observe a significant performance boost in walking speed and robustness by incorporating such information. To the best of our knowledge, we are the first to analyze and adapt free-form rewards in a principled way. The salient contributions of our work are listed as follows: * **Inverse Reinforcement Learning for Bipedal Locomotion**: We propose a two-stage IRL paradigm to address bipedal locomotion tasks via IPMD. In stage one, we obtain expert policies from a full-body inverse kinematics function of Cassie.
In the next stage, IPMD learns reward functions from the near-optimal demonstrations generated by the policies learned in the first stage. Our work is the first study that applies IRL to bipedal locomotion under the average-reward criterion. * **Importance Analysis of Expert Reward Function**: We employ a Value Decomposition Network (VDN) to approximate the inferred locomotion reward function and Integrated Gradients (IG) to analyze the VDN for reward interpretation. By ensuring the monotonicity of the feature space, VDN enables the interpretation of the reward function with IG while preserving model expressiveness. We successfully perform a rigorous analysis of the importance of individual features, exposing components of the locomotion behavior that are crucial to its reward functions, thereby guiding the design of new rewards for new environments. * **Reward Adaptation in Challenging Locomotion Environments**: We further verify that the learned reward from a flat terrain and the important features extracted from our reward analysis can be seamlessly adapted to novel, unseen terrains. Our empirical results substantiate that the inferred reward function encapsulates knowledge highly relevant to robotic motions that are generalizable across different terrain scenarios. ## II Background In this section, we introduce preliminaries for Average-reward Markov Decision Processes (AMDPs). An AMDP is formalized by a tuple \((\mathcal{S},\mathcal{A},\mathsf{P},r)\), where \(\mathcal{S}\) signifies the state space, \(\mathcal{A}\) represents the action space, \(\mathsf{P}\) denotes the transition probability, and \(r\) is the reward function. At each time instance \(t\), the agent selects an action \(a\in\mathcal{A}\) from the current state \(s\in\mathcal{S}\). The system then transitions to a subsequent state \(s^{\prime}\in\mathcal{S}\) based on the probability \(\mathsf{P}(s^{\prime}|s,a)\), while the agent accrues an instantaneous reward \(r(s,a)\). 
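As a toy sanity check of the long-run average reward in an AMDP (assumptions: a deterministic two-state chain that alternates between states 0 and 1, not a model from this paper):

```python
# Toy AMDP: the chain deterministically alternates 0 -> 1 -> 0 -> ..., with
# rewards r(0) = 1 and r(1) = 3, so the empirical long-run average reward
# converges to 2 regardless of the starting state.

def average_reward(start, horizon):
    reward = {0: 1.0, 1: 3.0}
    s, total = start, 0.0
    for _ in range(horizon):
        total += reward[s]
        s = 1 - s  # deterministic transition to the other state
    return total / horizon

rho = average_reward(0, 10_000)  # -> 2.0
```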
The primary objective of the agent is to establish a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that optimizes the long-term average reward, mathematically given by \[\rho^{\pi}(s):=\lim_{T\rightarrow\infty}\frac{1}{T}\mathbb{E}\left[\sum_{t=0}^{T-1}r(s_{t},a_{t})|s_{0}=s\right]. \tag{1}\] Fig. 1: In this work, we investigate the reward function learned by Inverse Reinforcement Learning algorithms. We propose a two-stage training algorithm for Cassie to learn reward functions and optimal policies from demonstrations. We then analyze the reward function learned from those demonstrations. The learned reward is further used to train RL agents in difficult environments. Given an expert demonstration set \(\{(s_{i},a_{i})\}_{i\geq 1}\), IRL aims to extract a reward function that most accurately captures the behavior of the expert. Particularly, in this work, we adopt the Maximum Entropy Inverse Reinforcement Learning (MaxEnt-IRL) framework [26]. We denote \(r_{\theta}\) as the estimation of the reward function, where \(\theta\) is the parameter of the model of choice to represent the reward function \(r(s_{t},a_{t})\) in Eq. (1). For example, \(\theta\) can be the weights and biases in a neural network that parameterize the reward. In this work, we adopt the environment designed in [43] with the robot's joint-space state as the state space: for any state \(s=(x,\hat{x})\in\mathcal{S}\), let \(x=(q,\dot{q})\in\mathbb{R}^{2N}\) represent the robot joint position and velocity, \(N=14\) be the number of joints of Cassie and \(\hat{x}\in\mathbb{R}^{2N}\) represent the reference motions. Given a reference action \(\hat{a}\) at a reference state \(\hat{x}\), the policy outputs an augmentation term \(\delta a\) that corrects the reference action, where \(\hat{a},\delta a\in\mathbb{R}^{M},M=10\).
The result is a Proportional Derivative (PD) target, \(a=\delta a+\hat{a}\), for a low-level PD controller, which generates a torque \(\tau\in\mathbb{R}^{M}\) to track joint angles. ## III Methods In this section, we first introduce the pipeline that applies Inverse Policy Mirror Descent (IPMD) for bipedal locomotion to learn reward functions. We then outline our approach to analyze the learned reward function and methodology of conducting reward adaptation experiments. ### _Two-Stage Learning Pipeline_ Recent RL techniques for bipedal locomotion rely on carefully constructing the state and action space and designing sophisticated reward functions [43, 44, 45]. IRL models endow capabilities to learn from demonstrations. However, a practical challenge often arises: _what type of trajectory data should IRL leverage for effective learning?_ Directly recording trajectories from robots such as motion capture approaches can be laborious and time-consuming, while data derived from model-based methods such as inverse kinematics or trajectory optimization often suffer from inaccurate models and unrealistic assumptions. To get high-quality demonstrations for effective IRL, we will use imitation learning with the Markov Decision Process (MDP) environment similar to [43], which can produce computationally convenient and dynamically accurate expert demonstrations, even if we only have trajectory data generated by model-based methods. Accordingly, we propose a two-stage IRL learning pipeline that utilizes both imitation learning and IPMD. Our approach is graphically summarized in Fig. 2. In the first stage, we apply imitation learning on data generated via inverse kinematics to create near-optimal demonstrations, as subsequent IRL training and reward analysis require dynamically accurate demonstrations. 
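The action interface described in Section II can be sketched per joint as follows (the gains `kp`, `kd` are illustrative placeholders, not Cassie's actual controller gains):

```python
# Sketch of the action interface: the policy correction delta_a is added to
# the reference action a_hat to form a PD target a = delta_a + a_hat, which
# a low-level PD law converts into joint torques. Gains are hypothetical.

def pd_torque(a_hat, delta_a, q, q_dot, kp=100.0, kd=5.0):
    """Per-joint torque tracking the PD target a = delta_a + a_hat."""
    target = [da + ah for da, ah in zip(delta_a, a_hat)]
    return [kp * (t - qi) - kd * qdi for t, qi, qdi in zip(target, q, q_dot)]
```

For a single joint at position 1.0 with zero velocity, a reference action of 1.0 and a correction of 0.1 give a target of 1.1 and a torque of kp times the 0.1 tracking error.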
The imitation learning style reward function \(r_{I}\) used in this environment is defined as a weighted sum of tracking rewards at the joint level: \[r_{I}=c_{1}e^{-E_{\text{joint}}}+c_{2}e^{-\|p_{\text{CoM}}-p_{\text{CoM}}^{r}\|}+c_{3}e^{-\|p_{o}-p_{o}^{r}\|} \tag{2}\] where \(c_{1},c_{2},c_{3}\) are constant coefficients, \(E_{\text{joint}}\) is a weighted Euclidean norm of the difference between the current joint position \(q\) and the reference joint position \(q^{r}\): \(E_{\text{joint}}^{2}:=w^{T}(q-q^{r})^{2}\), \(w,q,q^{r}\in\mathbb{R}^{N}\). \(p_{\text{CoM}}\) denotes the Center of Mass (CoM) position, and \(p_{o}\) denotes pelvis orientation. The superscript \(r\) denotes the reference motion. Using expert demonstrations generated from the first stage, the second stage employs our IPMD method to learn both the optimal policy and the associated reward function in the form of a deep neural network. Concretely, in each iteration of the IPMD algorithm, we sample state-action pairs by interacting with the environment and also sample state-action pairs from demonstrations. We then employ Temporal-Difference (TD) to evaluate our current policy given the first set of sampled pairs from the environment and apply a Mirror Descent step to improve the current policy. At the end of the iteration, we update the reward estimation through gradient descent given the two sets of sampled pairs. Due to the space limit, more details can be found in [25]. ### _Analysis of the Learned Reward Function_ We extend our study to a detailed analysis of the learned reward function. The reward function \(r_{\theta}\) is a deep neural network that inherently lacks interpretability due to its black-box nature. Fig. 2: Our two-stage training pipeline. The blue box denotes the imitation learning part (first stage). The agent is then used to generate expert demonstrations, which are used by the second stage to update the reward and policy using Inverse Policy Mirror Descent.
To tackle this issue, we employ a more interpretable model, Value Decomposition Network (VDN) [34], which approximates the reward function and explains the significance of locomotion features in determining the reward value. VDN maintains a monotonic relationship between its input and output by constraining the weights and biases of the network to be positive, ensuring continuous positive gradients [46]. This property of VDN allows us to establish a monotonic mapping from the state space to the reward output without compromising the accuracy of the learned reward, since the VDN is itself built from neural networks [46]. Additionally, we aim to explore the features that are highly relevant to bipedal locomotion but may not be directly present in the state space, such as the leg length or ground reaction force, to study how these indirectly observed features affect the reward function. To facilitate this, we extend the input space of our approximation model to include these features. The full list of selected features is in Table I and a majority of them are annotated in Fig. 3. Through this approximation, we establish a relationship between the selected features and the reward function, while keeping the IRL training process separate and intact, allowing it to preserve the expressive power of deep neural nets. Equipped with an interpretable approximation from VDN, we proceed to further dissect the learned reward function using a set of neural network interpretation techniques. In particular, we find Integrated Gradients (IG), a widely recognized tool in the Deep Learning community, to be highly suitable for our objectives [35]. IG allows us to analyze the effect of individual features on the overall landscape of the reward function by perturbing the input and observing the resulting gradient changes, which in our case are manifested as variations in the neural network weights.
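A minimal illustration (not the networks used here) of the two ingredients above: a one-hidden-layer net with positive weights is monotone in each input, as in a VDN, and Integrated Gradients, approximated by a midpoint Riemann sum of finite-difference gradients along the straight path from a baseline, satisfy the completeness axiom \(\sum_{i}\text{IG}_{i}\approx F(x)-F(x^{\prime})\):

```python
import math

# Toy positive-weight network (the VDN-style monotonicity constraint).
# Weight values are illustrative placeholders.
W1 = [[0.5, 1.2], [0.3, 0.8]]
W2 = [1.0, 0.7]

def net(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Midpoint-rule IG with forward-difference gradients."""
    attr = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        for i in range(len(x)):
            bumped = list(point)
            bumped[i] += eps
            grad_i = (f(bumped) - f(point)) / eps
            attr[i] += (x[i] - baseline[i]) * grad_i / steps
    return attr

x, base = [1.0, 2.0], [0.0, 0.0]
attrs = integrated_gradients(net, x, base)
# completeness: sum(attrs) is close to net(x) - net(base)
```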
We also find that directly applying IG to the original reward function itself does not yield any meaningful outcome, due to the highly nonlinear relationship between the input (states and actions) and the output (rewards). This validates the necessity of using VDN to approximate the original reward function for better reward interpretation with IG. ### _Adaptability of the Learned Rewards on Difficult Terrains_ In this context, we explore whether our learned reward function harbors generalized knowledge that enables adaptability across varying terrains. Specifically, we test its efficacy in a purely RL-driven training paradigm, without the need for additional expert demonstrations. Intriguingly, the RL guided by the learned reward not only allows training from scratch but also produces better performance than policies learned from the hand-crafted reward. Even though the reward function was originally trained on flat terrain, our learned reward successfully guides the agent's learning in more complex environments. This observation aligns well with the intuition that a well-designed reward function encapsulates generalizable environmental knowledge. To validate this point, we present results showcasing Cassie's capability to navigate difficult terrains. More interestingly, with the understanding of reward functions, we show that factored components of the reward, i.e., the important features found during our reward analysis, can improve the quality of locomotion behaviors. This constitutes a significant contribution to the field, as traditional algorithms often require the crafting of intricate, domain-specific reward functions. ## IV Experiments ### _Two-Stage Learning Setup_ Our Cassie locomotion experiments were conducted using the MuJoCo physics simulator [47]. The training pipeline consists of two main stages as illustrated in Fig. 2.
#### IV-A1 First Stage - Training the Imitation Agent We train the Imitation agent using Soft Actor-Critic (SAC) [48]. The discount factor \(\gamma\) for this stage is set to \(0.99\). Both the policy and value functions are parameterized by \(256\times 256\) Multi-Layer Perceptrons (MLPs). For implementation, we adopt the state-of-the-art codebase from stable-baselines3 [49]. #### IV-A2 Second Stage - Learning reward functions and policies via IRL We use the Inverse Policy Mirror Descent (IPMD) method described in [25]. The reward function, policy, and value functions are all represented by \(256\times 256\) MLPs. #### IV-A3 Training Parameters Both agents are trained using \(5\times 10^{6}\) samples. We employ an experience replay buffer with a capacity of \(1\times 10^{6}\) and utilize a batch size of \(512\). The Adam optimizer [50] is employed with a learning rate set at \(3\times 10^{-4}\). These parameter settings are consistent with established norms for training Deep RL algorithms. From a simulation experiment, the optimal expert agent obtained an episodic reward of \(447.2\) while generating the corresponding expert demonstration data for the second stage; the IRL agent trained with IPMD reached a better performance: an episodic reward of \(482.87\). The fact that the IRL agent outperforms the expert demonstrations reflects the superiority of our methodology. The qualitative performance of the IRL agent has no distinguishable difference compared to the imitation agent; this is surprising since we learn both the reward functions and policies from scratch, while in the imitation learning case, a complicated reward function has already been established. \begin{table} \begin{tabular}{|c|c|c|} \hline state & action & Euclidean norm of action \\ \hline leg roll & leg pitch & pelvis pitch \\ \hline hip yaw & foot pitch & foot force \\ \hline CoM velocity & CoM angular momentum & CoM to center of foot \\ \hline \end{tabular} \end{table} TABLE I: Considered Features for Approximating Learned Rewards Fig. 3: Illustration of important features for Cassie locomotion. ### _Reward Analysis_ For the Value Decomposition Network (VDN), we adhere to the same network structure as described in [46]. We gather training samples by recording the states of the Cassie robot, along with additional data necessary for computing the features of interest. We list all features we find worth investigating in Table I. As we aim to approximate the learned reward function, we use the rewards generated by \(r_{\theta}\) as regression targets for the VDN. The optimization objective is the Mean Squared Error (MSE), thereby transforming the training of VDN into the following optimization problem: \(\min_{\psi}\text{ MSE}(\text{VDN}(\psi),r_{\theta}),\) where \(r_{\theta}\) is the learned reward function and \(\psi\) represents the parameters of the VDN, i.e., the weights and biases in neural networks. We record and compute specified feature data as input, and collect rewards computed from those data using the learned reward functions as regression targets. We employ the Adam optimizer with a learning rate of \(3\times 10^{-4}\) to train the VDN. To interpret the contribution of each feature to the reward function, we employ Integrated Gradients (IG) [35], implemented with Captum [51]. Fig. 4 demonstrates that the reward function approximated by the VDN aligns well with our intuitive understanding of what features are important for bipedal locomotion. We plot the importance change of four features to the reward during one typical Cassie walking motion executed by the IRL agent. We find that some features of interest exhibit periodic patterns, due to the nature of the periodic walking motion. This aligns with our understanding of bipedal locomotion. Some particular features exhibit a strong influence on the reward even if they have no particular pattern. We note that pelvis pitch, plotted in Fig.
4, has significant values compared to its small-scale raw input data. We conjecture that the pelvis pitch plays an important role in maintaining the stability of the robot during walking. Other features also correlate strongly with their physical meaning. For example, the left foot has a ground reaction force only when it is in contact with the ground. This is rather intuitive for robot locomotion. ### _Adaptive Reward Function_ We generate a variety of uneven terrains in MuJoCo environments as shown in Fig. 5. In particular, we create (a) random perturbed terrain, (b) gradually perturbed terrain, (c) gravel terrain, and (d) sine wave terrain, with maximum heights capped at 0.2, 0.3, 0.1, and 0.4 meters, respectively. These categories serve to evaluate the adaptability and generalization capacity of our learned reward function. We train the agent from scratch using SAC with a discount factor of \(\gamma=0.99\), following the same setup as in our imitation learning model. For comparative analysis, we also train a baseline RL agent with a handcrafted reward function defined as \(r_{h}=r_{f}+r_{s}-r_{c},\) where \(r_{f}\) encourages forward movement and corresponds to the sagittal velocity; \(r_{s}\) is a locomotion survival reward, awarded when Cassie's torso remains upright; and \(r_{c}\), the control cost, is defined as \(r_{c}=\|a\|_{2}\). The baseline agent manages to navigate these terrains, albeit in a less graceful manner with jerky motions (see the submitted video). In contrast, our approach uses a modified reward function: \(r=r_{h}+r_{\theta}\), where \(r_{\theta}\) is the reward function learned from IRL. We refer to \(r\) as the Adaptive reward. We record the average sagittal velocity of the CoM when comparing the baseline reward model and the adaptive reward model side by side. The results can be found in Table II. We also plot the sagittal travel distance in each environment, which is shown in Fig. 6.
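The reward composition above can be sketched in a few lines. The function and argument names here are ours, not from the released code, and the unit survival bonus is an assumption (the paper does not state its magnitude):

```python
import math

def handcrafted_reward(sagittal_velocity, torso_upright, action):
    # r_h = r_f + r_s - r_c
    r_f = sagittal_velocity                      # forward-motion term (sagittal velocity)
    r_s = 1.0 if torso_upright else 0.0          # survival bonus (unit magnitude assumed)
    r_c = math.sqrt(sum(a * a for a in action))  # control cost ||a||_2
    return r_f + r_s - r_c

def adaptive_reward(sagittal_velocity, torso_upright, action, r_theta):
    # r = r_h + r_theta, where r_theta is produced by the learned IRL reward network
    return handcrafted_reward(sagittal_velocity, torso_upright, action) + r_theta
```

In training, `r_theta` would be the output of the learned reward network evaluated on the current state-action pair; here it is just a scalar argument.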
We find that incorporating \(r_{\theta}\) significantly accelerates learning and produces more natural and robust locomotion behaviors, substantiating the transferability of the learned reward function across domains. Fig. 4: The top four most important features: CoM lateral velocity, pelvis pitch angle, left and right foot forces. The vertical dashed lines represent time steps when the foot touches and leaves the ground. Green indicates when the left foot strikes and red is for the left foot taking off from the ground. The same for orange (strike) and purple (take off) for the right foot. Fig. 5: Random terrains generated for testing the learned reward function. Fig. 6: Sagittal travel distance comparison between the baseline model using \(r_{h}\) and the adaptive reward model using \(r\). Note that even though the baseline model can walk up to the maximum number of time steps, it cannot walk as far as the one using the adaptive reward. ### _Analysis-based Adaptive Reward Design_ With the adaptive reward, the robot is able to walk on unseen rough terrains. However, instances of undesirable walking gaits still occasionally occur. Specifically, using the adaptive reward alone, Cassie exhibits a higher sagittal CoM velocity. In reality, such behavior is undesirable, as this inclination creates instability during locomotion on rough terrains. Consequently, the robot needs to maneuver agilely to maintain balance during walking. This leads to the robot deviating from its original lateral position, which is reflected by large variations of CoM velocity along the lateral direction. With the understanding of the learned reward, a natural question arises: _can we further exploit the learned reward functions to shape the locomotion behavior?_ We answer this question affirmatively. The top important features uncovered in the Reward Analysis improved the stability of walking behaviors when incorporated with the learned reward.
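The regression used in the Reward Analysis to approximate \(r_{\theta}\) can be sketched with an additive model in the spirit of the VDN: the prediction is a sum of independent per-feature terms fit by gradient descent on the MSE. The real network uses per-feature MLPs trained with Adam; the linear branches and plain SGD below are simplifications of ours:

```python
import random

def fit_additive(features, targets, lr=0.05, epochs=500):
    # One scalar "branch" per feature; the prediction is their sum,
    # mirroring the value-decomposition structure of the VDN.
    n = len(features[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, r in zip(features, targets):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - r                       # gradient of the MSE up to a factor of 2
            for i in range(n):
                w[i] -= lr * err * x[i]
    return w

# Synthetic sanity check: targets generated by a known additive rule are recovered.
random.seed(0)
feats = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
rews = [2.0 * x[0] - 1.0 * x[2] for x in feats]
weights = fit_additive(feats, rews)
```

The recovered weights then play the role of the per-feature contributions that Integrated Gradients attributes in the full pipeline.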
As such, we incorporate important features discovered from the reward analysis to boost the stability of the robot, or "regularize" the robot's motion. To do this, we add three additional terms with high importance scores to the adaptive reward: pelvis orientation, pelvis pitch angle, and CoM velocity, which are implemented to follow their reference motions on flat ground. We denote this reward as \(r_{v}=e^{-\|q_{o}-q_{o}^{*}\|_{2}}+e^{-\|q_{\text{pitch}}-q_{\text{pitch}}^{*}\|_{2}}+e^{-\|v_{\text{CoM}}-v_{\text{CoM}}^{*}\|_{2}}\), where \(q_{o}\) denotes the pelvis orientation in quaternion form, \(q_{\text{pitch}}\) is the pelvis pitch angle, and \(v_{\text{CoM}}\) is the CoM velocity. To verify the efficacy of \(r_{v}\), we train RL agents with SAC on four combinations of reward functions: the baseline model \(r_{h}\), the regularized model \(r_{h}+r_{v}\), the adaptive model \(r_{h}+r_{\theta}\), and the regularized adaptive model \(r_{h}+r_{\theta}+r_{v}\). We plot the CoM trajectory and the standard deviation of the velocity drift along the lateral direction in Fig. 7. Although the adaptive model allows the robot to walk further, it has a higher deviation from its original lateral position and a higher deviation of lateral velocity. We conjecture that this is partially due to the fact that the orientation is less emphasized by the adaptive reward. We also observe that purely using the adaptive reward results in a "hopping" behavior where each walking step has a brief flight phase. In reality, such loss of ground contact can lead to a highly unstable walking motion and pose a risk of failure. Surprisingly, the integration of the additional regularizing terms in the reward function \(r_{v}\) mitigates such undesirable hopping behaviors. We plot the ground reaction force of all four models in Fig. 8. Time steps when undesirable behaviors (both feet are in the air) occur are annotated with red color bars. Fig.
8(b) and (d) show a more stable and natural walking motion, compared with Fig. 8(c) (also shown in the video), indicating the efficacy of the \(r_{v}\) reward in regulating the robot's behavior. This result further demonstrates that the augmentation of the reward function with relevant extracted features leads to improved locomotion performance. ### _Zero-Shot Validation_ We observe that agents trained on diverse terrains display enhanced stability when deployed in unseen environments. For example, Cassie is able to navigate sinusoidal terrains with random height variations (Fig. 5d), without additional training. This corroborates the idea that the learned reward embodies a form of generalized knowledge beneficial for robotic locomotion across a range of terrain scenarios. ## V Conclusion In this work, we employ an IRL method to solve bipedal locomotion problems. Our analyses reveal that the learned reward function encapsulates meaningful insights and also serves as a valuable guide to understanding the underlying principles of robotic motion. The ability to learn and adapt using the inferred reward function paves the way for new avenues of research in robotics, particularly in the domain of reward inference and environmental adaptability. Our work supports the notion that leveraging learned reward functions could substantially accelerate the design, training, and deployment of robotic systems across a myriad of real-world scenarios. Our future direction will focus on hardware implementation on the Cassie robot. \begin{table} \begin{tabular}{|c|c|c|} \hline Terrain & Baseline & Adaptive \\ \hline Perturbed & 0.2617 & 0.6249 \\ \hline Gradual & 0.3970 & 0.8015 \\ \hline Gravel & 0.4132 & 0.9106 \\ \hline \end{tabular} \end{table} TABLE II: Average Center of Mass velocity (m/s) in sagittal direction Fig. 8: Ground reaction force with four reward setups: (a) \(r_{h}\), (b) \(r_{h}+r_{v}\), (c) \(r_{h}+r_{\theta}\), (d) \(r_{h}+r_{\theta}+r_{v}\). 
The orange bar denotes the left foot force, while the blue bar denotes the right. The red bar denotes time steps when no ground reaction force exists for either foot. Fig. 7: Results for regularizing robotics behavior.
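The regularizing reward \(r_{v}\) from the analysis-based design can be sketched directly from its definition. The reference values would come from flat-ground reference motions; the function and argument names here are illustrative, not from the released code:

```python
import math

def tracking_term(value, reference):
    # e^{-||value - reference||_2}: equals 1 on perfect tracking, decays otherwise.
    diff = [v - r for v, r in zip(value, reference)]
    return math.exp(-math.sqrt(sum(d * d for d in diff)))

def regularizing_reward(q_o, q_o_ref, q_pitch, q_pitch_ref, v_com, v_com_ref):
    # r_v: pelvis orientation (quaternion), pelvis pitch angle, and CoM velocity terms.
    return (tracking_term(q_o, q_o_ref)
            + tracking_term([q_pitch], [q_pitch_ref])
            + tracking_term(v_com, v_com_ref))
```

Each exponential term is bounded in \((0,1]\), so \(r_{v}\) contributes at most 3 per step and peaks when all three quantities match their references.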
2309.10405
Gap results and existence of CMC free boundary hypersurfaces in rotational domains
In this paper, we work with the existence and uniqueness of free boundary constant mean curvature hypersurfaces in rotational domains. These are domains whose boundary is generated by a rotation of a graph. Under some conditions on the function that generates the graph and a gap condition on the umbilicity tensor, we classify the CMC free boundary hypersurfaces as topological disks or annulus. Also, we construct some examples of free boundary minimal surfaces in the rotational ellipsoid that, in particular, satisfy our gap condition.
Allan Freitas, Márcio Santos, J. Sindeaux
2023-09-19T08:11:03Z
http://arxiv.org/abs/2309.10405v2
# Gap results and existence of CMC free boundary hypersurfaces in rotational domains ###### Abstract. In this paper, we work with the existence and uniqueness of free boundary constant mean curvature hypersurfaces in rotational domains. These are domains whose boundary is generated by a rotation of a graph. Under some conditions on the function that generates the graph and a gap condition on the umbilicity tensor, we classify the CMC free boundary hypersurfaces as topological disks or annulus. Also, we construct some examples of free boundary minimal surfaces in the rotational ellipsoid that, in particular, satisfy our gap condition. Key words and phrases:Free Boundary Surfaces; Constant Mean Curvature; Rigidity 2020 Mathematics Subject Classification: 53C42, 53C50, 53E10 \({}^{\ast}\)Corresponding author ###### Contents * 1 Introduction * 2 Preliminaries * 3 Gap results * 3.1 CMC Free Boundary Surfaces in 3-dimensional rotational domains * 3.2 Minimal Free Boundary Surfaces in \((n+1)\)-dimensional rotational domains * 4 Examples of CMC free boundary surfaces in the rotational ellipsoid
Chern, do Carmo and Kobayashi [12] obtained the following gap result for the second fundamental form \(A\) of the immersion: **Theorem 1** (Chern-do Carmo-Kobayashi [12], Lawson [18], Simons [25]).: _Let \(\Sigma\) be a closed minimal hypersurface in the unit sphere \(\mathbb{S}^{n+1}\). Assume that the second fundamental form \(A\) on \(\Sigma\) satisfies_ \[|A|^{2}\leq n.\] _Then_ 1. _either_ \(|A|^{2}=0\) _and_ \(\Sigma\) _is an equator;_ 2. _or_ \(|A|^{2}=n\) _and_ \(\Sigma\) _is a Clifford minimal hypersurface._ In the study of CMC hypersurfaces in the sphere, Alencar and do Carmo [1] also obtained a gap result, but now considering the umbilicity tensor \(\phi=A-\frac{H}{n}g\). **Theorem 2** (Alencar-do Carmo [1]).: _Let \(\Sigma\) be a closed, CMC hypersurface in the unit sphere \(\mathbb{S}^{n+1}\). If \(\|\phi\|^{2}\leq C_{H}\), then_ 1. _either_ \(\|\phi\|^{2}\equiv 0\) _and_ \(\Sigma^{n}\) _is totally umbilical in_ \(\mathbb{S}^{n+1}\)_,_ 2.
_or_ \(\|\phi\|^{2}\equiv C_{H}\) _and_ \(\Sigma^{n}\) _is an_ \(H(r)\)_-torus in_ \(\mathbb{S}^{n+1}\)_._ _Here, \(C_{H}\) is related to a root of a polynomial whose coefficients depend on the mean curvature \(H\) and the dimension \(n\)1._ Footnote 1: For details, see the Introduction of [1]. Several contributions start from these two characterizations and study similar phenomena for free boundary CMC hypersurfaces in the ball. In [3], Ambrozio and Nunes proved that if \(\Sigma\) is a compact free boundary minimal surface in \(\mathbb{B}^{3}\) and for all points \(x\) in \(\Sigma\), \[|A|^{2}(x)\langle x,N(x)\rangle^{2}\leq 2, \tag{1.1}\] then \(\Sigma\) is a flat equatorial disk or a critical catenoid. In higher dimensions, gap results similar to (1.1) can be obtained for \(2\)-dimensional surfaces in the ball (see [8]) and, with a topological rigidity, in submanifolds of any codimension in higher dimensional balls (see [9, Theorem 3.7]). Also, some gap results involving only the second fundamental form, as in Theorem 1, were obtained in [10], [7] and [6]. A question arises: does an analogous result hold in the context of free boundary non-minimal CMC surfaces? Barbosa, Cavalcante, and Pereira answered this question in [5]. More specifically, in that work they proved that if \(\Sigma\) is a compact free boundary CMC surface in \(\mathbb{B}^{3}\) and for all points \(x\) in \(\Sigma\), \[|\phi|^{2}\langle\vec{x},N\rangle^{2}\leq\frac{1}{2}(2+H\langle\vec{x},N\rangle)^{2},\] then \(\Sigma\) is a totally umbilical disc or a part of a Delaunay surface. In addition to the studies involving free boundary surfaces in the unit ball, investigations of this kind have also been conducted in other domains. For example, when the ambient space is a wedge (Lopez [19]), a slab (Ainouz and Souam, [2]), a cone (Choe [13]) or a cylinder (Lopez and Pyo [20]).
We also cite the work [21] by Maximo, Nunes, and Smith, where they study free boundary minimal annuli inside convex subsets of \(3\)-dimensional Riemannian manifolds with nonnegative Ricci curvature. Regarding rigidity conclusions starting from a gap condition, Andrade, Barbosa, and Pereira [4] established some results for balls conformal to the Euclidean ball. More recently, when the ambient space is a strictly convex domain in a \(3\)-dimensional Riemannian manifold with sectional curvature bounded above, and \(\Sigma\) is a CMC free boundary surface in this region, Min and Seo [22] established a pinching condition on the length of the umbilicity tensor on \(\Sigma\). This criterion ensures that the surface is topologically equivalent to a disk or an annulus. In the particular case where the domain is a geodesic ball of a \(3\)-dimensional space form, they concluded that \(\Sigma\) is a spherical cap or a Delaunay surface. In [6], the first author, jointly with Barbosa, Melo, and Vitorio, investigated the existence of compact free boundary minimal hypersurfaces immersed in domains whose boundary is a regular level set, in particular giving some gap results for free boundary minimal hypersurfaces immersed in a Euclidean ball and in a rotational ellipsoid. This work explores some gap results for CMC free boundary surfaces in the rotational domains described below. By considering a curve \(\alpha(t)=(f(t),t)\), where \(f\) is a positive real-valued smooth function, we generate a hypersurface \(\partial\Omega\) by revolving this curve about an appropriate axis. In this sense, we can describe a domain \(\Omega\) such that \(\partial\Omega\subset F^{-1}(1)\) is a revolution hypersurface and \(F:\mathbb{R}^{n}\times I\to\mathbb{R}\) is a smooth function given by \[F(x,y)=\frac{1}{2}\left(|x|^{2}-f^{2}(y)\right)+1.\] Furthermore, we consider a hypersurface \(\Sigma\), which is a free boundary CMC surface in \(\Omega\).
In our first results, we use an auxiliary function \(g\) given by \[g(x,y)=\langle\bar{\nabla}F,N\rangle\] to get a topological characterization for CMC free boundary surfaces in \((n+1)\)-dimensional rotational domains, with \(n\geq 2\). In particular, we obtain the following gap result for CMC surfaces in these domains. **Theorem 3**.: _Let \(\Sigma^{2}\) be a compact CMC surface with free boundary in \(F^{-1}(1)\). If \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\) and_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2}\] _on \(\Sigma\), then \(\Sigma\) is homeomorphic to a disk or an annulus._ In the higher dimensional case, we prove the following result for minimal free boundary hypersurfaces. **Theorem 4**.: _Let \(\Sigma^{n}\) be an \(n\)-dimensional free boundary minimal hypersurface in a domain \(\Omega\) with boundary \(\partial\Omega\subset F^{-1}(1)\). Assume that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\). If_ \[|A|^{2}g(x,y)^{2}\leq\frac{n}{n-1},\] _for every \((x,y)\in\Sigma^{n}\), then one of the following is true:_ 1. _\(\Sigma^{n}\) is diffeomorphic to a disk \(\mathbb{D}^{n}\)._ 2. _\(\Sigma^{n}\) is diffeomorphic to \(\mathbb{S}^{1}\times\mathbb{D}^{n-1}\) and \(C(\Sigma^{n})\) is a closed geodesic._ Furthermore, we construct new examples of CMC surfaces that are free boundary on the rotational ellipsoid, exploring the technique from [5]. This yields examples of catenoids, nodoids, and onduloids in this domain. This paper is organized as follows. In the second section, we present some preliminaries and obtain auxiliary lemmas that lead to the main results of Section 3, about CMC free boundary surfaces in 3-dimensional rotational domains and minimal free boundary surfaces in \((n+1)\)-dimensional rotational domains.
Finally, in the last section, we get some examples of Delaunay surfaces that are free boundary and satisfy our pinching condition in the particular case where the ambient space is a rotational ellipsoid. ## 2. Preliminaries Throughout this paper, we will consider \(\Omega\subset\mathbb{R}^{n+1}\), with \(n\geq 2\), to be a rotational domain with smooth boundary \(\partial\Omega\subset F^{-1}(1)\), where \(F:\mathbb{R}^{n}\times I\to\mathbb{R}\) is a smooth function for some interval \(I\subset\mathbb{R}\). We denote by \(\bar{N}:=\frac{\nabla F}{|\nabla F|}\) the outward unit normal to \(\partial\Omega\). Let \(\Sigma^{n}\hookrightarrow\Omega\) be a hypersurface with boundary such that \(\partial\Sigma\subset\partial\Omega\). We denote by \(N\) the outward unit normal to \(\Sigma\) and by \(\nu\) the outward conormal along \(\partial\Sigma\) in \(\Sigma\). In this scope, a hypersurface \(\Sigma\) is called _free boundary_ if \(\Sigma\) meets \(\partial\Omega\) orthogonally, that is, \(\nu=\bar{N}\) along \(\partial\Sigma\) or, equivalently, \(\langle N,\bar{N}\rangle=0\) along \(\partial\Sigma\). More specifically, for \(n=2\), let us consider a rotational hypersurface in the following sense. Let \(\alpha(t)=(f(t),t)\) be the plane curve in the \(x_{1}x_{3}\)-plane that is the graph of a positive real-valued smooth function \(f:I\to\mathbb{R}\). Let \(\theta\) parametrize the unit circle in the plane \(x_{3}=0\). The surface of revolution with generatrix \(\alpha\) can be parametrized by \[X(\theta,t)=(\theta f(t),t)=(\cos\theta f(t),\sin\theta f(t),t).\] In this scope, we study free boundary surfaces \(\Sigma\) in domains \(\Omega\) whose boundary is the surface of revolution given above.
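As a quick numerical sanity check of this setup, the parametrization \(X(\theta,t)\) of the revolution surface indeed lies in the level set \(F^{-1}(1)\), since \(|X|^{2}\) in the first two coordinates equals \(f(t)^{2}\). The circular-arc profile below is our illustrative choice, not one from the paper:

```python
import math

def f(t):
    # Illustrative generatrix: a circular arc, whose revolution is a sphere of radius 2.
    return math.sqrt(4.0 - t * t)

def F(x1, x2, y):
    # F(x, y) = (|x|^2 - f(y)^2) / 2 + 1
    return 0.5 * (x1 * x1 + x2 * x2 - f(y) ** 2) + 1.0

def X(theta, t):
    # X(theta, t) = (cos(theta) f(t), sin(theta) f(t), t)
    return (math.cos(theta) * f(t), math.sin(theta) * f(t), t)

samples = [(0.3, 0.5), (2.0, -1.0), (5.5, 1.7)]
values = [F(*X(theta, t)) for theta, t in samples]
```

Up to floating-point roundoff, every sampled value equals 1, the regular value defining the boundary.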
Let us also consider \(F:\mathbb{R}^{2}\times I\to\mathbb{R}\) to be the smooth function defined by \[F(x,y)=\frac{1}{2}\left(|x|^{2}-f^{2}(y)\right)+1, \tag{2.1}\] where \(x=(x_{1},x_{2})\) and \(y=x_{3}\); we have that \(\partial\Omega\subset F^{-1}(1).\) Notice that \(1\) is a regular value of \(F\). Observe that \[\nabla F(x,y)=(x,-f(y)f^{\prime}(y))=(x,y)+(0,-y-f(y)f^{\prime}(y)),\] where \(y=\langle(x,y),E_{3}\rangle.\) Then, \[D^{2}F=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-(f^{\prime}(y))^{2}-f(y)f^{\prime\prime}(y)\end{pmatrix}=\operatorname{Id}_{3\times 3}+\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&-(f^{\prime}(y))^{2}-f(y)f^{\prime\prime}(y)-1\end{pmatrix}.\] Therefore, for all \(X,Y\in T(\Sigma)\) we have \[\operatorname{Hess}_{\Sigma}F(X,Y) =\langle\bar{\nabla}_{X}(\bar{\nabla}F)^{\top},Y\rangle\] \[=\langle\bar{\nabla}_{X}(\bar{\nabla}F-\langle\bar{\nabla}F,N\rangle N),Y\rangle\] \[=D^{2}F(X,Y)+\langle\bar{\nabla}F,N\rangle\langle A_{N}X,Y\rangle \tag{2.2}\] \[=\langle X,Y\rangle+g(x,y)\langle A_{N}X,Y\rangle-((f^{\prime}(y))^{2}+f(y)f^{\prime\prime}(y)+1)\langle TX,Y\rangle,\] where \(T:T_{(x,y)}\Sigma\to T_{(x,y)}\Sigma\) is given by \(TX=\langle X,E_{3}^{\top}\rangle E_{3}^{\top}\) and \[g(x,y)=\langle\bar{\nabla}F,N\rangle=\langle(x,y),N\rangle+\langle N,E_{3}\rangle(-y-f(y)f^{\prime}(y)).\] It is easy to check that \(T\) is a self-adjoint operator for which \(E_{3}^{\top}\) is an eigenvector associated with the eigenvalue \(|E_{3}^{\top}|^{2}\). Besides, we can take any nonzero vector in \(T\Sigma\), orthogonal to \(E_{3}^{\top}\), to verify that zero is also an eigenvalue of \(T\). Therefore \[0\leq\langle TX,X\rangle\leq|E_{3}^{\top}|^{2}|X|^{2},\ \forall X\in T_{(x,y)}\Sigma. \tag{2.3}\] **Lemma 1**.: _Suppose that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\).
Then for each \((x,y)\in\Sigma\), the eigenvalues of \(\operatorname{Hess}_{\Sigma}F(x,y)\) are greater than or equal to_ \[1+k_{1}g(x,y)\text{ and }1+k_{2}g(x,y),\] _where \(k_{1}\leq k_{2}\) are the principal curvatures of \(\Sigma\) with respect to the normal vector \(N\)._ Proof.: Suppose that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\); then using (2.2) and (2.3), we have that \[\operatorname{Hess}_{\Sigma}F(X,X) =\langle X,X\rangle+g(x,y)\langle A_{N}X,X\rangle-((f^{\prime}(y))^{2}+f(y)f^{\prime\prime}(y)+1)\langle TX,X\rangle\] \[\geq\langle X+g(x,y)AX,X\rangle.\] But the eigenvalues of \(X\to X+g(x,y)AX\) are \[1+k_{1}g(x,y)\text{ and }1+k_{2}g(x,y),\] where \(k_{1}\leq k_{2}\) are the eigenvalues of \(A\). Then \(k_{1}\) and \(k_{2}\) are the principal curvatures of \(\Sigma\), and the eigenvalues \(\lambda_{1}\leq\lambda_{2}\) of \(\operatorname{Hess}_{\Sigma}F(x,y)\) satisfy \[\lambda_{1}\geq 1+k_{1}g(x,y)\text{ and }\lambda_{2}\geq 1+k_{2}g(x,y).\] **Remark 1**.: _Observe that if \((f^{\prime})^{2}+ff^{\prime\prime}+1=0\), we get_ \[0=(f^{\prime}(y))^{2}+f(y)f^{\prime\prime}(y)+1=(y+f(y)f^{\prime}(y))^{\prime}.\] _Then,_ \[y+f(y)f^{\prime}(y)=c_{1},\] _where \(c_{1}\) is a constant. Thus,_ \[(f^{2}(y))^{\prime}=2f(y)f^{\prime}(y)=2(c_{1}-y)=(2c_{1}y-y^{2})^{\prime}.\] _Therefore,_ \[f^{2}(y)=2c_{1}y-y^{2}+c_{2},\] _where \(c_{2}\) is a constant. It implies that_ \[F(x,y)=\frac{1}{2}(|x|^{2}+y^{2}-2c_{1}y-c_{2})+1.\] _Then, the set \(F^{-1}(1)\) is the sphere_ \[x_{1}^{2}+x_{2}^{2}+(y-c_{1})^{2}=c_{2}+c_{1}^{2}.\] ## 3. Gap results This section aims to give a topological classification of CMC free boundary hypersurfaces in the rotational domains, as defined earlier. We employ a gap condition on the umbilicity tensor and on the graph function whose rotation generates the boundary of the domain. We subdivide our analysis into the three-dimensional and higher dimensional cases.
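The computation in Remark 1 above can be verified symbolically: with \(f^{2}(y)=2c_{1}y-y^{2}+c_{2}\), the quantity \((f^{\prime})^{2}+ff^{\prime\prime}+1\) vanishes identically, so the equality case of the hypothesis forces the boundary to be a sphere. A sketch using sympy:

```python
import sympy as sp

# Symbolic check of Remark 1: with f(y)^2 = 2*c1*y - y**2 + c2,
# the expression (f')^2 + f f'' + 1 simplifies to zero identically.
y, c1, c2 = sp.symbols('y c1 c2', real=True)
f = sp.sqrt(2 * c1 * y - y**2 + c2)
expr = sp.simplify(sp.diff(f, y)**2 + f * sp.diff(f, y, 2) + 1)
```

Equivalently, since \((f^{2})^{\prime\prime}=2(f^{\prime})^{2}+2ff^{\prime\prime}\), the expression equals \((f^{2})^{\prime\prime}/2+1\), and \((2c_{1}y-y^{2}+c_{2})^{\prime\prime}=-2\) gives zero directly.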
### CMC Free Boundary Surfaces in \(3\)-dimensional rotational domains In this subsection, we get a topological characterization for CMC free boundary surfaces in \(3\)-dimensional rotational domains. The next proposition shows that the gap condition given below implies the convexity of \(F\) on \(\Sigma\); the proof of the result follows the same steps as in [5, Lemma 2.1]. **Proposition 1**.: _Let \(\Sigma\) be a compact free boundary CMC surface in \(\Omega\). Assume that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\) and for all points \((x,y)\) in \(\Sigma\),_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2}, \tag{3.1}\] _where \(\phi=A-\frac{H}{2}\langle\cdot,\cdot\rangle\) is the umbilicity tensor. Then,_ \[Hess_{\Sigma}F(X,X)\geq 0,\] _for all \((x,y)\in\Sigma\) and \(X\in T_{(x,y)}\Sigma\)._ Proof.: By Lemma 1, the eigenvalues \(\lambda_{1}\leq\lambda_{2}\) of \(\mathrm{Hess}_{\Sigma}F(x,y)\) satisfy \[\lambda_{1}\geq 1+k_{1}g(x,y):=\tilde{\lambda_{1}}\text{ and }\lambda_{2}\geq 1+k_{2}g(x,y):=\tilde{\lambda_{2}},\] where \(k_{1}\) and \(k_{2}\) are the principal curvatures of \(\Sigma\). In order to prove \(\mathrm{Hess}_{\Sigma}F(X,X)\geq 0\), we need to show that \(\lambda_{1}\) and \(\lambda_{2}\) are nonnegative. Using condition (3.1) we have \[\begin{split} 4\tilde{\lambda_{1}}\tilde{\lambda_{2}}&=4(1+k_{1}g(x,y))(1+k_{2}g(x,y))\\ &=4+4k_{2}g(x,y)+4k_{1}g(x,y)+4k_{1}k_{2}g(x,y)^{2}\\ &=4+4Hg(x,y)+2(H^{2}-|A|^{2})g(x,y)^{2}\\ &=(2+Hg(x,y))^{2}-2|\phi|^{2}g(x,y)^{2}\geq 0.\end{split} \tag{3.2}\] Therefore, it suffices to show that at least one \(\tilde{\lambda_{i}}\) is non-negative. For this, we will show that the function \(v\) defined on \(\Sigma\) by \[v:=\tilde{\lambda_{1}}+\tilde{\lambda_{2}}=2+Hg(x,y)\] is nonnegative. Note that we can assume that \(\Sigma\) is not totally umbilical; otherwise the claim is immediate. Let us suppose that \(v(p)<0\) at some point \(p\in\Sigma\).
The free boundary condition ensures that \[v=2+Hg(x,y)=2\] along \(\partial\Sigma\). Choose \(q\in\partial\Sigma\) and let \(\alpha:[0,1]\to\Sigma\) be a continuous curve such that \(\alpha(0)=p\) and \(\alpha(1)=q\). Since \(v\) changes sign along \(\alpha\), there is a point \(p_{0}=\alpha(t_{0}),\ t_{0}\in(0,1)\), such that \(v(p_{0})=0\). The condition (3.1) implies that \[|\phi|^{2}(p_{0})=0,\] and hence \(p_{0}\) is an umbilical point. Since \(\Sigma\) is not a totally umbilical surface, we have that \(p_{0}\) is an isolated umbilical point. So there is \(\epsilon>0\) such that \(v(\alpha(t))<0\) if \(t\in[t_{0}-\epsilon,t_{0})\) and \(v(\alpha(t))>0\) if \(t\in(t_{0},t_{0}+\epsilon]\), or vice-versa. On the other hand, since \(0=v(p_{0})=2+Hg(x,y)\), we have \(g(x,y)(p_{0})\neq 0\). Let \(D_{r_{0}}(p_{0})\) be a geodesic disk with radius \(r_{0}\) centered at \(p_{0}\) such that \(p_{0}\) is the only umbilical point of \(\Sigma\) on \(D_{r_{0}}(p_{0})\). We can choose \(r_{0}\) and \(\epsilon\) in such a way that \(\alpha(t)\in D_{r_{0}}(p_{0})\) for all \(t\in[t_{0}-\epsilon,t_{0}+\epsilon]\). Choose \(\tilde{r}_{0}<r_{0}\) such that \(\alpha(t_{0}-\epsilon),\ \alpha(t_{0}+\epsilon)\notin D_{\tilde{r}_{0}}(p_{0})\). Let \(\mathcal{A}=D_{r_{0}}(p_{0})\setminus D_{\tilde{r}_{0}}(p_{0})\) be the annulus determined by these two disks and let \(\beta\) denote a path in \(\mathcal{A}\) joining the points \(\alpha(t_{0}-\epsilon)\) and \(\alpha(t_{0}+\epsilon)\). Again, \(v\) changes sign along \(\beta\), and therefore there is a point \(\tilde{q}\in D_{r_{0}}(p_{0})\) such that \(v(\tilde{q})=0\). But, as above, this implies that \(\tilde{q}\) is another umbilical point in \(D_{r_{0}}(p_{0})\), which is a contradiction, and we conclude that \(v\geq 0\) as desired. Then \(\lambda_{i}\geq\tilde{\lambda_{i}}\geq 0\) for all \(i\). Therefore \(Hess_{\Sigma}F(X,X)\geq 0\).
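The algebraic identity used in (3.2) can be double-checked symbolically. Here \(H=k_{1}+k_{2}\), \(|A|^{2}=k_{1}^{2}+k_{2}^{2}\), and \(|\phi|^{2}=|A|^{2}-H^{2}/2\), the squared norm of the umbilicity tensor in dimension two; a sketch using sympy:

```python
import sympy as sp

# Check the identity: 4(1 + k1 g)(1 + k2 g) = (2 + H g)^2 - 2 |phi|^2 g^2,
# with H = k1 + k2 and |phi|^2 = (k1^2 + k2^2) - H^2/2.
k1, k2, g = sp.symbols('k1 k2 g', real=True)
H = k1 + k2
phi_sq = k1**2 + k2**2 - H**2 / 2
lhs = 4 * (1 + k1 * g) * (1 + k2 * g)
rhs = (2 + H * g)**2 - 2 * phi_sq * g**2
difference = sp.expand(lhs - rhs)
```

Expanding both sides, each reduces to \(4+4Hg+4k_{1}k_{2}g^{2}\), which is exactly the middle step of (3.2).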
**Remark 2**.: _Observe from (3.2) that, in order to prove that the gap condition (3.1) holds, it is enough to show that \(\tilde{\lambda_{i}}\) is non-negative for \(i=1,2\)._ **Lemma 2**.: _Suppose that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\). Then the Weingarten operator \(A^{\mathbb{B}^{3}}_{\partial\Omega}\) of \(F^{-1}(1)=\partial\Omega\) in \(\mathbb{R}^{3}\) with respect to the inward unit normal satisfies_ \[\langle A^{\mathbb{B}^{3}}_{\partial\Omega}X,X\rangle\geq k_{1}|X|^{2}>0,\ \forall X\in T\partial\Omega,\ X\neq 0.\] Proof.: We claim that both eigenvalues \(k_{1}\leq k_{2}\) of \(A^{\mathbb{B}^{3}}_{\partial\Omega}\) are positive. Let \(U\subset\mathbb{R}^{2}\) be an open set and \(x:U\subset\mathbb{R}^{2}\to V\subset\partial\Omega\) the immersion \[x(\theta,t)=(\cos\theta f(t),\sin\theta f(t),t),\ (\theta,t)\in U.\] A straightforward calculation shows that the Gaussian curvature of \(\partial\Omega\) at \(x(\theta,t)\) is \[K(\theta,t)=-\frac{ff^{\prime\prime}}{(1+(f^{\prime})^{2})^{2}f^{2}}>0.\] Hence, \(K\) is strictly positive on \(\partial\Omega\). In particular, \(k_{1}\) and \(k_{2}\) have the same sign. Furthermore, a simple calculation gives us \[H=\frac{1+(f^{\prime})^{2}-ff^{\prime\prime}}{2f(1+(f^{\prime})^{2})^{\frac{3}{2}}}>0.\] Therefore \(k_{2}>0\) and \(k_{1}>0\). Thus, for all \(X\in T\partial\Omega\) with \(X\neq 0\), \[\langle A^{\mathbb{B}^{3}}_{\partial\Omega}X,X\rangle\geq k_{1}|X|^{2}>0.\] Now we are in a position to prove Theorem 3. Proof of Theorem 3.: First, we claim that the geodesic curvature \(k_{g}\) of \(\partial\Sigma\) in \(\Sigma\) is positive.
In fact, given \(X,Y\in T\partial\Sigma\), we have on \(\partial\Sigma\) that \[\nabla^{\mathbb{B}^{3}}_{X}Y=\nabla^{\partial\Omega}_{X}Y+\langle A^{\mathbb{ B}^{3}}_{\partial\Omega}X,Y\rangle\bar{N}=\nabla^{\partial\Sigma}_{X}Y+\langle A^{ \partial\Omega}_{\partial\Sigma}X,Y\rangle N+\langle A^{\mathbb{B}^{3}}_{ \partial\Omega}X,Y\rangle\bar{N}\] and \[\nabla^{\mathbb{B}^{3}}_{X}Y=\nabla^{\Sigma}_{X}Y+\langle A^{\mathbb{B}^{3}}_ {\Sigma}X,Y\rangle N=\nabla^{\Sigma}_{X}Y+\langle A^{\Sigma}_{\partial\Sigma }X,Y\rangle\bar{N}+\langle A^{\mathbb{B}^{3}}_{\Sigma}X,Y\rangle N.\] Then, comparing the two expressions, we have \(A^{\Sigma}_{\partial\Sigma}=A^{\mathbb{B}^{3}}_{\partial\Omega}\) on \(\partial\Sigma\), where \(\partial\Omega=F^{-1}(1)\). Hence, if \(X\in T\partial\Sigma\) is a unit vector, it follows from Lemma 2 that \[k_{g}=\langle A^{\Sigma}_{\partial\Sigma}X,X\rangle=\langle A^{\mathbb{B}^{3} }_{\partial\Omega}X,X\rangle>0. \tag{3.3}\] Now, observe that if either \(\Sigma\) is totally umbilical or \(\Sigma\) has nonnegative Gaussian curvature everywhere, then \(\Sigma\) is homeomorphic to a disk. In fact, if \(\Sigma\) is totally umbilical, the Gaussian curvature \(K_{\Sigma}\) of \(\Sigma\) satisfies \[K_{\Sigma}=H^{2}\geq 0.\] Then, in any case, \(\Sigma\) has nonnegative Gaussian curvature everywhere. From the Gauss-Bonnet theorem and (3.3), it follows that \[\int_{\Sigma}K_{\Sigma}+\int_{\partial\Sigma}k_{g}=2\pi\mathcal{X}(\Sigma)>0,\] which shows that \[\mathcal{X}(\Sigma)=2-2\hat{g}-r>0,\] where \(\hat{g}\) and \(r\) are, respectively, the genus of \(\Sigma\) and the number of connected components of \(\partial\Sigma\). Then, \(\hat{g}=0\) and \(r=1\). Therefore, \(\mathcal{X}(\Sigma)=1\), \(\Sigma\) is orientable and has exactly one boundary component. Thus, \(\Sigma\) is homeomorphic to a disk. 
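The topological step above can be double-checked by brute force: among compact orientable surfaces with boundary, the only pair of genus \(\hat{g}\geq 0\) and number of boundary components \(r\geq 1\) with \(2-2\hat{g}-r>0\) is \((0,1)\), the disk. A minimal sketch (not from the paper; the search bound is illustrative, since the Euler characteristic only decreases as \(\hat{g}\) or \(r\) grows):

```python
# Enumerate pairs (g, r): genus g >= 0 and number of boundary
# components r >= 1 of a compact orientable surface with boundary.
# Euler characteristic: chi = 2 - 2*g - r.  The bound 50 is
# illustrative; chi > 0 already fails for any larger g or r.
positive_chi = [(g, r) for g in range(50) for r in range(1, 50)
                if 2 - 2 * g - r > 0]
print(positive_chi)  # [(0, 1)] -- only the disk
```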
Therefore, from now on, let us assume that \(\Sigma\) is not a totally umbilical surface and has negative Gaussian curvature at some point of \(\Sigma\), and consider \[\mathcal{C}=\{p\in\Sigma;F(p)=\min_{x\in\Sigma}F(x)\}.\] Given \(p,q\in\mathcal{C}\), let \(\gamma:[0,1]\to\Sigma\) be a geodesic such that \(\gamma(0)=p\) and \(\gamma(1)=q\). It follows from Proposition 1 that \(\text{Hess}_{\Sigma}F\geq 0\) on \(\Sigma\). Then, \[\frac{d^{2}}{dt^{2}}(F\circ\gamma)=\text{Hess}_{\Sigma}F\left(\frac{d\gamma} {dt},\frac{d\gamma}{dt}\right)\geq 0\] for all \(t\in[0,1]\). Since \(p,q\in\mathcal{C}\), we have \[\frac{d}{dt}(F\circ\gamma)(0)=\frac{d}{dt}(F\circ\gamma)(1)=0,\] which implies that \(F\) is constant on \(\gamma\) by the maximum principle. Then, we conclude that \((F\circ\gamma)(t)\equiv\min_{\Sigma}\!F.\) Therefore, \(\gamma([0,1])\subset\mathcal{C}\) and \(\mathcal{C}\) must be a totally convex subset of \(\Sigma\). In particular, the total convexity of \(\mathcal{C}\) also ensures that \(\gamma([0,1])\subset\mathcal{C}\) for every geodesic loop \(\gamma:[0,1]\to\Sigma\) based at a point \(p\in\mathcal{C}\). Moreover, using (3.3), we see that each geodesic \(\gamma\) connecting two points of \(\mathcal{C}\) lies entirely in the interior of \(\Sigma\), that is, the trace of \(\gamma\) contains no points of \(\partial\Sigma\). Hence, \(\mathcal{C}\) is contained in the interior of \(\Sigma\). Finally, we claim that \(\Sigma\) is homeomorphic to either a disk or an annulus. To see this, we divide the argument into two cases: Case 1: \(\mathcal{C}\) consists of a single point. Case 2: \(\mathcal{C}\) contains more than one point. For Case 1, let \(p\in\Sigma\setminus\partial\Sigma\) be the only point of \(\mathcal{C}\). Suppose that there is a non-trivial homotopy class \([\alpha]\in\pi_{1}(\Sigma,p)\); then we can find a geodesic loop \(\gamma:[0,1]\to\Sigma\), \(\gamma(0)=\gamma(1)=p\), with \(\gamma\in[\alpha]\). 
But, since \(\mathcal{C}\) is totally convex, \(\gamma([0,1])\subset\mathcal{C}\) and, in particular, \(\mathcal{C}\) has more than one point, which is a contradiction. This implies that \(\pi_{1}(\Sigma,p)\) is trivial. Thus, \(\Sigma\) is simply connected and we conclude that \(\Sigma\) is homeomorphic to a disk. For Case 2, we may assume that \(\Sigma\) is not homeomorphic to a disk. Given \(p\in\mathcal{C}\), we can find a geodesic loop \(\gamma:[0,1]\to\Sigma\), \(\gamma(0)=\gamma(1)=p\), belonging to a non-trivial homotopy class \([\alpha]\in\pi_{1}(\Sigma,p)\). The total convexity of \(\mathcal{C}\) ensures that \(\gamma([0,1])\subset\mathcal{C}.\) We claim that \(\gamma\) is a regular curve. Indeed, if \(\gamma^{\prime}(0)\neq\gamma^{\prime}(1)\), we can choose \(\epsilon_{0}>0\) small and for each \(\epsilon<\epsilon_{0}\) consider the minimizing geodesic \(\tilde{\gamma}_{\epsilon}\) joining \(\gamma(1-\epsilon)\) and \(\gamma(\epsilon)\). Since \(\mathcal{C}\) is totally convex and \(\gamma\subset\mathcal{C}\), we conclude that \(\tilde{\gamma}_{\epsilon}\subset\mathcal{C}.\) Now, we can choose a nonempty open set \(U\subset\bigcup_{\epsilon<\epsilon_{0}}\tilde{\gamma}_{\epsilon}\) contained in \(\mathcal{C}\). Thus, for any geodesic \(\beta\) with trace in \(U\), \[0=\frac{d^{2}}{dt^{2}}(F\circ\beta)=\text{Hess}_{\Sigma}F\left(\frac{d\beta}{ dt},\frac{d\beta}{dt}\right)\geq 0.\] Therefore, \(\text{Hess}_{\Sigma}F\left(\frac{d\beta}{dt},\frac{d\beta}{dt}\right)=0\) in \(U\). By the proofs of Lemma 1 and Proposition 1, \[0=\text{Hess}_{\Sigma}F(e_{i},e_{i})\geq 1+\langle\bar{\nabla}F,N\rangle k_{i} \geq 0.\] Then, \[1+\langle\bar{\nabla}F,N\rangle k_{1}=1+\langle\bar{\nabla}F,N\rangle k_{2}=0,\] and we get that \(k_{1}=k_{2}\) in \(U\). Thus, the open subset \(U\) is totally umbilical, which shows that \(\Sigma\) must be totally umbilical, a contradiction. Therefore, \(\mathcal{C}\) must coincide with the closed geodesic \(\gamma\). 
Since \([\alpha]\) was chosen to be arbitrary, this implies that \(\pi_{1}(\Sigma,p)\approx\mathbb{Z}\) and \(\Sigma\) is homeomorphic to an annulus. ### Minimal Free Boundary Surfaces in \((n+1)\)-dimensional rotational domains In this subsection, let us consider a rotational hypersurface in the following sense. Let \(\alpha(t)=(f(t),t)\) be a plane curve that is the graph of a positive real valued smooth function \(f:I\to\mathbb{R}\) in the \(x_{1}x_{n+1}\)-plane. Let \(\theta\) be a parametrization of the \((n-1)\)-dimensional unit sphere in the hyperplane \(x_{n+1}=0\). The hypersurface of revolution with generatrix \(\alpha\) can be parametrized by \[X(\theta,t)=(\theta f(t),t).\] In this setting, we study minimal free boundary surfaces in domains \(\Omega\) whose boundary is a hypersurface of revolution. Let us denote \(x=(x_{1},x_{2},...,x_{n})\) and \(y=x_{n+1}\). Let \(F:\mathbb{R}^{n+1}=\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}\) be the smooth function defined by \[F(x,y)=\frac{1}{2}\left(|x|^{2}-f^{2}(y)\right)+1,\] so that \(\partial\Omega\subset F^{-1}(1).\) Observe that, analogously to what was done in the previous section for dimension \(3\), denoting by \(\Sigma\) a minimal free boundary surface in \(\Omega\), we have \[\nabla F(x,y)=(x,-f(y)f^{\prime}(y))=(x,y)+(0,-y-f(y)f^{\prime}(y)),\] where \(y=\langle(x,y),E_{n+1}\rangle.\) Then, for all \(X,Y\in T(\Sigma)\) we have \[\operatorname{Hess}_{\Sigma}F(X,Y)=\langle X,Y\rangle+g(x,y)\langle A_{N}X,Y \rangle-((f^{\prime}(y))^{2}+f(y)f^{\prime\prime}(y)+1)\langle TX,Y\rangle,\] where \(T:T_{(x,y)}\Sigma\to T_{(x,y)}\Sigma\) is given by \(TX=\langle X,E_{n+1}^{\top}\rangle E_{n+1}^{\top}\) and \[g(x,y)=\langle\bar{\nabla}F,N\rangle=\langle(x,y),N\rangle+\langle N,E_{n+1} \rangle(-y-f(y)f^{\prime}(y)).\] We can write \[\operatorname{Hess}_{\Sigma}F(X,X)=\langle X,X\rangle+\langle A(X,X),(\nabla F )^{\perp}\rangle-((f^{\prime}(y))^{2}+f(y)f^{\prime\prime}(y)+1)\langle TX,X\rangle. \tag{3.4}\] It is 
easy to check that \(T\) is a self-adjoint operator for which \(E_{n+1}^{\top}\) is an eigenvector associated with the eigenvalue \(|E_{n+1}^{\top}|^{2}\). Besides, we can take any nonzero vector in \(T\Sigma\) orthogonal to \(E_{n+1}^{\top}\) to verify that zero is also an eigenvalue of \(T\). Therefore \[0\leq\langle TX,X\rangle\leq|E_{n+1}^{\top}|^{2}|X|^{2},\ \forall X\in T_{(x,y)}\Sigma.\] **Lemma 3**.: _[_11_, Chen]_ _Let \(a_{1},...,a_{n}\) and \(b\) be real numbers. If_ \[\sum_{i=1}^{n}a_{i}^{2}\leq\frac{(\sum_{i=1}^{n}a_{i})^{2}}{n-1}-\frac{b}{n-1},\] _then \(2a_{i}a_{j}\geq\frac{b}{n-1}\) for every \(i,j\in\{1,...,n\}\)._ The next proposition shows that the gap condition given below implies the convexity of \(F\) on \(\Sigma\). **Proposition 2**.: _Let \(\Sigma^{n}\) be an \(n\)-dimensional minimal free boundary hypersurface in \(\Omega\), with \(n\geq 3\). Assume that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\) and that_ \[|\nabla F^{\perp}|^{2}|A(x,y)|^{2}\leq\frac{n}{n-1}, \tag{3.5}\] _for every \((x,y)\in\Sigma^{n}\). Then,_ \[Hess_{\Sigma}F(X,X)\geq 0,\] _for all \((x,y)\in\Sigma\) and \(X\in T_{(x,y)}\Sigma\)._ Proof.: Suppose that \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\); then, using (3.4), we get \[\operatorname{Hess}_{\Sigma}F(X,X)\geq\langle X,X\rangle+\langle A(X,X),( \nabla F)^{\perp}\rangle. \tag{3.6}\] Let \(\{e_{1},...,e_{n}\}\) be an orthonormal basis of eigenvectors of \(\operatorname{Hess}_{\Sigma}F\) at \((x,y)\in\Sigma\) with respective eigenvalues \(\lambda_{1},...,\lambda_{n}\). We want to show that \(\lambda_{i}\geq 0\) for every \(i\). 
By (3.6), \(\lambda_{i}\geq\tilde{\lambda_{i}}:=1+\langle A(e_{i},e_{i}),(\nabla F)^{\perp}\rangle\), and combining this with (3.5) we get \[\sum_{i=1}^{n}\tilde{\lambda_{i}}^{2} =n+2\sum_{i=1}^{n}\langle A(e_{i},e_{i}),(\nabla F)^{\perp}\rangle +\sum_{i=1}^{n}\langle A(e_{i},e_{i}),(\nabla F)^{\perp}\rangle^{2}\] \[\leq n+|\nabla F^{\perp}|^{2}\sum_{i=1}^{n}|A(e_{i},e_{i})|^{2}\] \[\leq n+|\nabla F^{\perp}|^{2}|A|^{2}\leq n+\frac{n}{n-1}=\frac{n^ {2}}{n-1},\] where we used that \(\sum_{i=1}^{n}A(e_{i},e_{i})=n\vec{H}=0\) by minimality, together with the Cauchy-Schwarz inequality. On the other hand, we have that \((\sum_{i=1}^{n}\tilde{\lambda_{i}})^{2}=n^{2}\) since \(\Sigma^{n}\) is minimal. Then \[\sum_{i=1}^{n}\tilde{\lambda_{i}}^{2}\leq\frac{(\sum_{i=1}^{n}\tilde{\lambda_{ i}})^{2}}{n-1}.\] By Lemma 3, with \(\tilde{\lambda_{i}}=a_{i}\) and \(b=0\), we get that \(2\tilde{\lambda_{i}}\tilde{\lambda_{j}}\geq 0\). Consequently, the eigenvalues \(\tilde{\lambda_{i}}\), \(i=1,...,n\), all have the same sign. Since \(\sum_{i=1}^{n}\tilde{\lambda_{i}}=n\), we conclude that \(\tilde{\lambda_{i}}\geq 0\) for every \(i\). Therefore, \(\lambda_{i}\geq\tilde{\lambda_{i}}\geq 0\) for every \(i\). Then \[Hess_{\Sigma}F(X,X)\geq 0,\] for all \((x,y)\in\Sigma\) and \(X\in T_{(x,y)}\Sigma\). Proof of Theorem 4.: Firstly, let us define \(\mathcal{C}=\{(x,y)\in\Sigma:F(x,y)=\min_{\Sigma}F\}.\) From Proposition 2, \[Hess_{\Sigma}F(X,X)\geq 0,\] for all \((x,y)\in\Sigma\) and \(X\in T_{(x,y)}\Sigma.\) The convexity of \(F\) restricted to \(\Sigma\) strongly restricts the set \(\mathcal{C}\) and the topology of \(\Sigma\): as in the proof of Theorem 3, it implies that \(\mathcal{C}\) is a totally convex subset of \(\Sigma\). From now on, the proof follows the same lines as [9, Theorem 3.7], which uses standard Morse theory. 
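The eigenvalue argument in the proof of Proposition 2 can be sanity-checked numerically: any vector \((\tilde{\lambda}_{1},\dots,\tilde{\lambda}_{n})\) with \(\sum_{i}\tilde{\lambda}_{i}=n\) and \(\sum_{i}\tilde{\lambda}_{i}^{2}\leq n^{2}/(n-1)\) has only nonnegative entries. A minimal random test (not from the paper; the choice \(n=5\) and the sampling scheme are arbitrary):

```python
import random

# Sanity check of the eigenvalue argument in Proposition 2: if
# t_1,...,t_n satisfy sum(t_i) = n (minimality) and
# sum(t_i^2) <= n^2/(n-1) (the gap condition), then every t_i >= 0.
random.seed(0)
n = 5
for _ in range(10000):
    u = [random.uniform(-1, 1) for _ in range(n)]
    m = sum(u) / n
    u = [x - m for x in u]               # zero-sum perturbation
    s = sum(x * x for x in u)
    if s == 0:
        continue
    # scale so that sum(t_i^2) = n + eps^2 * s hits the bound n^2/(n-1)
    eps = (n / (n - 1) / s) ** 0.5
    t = [1 + eps * x for x in u]         # sum(t) = n by construction
    assert min(t) >= -1e-12
print("ok")
```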
If \(\mathcal{C}=\{(x_{0},y_{0})\}\), for some \((x_{0},y_{0})\in\Sigma\), then \(\Sigma\) is diffeomorphic to a disk \(\mathbb{D}^{n}.\) If \(\mathcal{C}\) contains more than one point, we can show that \(\dim(\mathcal{C})=1\) and \(\mathcal{C}\) is a geodesic. In this case, either \(\mathcal{C}\) is not a closed geodesic (which implies that \(\Sigma\) is diffeomorphic to a disk) or \(\mathcal{C}\) is a closed geodesic (which forces \(\Sigma\) to be diffeomorphic to \(\mathbb{S}^{1}\times\mathbb{D}^{n-1}\)). ## 4. Examples of CMC free boundary surfaces in the rotational ellipsoid In this section, we show that there are a catenoid and some portions of Delaunay surfaces that are free boundary on the rotational ellipsoid \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=R^{2}, \tag{4.1}\] with \(a^{2}\leq b^{2}\) and some constant \(R^{2}\), and satisfy the pinching condition (3.1). **Remark 3**.: _Let us consider_ \[f(y)=\frac{b}{a}\sqrt{\left(\frac{R}{b}\right)^{2}-y^{2}},\] _in (2.1). Then, we obtain the rotational ellipsoid given by (4.1). In this case, the hypothesis \((f^{\prime})^{2}+ff^{\prime\prime}+1\leq 0\) is automatically satisfied. In fact, we have_ \[f^{\prime}(y)=-\frac{yb}{a\sqrt{\left(\frac{R}{b}\right)^{2}-y^{2}}}\] _and_ \[f^{\prime\prime}(y)=-\frac{R^{2}}{ab\left(\left(\frac{R}{b}\right)^{2}-y^{2} \right)^{\frac{3}{2}}}.\] _Therefore,_ \[(f^{\prime})^{2}+ff^{\prime\prime}+1 =\frac{y^{2}b^{2}}{a^{2}\left(\left(\frac{R}{b}\right)^{2}-y^{2} \right)}+\frac{b\sqrt{\left(\frac{R}{b}\right)^{2}-y^{2}}}{a}\left(-\frac{R^{ 2}}{ab\left(\left(\frac{R}{b}\right)^{2}-y^{2}\right)^{\frac{3}{2}}}\right)+1\] \[=\frac{(a^{2}-b^{2})\left(\left(\frac{R}{b}\right)^{2}-y^{2} \right)}{a^{2}\left(\left(\frac{R}{b}\right)^{2}-y^{2}\right)}=\frac{a^{2}-b^ {2}}{a^{2}}\leq 0.\] First, let us consider a smooth curve parametrized by arc length in the \(xz\)-plane, \(\beta(s)=(x(s),0,z(s))\), with \(x(s)>0\), and denote by \(\Sigma\) the surface obtained by rotation of \(\beta\) around the \(z\)-axis. 
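The identity in Remark 3 can be verified numerically by finite differences; the sketch below is not from the paper, and the values of \(a\), \(b\), \(R\) and the sample points are arbitrary choices with \(a^{2}\leq b^{2}\). It checks that \((f^{\prime})^{2}+ff^{\prime\prime}+1\) equals the constant \((a^{2}-b^{2})/a^{2}\):

```python
import math

# Check (f')^2 + f f'' + 1 == (a^2 - b^2)/a^2 for the profile
# f(y) = (b/a) * sqrt((R/b)^2 - y^2) of the rotational ellipsoid.
# a, b, R are illustrative; the sample points lie inside |y| < R/b.
a, b, R = 1.0, 2.0, 3.0

def f(y):
    return (b / a) * math.sqrt((R / b) ** 2 - y ** 2)

def d1(y, h=1e-5):                 # central first difference
    return (f(y + h) - f(y - h)) / (2 * h)

def d2(y, h=1e-4):                 # central second difference
    return (f(y + h) - 2 * f(y) + f(y - h)) / (h * h)

const = (a * a - b * b) / (a * a)  # = -3 for these values
for y in [0.0, 0.3, 0.7, 1.0]:
    val = d1(y) ** 2 + f(y) * d2(y) + 1
    assert abs(val - const) < 1e-4
print("identity verified")
```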
We start by presenting a lemma with sufficient conditions for a general rotational surface to satisfy the pinching condition (3.1) in the rotational ellipsoid. **Lemma 4**.: _Suppose that the curve \(\beta\) satisfies the following conditions_ \[-1\leq x^{\prime\prime}(s)\left(x(s)-\frac{x^{\prime}(s)}{z^{\prime}(s)}z(s) \frac{b^{2}}{a^{2}}\right),\text{ if }z^{\prime}(s)\neq 0, \tag{4.2}\] \[-1\leq z(s)z^{\prime\prime}(s)\frac{b^{2}}{a^{2}},\text{ if }z^{\prime}(s)=0, \tag{4.3}\] \[-x(s)x^{\prime}(s)^{2}\leq z^{\prime}(s)x^{\prime}(s)z(s)\frac{b^{2}}{a^{2}}, \tag{4.4}\] _with \(a^{2}\leq b^{2}\). Then, \(\Sigma\) satisfies the pinching condition_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] _on the rotational ellipsoid given in (4.1)._ Proof.: From Remark 2, it suffices to show that \[\tilde{\lambda}_{1}=1+k_{1}g(x,y)\geq 0\text{ and }\tilde{\lambda}_{2}=1+k_{2}g(x,y)\geq 0.\] Let us consider \(X:[s_{1},s_{2}]\times\mathbb{S}^{1}\to\mathbb{R}^{3}\) given by \[X(s,\theta)=(x(s)\cos(\theta),x(s)\sin(\theta),z(s)),\] obtained by rotation of \(\beta\) around the \(z\)-axis. 
Therefore, \[X_{s}(s,\theta)=(x^{\prime}(s)\cos(\theta),x^{\prime}(s)\sin(\theta),z^{ \prime}(s))\] and \[X_{\theta}(s,\theta)=(-x(s)\sin(\theta),x(s)\cos(\theta),0).\] Then, a straightforward computation shows that \[N=(-z^{\prime}(s)\cos(\theta),-z^{\prime}(s)\sin(\theta),x^{ \prime}(s)).\] Thus, \[\langle(x,y),N\rangle =-x(s)z^{\prime}(s)\cos^{2}(\theta)-x(s)z^{\prime}(s)\sin^{2}( \theta)+x^{\prime}(s)z(s) \tag{4.5}\] \[=-x(s)z^{\prime}(s)+x^{\prime}(s)z(s).\] From (4.5) and Remark 3, we get \[g(x,y) =\langle\nabla F,N\rangle\] \[=\langle(x,y),N\rangle-\langle N,E_{3}\rangle(y+f(y)f^{\prime}(y))\] \[=-x(s)z^{\prime}(s)+x^{\prime}(s)z(s)-x^{\prime}(s)\left(z(s)- \frac{b^{2}}{a^{2}}z(s)\right) \tag{4.6}\] \[=-x(s)z^{\prime}(s)+x^{\prime}(s)z(s)\frac{b^{2}}{a^{2}}.\] A straightforward computation shows that \[k_{1}=x^{\prime}(s)z^{\prime\prime}(s)-x^{\prime\prime}(s)z^{ \prime}(s)\text{ and }k_{2}=\frac{z^{\prime}(s)}{x(s)}. \tag{4.7}\] If \(z^{\prime}(s)\neq 0\), we can write \[k_{1}(s)=-\frac{x^{\prime\prime}(s)}{z^{\prime}(s)}. \tag{4.8}\] Then, using (4.6) and (4.2), \[\tilde{\lambda}_{1} =1+k_{1}g(x,y)\] \[=1-\frac{x^{\prime\prime}(s)}{z^{\prime}(s)}\left(-x(s)z^{\prime} (s)+x^{\prime}(s)z(s)\frac{b^{2}}{a^{2}}\right)\] \[=1+x^{\prime\prime}(s)\left(x(s)-\frac{x^{\prime}(s)}{z^{\prime} (s)}z(s)\frac{b^{2}}{a^{2}}\right)\geq 0.\] If \(z^{\prime}(s)=0\), since the curve is parameterized by arc length, then \(x^{\prime}(s)^{2}=1\). 
Using (4.6) and (4.3), we get \[\tilde{\lambda}_{1} =1+k_{1}g(x,y)\] \[=1+(x^{\prime}(s)z^{\prime\prime}(s)-x^{\prime\prime}(s)z^{ \prime}(s))\left(-x(s)z^{\prime}(s)+x^{\prime}(s)z(s)\frac{b^{2}}{a^{2}}\right)\] \[=1+z^{\prime\prime}(s)z(s)\frac{b^{2}}{a^{2}}\geq 0.\] Finally, using again that the curve is parameterized by arc length, together with (4.6) and (4.4), we obtain \[\tilde{\lambda}_{2}(s) =1+k_{2}g(x,y)\] \[=1+\frac{z^{\prime}(s)}{x(s)}\left(-x(s)z^{\prime}(s)+x^{\prime}(s )z(s)\frac{b^{2}}{a^{2}}\right)\] \[=\frac{x(s)-x(s)z^{\prime}(s)^{2}+z^{\prime}(s)x^{\prime}(s)z(s) \frac{b^{2}}{a^{2}}}{x(s)}\] \[=\frac{x(s)x^{\prime}(s)^{2}+z^{\prime}(s)x^{\prime}(s)z(s)\frac{ b^{2}}{a^{2}}}{x(s)}\geq 0.\] Therefore, \(\tilde{\lambda}_{1}(s)\geq 0\) and \(\tilde{\lambda}_{2}(s)\geq 0\), as desired. The function \[\rho(s)=x(s)-\frac{x^{\prime}(s)}{z^{\prime}(s)}z(s)\frac{b^{2}}{a^{2}} \tag{4.9}\] that appears in (4.2) has an important geometric meaning. In fact, if \(\rho(s_{0})=0,\) then we can prove that \(\Sigma\) is orthogonal to the rotational ellipsoid \(E\) given by \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=R^{2},\] where \(R^{2}:=a^{2}x(s_{0})^{2}+b^{2}z(s_{0})^{2}.\) In particular, we have the following lemma. 
**Lemma 5**.: _Assume that \(\beta(s)\) is defined for \(s\in[c,d]\) and consider \(\mathcal{Z}=\{s\in[c,d];z^{\prime}(s)=0\}.\) Let \(a\) and \(b\) be positive real numbers such that \(a^{2}\leq b^{2}\), and define the function \(\rho:[c,d]\setminus\mathcal{Z}\to\mathbb{R}\) by_ \[\rho(s)=x(s)-\frac{x^{\prime}(s)}{z^{\prime}(s)}z(s)\frac{b^{2}}{a^{2}}.\] _Let \(s_{1}<s_{2}\) be two values in \([c,d]\) such that:_ _(i) \(\rho(s_{1})=\rho(s_{2})=0\),_ _(ii) \(a^{2}x(s_{1})^{2}+b^{2}z(s_{1})^{2}=a^{2}x(s_{2})^{2}+b^{2}z(s_{2})^{2}:=R^{2}\) and_ _(iii) \(a^{2}x(s)^{2}+b^{2}z(s)^{2}<R^{2}\) for all \(s\in(s_{1},s_{2}).\)_ _Then, the rotation of \(\beta_{|_{[s_{1},s_{2}]}}\) produces a free boundary surface \(\Sigma\) inside the rotational ellipsoid \(E\) given by_ \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=R^{2}. \tag{4.10}\] Proof.: The ellipsoid given in (4.10) can be parametrized by \[\bar{X}(s,\theta)=\left(\frac{R}{a}\cos(s)\cos(\theta),\frac{R}{a}\cos(s)\sin (\theta),\frac{R}{b}\sin(s)\right).\] A straightforward calculation shows that \[\bar{N}=\frac{(\frac{1}{b}\cos(s)\cos(\theta),\frac{1}{b}\cos(s)\sin(\theta), \frac{1}{a}\sin(s))}{\sqrt{\frac{\cos^{2}(s)}{b^{2}}+\frac{\sin^{2}(s)}{a^{2}}}}.\] Now, observe that if \(\rho(s_{1})=\rho(s_{2})=0,\) then \[0=\rho(s_{i})=x(s_{i})-\frac{x^{\prime}(s_{i})}{z^{\prime}(s_{i})}z(s_{i}) \frac{b^{2}}{a^{2}}.\] We have \(z(s_{i})\neq 0\). In fact, if \(z(s_{i})=0\), then \(x(s_{i})=0\), which does not happen. Thus, we can write \[x^{\prime}(s_{i})=\frac{a^{2}}{b^{2}}\frac{x(s_{i})}{z(s_{i})}z^{\prime}(s_{i}),\] \(i=1,2.\) Therefore, \[\beta^{\prime}(s_{i}) =\left(\frac{a^{2}}{b^{2}}\frac{x(s_{i})}{z(s_{i})}z^{\prime}(s_{i}),0,z^{\prime}(s_{i})\right)\] \[=\frac{z^{\prime}(s_{i})}{z(s_{i})}\frac{a}{b}\left(\frac{a}{b}x(s_{i}),0, \frac{b}{a}z(s_{i})\right).\] On the other hand, using (ii), we have that the curve \(\beta\) intersects the ellipsoid at the points \(\beta(s_{i})\). 
The normal at these points is given by \[\bar{N}(\beta(s_{i}))=\frac{\left(\frac{a}{b}x(s_{i}),0,\frac{b}{a}z(s_{i}) \right)}{R\sqrt{\frac{\cos^{2}(s)}{b^{2}}+\frac{\sin^{2}(s)}{a^{2}}}}.\] Then, \[\beta^{\prime}(s_{i})=\frac{z^{\prime}(s_{i})}{z(s_{i})}\frac{a}{b}R\sqrt{\frac{\cos^{2} (s)}{b^{2}}+\frac{\sin^{2}(s)}{a^{2}}}\bar{N}(\beta(s_{i})).\] Thus, the rotation of \(\beta_{|_{[s_{1},s_{2}]}}\) is orthogonal to the ellipsoid in (4.10). Since, by hypothesis, \(a^{2}x(s)^{2}+b^{2}z(s)^{2}<R^{2}\) for all \(s\in(s_{1},s_{2})\), we get that \(\Sigma\subset E.\) Before presenting examples of CMC free boundary surfaces, let us introduce an example in the case where \(H=0,\) that is, a minimal free boundary surface in the rotational ellipsoid. **Example 1**.: _Consider \(\Sigma\) the catenoid obtained by revolving the curve \(\beta(s)=(\cosh(s),0,s)\) around the \(z\)-axis. Parameterizing by arc length we obtain the curve \(\bar{\beta}(s)=(\cosh(\sinh^{-1}(s)),0,\sinh^{-1}(s)).\) Taking \(a^{2}=1\) and \(b^{2}=2\) in (4.9), we get that \(\rho(s)=0\) if and only if_ \[\frac{1}{2\sinh^{-1}(s)}=\tanh(\sinh^{-1}(s)).\] _Solving the equation we get that \(s_{1}=-0.755...\) and \(s_{2}=0.755...\) are such that \(\rho(s_{i})=0\) for \(i=1,2.\) The parity of the functions \(\cosh(s)\) and \(\sinh^{-1}(s)\) ensures that_ \[(\cosh(\sinh^{-1}(s_{1})))^{2}+2(\sinh^{-1}(s_{1}))^{2}=(\cosh(\sinh^{-1}(s_{2 })))^{2}+2(\sinh^{-1}(s_{2}))^{2},\] _since \(s_{1}=-s_{2}\). 
Then, let us define_ \[R^{2}:=(\cosh(\sinh^{-1}(s_{1})))^{2}+2(\sinh^{-1}(s_{1}))^{2}=(\cosh(\sinh^{-1 }(s_{2})))^{2}+2(\sinh^{-1}(s_{2}))^{2}.\] _In this way, the decrease and increase of \(\cosh(\sinh^{-1}(s))\) on \((s_{1},0)\) and \((0,s_{2})\), respectively, and the fact that \(\sinh^{-1}(s)\) is increasing guarantee that \((\cosh(\sinh^{-1}(s)))^{2}+2(\sinh^{-1}(s))^{2}<R^{2}\) for all \(s\in(s_{1},s_{2}).\) Then, \(\Sigma\) is a free boundary surface in the ellipsoid \(E\) given by_ \[x^{2}+y^{2}+2z^{2}=R^{2}.\] _Furthermore, with some calculations we get that_ \[-1-x^{\prime\prime}(s)\left(x(s)-\frac{x^{\prime}(s)}{z^{\prime}(s)}z(s)\frac{ b^{2}}{a^{2}}\right)=-1-\frac{1}{(1+s^{2})^{\frac{3}{2}}}\left(\cosh(\sinh^{-1}(s) )-2s\sinh^{-1}(s)\right)\leq 0\] _and_ \[-x(s)x^{\prime}(s)^{2}-z^{\prime}(s)x^{\prime}(s)z(s)\frac{b^{2}}{a^{2}}=- \frac{s}{1+s^{2}}\left(s\cosh(\sinh^{-1}(s))+2\sinh^{-1}(s)\right)\leq 0,\] _for all \(s\in[s_{1},s_{2}].\) Then, from Lemma 4, \(\Sigma\) satisfies the condition_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2}.\] Now, let us consider a smooth curve parametrized by arc length in the \(xz\)-plane \(\beta(s)=(x(s),0,z(s))\), with \(x(s)>0\), where \[x(s)=\frac{1}{H}\sqrt{1+B^{2}+2B\sin(Hs+\frac{3\pi}{2})} \tag{4.11}\] and \[z(s)=\int_{\frac{3\pi}{2H}}^{s+\frac{3\pi}{2H}}\frac{1+B\sin(Ht)}{\sqrt{1+B^{2 }+2B\sin(Ht)}}dt, \tag{4.12}\] are given by the solution of Kenmotsu [17, Section 2, Equation (11)], where \(B,H\in\mathbb{R}\), with \(H>0,\ B\geq 0\) and \(B\neq 1\). Let us denote by \(\Sigma\) the surface obtained by rotation of \(\beta\) around the \(z\)-axis. From Delaunay's Theorem, we know that any complete surface of revolution with constant mean curvature is a sphere, a catenoid, or a surface whose generating curve is given by \(\beta\). A surface whose generating curve is given by \(\beta\) is called a Delaunay surface, with parameters \(H\) and \(B\), which can be of different types. If \(B=0\) we get right cylinders. 
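Returning to Example 1 for a moment, its closing inequalities can be checked numerically under the arc-length parametrization \(x(s)=\sqrt{1+s^{2}}\), \(z(s)=\sinh^{-1}(s)\); the sketch below (not from the paper; the grid is an arbitrary choice) also confirms minimality, \(k_{1}+k_{2}=0\), via the curvature formulas (4.7):

```python
import math

# Arc-length catenoid: x(s) = sqrt(1+s^2), z(s) = asinh(s).
# Check k1 + k2 = 0 (minimality) and the two inequalities of
# Example 1 (with b^2/a^2 = 2) on a grid over [-0.755, 0.755].
for i in range(101):
    s = -0.755 + i * (1.51 / 100)
    x, z = math.sqrt(1 + s * s), math.asinh(s)
    xp, zp = s / math.sqrt(1 + s * s), 1 / math.sqrt(1 + s * s)
    xpp = (1 + s * s) ** -1.5
    zpp = -s * (1 + s * s) ** -1.5
    k1 = xp * zpp - xpp * zp          # formula (4.7) / (4.8)
    k2 = zp / x
    assert abs(k1 + k2) < 1e-12      # the catenoid is minimal
    # inequality (4.2) with b^2/a^2 = 2, as in Example 1
    assert -1 - xpp * (x - (xp / zp) * z * 2) <= 1e-12
    # inequality (4.4) with b^2/a^2 = 2
    assert -x * xp ** 2 - zp * xp * z * 2 <= 1e-12
print("Example 1 inequalities verified")
```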
If \(0<B<1\), Delaunay surfaces are embedded and they are called unduloids. If \(B>1\) they are only immersed and called nodoids. Observe that the components of the velocity vector of the curve \(\beta(s)\) in the \(xz\)-plane are given by \[x^{\prime}(s)=\frac{B\cos(Hs+\frac{3\pi}{2})}{\sqrt{1+B^{2}+2B\sin(Hs+\frac{3 \pi}{2})}}\text{ and }z^{\prime}(s)=\frac{1+B\sin(Hs+\frac{3\pi}{2})}{\sqrt{1+B^{2}+2B\sin(Hs+ \frac{3\pi}{2})}}.\] And the acceleration components are given by \[x^{\prime\prime}(s)=\frac{-BH(B+\sin(Hs+\frac{3\pi}{2}))(B\sin(Hs+\frac{3\pi} {2})+1)}{(1+B^{2}+2B\sin(Hs+\frac{3\pi}{2}))^{\frac{3}{2}}}\] and \[z^{\prime\prime}(s)=\frac{HB^{2}\cos(Hs+\frac{3\pi}{2})(B+\sin(Hs+\frac{3\pi} {2}))}{(1+B^{2}+2B\sin(Hs+\frac{3\pi}{2}))^{\frac{3}{2}}}.\] Let us assume that \(0<B<1\). The key observation in this case is that the function \(z\) satisfies \(z^{\prime}(s)>0\) for all \(s\). Let \(s_{0}\) be the smallest positive value such that \(x^{\prime\prime}(s_{0})=0\). One can easily check that \(s_{0}=s_{0}(H,B)=\frac{1}{H}\sin^{-1}(-B)+\frac{\pi}{2H}\), where \(\sin^{-1}:[-1,1]\rightarrow[-\frac{\pi}{2},\frac{\pi}{2}]\). Thus, given \(s\in(-s_{0},s_{0})\) we have \(z^{\prime}(s)>0\) and \(x^{\prime\prime}(s)>0\). **Remark 4**.: _In this case, we only have \(x^{\prime}=0\) at \(s=0\), so the tangent is vertical only at this point. Therefore, we only have one wave of the unduloid inside the ellipsoid._ Now, let us see some properties of the function \(\rho\) that we will need later. **Lemma 6**.: _Fix \(0<B<1\), \(H>0\), and consider the function \(\rho:[-s_{0},s_{0}]\rightarrow\mathbb{R}\) given by (4.9). Then, i) \(\rho(0)>0\). ii) \(\rho^{\prime}(0)=0\) and \(\rho^{\prime}(s_{0})\leq 0\). iii) \(\rho\) is increasing in \((-s_{0},0)\) and decreasing in \((0,s_{0})\)._ Figure 1. Catenoid free boundary in the ellipsoid. Proof.: Observe that i) follows directly since \[\rho(0)=x(0)-\frac{x^{\prime}(0)}{z^{\prime}(0)}z(0)\frac{b^{2}}{a^{2}}=\frac{1-B} {H}>0.\] To prove ii), we observe that, since \(\beta\) is parametrized by arc length, \[\rho^{\prime}(s) =x^{\prime}(s)-\frac{b^{2}}{a^{2}}\frac{((x^{\prime}(s)z(s))^{ \prime}z^{\prime}(s)-x^{\prime}(s)z(s)z^{\prime\prime}(s))}{z^{\prime}(s)^{2}}\] \[=x^{\prime}(s)+\frac{b^{2}}{a^{2}}\frac{x^{\prime}(s)z(s)z^{ \prime\prime}(s)-x^{\prime\prime}(s)z(s)z^{\prime}(s)-x^{\prime}(s)z^{\prime} (s)^{2}}{z^{\prime}(s)^{2}}\] \[=\left(1-\frac{b^{2}}{a^{2}}\right)x^{\prime}(s)+\frac{b^{2}}{a^ {2}}z(s)\left(\frac{x^{\prime}(s)z^{\prime\prime}(s)-x^{\prime\prime}(s)z^{ \prime}(s)}{z^{\prime}(s)^{2}}\right)\] \[=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)-\frac{b^{2}}{a^{2}}z(s) \left(\frac{x^{\prime}(s)}{z^{\prime}(s)}\right)^{\prime}.\] As \(x^{\prime}(0)=0\) and \(z(0)=0\), it follows that \(\rho^{\prime}(0)=0.\) On the other hand, using the expressions for \(k_{1}\) given in (4.7) and (4.8), we get \[\rho^{\prime}(s) =\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)-\left(\frac{-x^{\prime} (s)z^{\prime\prime}(s)+x^{\prime\prime}(s)z^{\prime}(s)}{z^{\prime}(s)^{2}} \right)z(s)\frac{b^{2}}{a^{2}}\] \[=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)+\frac{k_{1}(s)}{z^{ \prime}(s)^{2}}z(s)\frac{b^{2}}{a^{2}}\] \[=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)-\frac{x^{\prime\prime}( s)}{z^{\prime}(s)^{3}}z(s)\frac{b^{2}}{a^{2}}.\] Then, since \(x^{\prime\prime}(s_{0})=0\), we have that \[\rho^{\prime}(s_{0}) =\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s_{0})\] \[=\frac{(a^{2}-b^{2})}{a^{2}}\frac{B\cos(Hs_{0}+\frac{3\pi}{2})}{ \sqrt{1+B^{2}+2B\sin(Hs_{0}+\frac{3\pi}{2})}}\] \[=\frac{(a^{2}-b^{2})}{a^{2}}\frac{B\sqrt{1-B^{2}}}{\sqrt{1-B^{2}}}\] \[=\frac{(a^{2}-b^{2})}{a^{2}}B\leq 0.\] Finally, since \(x^{\prime\prime}(s)>0\) and \(x^{\prime}(0)=0\) we get that \(x^{\prime}(s)>0\) for all \(s\in(0,s_{0})\) and \(x^{\prime}(s)<0\) for all 
\(s\in(-s_{0},0).\) Similarly, we have \(z(s)>0\) in \((0,s_{0})\) and \(z(s)<0\) in \((-s_{0},0)\), and then we obtain \[\rho^{\prime}(s)=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)-\frac{x^{\prime \prime}(s)}{z^{\prime}(s)^{3}}z(s)\frac{b^{2}}{a^{2}}<0\] in \((0,s_{0})\), and \[\rho^{\prime}(s)=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(s)-\frac{x^{\prime \prime}(s)}{z^{\prime}(s)^{3}}z(s)\frac{b^{2}}{a^{2}}>0\] in \((-s_{0},0)\). Therefore, \(\rho\) is increasing in \((-s_{0},0)\) and decreasing in \((0,s_{0})\). The next lemma gives conditions under which an unduloid is a free boundary surface on the rotational ellipsoid. **Lemma 7**.: _Fix \(0<B<1\), \(H>0\), and set \(z_{0}=\frac{1-B^{2}}{HB}\). If \(z(s_{0})\geq z_{0}\), then \(\rho(\bar{s})=0\) for some \(\bar{s}\in(0,s_{0}]\). In particular, the surface obtained by rotation of \(\beta|_{[-\bar{s},\bar{s}]}\) is a free boundary CMC surface inside the rotational ellipsoid \(E\) given by_ \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=\bar{R}^{2},\] _where \(\bar{R}^{2}:=a^{2}x(\bar{s})^{2}+b^{2}z(\bar{s})^{2}\)._ Proof.: If \(z(s_{0})\geq z_{0}\), then we get \[\rho(s_{0}) =x(s_{0})-\frac{x^{\prime}(s_{0})}{z^{\prime}(s_{0})}z(s_{0})\frac{b ^{2}}{a^{2}}\] \[\leq x(s_{0})-\frac{x^{\prime}(s_{0})}{z^{\prime}(s_{0})}z_{0}\frac {b^{2}}{a^{2}}\] \[=\frac{(a^{2}-b^{2})}{a^{2}}\frac{\sqrt{1-B^{2}}}{H}\leq 0.\] By assertion i) of Lemma 6, \(\rho(0)>0\), and then by continuity there is \(\bar{s}\in(0,s_{0}]\) such that \(\rho(\bar{s})=0\). Using the parity of the functions \(\sin(Ht+\frac{3\pi}{2})\) and \(\sin(Ht)\), we get \[x(-s)=x(s),\ x^{\prime}(-s)=-x^{\prime}(s),\ z(-s)=-z(s)\ \text{and}\ z^{ \prime}(-s)=z^{\prime}(s),\] and thus, \[\rho(-\bar{s})=\rho(\bar{s})=0.\] Moreover, \(x^{\prime}(0)=0\) and \(x^{\prime\prime}(s)>0\) imply that \(x^{\prime}(s)>0\) for all \(s\in(0,\bar{s}]\). 
Therefore, \(x^{\prime}(s)>0\) and \(z^{\prime}(s)>0\) in \((0,\bar{s})\), and this ensures \(a^{2}x^{2}(s)+b^{2}z^{2}(s)<\bar{R}^{2}:=a^{2}x^{2}(\bar{s})+b^{2}z^{2}(\bar{s})\) for all \(s\in(0,\bar{s})\). Since the curve \(\beta\) is symmetric with respect to the \(x\)-axis, we get \(a^{2}x^{2}(s)+b^{2}z^{2}(s)\leq\bar{R}^{2}\) for all \(s\in[-\bar{s},\bar{s}]\), and we conclude that the surface is free boundary by Lemma 5. **Example 2**.: _Fix \(B=0.9\) and \(H=0.1\), so we have \(z_{0}=\frac{1-B^{2}}{HB}=2.111...\) and \(s_{0}=10\sin^{-1}(-0.9)+5\pi\approx 4.51026.\) Then, we get_ \[z(s_{0})=\int_{15\pi}^{4.51026+15\pi}\left(\frac{1+(0.9)\sin(0.1t)}{\sqrt {1+(0.9)^{2}+(1.8)\sin(0.1t)}}\right)dt\approx 2.71697.\] _Therefore, \(z(s_{0})\geq z_{0}\). From Lemma 7, there is \(\bar{s}\in(0,s_{0}]\) such that the surface obtained by rotation of \(\beta|_{[-\bar{s},\bar{s}]}\) is a free boundary CMC surface inside the rotational ellipsoid \(E\) given by_ \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=\bar{R}^{2}, \tag{4.13}\] _where \(\bar{R}^{2}:=a^{2}x(\bar{s})^{2}+b^{2}z(\bar{s})^{2}\)._ The next example says essentially that there are portions of unduloids that are free boundary in the ellipsoid given by (4.13) and satisfy the conditions of Lemma 4, that is, they satisfy \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] on \(\Sigma\). Figure 2. Unduloid free boundary in the ellipsoid. **Example 3**.: _Fix \(0<B<1\) and \(H>0\), consider \(\beta(s)=(x(s),0,z(s))\) as above and set \(z_{0}=\frac{1-B^{2}}{HB}.\) Let \(s_{0}\) be the smallest positive value such that \(x^{\prime\prime}(s_{0})=0,\) in other words, \(s_{0}=\frac{1}{H}\sin^{-1}(-B)+\frac{\pi}{2H}.\) Suppose \(z(s_{0})\geq z_{0}.\) From Lemma 7, the surface \(\Sigma\) obtained by rotation of \(\beta|_{[-\bar{s},\bar{s}]},\) for some \(\bar{s}\in(0,s_{0}],\) is a free boundary CMC surface inside the rotational ellipsoid \(E\) given by (4.10). 
Moreover, in this case, for all \(s\in[-\bar{s},\bar{s}]\) we have_ _(i) \(x^{\prime\prime}(s)\geq 0.\) In fact, we have \([-\bar{s},\bar{s}]\subset[-s_{0},s_{0}],\) where \(s_{0}\) was chosen so that \((-s_{0},s_{0})\) is the largest neighborhood of \(0\) on which \(x^{\prime\prime}(s)\geq 0.\)_ _(ii) \(\rho(s):=x(s)-\frac{x^{\prime}(s)}{z^{\prime}(s)}z(s)\frac{b^{2}}{a^{2}}\geq 0.\) Indeed, from Lemma 7, \(\rho(\bar{s})=0\). From Lemma 6, \(\rho\) is increasing in \((-s_{0},0)\) and decreasing in \((0,s_{0})\). Therefore, \(\rho(s)\geq 0.\)_ _(iii) \(z(s)x^{\prime}(s)\geq 0.\) In fact, since \(z^{\prime}(s)>0\) and \(x^{\prime\prime}(s)>0\) in \((-s_{0},s_{0})\), we get that \(z\) and \(x^{\prime}\) are both increasing in \((-s_{0},s_{0}).\) Since \(z(0)=x^{\prime}(0)=0\), we conclude that \(x^{\prime}\) and \(z\) have the same sign._ _The items (i), (ii) and (iii) guarantee that the inequalities in Lemma 4 are satisfied. In fact, from (i) and (ii) we get (4.2). Since \(z^{\prime}>0\), we do not need to show the validity of (4.3). Using that \(x>0\) and (iii) we get (4.4). Therefore,_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] _on \(\Sigma.\)_ Now, let us assume that \(B>1\). Let \(r_{0}\) be the smallest positive value such that \(z^{\prime}(r_{0})=0.\) We can check that \(r_{0}=r_{0}(H,B)=\frac{1}{H}\sin^{-1}\left(-\frac{1}{B}\right)+\frac{\pi}{2H},\) where \(\sin^{-1}:[-1,1]\rightarrow[-\frac{\pi}{2},\frac{\pi}{2}].\) In this case, we have \(z^{\prime}(r)<0\) and \(x^{\prime\prime}(r)>0\) for all \(r\in(-r_{0},r_{0}).\) **Remark 5**.: _In this case, since \(z^{\prime}\neq 0\) for all \(r\in(-r_{0},r_{0})\), we do not have horizontal tangents. 
Therefore, the node of the nodoids does not lie inside the ellipsoid._ In the next lemma we are going to show that there are portions of nodoids that are free boundary in the ellipsoid given by (4.13) and satisfy the conditions of Lemma 4, that is, they satisfy \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] on \(\Sigma.\) **Lemma 8**.: _Fix \(B>1\) and \(H>0\) and consider \(\beta(r)=(x(r),0,z(r))\), with \(x\) and \(z\) given in (4.11) and (4.12), respectively. Let \(r_{0}\) be as above. Then, there is \(\bar{r}\in(0,r_{0})\) such that \(\rho(\bar{r})=0\) and the surface obtained by rotation of \(\beta|_{[-\bar{r},\bar{r}]}\) is a free boundary CMC surface inside the rotational ellipsoid \(E\) given by_ \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=\bar{R}^{2},\] _where \(\bar{R}^{2}:=a^{2}x(\bar{r})^{2}+b^{2}z(\bar{r})^{2}\). Furthermore, we have_ \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] _on \(\Sigma.\)_ Proof.: In fact, we have that \[\rho(0)=x(0)-\frac{x^{\prime}(0)}{z^{\prime}(0)}z(0)\frac{b^{2}}{a^{2}}=\frac {|1-B|}{H}>0,\] and \(\rho(r)\rightarrow-\infty\) when \(r\to r_{0}\). Then, by continuity there is \(\bar{r}\in(0,r_{0})\) such that \(\rho(\bar{r})=0.\) Using the parity of the function \(\rho\), we have \[\rho(\bar{r})=\rho(-\bar{r})=0.\] Moreover, \(x^{\prime}(0)=0\) and \(x^{\prime\prime}(r)>0\) imply that \(x^{\prime}(r)<0\) for all \(r\in(-\bar{r},0)\). Therefore, \(x^{\prime}(r)<0\) and \(z^{\prime}(r)<0\) in \((-\bar{r},0)\), and this ensures \(a^{2}x^{2}(r)+b^{2}z^{2}(r)<\bar{R}^{2}:=a^{2}x^{2}(-\bar{r})+b^{2}z^{2}(-\bar{r})\) for all \(r\in(-\bar{r},0)\). Since the curve \(\beta\) is symmetric with respect to the \(x\)-axis, we get \(a^{2}x^{2}(r)+b^{2}z^{2}(r)\leq\bar{R}^{2}\) for all \(r\in[-\bar{r},\bar{r}]\), and we conclude that the surface is free boundary by Lemma 5. 
Furthermore, in this case, for all \(r\in[-\bar{r},\bar{r}]\) we have (i) \(\rho(r)\geq 0.\) Indeed, as already calculated in Lemma 6, we have \[\rho^{\prime}(r)=\frac{(a^{2}-b^{2})}{a^{2}}x^{\prime}(r)-\frac{x^{\prime\prime} (r)}{z^{\prime}(r)^{3}}z(r)\frac{b^{2}}{a^{2}}.\] Since \(x^{\prime\prime}(r)>0\) and \(x^{\prime}(0)=0\) we get that \(x^{\prime}(r)<0\) for all \(r\in(-r_{0},0)\) and \(x^{\prime}(r)>0\) for all \(r\in(0,r_{0})\). Similarly, we have \(z(r)>0\) in \((-r_{0},0)\) and \(z(r)<0\) in \((0,r_{0})\), then we obtain \[\rho^{\prime}(r)>0\ \forall r\in(-\bar{r},0)\] and \[\rho^{\prime}(r)<0\ \forall r\in(0,\bar{r}).\] Therefore, \(\rho\) is increasing in \((-r_{0},0)\) and decreasing in \((0,r_{0}).\) Since \(\rho(\pm\bar{r})=0\) and \(\rho\) is monotone on each side of \(0\), we conclude that \(\rho(r)\geq 0\), for all \(r\in[-\bar{r},\bar{r}]\). (ii) \(x^{\prime}(r)z(r)\leq 0.\) In fact, since \(z^{\prime}(r)<0\) and \(x^{\prime\prime}(r)>0\) in \((-r_{0},r_{0})\), we get that \(x^{\prime}\) is increasing in \((-r_{0},r_{0})\) and \(z\) is decreasing in \((-r_{0},r_{0})\). Since \(z(0)=x^{\prime}(0)=0\), we conclude that \(x^{\prime}\) and \(z\) have opposite signs. The items (i) and (ii), together with \(x^{\prime\prime}(r)>0\), guarantee that the inequalities in Lemma 4 are satisfied. In fact, from \(x^{\prime\prime}(r)>0\) and (i) we get (4.2). Since \(z^{\prime}<0\), we do not need to show the validity of (4.3). Using that \(x>0\) and (ii) we get (4.4). Therefore, \[|\phi|^{2}g(x,y)^{2}\leq\frac{1}{2}(2+Hg(x,y))^{2},\] on \(\Sigma\). **Example 4**.: _Fix \(B=1.1\) and \(H=0.1\). Then, we have \(r_{0}\approx 10\sin^{-1}(-0.91)+5\pi\approx 4.297...\).
Therefore, \(z^{\prime}(r_{0})=0\) and from Lemma 8, there is \(\bar{r}\in(0,r_{0})\) such that the surface obtained by rotation of \(\beta|_{[-\bar{r},\bar{r}]}\) is a free boundary CMC surface inside the rotational ellipsoid \(E\) given by_ \[a^{2}x^{2}+a^{2}y^{2}+b^{2}z^{2}=\bar{R}^{2},\] _where \(\bar{R}^{2}:=a^{2}x(\bar{r})^{2}+b^{2}z(\bar{r})^{2}\)._ ## Funding The first and second authors have been partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) of the Ministry of Science, Technology and Innovation of Brazil, Grants 316080/2021-7, 200261/2022-3, and 306524/2022-8. The authors were also supported by Paraíba State Research Foundation (FAPESQ), Grants 3025/2021 and 2021/3175 (A. Freitas). The third author was partially supported by Grant 2022/1963, Paraíba State Research Foundation (FAPESQ). ## Acknowledgements This work is a part of the Ph.D. thesis of the third author. The authors would like to thank Ezequiel Barbosa and Luciano Mari for their discussions about the object of this paper and several valuable suggestions. The first author would like to thank the hospitality of the Mathematics Department of Università degli Studi di Torino, where part of this work was carried out. The third author would like to express her gratitude for the hospitality and support during her visit to the Mathematics Department of Universidade Federal de Minas Gerais in April/May 2023. ## Data availability statement This manuscript has no associated data.
2309.13118
Topological dualities via tensor networks
The ground state of the toric code, that of the two-dimensional class D superconductor, and the partition sum of the two-dimensional Ising model are dual to each other. This duality is remarkable inasmuch as it connects systems commonly associated to different areas of physics -- that of long range entangled topological order, (topological) band insulators, and classical statistical mechanics, respectively. Connecting fermionic and bosonic systems, the duality construction is intrinsically non-local, a complication that has been addressed in a plethora of different approaches, including dimensional reduction to one dimension, conformal field theory methods, and operator algebra. In this work, we propose a unified approach to this duality, whose main protagonist is a tensor network (TN) assuming the role of an intermediate translator. Introducing a fourth node into the net of dualities offers several advantages: the formulation is integrative in that all links of the duality are treated on an equal footing, (unlike in field theoretical approaches) it is formulated with lattice precision, a feature that becomes key in the mapping of correlation functions, and their possible numerical implementation. Finally, the passage from bosons to fermions is formulated entirely within the two-dimensional TN framework where it assumes an intuitive and technically convenient form. We illustrate the predictive potential of the formalism by exploring the fate of phase transitions, point and line defects, topological boundary modes, and other structures under the mapping between system classes. Having condensed matter readerships in mind, we introduce the construction pedagogically in a manner assuming only minimal familiarity with the concept of TNs.
C. Wille, J. Eisert, A. Altland
2023-09-22T18:00:17Z
http://arxiv.org/abs/2309.13118v2
# Topological dualities via tensor networks ###### Abstract The ground state of the toric code, that of the two-dimensional class D superconductor, and the partition sum of the two-dimensional Ising model are dual to each other. This duality is remarkable inasmuch as it connects systems commonly associated to different areas of physics - that of long range entangled topological order, (topological) band insulators, and classical statistical mechanics, respectively. Connecting fermionic and bosonic systems, the duality construction is intrinsically non-local, a complication that has been addressed in a plethora of different approaches, including dimensional reduction to one dimension, conformal field theory methods, and operator algebra. In this work, we propose a unified approach to this duality, whose main protagonist is a tensor network (TN) assuming the role of an intermediate translator. Introducing a fourth node into the net of dualities offers several advantages: the formulation is integrative in that all links of the duality are treated on an equal footing, (unlike in field theoretical approaches) it is formulated with lattice precision, a feature that becomes key in the mapping of correlation functions, and their possible numerical implementation. Finally, the passage from bosons to fermions is formulated entirely within the two-dimensional TN framework where it assumes an intuitive and technically convenient form. We illustrate the predictive potential of the formalism by exploring the fate of phase transitions, point and line defects, topological boundary modes, and other structures under the mapping between system classes. Having condensed matter readerships in mind, we introduce the construction pedagogically in a manner assuming only minimal familiarity with the concept of TNs. ## I Introduction Where they exist, dualities are powerful aids in understanding the physics of nominally different complex systems.
As a case in point, consider the ground state of the _toric code_ (TC), that of a topological _superconductor_ (SC) in symmetry class D, and the partition sum of the classical two-dimensional _Ising model_ (IM) -- three of the main protagonists of this work. In a sense to be made precise in the following, these systems are connected by duality transformations [1; 2; 3; 4; 5; 6; 7; 8; 9]. In the case at hand, these draw connections between bosonic and fermionic systems, ground states and partition sums, and between classical and quantum systems. They also link systems which are at the forefront of interest to different communities. For example, the toric code ground state is a paradigmatic example of a long range entangled state of matter (hence featuring intrinsic topological order) [10], while the topological superconductor is a free fermion system belonging to the family of topological insulators [11]. All three systems display phase transitions -- between an ordered phase and a topological spin liquid, a trivial and a topological superconductor and a ferro- and a paramagnet, respectively -- and the duality establishes the equivalence between these. The same applies to physics at various defect structures, for example, the formation of gapless boundary modes in the superconductor related to the behavior of anyonic excitations at the boundary of the toric code. Dualities in condensed matter physics are generically established via a toolbox of recurrent concepts. These include the mappings between \(d\)-dimensional quantum systems and \((d+1)\)-dimensional partition sums, the taking of continuum limits mapping to (conformal) field theories and dimensional analysis, or the comparison of operator commutator algebras on different sides of the duality. 
For example, one way to go from the two-dimensional Ising model to the superconductor is to first apply an anisotropic scaling deformation to map the former to the transverse magnetic field quantum Hamiltonian, then equate this bosonic system to a fermionic Majorana chain via Jordan-Wigner [12; 13; 14] transformation, and finally re-discretize time to arrive at the two-dimensional lattice Hamiltonian describing a superconductor in the Majorana basis [5; 6; 7]. In this work, we consider the TC/SC/IM triplet to illustrate how _tensor networks_ (TN) offer an efficient and intuitive alternative approach to duality [15; 16; 17; 18]. The idea is to place a TN in the so-called matchgate category [19; 20; 21; 22; 23] as an intermediate between the three systems. There are manifold advantages to bringing in a fourth system as a translator. First, the TN comes in two incarnations, a bosonic and a fermionic one, and the passage between the two is established directly on the two-dimensional lattice by what in effect is a 'two-dimensional Jordan-Wigner transformation' [12; 13; 14; 24]. In this way, we may pass from bosons to fermions avoiding dimensional detours. (The operation is conceptually similar, but somewhat more direct than previous constructions [4; 6] based on the commutator algebra.) Second, the mapping is microscopic and explicitly relates to the operator contents of all three theories. This level of detail, which is lost in continuum approaches, supports intuition and is essential in the construction of dual representations of correlation functions. We will illustrate this point with the equivalence between free Majorana correlation functions of the SC and more complex correlations between composites of spin and disorder operators [25; 26] (for a definition of disorder operators, see below) in the IM. Finally, the approach keeps all three partners of the duality construction in permanent sight.
In this regard, it is different from previous approaches focusing on one specific link of the duality. The principle behind this high level of versatility is that a tensor network per se (unlike a Hamiltonian) has no pre-assigned physical interpretation. More precisely, while for a TN with _open_ indices these are identified with physical degrees of freedom, for a TN without open indices, the interpretation of individual tensors and their indices is not canonical. This ambiguity can be exploited to yield relations between seemingly unrelated physical systems. For example, the partition sum of a two-dimensional statistical mechanics model with local interactions affords interpretation in terms of a two-dimensional tensor network. Alternatively, consider the ground state of a spin system represented as a _projected entangled pair state_ (PEPS), i.e., a tensor network on a two-dimensional graph with open indices corresponding to the Hilbert spaces of the local spins [27]. The overlap of this state with itself can again be interpreted as a classical partition function. It is then natural to go one step further and establish a _local_ correspondence between these systems by looking at the respective fine structure of their TN representations. Supplemented with a boson-fermion mapping on the level of the TN, these ideas become even more powerful. Below, we will use such ambiguities of TN representations as a resource to discuss the full web of dualities in a comprehensive manner. In particular, we discuss a local equivalence between the partition sum of the IM and expectation values of the ground state of the toric code with string tension [8]. After a boson-fermion mapping, one can furthermore equate the TN to a Grassmann integral describing the ground state of a band-insulator Hamiltonian in symmetry class D. 
While the technical elements of this construction are known in the theory of matchgate tensor networks, we here present them in a comprehensive manner, aiming to introduce the key ideas to the community of condensed matter physicists. This endeavor is not just of pedagogical value: all three systems linked by the duality show rich behavior when translational symmetry is broken via the introduction of spatial phase boundaries or defect structures. Examples include vortices and the formation of gapless boundary modes in the superconductor, the binding of anyonic excitations at the ends of line defects in the toric code, or the physics carried by string-like 'disorder operators' in the Ising model. All these are subject to the duality mapping. However, the specific ways in which they transform are not always obvious. For example, the above-mentioned Majorana correlation function probes the propagation of quasi-particles injected into the superconductor ground state from one point to another. In the Ising context it becomes more complex, and now describes the correlations of a composite of a spin- and a disorder operator. The definition of the latter is non-trivial because it responds to the precise positioning of the composite operator on the Ising lattice [25; 26]. This and various other mappings will illustrate the application of the TN construction and are meant to introduce some powerful tricks of TN algebra to condensed matter practitioners. In a follow-up publication [28] we will push this framework into less charted territory, including the presence of translational-invariance-breaking disorder, the inclusion of nonlinear correlations in the TN, and that of geometrically distorted ('holographic') background geometries [29]. The remainder of this work is structured as follows. In Section II, we introduce the formalism of matchgate tensor networks and their representation as free fermion partition functions. Section III is the main focus of this work.
It contains the derivation of the aforementioned dualities for the translation invariant case and discusses the phase transitions across the different systems. In Section IV, we extend the dualities to situations where translation invariance is broken and discuss how the duality maps between different correlation functions. We summarize our work in Section V and provide an outlook to future research. ## II Matchgate tensor networks The workhorses by which the connections discussed in this work will be drawn are _matchgate tensor networks_ (MGTN) [19; 20; 21; 22; 23]. In the following, we introduce these objects in a manner assuming only a minimal level of familiarity with TNs (for introductory reviews, see Refs. [27; 30; 31; 32; 33]). ### Bosonic matchgate tensors _Matchgate tensors_. A normalized, even bosonic _matchgate_ (MG) tensor \(T\) carries \(n\) indices \(i_{j}=0,1\) of bond dimension two. As such it is described by complex coefficients \(T_{i_{1}i_{2}\ldots i_{n}}\) such as \(T_{0100\ldots}\). To make it a matchgate tensor, we need to add further structures. Concretely, a matchgate tensor satisfies the following rules. * (Normalization) \(T_{0\ldots 0}=1\). * (Evenness) If \(i_{1}+\ldots+i_{n}\) is odd, \(T_{i_{1}\ldots i_{n}}=0\). * (Gaussianity) The \(\binom{n}{2}=n(n-1)/2\) so-called second moments \(T_{i_{1}\ldots i_{n}}\) with \(i_{1}+\ldots+i_{n}=2\) are independent. We collect them in an anti-symmetric \(n\times n\)-matrix \(A^{T}=-A\), where \(A_{12}=T_{110\ldots 0}\), \(A_{13}=T_{1010\ldots 0}\), etc. By definition, higher moments of \(T\), i.e., entries with \(i_{1}+\ldots+i_{n}>2\), are given by _Pfaffians_ of submatrices of \(A\). These submatrices are obtained by deleting all rows and columns \(k\) for which \(i_{k}=0\). For example, for a tensor with six indices the tensor entry \(T_{111100}\) is given by \(\mathrm{Pf}\,A|_{56}\), where \(A|_{56}\) is the sub-matrix of \(A\) obtained by deleting the rows and columns 5 and 6.
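These rules translate directly into code. The following sketch (NumPy; the helper names `pfaffian` and `matchgate_tensor` are our own, not from the paper) builds every entry of a six-index matchgate tensor from its second-moment matrix \(A\) and checks the Pfaffian-minor rule on the entry \(T_{111100}\):

```python
import numpy as np
from itertools import product

def pfaffian(A):
    """Pfaffian of an antisymmetric matrix by expansion along the first row
    (adequate for the small matrices appearing here)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

def matchgate_tensor(A):
    """All 2^n entries fixed by the second moments A: normalization,
    evenness, and Pfaffian minors for the higher moments."""
    n = A.shape[0]
    T = np.zeros((2,) * n)
    for idx in product(range(2), repeat=n):
        if sum(idx) % 2 == 0:                    # evenness rule
            keep = [k for k in range(n) if idx[k] == 1]
            T[idx] = pfaffian(A[np.ix_(keep, keep)])
    return T

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = M - M.T                                      # random antisymmetric A
T = matchgate_tensor(A)
assert T[(0,) * 6] == 1.0                        # normalization
assert np.isclose(T[1, 1, 0, 0, 0, 0], A[0, 1])  # a second moment
# T_111100 is the Pfaffian of A with rows and columns 5, 6 deleted
assert np.isclose(T[1, 1, 1, 1, 0, 0], pfaffian(A[:4, :4]))
```

The recursive Pfaffian is quadratic-time-exponential in general; for larger matrices one would use an \(O(n^{3})\) algorithm, but the expansion suffices for the small tensors considered here.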
For a tensor with four indices, the matrix \[A=\begin{pmatrix}0&a_{12}&a_{13}&a_{14}\\ -a_{12}&0&a_{23}&a_{24}\\ -a_{13}&-a_{23}&0&a_{34}\\ -a_{14}&-a_{24}&-a_{34}&0\end{pmatrix} \tag{1}\] uniquely specifies all tensor entries. For example, \(T_{1100}=a_{12}\), \(T_{1010}=a_{13}\), etc. The only non-trivial higher moment is given by \(T_{1111}=\mathrm{Pf}(A)\). _Matchgate tensor networks_. We now consider two-dimensional square-lattice networks of matchgate tensors as shown in Fig. 1. First consider the fully contracted TN, without open physical indices. As such, it is just a number (much as a partition sum is just a number), and not very interesting in its own right. To obtain information about, e.g., correlations, we may cut bonds to define open indices. For example, we can calculate the correlation function \(\langle O_{1}(x_{1})O_{2}(x_{2})\rangle\) of local observables \(O_{1,2}\) by incisions at two points \(x_{1},x_{2}\) as shown in Fig. 1. ### Mapping to Gaussian fermionic tensor networks Bosonic matchgate tensor networks afford a reinterpretation as Gaussian fermionic tensor networks [20]. To establish this connection, we first review the concept of (Gaussian) fermionic tensor networks as such. _Fermionic tensor networks_ have been introduced to represent many-body states of fermions [34; 35; 36; 37; 38; 39]. In one [38] of several possible representations, a fermionic tensor \(T_{\text{f}}\) with \(n\) indices of bond dimension two contains \(n\) Grassmann variables \(\theta_{j}\) \[T_{\text{f}}=T_{i_{1}\ldots i_{n}}\theta_{1}^{i_{1}}\ldots\theta_{n}^{i_{n}} \tag{2}\] (see also Ref. [40]). We may identify these with fermion states, \(\theta_{i}\mapsto c_{i}^{\dagger}|0\rangle\), which are either occupied or unoccupied depending on the value \(i_{j}=0,1\). Throughout, we work with tensors of even fermion parity, \(T_{i_{1}\ldots i_{n}}=0\) if \(i_{1}+\ldots+i_{n}\mod 2=1\).
For two tensors \(A=A_{i_{1}i_{2}\ldots}\theta_{A1}^{i_{1}}\theta_{A2}^{i_{2}}\ldots\) and \(B=B_{k_{1}k_{2}\ldots}\theta_{B1}^{k_{1}}\theta_{B2}^{k_{2}}\ldots\), the formal product \(AB\) represents a superposition of states with up to \(2n\) fermions via the above identification. We define a _contraction_ of indices \(Aj\) and \(Bl\) by projection onto all states with equal occupation of \(Aj\) and \(Bl\) fermions. The basic mathematical identity realizing this contraction reads \[\int\mathrm{d}\theta_{Aj}\mathrm{d}\theta_{Bl}\,e^{\theta_{Bl}\theta_{Aj}} \theta_{Bl}^{k_{l}}\theta_{Aj}^{i_{j}}=\delta^{k_{l}i_{j}}.\] This contraction carries an orientation, as contracting \(Aj\) with \(Bl\) differs from contracting \(Bl\) with \(Aj\). Before using the identity above, the Grassmann variables \(\theta_{Aj}\) and \(\theta_{Bl}\) first need to be permuted through the remaining variables such that they come to stand upfront in the indicated order. The contraction of generic indices thus comes with a sign factor. The generalization to multiple tensors \(T^{\alpha}\), \(\alpha=1,\ldots,N\) with \(n_{\alpha}\) fermions each is straightforward: introduce the vector \(\underline{\theta}=(\theta_{11},\ldots,\theta_{1n_{1}},\ldots,\theta_{N1}, \ldots\theta_{Nn_{N}})\) containing all fermionic modes, and an anti-symmetric matrix \(C\) indicating the pattern of (oriented) contractions as \(C_{\alpha i,\beta j}=1\), if the \(i\)-th mode of tensor \(T^{\alpha}\) is contracted with the \(j\)-th mode of tensor \(T^{\beta}\). The contraction of the network is then implemented by the integral \[\text{TN}_{(C,T)}=\int(\mathrm{d}\theta)_{C}\,e^{\frac{1}{2}\underline{\theta }^{T}C\underline{\theta}}\,T^{1}\ldots T^{N}\, \tag{3}\] where \((\mathrm{d}\theta)_{C}\) is a shorthand notation for the product of all ordered pairs \(\mathrm{d}\theta_{\alpha i}\mathrm{d}\theta_{\beta j}\) with \(C_{\alpha i,\beta j}=1\).
_Gaussian fermionic tensor networks._ A tensor \(T\) with \(n\) indices is a _fermionic Gaussian_ (fG) tensor if there exists a real anti-symmetric \(n\times n\)-matrix \(A=-A^{T}\) such that \[T_{\text{fG}}=e^{\frac{1}{2}\underline{\theta}^{T}A\underline{\theta}}\,\quad \underline{\theta}^{T}=(\theta_{1},\ldots,\theta_{n}). \tag{4}\] The tensor product of two fG tensors \(T_{1}\), \(T_{2}\) is again a Gaussian tensor given by \[T_{1}T_{2}=e^{\frac{1}{2}\underline{\theta}^{T}(A_{1}\oplus A_{2})\underline{ \theta}}\,\quad\underline{\theta}=(\underline{\theta}_{1},\underline{\theta}_{2}). \tag{5}\] Including the contractions in Eq. (3), we write the contracted fermionic Gaussian tensor network as \[\text{TN}_{(C,A)}=\int(\mathrm{d}\theta)_{C}\,e^{\frac{1}{2}\underline{\theta }^{T}(A+C)\underline{\theta}}\, \tag{6}\] where \(A=\oplus_{i}A_{i}\) is the direct sum of all individual characteristic functions of the tensors \(T_{i}\). Note that a real fermionic Gaussian tensor network can be interpreted as the partition sum \(Z=\int\mathrm{d}\underline{\theta}e^{-S}\) with the weight \[S=\frac{\mathrm{i}}{2}\theta^{T}H\theta\,\quad H=\mathrm{i}(A+C)\, \tag{7}\] where \(H=H^{\dagger}\) and \(H=-H^{T}\). Within the framework of the tenfold symmetry classification of free-fermion systems, this is a Hamiltonian in symmetry class D. _Matchgates as Gaussian fermionic tensors._ Eq. (4) implies the advertised connection between fermionic Gaussian and bosonic matchgate tensors: Given a bosonic \(T_{\text{MG}}\) with second moments \(A\), we define \[T_{\text{fG}}=(T_{\text{MG}})_{i_{1}\ldots i_{n}}\theta_{1}^{i_{1}}\ldots\theta _{n}^{i_{n}}=e^{\frac{1}{2}\underline{\theta}^{T}A\underline{\theta}}. \tag{8}\] The indicated ordering of Grassmann variables is an essential element of the map \(T_{\text{MG}}\leftrightarrow T_{\text{fG}}\). The assignment Eq.
(8) remains formal unless we have settled the following consistency issue: For a (partially) contracted matchgate tensor network one may either first turn to a fermionic representation of the individual tensors and then contract according to Eq. (6), or contract first and then fermionize the result. We must make sure that the ordering of operations does not matter. Referring for a detailed discussion to App. A, we need to choose an orientation of contracted fermionic bonds and a matching ordering of contracted bosonic indices. It turns out that for any tensor network patch with disk topology an assignment consistent with the above criterion is possible. More precisely, the order of operations is inessential up to a known factor depending on the parity of the number of uncontracted boundary fermion modes, which does not affect bulk properties of the tensor network. We caution that more care is required in situations with more complex boundaries. These arise, for example, in the calculation of \(n\)-point correlation functions, corresponding to \(n\) additional punctures of the patch (see Section IV.1 for the discussion of such a setting). We finally note that in our approach the above two-dimensional construction is key to the duality between bosonic and fermionic systems; it here assumes a role otherwise taken by the one-dimensional Jordan-Wigner transformations in approaches operating by dimensional reduction. Figure 1: Translation invariant two-dimensional square lattice tensor network of tensors \(T\) (left) and tensor network with incisions (right) that allow one to calculate arbitrary \(n\)-point correlation functions of local observables, here \(\langle O_{1}(x_{1})O_{2}(x_{2})\rangle\).
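The identification (8) of matchgate tensors with Grassmann Gaussians can be tested explicitly with a minimal Grassmann algebra. In the sketch below (plain Python; the dictionary representation and the helper names `gmul`, `gexp` are our own), a monomial is stored as a set of mode indices in ascending order, with sign bookkeeping by counting inversions; expanding \(e^{\frac{1}{2}\underline{\theta}^{T}A\underline{\theta}}\) for \(n=4\) reproduces the second moments and the top coefficient \(T_{1111}=\mathrm{Pf}(A)\):

```python
import math

def gmul(p, q):
    """Multiply two Grassmann polynomials {frozenset_of_indices: coeff}."""
    out = {}
    for S1, c1 in p.items():
        for S2, c2 in q.items():
            if S1 & S2:
                continue  # theta_i^2 = 0
            # sign from reordering theta_{S1} theta_{S2} into ascending order
            sign = (-1) ** sum(1 for i in S1 for j in S2 if i > j)
            key = S1 | S2
            out[key] = out.get(key, 0.0) + sign * c1 * c2
    return out

def gexp(Q, order):
    """exp(Q) for a nilpotent even element Q, truncated at the given order."""
    result, term = {frozenset(): 1.0}, {frozenset(): 1.0}
    for k in range(1, order + 1):
        term = gmul(term, Q)
        for S, c in term.items():
            result[S] = result.get(S, 0.0) + c / math.factorial(k)
    return result

# quadratic form (1/2) theta^T A theta = sum_{i<j} A_ij theta_i theta_j
A = {(0, 1): 0.5, (0, 2): 0.2, (0, 3): 0.1,
     (1, 2): 0.3, (1, 3): 0.4, (2, 3): 0.6}
Q = {frozenset(p): a for p, a in A.items()}
T = gexp(Q, 2)   # for n = 4, terms beyond Q^2/2 vanish

assert abs(T[frozenset({0, 1})] - 0.5) < 1e-9     # second moment A_12
pf = 0.5 * 0.6 - 0.2 * 0.4 + 0.1 * 0.3            # A12 A34 - A13 A24 + A14 A23
assert abs(T[frozenset({0, 1, 2, 3})] - pf) < 1e-9
```

Since \(Q\) is even, only even index sets ever appear, mirroring the evenness rule of the bosonic matchgate.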
### Factorizing tensors As a final prerequisite for formulating our duality, we need a few added structures: A \(\mathbb{Z}_{2}\)_-tensor_\(T_{\mathbb{Z}_{2}}\) has bond dimension \(2\) and is defined by the parity condition \((T_{\mathbb{Z}_{2}})_{abcd}=\delta_{a+b+c+d\,\mathrm{mod}\,2,0}\). It is straightforward to verify that the \(\mathbb{Z}_{2}\) tensor satisfies the matchgate condition. In a next step, we generalize the tensor to the presence of additional weights, \(W_{i}\) attached to the links, cf. Fig. 3(a), where the \(\mathbb{Z}_{2}\) tensors are circles, and the weights boxes. The latter are defined as \(W_{i}=\text{diag}(1,w_{i})\), i.e., diagonal matrices. This generalization, too, satisfies the matchgate condition, with the defining matrix given by \(A_{ij}=w_{i}w_{j}\). Conversely, matchgate tensors whose Gaussian weights can be written in this way are called _factorizing_. While simple parameter counting shows that not every matchgate tensor can be factorizing [41], factorizing tensors will be sufficient for our purposes. Specifically, the uniform matchgate tensor defined by a single parameter \(a\) is the simplest example of a factorizing matchgate and its weights are \(W_{i}=W:=\text{diag}(1,\sqrt{a})\). ## III Dualities from matchgate tensor networks In this section, we will employ the bosonic and the fermionic TNs introduced above as tools to establish the duality between the three systems mentioned in the introduction, the ground state of the toric code, that of the two-dimensional class D superconductor, and the partition sum of the classical Ising model. Our focus here will be on the physics of the translationally invariant bulk systems, and their respective phase transitions. The fully contracted bosonic TN then corresponds to the partition function of the IM and, likewise, to the norm of a toric code ground state with string tension, while the fully contracted fermionic TN evaluates to the Pfaffian of a free-fermion Hamiltonian in symmetry class D.
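Before turning to the individual dualities, the factorizing property introduced above admits a quick brute-force sanity check (a sketch; the weight values are arbitrary): building the weighted \(\mathbb{Z}_{2}\)-tensor entry by entry confirms that it is a matchgate with \(A_{ij}=w_{i}w_{j}\).

```python
import numpy as np
from itertools import product

w = np.array([0.7, 1.3, 0.4, 2.1])            # arbitrary link weights w_i
T = np.zeros((2, 2, 2, 2))
for idx in product(range(2), repeat=4):
    if sum(idx) % 2 == 0:                      # Z2 parity constraint
        T[idx] = np.prod(w ** np.array(idx))   # weight w_i per occupied leg

# second moments factorize, A_{ij} = w_i w_j for i < j
assert np.isclose(T[1, 1, 0, 0], w[0] * w[1])
assert np.isclose(T[0, 1, 0, 1], w[1] * w[3])
# the higher moment equals the Pfaffian A12*A34 - A13*A24 + A14*A23
pf = w[0]*w[1]*w[2]*w[3] - w[0]*w[2]*w[1]*w[3] + w[0]*w[3]*w[1]*w[2]
assert np.isclose(T[1, 1, 1, 1], pf)
```

Note that the Pfaffian combination collapses to \(w_{1}w_{2}w_{3}w_{4}\), exactly the weight of the fully occupied configuration.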
However, the identifications between these three systems can be made _locally_ on patches of the respective tensor networks. In Section IV, we take advantage of this fact and extend the mappings to correlation functions and defect structures, starting what could be called a dictionary between the different models dual to each other. ### Previous work To put our discussion into a larger context, we begin this section with a review of previous studies of specific links of the duality web. \(\mathit{IM}\to\mathit{SC}\). Previous mappings of the classical Ising partition sum onto the SC ground state can roughly be divided into two categories. The first starts from a representation of the Ising partition function on an \(L\times L\) square lattice as a product of \(L\) transfer matrices, where each \(2^{L}\times 2^{L}\)-matrix represents one column of the Ising model. This formulation stands in the tradition of Onsager's solution [12], which was subsequently simplified by Kaufman [13], and later by Schultz, Mattis and Lieb [14]. One-dimensional Jordan-Wigner transformations are then performed on the \(2^{L}\)-dimensional representation spaces of the transfer matrix to arrive at an interpretation in terms of (Majorana) fermion ground states [42]. A more isotropic approach has been developed by Kac and Ward [43] who expressed the partition function in terms of the determinant of a \(4L\times 4L\) matrix by purely combinatorial considerations. This was followed by Hurst and Green [44] who suggested a formulation using the Pfaffian method (later refined by Blackmann and others [1; 2; 3; 5]). They noted that the matrix for which one computes the Pfaffian is essentially that of a tight-binding Hamiltonian and as such can be interpreted as a free fermion system in 2D. This approach was recast in the language of Grassmann variables by Berezin [4] and a formulation close in spirit to the one discussed below has been presented by Dotsenko and Dotsenko in Ref. [6].
Finally, in more recent works [7; 45; 46; 47], the connection between the IM and non-interacting fermions was discussed in the context of network models and a connection between network models and Gaussian fermionic tensor networks was noted in Ref. [48]. \(\mathit{TC}\to\mathit{IM}\). To the best of our knowledge, the duality of the IM partition sum and a TC with string tension was first explored by Castelnovo and Chamon [8] and later generalized in Ref. [9]. \(\mathit{SC}\to\mathit{TC}\). We are not aware of previous mappings of the TC ground state to that of the SC, although they are of course implied by the sequence \(\text{TC}\to\text{IM}\to\text{SC}\). ### This work Our starting point in this work is a translationally invariant matchgate tensor network on a square lattice as shown in Fig. 1. Its tensors are characterized by a single real parameter \(a\) via \(a_{ij}=a\) (see Eq. (1)). All three target systems, IM, TC, SC, too, are controlled by single parameters, to be interpreted as dimensionless coupling strength, the string tension and the band-inversion parameter, respectively. Our discussion will show how these are to be related to the TN parameter \(a\). In each case, the parameter \(a\) drives a phase transition -- the Ising ferro-/paramagnetic transition, the transition between a topological spin liquid and an ordered state, and the transition between a topological and a trivial superconductor (cf. Fig. 2). While the latter two are between states of different topological order, the first is a conventional symmetry breaking transition. The equivalence between these drastically different types of phase transitions is not a contradiction since the duality transform establishing it is intrinsically non-local. ### Classical two-dimensional Ising model We start by discussing the interpretation of a factorizing matchgate tensor network as the partition function of the classical two-dimensional Ising model.
This connection follows from the option to realize the even parity constraint obeyed by matchgate tensors in terms of superpositions of weighted closed loops reminiscent of the high-temperature expansion of the Ising partition function. Here, we derive the equivalence exploiting the invariance of a tensor network under basis changes in the virtual space. We start from a representation of our tensors in terms of \(\mathbb{Z}_{2}\)-tensors with weights \(W\) (see left panel in Fig. 3(d)). Next, we insert products of Hadamard matrices \[H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right) \tag{9}\] at all legs as shown in the center of Fig. 3(d). With \(H=H^{\dagger}=H^{-1}\) this is conceptually a gauge transformation. It is straightforward to verify using the relations shown in Fig. 3(c) that this operation induces a transformation (\(\mathbb{Z}_{2}\)-tensors) \(\mapsto\) (\(2\times\delta\)-tensors) where the \(\delta\)-tensor is defined by the condition \[\delta_{abcd}=\delta_{ab}\delta_{bc}\delta_{cd} \tag{10}\] (see Fig. 3(b)). The transformed weight matrices assume the form [49] \[T:=\sqrt{2}HW^{2}H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1+a&1-a\\ 1-a&1+a\end{array}\right). \tag{11}\] Assuming \(a<1\), we define \(\beta>0\) by \[J\beta=\mathrm{artanh}(a) \tag{12}\] and \(h_{00}=h_{11}=-J\) and \(h_{01}=h_{10}=J\), in order to identify \(T\), up to an overall constant per bond, with the transfer matrix of the Ising model, \[T=\begin{pmatrix}e^{-\beta h_{00}}&e^{-\beta h_{01}}\\ e^{-\beta h_{10}}&e^{-\beta h_{11}}\end{pmatrix}. \tag{13}\] Identifying the two non-vanishing configurations (all links 1 or all links 0) admitted by the central \(\delta\)-tensor with the two states of an Ising spin then implies the equivalence of the tensor network with the classical partition sum of the Ising model.
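The chain of identities (9)-(13) is short enough to check numerically. The sketch below (NumPy; the value of \(a\) is arbitrary) verifies that \(\sqrt{2}HW^{2}H\) reproduces Eq. (11) and agrees with the Ising transfer matrix up to the constant factor \(\sqrt{2}\cosh(\beta J)\) per bond, which only rescales the partition sum; it also confirms the critical value \(a_{-}=\tanh(\beta_{c}J)=\sqrt{2}-1\):

```python
import numpy as np

a = 0.3                                       # arbitrary tensor parameter, a < 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard matrix, Eq. (9)
W = np.diag([1.0, np.sqrt(a)])                # half-edge weight matrix

T = np.sqrt(2) * H @ W @ W @ H                # Eq. (11)
assert np.allclose(T, np.array([[1 + a, 1 - a], [1 - a, 1 + a]]) / np.sqrt(2))

beta_J = np.arctanh(a)                        # Eq. (12), J*beta = artanh(a)
T_ising = np.array([[np.exp(beta_J), np.exp(-beta_J)],
                    [np.exp(-beta_J), np.exp(beta_J)]])  # Eq. (13)
# agreement up to sqrt(2)*cosh(beta_J) per bond (an overall constant in Z)
assert np.allclose(T_ising, np.sqrt(2) * np.cosh(beta_J) * T)

# Onsager's critical coupling 2*J*beta_c = ln(1 + sqrt(2)) gives a_- = sqrt(2) - 1
beta_c_J = np.log(1 + np.sqrt(2)) / 2
assert np.isclose(np.tanh(beta_c_J), np.sqrt(2) - 1)
```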
The two-dimensional Ising model has its magnetic phase transition at \(2J_{c}\beta_{c}=\ln(1+\sqrt{2})\), or \(a_{-}:=\sqrt{2}-1\), consistent with the above assumption \(a<1\) (see the upper panel of Fig. 2). In the opposite case, \(a>1\), we apply a global rescaling of each bond by \(a^{-1}\) to send the weight matrices to \(W=\text{diag}(a^{-1/2},1)\). We then use the invariance of the \(\mathbb{Z}_{2}\)-tensors under a simultaneous spin flip \(\sigma_{x}^{\otimes 4}\) to transform to weights \(W^{\prime}=\sigma_{x}W\sigma_{x}=\text{diag}(1,a^{-1/2})\), i.e., we effectively map \(a\mapsto a^{-1}\). In this case, the phase transition is at \(a_{+}=a_{-}^{-1}=\sqrt{2}+1\). For \(a<0\), a global rescaling \(a\mapsto-a\) shows the equivalence to the \(a>0\) parameter domain. We conclude that our one-parameter family of tensor networks supports the four parameter intervals, \(a<-1,-1\leq a<0,0\leq a<1\) and \(1\leq a\) which are individually equivalent to the Ising models with critical points at \(a=\pm a_{\pm}\), respectively. Figure 2: Phase transitions in a homogeneous single parameter matchgate tensor network interpreted in terms of three different physical systems. The control parameters \(a_{\pm}=\sqrt{2}\pm 1\). In the lower panel, \(c\) denotes the topological (Chern) index. ### Toric code with string tension We now turn to our second interpretation of a factorizing matchgate tensor network and show how it is related to the wave function of the toric code with string tension. (For a direct link TC \(\leftrightarrow\) IM, not using a TN intermediate, we refer to Refs. [8; 50].) The _toric code Hamiltonian_ (without string tension) [10] is given by \[H_{\text{TC}}=-\sum_{v\in\text{vertices}}\;\prod_{i\in v}\sigma_{z}^{(i)}- \sum_{p\in\text{plaquettes}}\;\prod_{i\in p}\sigma_{x}^{(i)}, \tag{14}\]
and its ground state vector \(|\Psi_{0}\rangle\) is an equal weight superposition of closed loops of \(|1\rangle\)-vectors in a background of \(|0\rangle\)-vectors on the underlying square lattice. The toric code is a most paradigmatic example of a \(\mathbb{Z}_{2}\) spin liquid and at the same time the most studied code for topological _quantum error correction_[51]. This state affords a representation in terms of a simple \(\mathbb{Z}_{2}\) factorizing matchgate tensor network [52; 53; 54]: First consider a configuration with uniform \(a=1\). This is equivalent to a network of \(\mathbb{Z}_{2}\)-tensors with trivial weights. The job of the former is to admit configurations with \(0,2\) or \(4\)\(|1\rangle\) state vectors at each vertex, with equal weight. Summation over all of these is equivalent to a uniform weight closed loop superposition. To turn this sum into a quantum state, we add \(3\)-leg \(\delta\)-tensors at each vertex (see Fig. 4(a)). The tensor product of uncompensated physical indices then defines the quantum ground state. In the toric code context, the loop sum may be turned into a weighted one by adding string tension which penalizes or favors loops of increasing length. To mimic this effect, we generalize the tensor network to the presence of weights \(W=\text{diag}(1,\sqrt{t})\) per half-edge defining the state vector \(|\Psi(t)\rangle\). Each \(|1\rangle\) link now comes with a factor \(t\), compared to \(1\) for \(|0\rangle\) links. In the extreme case of \(t=0\), we obtain a trivial ferromagnet polarized in the \(|0\rangle\)-state, in the opposite limit \(t\to\infty\) the system is polarized in the \(|1\rangle\)-state. The exact parent Hamiltonian of this state is given by \(H=H_{\text{TC}}+H_{\text{ST}}\) with \[H_{\text{ST}}=\sum_{v\in\text{vertices}}\prod_{i\in v}e^{-\ln t\,\sigma^{(i)} _{z}}. 
\tag{15}\] In the vicinity of \(t\simeq 1\), the string tension term reduces to a conventional on-site magnetic field Hamiltonian \(H_{\text{ST}}\simeq-2\ln t\sum_{i}\sigma^{(i)}_{z}\). The critical values of the string tensions inducing a transition between the topological spin liquid state and ferromagnetic states with \(|1\rangle\) or \(|0\rangle\) polarization \(t^{2}=a_{\pm}=\sqrt{2}\pm 1\) can be derived via the mapping to the classical Ising model. The idea is to detect a phase transition via the emergence of power law correlations in correlation functions \(C_{t,\alpha\beta}=\langle\Psi(t)|O_{\alpha}O_{\beta}|\Psi(t)\rangle\), where \(O_{\alpha}\) and \(O_{\beta}\) are local operators of the spin variables at sites \(\alpha\) and \(\beta\). The operator insertion between two states means that we are now dealing with a two layered tensor network where all physical indices except for those at \(\alpha,\beta\) of that representing \(|\Psi(t)\rangle\) are contracted with that representing \(\langle\Psi(t)|\) (see Fig. 4(b)). The consequences of this almost complete contraction are detailed in Appendix C and can be summarized as follows: the contraction of the \(\delta\)-tensors sitting on the bonds effectively causes a collapse of double bonds (representing bra- and ket-state) to a single one. On this effective bond, we have the square of two amplitude weights, i.e., the effective weight \(W^{2}=\text{diag}(1,t)\). This is equivalent to a matchgate tensor with uniform \(A\) matrices for which \(a_{ij}=t^{2}\) and implies criticality at \(t_{\pm}^{2}=a_{\pm}\), in agreement with the values reported in Refs. [8; 9; 50] (see middle panel of Fig. 2). Figure 3: (a) A factorizing matchgate tensor decomposes into a \(\mathbb{Z}_{2}\) tensor and weight matrices. 
(b) Graphical notation for the \(\mathbb{Z}_{2}\) tensor (white circle), the weight matrices \(W\) (unfilled square), the Hadamard matrix \(H\) (half filled circle), the Kronecker \(\delta\)-tensor (black circle) and the IM transfer matrix (black square). (c) Tensor identities used to transform a MGTN to the partition sum of the IM. (d) Transforming a weighted \(\mathbb{Z}_{2}\)-tensor network (left) to the Ising partition function tensor network (right) via a gauge transformation.

### Class D superconductor

Our third construction establishes a connection to the topological SC. For the convenience of readers not well-versed in the physics of topological superconductivity, a brief review is included in Appendix B. The punchlines of this discussion are that (a) the free fermion Hamiltonian of a class D superconductor affords a representation in terms of a _Majorana bilinear form_ \[\hat{H}:=i\Theta^{T}H\Theta, \tag{16}\] where \(H\) is the first quantized Hamilton matrix, and the components of \(\Theta\) are Majorana operators, to be identified as real and imaginary parts of complex fermion creation and annihilation operators. (b) In a translationally invariant system, the eigenstates of \(H\) define Bloch bands, labeled \(n\), which individually carry Chern numbers \(c_{n}\). (c) Transitions between states of different topology change these numbers (at a conserved total number \(\sum_{n}c_{n}=0\)) and are signaled by a change of the \(\mathbb{Z}\)-valued topological index \(c=\sum_{n=1}^{N}c_{n}\), the sum over Chern numbers carried by the individual Bloch bands \(n=1,\ldots,N\) with band energies below the superconductor band gap at \(\epsilon=0\). (d) Such changes of integer invariants require touching of the bands \(n\) and \(n+1\) involved in the change of Chern numbers.
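Point (a) can be illustrated on the smallest possible example: two Majoranas built from a single complex fermion. The sketch below (our own toy construction, not from the text) checks the Majorana algebra and that the bilinear form of Eq. (16), with an antisymmetric \(2\times 2\) matrix \(H\), yields a Hermitian many-body Hamiltonian with a particle-hole symmetric spectrum:

```python
import numpy as np

h = 0.7
c = np.array([[0, 1], [0, 0]], dtype=complex)   # fermion annihilation, basis {|0>, |1>}
t1 = c + c.conj().T                              # Majorana theta_1 = c + c^dagger
t2 = 1j * (c - c.conj().T)                       # Majorana theta_2 = i(c - c^dagger)

assert np.allclose(t1 @ t1, np.eye(2))           # theta^2 = 1
assert np.allclose(t1 @ t2 + t2 @ t1, 0)         # Majorana anticommutation

Hmat = np.array([[0, h], [-h, 0]])               # antisymmetric first-quantized matrix
H_hat = 1j * (Hmat[0, 1] * t1 @ t2 + Hmat[1, 0] * t2 @ t1)   # i * Theta^T H Theta

assert np.allclose(H_hat, H_hat.conj().T)        # Hermitian many-body Hamiltonian
assert np.allclose(np.linalg.eigvalsh(H_hat), [-2 * h, 2 * h])  # spectrum symmetric
```

Only the antisymmetric part of \(H\) contributes to \(\hat H\) (the symmetric part produces a constant via \(\{\theta_i,\theta_j\}=2\delta_{ij}\)), which is why class D Hamiltonians are taken antisymmetric from the outset.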
In the vicinity of these hotspots in the Brillouin zone, the local Hamiltonian may be approximated by a _two-dimensional Dirac Hamiltonian_ \[H^{(2)}=\kappa(q_{1}\sigma_{1}+q_{2}\sigma_{3}+(m+\alpha q^{2})\sigma_{2})+ \mathcal{O}(q^{3}), \tag{17}\] where \(\kappa\) is an overall real constant, \(q=(q_{1},q_{2})^{T}\) is the momentum difference from the band touching point, \(m\) a mass parameter measuring the distance to the critical point, and \(\alpha\) a parameter entering the second order expansion in the local dispersion relation. In this representation, the assignment of Chern numbers reads \((c_{n},c_{n+1})=(0,0)\) for \(m\alpha>0\), and \((+1,-1)\) for \(m\alpha<0\), i.e., the topological index is determined by the sign of the mass gap. (Exchange \(q_{1}\leftrightarrow q_{2}\) corresponds to the sign change \((-1,+1)\).) We now establish a connection to the fermionic representation of our TN by identifying the bilinear form Eq. (16) with Eq. (7), with the matrix defining a matchgate tensor network on a square lattice with tensors of uniform weight, \(A_{ij}^{\alpha}=a\). Of course, this connection remains formal unless contact with the momentum space topology of a superconductor is made. To this end, note that the TN structure includes unit cells comprising four Majorana fermions with all to all connection of equal strength \(a\). Connecting these cells to a square network (see Fig. 
5), and switching to a momentum space representation, \(\tilde{H}=\sum_{k}\Theta_{k}H_{k}\Theta_{-k}\), the system is represented in terms of the effective four band Hamiltonian \[H= i\,a\begin{pmatrix}0&1&1&1\\ -1&0&1&1\\ -1&-1&0&1\\ -1&-1&-1&0\end{pmatrix}+i\begin{pmatrix}0&0&e^{ik_{x}}&0\\ 0&0&0&e^{ik_{y}}\\ -e^{-ik_{x}}&0&0&0\\ 0&-e^{-ik_{y}}&0&0\end{pmatrix}.\] The numerical brute force computation of Berry curvatures for this Hamiltonian reveals a sequence of six topological phase transitions at the \(a\)-values \((\sqrt{2}-1,1,\sqrt{2}+1)=:(a_{-},1,a_{+})\) and the negatives of these. Labeling the energy bands \(1,2,3,4\) in ascending order in energy, we have \(N=2\) occupied bands \(n=1,2\) with a pattern of Chern numbers shown in Table 1: Starting from a topologically trivial phase at \(a>a_{+}\), a phase transition involving bands \(n=2,3\) at \(a=a_{+}\) (see Fig. 6) defines the entry into a topological superconductor phase with \(c=1\). At \(a=1\), a phase transition involving the topological order of bands \(1,2\) (and \(3,4\)) leads to a redistribution of band Chern numbers without changing the total topological index, \(c\). Finally, at \(a=a_{-}\), we have a transition back to \(c=0\); however, this vanishing value is nontrivial in that it implies two bands with mutually canceling non-vanishing indices, \(c_{1}=1\) and \(c_{2}=-1\).

Figure 5: Lattice structure comprising four-site unit cells with all to all connection of uniform strength \(a\) extended to a square lattice.

Figure 4: Toric code with string tension. (a) Ground state vector \(|\Psi(t)\rangle\) represented as a PEPS, physical indices coming out of the plane. (b) The overlap \(\langle\Psi(t)|\Psi(t)\rangle\) given by contracting the physical indices of the PEPS and its conjugate, represented by the mirror image.
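The brute-force Berry curvature computation referred to here can be sketched with the standard lattice discretization of the Chern number (the Fukui–Hatsugai construction; our own reimplementation, with grid size and the probe values \(a=2\) and \(a=3\) chosen for illustration). Up to the overall orientation convention, the result is consistent with Table 1: \(|c|=1\) inside \((1,a_{+})\), \(c=0\) beyond \(a_{+}\), and the central gap closes at \(a=a_{-}\) at \(k=(\pi,\pi)\):

```python
import numpy as np

A0 = np.array([[0, 1, 1, 1], [-1, 0, 1, 1],
               [-1, -1, 0, 1], [-1, -1, -1, 0]], dtype=complex)

def h_bloch(kx, ky, a):
    """Four-band Bloch Hamiltonian of the Majorana square network."""
    C = np.zeros((4, 4), dtype=complex)
    C[0, 2], C[1, 3] = np.exp(1j * kx), np.exp(1j * ky)
    C[2, 0], C[3, 1] = -np.exp(-1j * kx), -np.exp(-1j * ky)
    return 1j * (a * A0 + C)          # Hermitian by construction

def chern_occupied(a, n_occ=2, N=24):
    """Fukui-Hatsugai Chern number of the n_occ lowest bands on an N x N grid."""
    ks = 2 * np.pi * np.arange(N) / N
    V = np.empty((N, N, 4, n_occ), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h_bloch(kx, ky, a))
            V[i, j] = v[:, :n_occ]    # frame of occupied eigenvectors
    c = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            U = (np.linalg.det(V[i, j].conj().T @ V[ip, j]) *
                 np.linalg.det(V[ip, j].conj().T @ V[ip, jp]) *
                 np.linalg.det(V[ip, jp].conj().T @ V[i, jp]) *
                 np.linalg.det(V[i, jp].conj().T @ V[i, j]))
            c += np.angle(U)          # Berry flux through one plaquette
    return c / (2 * np.pi)

assert abs(abs(chern_occupied(2.0)) - 1) < 1e-6   # a in (1, a_+): |c| = 1
assert abs(chern_occupied(3.0)) < 1e-6            # a > a_+ = sqrt(2) + 1: c = 0

# the central gap closes at the critical coupling a_-: zero modes at k = (pi, pi)
evals = np.linalg.eigvalsh(h_bloch(np.pi, np.pi, np.sqrt(2) - 1))
assert np.min(np.abs(evals)) < 1e-12
```

The plaquette fluxes sum to an exact multiple of \(2\pi\) (each link determinant enters once in each direction), so the computed \(c\) is quantized up to floating-point error.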
To obtain an explicit low energy reduction of the system near, say, the critical point \(a=a_{-}\), we verify that at the momentum hot-spot \(k=(\pi,\pi)\) our Hamiltonian has two zero-eigenvalue states, \[v_{1} := \frac{1}{2}(-\sqrt{2},-1,0,1), \tag{18}\] \[v_{2} := \frac{1}{2}(0,1,\sqrt{2},1). \tag{19}\] Setting \(a=a_{-}+m/(2(1+\sqrt{2}))\) and \(k_{i}=\pi+q_{i}\), the two-band reduction, \(H^{(2)}\), of the Hamiltonian obtained by projection onto the space spanned by \(v_{1,2}\) assumes the form of the Haldane [55] Chern insulator \[H^{(2)}=-\frac{1}{2}\left(\sin(q_{2})\sigma_{3}+\sin(q_{1})\sigma_{1}+(2+m-\cos(q_{1})-\cos(q_{2}))\sigma_{2}\right).\] For \(m>0\) and \(m<0\), the two bands associated to this Hamiltonian carry winding (Chern) numbers \((0,0)\) and \((-1,1)\), respectively. The low energy Dirac approximation Eq. (17) (with \(\kappa=-\frac{1}{2}\)) is obtained by expansion in \(q\) up to second order. With these identifications, the equivalence between the TN and the SC is established, and from here one may -- via the non-local boson-fermion mapping outlined in Section II -- pass to the bosonic systems TC and IM. Notice how in our discussion of the fermion system, the emphasis shifted from real space to momentum space structures. Nevertheless, the full microscopic structure of the system remains under control, and this will become essential in the next section when we turn to the discussion of correlation functions.

## IV Extension to inhomogeneous structures

So far, we have looked at the duality between our three systems in the translationally invariant case.
However, they all show rich behavior when translational invariance is broken by domain walls or other defect structures, examples including vortices binding Majorana zero modes in the SC [56], non-local 'disorder' operators describing the correlations between endpoints of defect links in the IM [25], anyonic excitations as endpoints of error strings in the TC [10], or the long-rangedness of correlation functions at criticality (where we consider a correlation function as the result of an insertion of infinitesimally weak probing inhomogeneities). The general duality must include a mapping between these defect structures and their accompanying correlations. At the same time, we anticipate that the passage from fermionic to bosonic systems involved in going from the SC to the IM or the TC will introduce an element of non-locality: the correlation between objects that are local in one setting may turn into the insertion of non-local string-like objects in another. These complications find their perhaps most vivid manifestation in the duality mapping of the simplest correlation function probing the superconductor, that between two Majoranas, Eq. (21) below. The dual representation of this function in the Ising context is famously complex [6; 25] and involves the pairing of a spin and a disorder operator into a hybrid operator. The simultaneous appearance of a local (spin) and a non-local (disorder) operator reflects that we are representing the correlations of a fermion in the language of a bosonic model. Specifically, the spin-disorder dual of the Majorana correlation function responds sensitively to the relative placement of the two compound operators, as discussed in Ref. [25] and explored explicitly in a dual fermionic description in Ref. [6]. In the following, we show how the tensor network allows one to map defect structures and correlation functions with maximal explicitness. We will illustrate the construction on two examples.
The first is the mapping of the above SC Majorana two-point correlation function. We will construct local pairs of spin and disorder operators as fractionalized representatives of the Majorana, and address the importance of their relative ordering on the lattice. The second example is motivated by the question of what form these structures assume in the TC language. We find that for general parameter values the answer assumes the form of an exponentially decaying and not very illuminating ground state operator expectation value. However, the analysis of the latter becomes more rewarding once we introduce a spatial domain wall, i.e., an object which in superconductor language defines the surface of a topological insulator. In this case, our correlation function describes the spatial extension of a topological boundary mode, and we will discuss how this object turns into a non-decaying string operator expectation value in the toric code.

\begin{table} \begin{tabular}{c|c c c c} \(a\in\) & \((0,a_{-})\) & \((a_{-},1)\) & \((1,a_{+})\) & \((a_{+},\infty)\) \\ \hline \hline \(c_{4}\) & -1 & -1 & 0 & 0 \\ \(c_{3}\) & 1 & 0 & -1 & 0 \\ \(c_{2}\) & -1 & 0 & 1 & 0 \\ \(c_{1}\) & 1 & 1 & 0 & 0 \\ \hline \(c\) & 0 & 1 & 1 & 0 \\ \end{tabular} \end{table} Table 1: Chern numbers carried by the four bands of the system for positive values of the parameter \(a\). (The pattern is symmetric around \(a=0\) and in this way can be extended to negative values.) The phase transitions changing the topological index, \(c\), occur at \(a_{\pm}\).

Figure 6: A cut at \(k_{y}=0,\pi,\pi\) (left, center, right) through the two-dimensional dispersion relation at parameter values \(a=a_{+}+0.3,\;1+0.04,\;a_{-}+0.02\), close to the topological phase transition points. Notice how the transition at \(a=1\) involves the formation of a Dirac point between the occupied bands \(n=1,2\).
### Correlation functions

Correlation functions are described by a TN modified at the two points between which correlations are measured. These modifications can interfere with the fermionization procedure discussed in the previous section. Specifically, if the observables under consideration are of odd fermion parity, the mapping between bosonic and fermionic representations introduces a string-like object connecting the two point observables. To illustrate the principle -- and to be entirely concrete -- we start out from the fermionic two point correlation function \[\langle\theta_{i}\theta_{j}\rangle:=\int(\mathrm{d}\theta)_{C}\,\theta_{i}\theta_{j}\,e^{\frac{1}{2}\theta^{T}(A+C)\theta}\, \tag{21}\] where \(C\) denotes the signed adjacency matrix of the network and \(A\) the collection of all matchgate tensor generating matrices. We may think of this as the contraction of a fermionic tensor network subject to two incisions as illustrated in Fig. 7. Doing the integral, we obtain \[\langle\theta_{i}\theta_{j}\rangle\propto H^{-1}|_{ij}\, \tag{22}\] with the Hamiltonian \(H=\mathrm{i}(A+C)\). Close to a critical point, e.g., \(a=a_{-}\), \(H\) can be approximated by the two-band Hamiltonian Eq. (17) linearized around \(q_{x},q_{y}=0\). At criticality, \(m=0\), the gap vanishes and the correlation function approaches an \(l^{-1}\) power law, where the exponent follows from dimensional analysis and \(l\) is the distance between \(i\) and \(j\). However, in addition to this asymptotic distance behavior, we have additional short range lattice structures to consider. A single point \(i=((x,y),b)\) is defined by unit cell coordinates \((x,y)\) and an intra-cell index \(b=1,\dots,4\). It turns out that only pairs \(i,j\) with select combinations of this data survive projection onto the two-band reduction, and hence are long-range correlated.
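The step from Eq. (21) to Eq. (22) is the standard Gaussian Grassmann integral: the source-free integral yields \(\mathrm{Pf}(M)\) with \(M=A+C\), while the insertion of \(\theta_{i}\theta_{j}\) yields (up to sign) the Pfaffian of \(M\) with rows and columns \(i,j\) removed, so that \(|\langle\theta_{i}\theta_{j}\rangle|=|\mathrm{Pf}(M_{\hat{i}\hat{j}})/\mathrm{Pf}(M)|=|(M^{-1})_{ij}|\). This identity can be checked numerically using \(\mathrm{Pf}^{2}=\det\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                  # even dimension, as required for Pf(M) != 0
R = rng.standard_normal((n, n))
M = R - R.T                            # random real antisymmetric matrix

Minv = np.linalg.inv(M)
for i in range(n):
    for j in range(i + 1, n):
        keep = [k for k in range(n) if k not in (i, j)]
        minor = M[np.ix_(keep, keep)]  # M with rows and columns i, j removed
        # |Pf| = sqrt(det) for a real antisymmetric matrix of even dimension
        assert np.isclose(abs(Minv[i, j]),
                          np.sqrt(np.linalg.det(minor) / np.linalg.det(M)))
```

The sign, fixed by the Jacobi identity for Pfaffian minors, is inessential for the \(\propto\) statement of Eq. (22).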
For example, considering points separated along the \(x\)-direction, we find \(\langle\theta_{(x,y)1}\theta_{(x+l,y)1}\rangle=0\), while \(\langle\theta_{(x,y)1}\theta_{(x+l,y)3}\rangle\propto 1/l\). _Fermion boson mapping._ We next aim to represent this correlation function in the language of the bosonic TN. The challenge here is the presence of fermion parity odd tensors at sites \(i\) and \(j\). To deal with this situation, we decompose the tensor network into two parts which are individually fermion-parity even. The first of these, \(A\), contains \(\theta_{i}\) and \(\theta_{j}\), and hence is fermion even in total. The complement, \(\bar{A}\), contains the rest of the tensor network. For the sake of simplicity, we choose \(A\) to be as small as possible, namely as a chain of tensors connecting sites \(i\) and \(j\) (see Fig. 7 top). Referring to Appendix A.1 for a more detailed discussion, the goal now is to contract all tensors in \(A\) and reorder the fermionic modes into standard ordering. (The tensors of \(\bar{A}\) can be assumed to be in standard ordering to begin with.) As a necessary byproduct, this operation introduces a string of fermion parity tensors between sites \(i\) and \(j\). In a final step, we contract the bosonic versions of \(A\) and \(\bar{A}\) to obtain a bosonic tensor network in standard ordering.

Figure 7: Top. Fermionic TN for \(\langle\theta_{i}\theta_{j}\rangle\) divided into fermion parity even regions \(A\) and \(\bar{A}\). Center. The TN after fermion-to-boson mapping. We obtain a bosonic weighted \(\mathbb{Z}_{2}\) TN with \(\sigma_{z}\) matrices (transparent yellow squares) acting along a defect line on the dual lattice (yellow) and projections onto the state vector \(|1\rangle\) at the sites \(i\) and \(j\), respectively. Bottom. The TN of the IM partition function obtained via a Hadamard gauge transformation. The \(\sigma_{z}\)-matrices are transformed to transfer matrices \(T_{\mathrm{DL}}\)
of inverted coupling strengths \(J\rightarrow-J\) (filled yellow squares) along the defect line and the spins at site \(i\) and \(j\) (blue dots) contribute a sign factor to the partition function leading to the expression in Eq. (24). The dashed gray line represents a (potential) boundary between regions of different \(a\)-values discussed in Sec. IV.2. In bosonic language, the above parity string is expressed through \(\sigma_{z}\)-matrices acting on the virtual bonds of the TN (see Fig. 7 center). The \(\theta\)-modes themselves become projections onto spin-up, i.e., \(|1\rangle\)-state at sites \(i\) and \(j\). The resulting bosonic tensor network is shown in the middle panel of Fig. 7. Having defined two alternative representations of the TN subject to point sources at sites \(i\) and \(j\), we now turn to the interpretation of these structures in terms of condensed matter correlation functions. **SC:** Eq. (21) affords an obvious interpretation as the Majorana ground state correlation function of a superconductor. Specifically, we think about the r.h.s. of Eq. (22) as the ground state, or zero energy, \(\epsilon=0\), matrix element \(G_{ij}\) of the resolvent \(G=(\epsilon-H)^{-1}\). (Due to the presence of a spectral gap and the absence of convergence issues in the Majorana functional integral, it is not necessary to shift \(\epsilon\) into the complex plane as for generic Green functions.) Its algebraic decay then reflects the long-range correlation of Majorana quasiparticles at the gap closing transition of the topological superconductor. **IM:** Next, we interpret the bosonic incarnation of the sourced tensor network in terms of an Ising model correlation function. As before, we obtain the corresponding partition sum by performing a Hadamard transform (see Fig. 
3) which in the presence of sources has the following effect: the \(|1\rangle\)-projections become projections onto the \(|-\rangle\)-state, meaning, if the spin at position \(i\) is up, we obtain a minus sign. The same holds for the spin at position \(j\). We identify the parity string as a _defect line_ (DL) and observe that the transfer matrices at all bonds crossed by that defect line are given by \[T_{\text{DL}}=\sqrt{2}HW^{2}\sigma_{z}H=\frac{1}{\sqrt{2}}\begin{pmatrix}1-a&1+a\\ 1+a&1-a\end{pmatrix}. \tag{23}\] Comparing this to the result in Eq. (11) and Eq. (12), we note that these are transfer matrices with inverted coupling strength, i.e., \(T_{\text{DL}}=T_{J\rightarrow-J}\). In summary, the tensor network after the Hadamard transform shown at the bottom of Fig. 7 is a spin-spin correlation function of sites \(i\) and \(j\), but with a Hamiltonian \(H_{\text{DL}}\) that has a modified coupling strength \(J\mapsto-J\) along a defect line, \[Z_{\text{corr},ij}=\sum_{\{s_{i}\}=\pm 1}s_{i}s_{j}e^{-\beta H_{\text{DL}}(\{s_{i}\})}. \tag{24}\] The specific path entering the construction of the defect line depends on the arbitrary choice made in dividing the tensor network into \(A\) and \(\bar{A}\). These building blocks entering the construction of the Ising representation of the fermion correlation function have a long history of research. Specifically, \(\sigma_{z}\) string line defects extending from a point of the lattice along arbitrary paths to infinity are called _disorder operators_ and have been introduced in Ref. [25]. They owe their name to the fact that they assume finite expectation values in the disordered high-temperature phase of the IM, and they are related by (Kramers-Wannier) duality to the native local spin operators.
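The defect-line transfer matrix can be verified in the same way as Eq. (11): computing \(T_{\text{DL}}=\sqrt{2}HW^{2}\sigma_{z}H\) explicitly and comparing it with the transfer matrix at inverted coupling \(J\to-J\), again up to the overall constant absorbed into the partition sum:

```python
import numpy as np

a = 0.4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard matrix
W2 = np.diag([1.0, a])                           # W^2 = diag(1, a)
sz = np.diag([1.0, -1.0])

T_dl = np.sqrt(2) * H @ W2 @ sz @ H              # defect-line bond, Eq. (23)

Jb = np.arctanh(a)                               # J*beta from Eq. (12)
T_flip = np.array([[np.exp(-Jb), np.exp(Jb)],    # transfer matrix with J -> -J
                   [np.exp(Jb), np.exp(-Jb)]])

ratio = T_flip / T_dl
assert np.allclose(ratio, ratio[0, 0])           # T_DL proportional to T_{J -> -J}
```

The inserted \(\sigma_{z}\) thus literally flips the sign of the exchange coupling on every bond crossed by the defect line.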
Composite correlation functions involving pairs of disorder operators (now connected by a finite defect line) and spin operators were considered in the same reference, where the importance of the precise relative positioning of spin and disorder operator (addressed above in the language of the Majorana representation) was discussed. In a lattice construction conceptually close to the present one but formulated directly within the framework of the IM, Ref. [6] investigated the braiding properties of the composite operator to demonstrate that it defines an effective Majorana fermion. Within our present construction, the bridge between free Majorana fermions and composite IM operators is established in the explicit and arguably maximally concise mapping of the fermionic to the bosonic TN. **TC:** We finally turn to an interpretation of the correlation function in terms of toric code ground state expectation values. To be more precise, we consider the bosonic TN in the middle panel of Fig. 7 and try to identify a toric code ground state \(|\Psi\rangle\) such that \(\langle\Psi|\dots|\Psi\rangle\) corresponds to the given TN locally and the ellipses stand for appropriate operators to be identified as well. For general \(a\)-values, this procedure leads to complicated and not very revealing expressions. However, at the toric code fixed point, \(a=1\), i.e., in the absence of string tension, the construction becomes straightforward. Focusing on this case, we note that for \(a=1\) the weight matrices are identity matrices, meaning that we can omit them in the pictorial representation of the TN in the middle panel of Fig. 7. We next note that the TN modified for the presence of source terms has a number of specific features which help to identify the state vector \(\ket{\Psi}\) and the operators featuring in the expectation value. First, it is missing a bond to the left (right) of site \(i\) (\(j\)). This missing bond suggests choosing \(\ket{\Psi}\) as the ground state vector of the toric code with holes at sites \(i\) and \(j\). In the context of the toric code, the minimal surface bounding these holes is known as a _smooth boundary_. The ground state of a toric code containing smooth boundaries is a superposition of all closed loops of spin-up states [57], where the loops are allowed to include boundary links (cf. Fig. 8). The remaining features of the TN we need to reproduce are the string of sign flips along the defect line, and the \(\ket{1}\)-projections at sites \(i\) and \(j\). The constructive modeling of these structures is a nice illustration of tensor network constructions and is detailed in App. C. We here simply state the result and note that the state we are looking for is \(\ket{\Psi}:=S_{x}\ket{\Psi_{\square}}\), where \(\ket{\Psi_{\square}}\) is the ground state of the TC with smooth boundaries mentioned above and \(S_{x}=\sigma_{x}\otimes\ldots\otimes\sigma_{x}\) is a string of \(\sigma_{x}\)-matrices (bit flips) between sites \(i\) and \(j\). Applied to the ground state, this operator generates an excited state with two \(m\)-anyons located at the endpoints of the string (cf. Fig. 9) [10].

Figure 8: Toric code with smooth boundaries. The Hamiltonian in the bulk (shaded in grey) is given by the conventional square lattice Hamiltonian. Along the boundary all vertex operators are modified such that they are given by a product of \(\sigma_{z}\)-matrices acting on the edges inside the grey shaded region only. The plaquette operators remain the same.

Figure 9: Toric code with smooth boundaries around two holes that arise from the correlation function ‘incisions’ at sites \(i\) and \(j\) (cf. Fig. 7 center). The string of \(\sigma_{x}\)-matrices, \(S_{x}\) (blue), creates parity violations (\(m\)-excitations) at its end-points. The string of \(\sigma_{z}\)-matrices, \(S_{z}\) (yellow), connects the incision sites and creates \(e\)-excitations at its endpoints.
The overlap \(\langle S_{x}\Psi_{\square}|S_{x}\Psi_{\square}\rangle\) reproduces the \(\ket{1}\)-projection of the TN in the middle panel of Fig. 7. The sought-after operator replacing the ellipses in \(\bra{\Psi}\ldots\ket{\Psi}\) is \(S_{z}=\sigma_{z}\otimes\ldots\otimes\sigma_{z}\), a string of \(\sigma_{z}\)-matrices (phase flips) along a path connecting sites \(i\) and \(j\) on the dual lattice. This operator is known to generate an excited state with \(e\)-anyons at its endpoints (cf. Fig. 9) [10]. Here, the \(e\)-anyons lie inside the two holes with smooth boundary and instead give rise to another (orthogonal) ground state of the Hamiltonian with smooth boundary. The expectation value of this operator for a TC ground (or excited) state reproduces the string of sign flips in the TN. In summary, the expectation value reducing to the TN in the middle panel of Fig. 7 and satisfying the above locality condition is given by \(\langle\Psi|S_{z}|\Psi\rangle=\langle S_{x}\Psi_{\square}|S_{z}|S_{x}\Psi_{\square}\rangle\). Since the support of \(S_{x}\) and \(S_{z}\) can be chosen to be non-overlapping, these two operators commute. With \(S_{x}^{2}=\mathbb{1}\), we find \(\langle\Psi|S_{z}|\Psi\rangle=\langle\Psi_{\square}|S_{z}|\Psi_{\square}\rangle\). Let us discuss this result. We have seen that the presence of the \(S_{x}\) strings is irrelevant in the expectation values. The job of these operators was to implement the \(\ket{1}\)-projections, which in turn were dual to the spin operators in the Ising context. Since \(a=1\) translates to \(J\beta=\infty\) (cf. Eq. (12)), this dual IM is deeply within the ferromagnetic phase, where spins are uniformly aligned and hence drop out of correlation functions. We next note that the operator \(S_{z}\) applied to the ground state generates a state with two \(e\)-anyons at its endpoints. This state is orthogonal to the state \(\ket{\Psi_{\square}}\) and the overlap \(\langle\Psi_{\square}|S_{z}|\Psi_{\square}\rangle\) is trivially zero.
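The commutation argument is elementary Pauli algebra: strings of \(\sigma_{x}\)- and \(\sigma_{z}\)-matrices pick up one minus sign per overlapping site, so they commute whenever their supports are disjoint (or overlap on an even number of sites). A three-qubit toy check (our own illustration):

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def string(op, sites, n=3):
    """Tensor product with `op` on the listed sites, identity elsewhere."""
    return reduce(np.kron, [op if k in sites else I2 for k in range(n)])

Sx = string(sx, {0, 1})        # sigma_x string on sites 0, 1
Sz = string(sz, {2})           # sigma_z string on site 2: disjoint support

assert np.allclose(Sx @ Sz, Sz @ Sx)        # disjoint supports commute
assert np.allclose(Sx @ Sx, np.eye(8))      # S_x^2 = identity

Sz_overlap = string(sz, {1, 2})             # overlaps the S_x string on one site
assert np.allclose(Sx @ Sz_overlap, -Sz_overlap @ Sx)   # odd overlap: anticommute
```

The odd-overlap anticommutation is precisely what makes crossing \(e\)- and \(m\)-strings detect each other; for the non-overlapping choice made in the text, \(S_{x}\) simply cancels out of the expectation value.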
This, again, is expected by duality -- the \(S_{z}\) operator corresponds to the disorder correlation function in the IM, which vanishes in the ferromagnetic phase. We thus conclude that all the effort so far has led to the dual TC representation of a trivially vanishing correlation function. This vanishing is due to the fact that we are considering correlations on top of a bulk background which, for \(a=1\), is fully gapped. However, the situation becomes more interesting when we allow for the presence of a system boundary spatially aligned to our probe operators.

### Boundary phenomena

Turning back to the TN, assume a separation of our system into a 'topological' region \(R\), defined by a value \(a>a_{-}\) of the coupling constant, and a 'non-topological' complement with \(a<a_{-}\). In SC language, this will define an interface between a topological and a non-topological superconductor, which we know supports gapless Majorana boundary modes. On this basis, we anticipate that the TC representation, too, will support long range correlations, which we can probe by putting two observation points \(i\) and \(j\) next to it. This is the setup considered in the following. Consider the TN in the middle panel of Fig. 7 where now the weight matrices above the dashed line have \(a<a_{-}\). For simplicity, we set \(a=0\), which reduces these matrices to projections onto the state vector \(\ket{0}\). Below the dashed line, where \(a>a_{-}\), we choose \(a=1\), implying that the weights become unit matrices. In effect, this network has all bonds above the dashed line, including those that are crossed by it, removed. This defines a maximally simple interface between a topological region and 'vacuum'. **TN:** In TN language, the above construction introduces another smooth boundary along the interface, in addition to that surrounding the correlation function observation points.
As before, we aim to identify the correlation function generalized for the presence of the interface with a suitable ground state expectation value. Reconsidering the construction in the previous section, we conclude that it remains unchanged, only that the ground state in question now is that of the system with the generalized boundary, \(\ket{\Psi^{\prime}_{\square}}\). In the expectation value \(\langle S_{x}\Psi^{\prime}_{\square}|S_{z}|S_{x}\Psi^{\prime}_{\square}\rangle\), the \(S_{x}\) string is irrelevant as before, leading to \(\langle\Psi^{\prime}_{\square}|S_{z}|\Psi^{\prime}_{\square}\rangle\). So far, we have not specified the positioning of the sites \(i\) and \(j\) relative to the boundary. The situation gets interesting when they come close to it (as depicted in the middle panel of Fig. 7). Once they touch, the holes surrounding \(i\) and \(j\) are 'cut open' and partially lie _outside_ the bulk. This has a dramatic effect. While previously we found that \(\langle\Psi_{\square}|S_{z}|\Psi_{\square}\rangle=0\), we now find that \(\langle\Psi^{\prime}_{\square}|S_{z}|\Psi^{\prime}_{\square}\rangle=1\). This follows from the fact that the \(e\)-excitations at the ends of the \(S_{z}\) string are no longer trapped in the bulk; instead, they lie outside of the bulk region, and by using a sequence of straightforward TN manipulations (cf. identity (4) in Fig. 17), we can remove them entirely from the system. The intuition behind this construction is illustrated in Fig. 10.

## V Summary and discussion

In this work, we have considered three reference systems which are individually of outstanding importance to condensed matter research: the toric code as an exactly solvable model of long range entangled matter, the class D superconductor as an example of a topological insulator, and the two-dimensional Ising model as a maximally simple proxy for systems with a discrete symmetry breaking phase transition. These three system classes are dual to each other.
More precisely, the duality connects the ground states of the TC and the SC with the partition sum of the IM. Being exact, it extends to all phenomena displayed by the three partner systems, including their topological or thermal phase transitions, the buildup of algebraic correlations at criticality, and the presence of topological boundary modes at domain walls. These equivalences are remarkable in that they connect phenomena conventionally addressed in different hemispheres of physics -- such as the phase transition between a spin polarized phase and a topological spin liquid vs. the band closing transition between a trivial and a topological superconductor. The dualities discussed in this work are largely known in principle, and have been derived in previous work by different methods. As they include the duality between fermionic and bosonic systems, a standard approach is to take a detour via a transient mapping to one dimensional quantum systems, for which Bose-Fermi duality is established by Jordan-Wigner transformation. An alternative approach is to look at them through the coarse graining lens of CFT and establish equalities between differently realized operators in, say, the Ising and the free Majorana CFT at criticality. These approaches illustrate the principle, but arguably lack the microscopic resolution required when it comes to the precise comparison of correlation functions. To the best of our knowledge, this point was first made in Ref. [6], an observation being that, e.g., the free Majorana correlation function in the SC becomes that between a spin and a disorder composite operator in the IM, where the exact positioning of the two compound operators on the lattice becomes crucial. That reference solved the problem by staying on the two-dimensional lattice and employing (string-) operator algebra to demonstrate that the spin-disorder operator satisfies the commutation relations of a Majorana fermion.
In this work, we have proposed an alternative and more comprehensive approach to the duality, the key idea being to use a _tensor network_ as an intermediate. While at first sight a formulation introducing a fourth player into a situation that looks complicated already may not look appealing, engaging a translator TN has various advantages. First, the TN comes in two incarnations, a bosonic and a fermionic one, the passage between the two being explicit, with no dimensional detours required. Second, both realizations of the TN are elementary. Being bond dimension two networks, the compound tensors involved in the construction of the net assume the form of \(2\times 2\) matrices, binary Kronecker \(\delta\)'s and \(\mathbb{Z}_{2}\)-parity projection tensors. After passing through a mild learning curve, one can use powerful graphical relations of tensor algebra as a resource to obtain results which arguably assume a substantially more complicated and tedious form in different formulations. Another attractive feature is that the fermionic TN assumes the form of a Gaussian Majorana integral. As an alternative to using tensor relations, one may proceed via techniques otherwise employed in the analysis of free fermion systems, an approach naturally relevant to the understanding of the superconductor. In this way, we not only bridge different frameworks within a single formalism, but also the mindsets of different scientific communities. While the present work has focused on known manifestations of the duality in the focus of attention, there are several obvious extensions into less charted territory. The first is a generalization to non-translationally invariant systems which, depending on the context, means bit errors, random magnetic exchange, or static impurities.
The tensor network construction has no issues with the presence of spatially fluctuating bonds, and a natural approach will be to perform ensemble averaging directly on the level of the TN to generalize the latter to an effective continuum field theory [29]. Other generalizations include the addition of nonlinear contributions to the TN, asking if they, too, afford a condensed matter interpretation. One may also study geometric deformations of the TN, for instance the hyperbolic geometries entering the construction of holographic networks. Here again, it is natural to ask if and how the holographic bulk boundary correspondence manifests itself in the dual system classes. Figure 10: Deforming and removing a \(S_{z}\)-string at a smooth boundary by repeated application of identity (4) from Fig. 17. ## VI Acknowledgements We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 390534769 (A.A.) and within the CRC network TR 183 (project grant 2777101999) as part of projects B04 (A.A and J.E.). It has also received funding from Germany's Excellence Strategy Cluster of Excellence MATH+ (J.E.) and the BMBF (RealistiQ). C.W. acknowledges support from the European Research Council under the European Union Horizon 2020 Research and Innovation Programme via Grant Agreement No. 804213-TMCS.
2301.00020
Subharmonic fidelity revival in a driven PXP model
The PXP model hosts a special set of nonergodic states, referred to as quantum many-body scars. One of the consequences of quantum scarring is the periodic revival of the wave function fidelity. It has been reported that quantum fidelity revival occurs in the PXP model for certain product states, and that periodic driving of the chemical potential can enhance the magnitude of the quantum revival, and can even change the frequencies of revival, showing a subharmonic response. Although the effect of periodic driving in the PXP model has been studied in certain perturbative regimes, the general mechanism of such enhanced revival and frequency change has barely been studied. In this work, we investigate how periodic driving in the PXP model can systematically control the fidelity revival. In particular, focusing on the product state called the Neel state, we analyze the conditions for driving to enhance the magnitude of revival or change the frequencies of revival. To clarify the reason for such control, we consider the similarities between the PXP model and the free spin-1/2 model in a graph theoretical analysis, and show that the quantum fidelity features of the PXP model are well explained by the free spin-1/2 model. In addition, in a certain limit of the driving parameters, an analytic approach explaining the main features of the fidelity revival is also performed. Our results give insight into the scarring nature of the periodically driven PXP model and pave the way to understanding its (sub-)harmonic responses and their control.
HaRu K. Park, SungBin Lee
2022-12-30T19:00:01Z
http://arxiv.org/abs/2301.00020v1
# Subharmonic Fidelity Revival in a Driven PXP model ###### Abstract The PXP model hosts a special set of nonergodic states, referred to as quantum many-body scars. One of the consequences of quantum scarring is the periodic revival of the wave function fidelity. It has been reported that quantum fidelity revival occurs in the PXP model for certain product states, and that periodic driving of the chemical potential can enhance the magnitude of the quantum revival, and can even change the frequencies of revival, showing a subharmonic response. Although the effect of periodic driving in the PXP model has been studied in certain perturbative regimes, the general mechanism of such enhanced revival and frequency change has barely been studied. In this work, we investigate how periodic driving in the PXP model can systematically control the fidelity revival. In particular, focusing on the product state called the Neel state, we analyze the conditions for driving to enhance the magnitude of revival or change the frequencies of revival. To clarify the reason for such control, we consider the similarities between the PXP model and the free spin-\(1/2\) model in a graph theoretical analysis, and show that the quantum fidelity features of the PXP model are well explained by the free spin-\(1/2\) model. In addition, in a certain limit of the driving parameters, an analytic approach explaining the main features of the fidelity revival is also performed. Our results give insight into the scarring nature of the periodically driven PXP model and pave the way to understanding its (sub-)harmonic responses and their control. ## I Introduction The Eigenstate Thermalization Hypothesis (ETH) [1; 2; 3; 4] is a key concept explaining the thermalization of quantum many-body systems. Recently, beyond quantum thermalization, systems which strongly violate the ETH have been actively studied, such as integrable systems and many-body localization.
Here, the "strong violation" of the ETH means that every eigenstate breaks ergodicity and never gets thermalized. There are also systems which "weakly violate" the ETH, meaning that only a small portion of the eigenstates violates ergodicity while all the other states get thermalized. In particular, quantum many-body scarring (QMBS) systems are examples which weakly violate the ETH, containing a small number of highly excited non-thermal eigenstates, called scar eigenstates [5; 6]. Such scar eigenstates show exotic physical behavior compared to the thermal Gibbs state. For example, while the thermal Gibbs state predicts an entanglement entropy proportional to the volume of the subsystem in the middle of the spectrum, the entanglement entropy of a scar eigenstate scales proportionally to the area of the subsystem. Because the QMBS often appear in a tower of scar states, a set of states with equidistant energy spacing [7], if the initial product state is a superposition of the scar states, then the system exhibits a perfect revival. Conversely, it has also been shown that the persistent revival of a product state implies QMBS [8]. Hence, it is important to observe the fidelity revival as evidence for QMBS in experiments. In recent experiments on a Rydberg atom simulator, it has been observed that certain product states show persistent, though imperfect, revival under the van der Waals interaction. This indicates the presence of a QMBS in the system [9; 10; 11]. Taking the extreme limit of strong van der Waals interaction in the Rydberg blockade of the Rydberg atom chain gives rise to the so-called PXP model, which has a Hilbert space projected onto the states with no neighboring excited states. The QMBS structure of the PXP model has been actively studied theoretically, including the report that it also shows an imperfect revival [12].
There exist many attempts to reach a high fidelity revival, for instance, by enhancing its weakly broken \(SU(2)\) symmetry [13; 14]. There is another way to enhance the quantum revival of states in the PXP model: periodic driving. In most cases, periodic driving induces thermalization of the scar states, and thus destroys the fidelity revival. However, recent experimental and theoretical studies show [10; 15] that periodic driving with certain amplitudes surprisingly enhances the fidelity revival. Furthermore, it has also been observed that a subharmonic response of the fidelity revival exists, with doubled period compared to the driving mode. This kind of subharmonic response breaks the discrete time translational symmetry of the driving mode, and has therefore attracted attention as a temporal version of crystalline order, called a discrete time crystal (DTC), which is also a recently studied subject [16; 17]. Although earlier research had demonstrated this subharmonic fidelity revival in the limit of large driving amplitude and high frequency [18], the general mechanism of such subharmonic revival has not been explored well. In this paper, we study the periodically driven PXP model, focusing on the subharmonic fidelity revivals. In Section II, we introduce the PXP model with square pulse driving modes. By calculating the average fidelity and the Fourier components of the fidelity signal, we study the conditions under which the fidelity is enhanced and a subharmonic response occurs. Then, based on the similarity of the Hamiltonian adjacency graphs of the PXP model and the free spin model [19], we show that they can be explained by the free spin-\(1/2\) model with the same driving, which is exactly solvable. In Section III, we introduce an analytic approach for the driven PXP model. Within a perturbative analysis, we derive the driving conditions for subharmonic responses in the driven PXP model and discuss their applications.
In Section IV, we summarize our work and suggest interesting future directions. ## II PXP model with square pulse drive In this section, we first introduce the static PXP model and then represent how the fidelity revival is controlled under periodic driving. In particular, based on the graph theoretical similarity between the PXP model and the free spin-\(1/2\) model, we argue that various phenomena in the driven PXP model, such as the revival enhancement and (sub-)harmonic response, can be explained by the free spin-\(1/2\) model. The PXP model, describing Rydberg atoms with strong interaction, is given by \[H_{\text{PXP}}=\Omega\sum_{j}P_{j-1}X_{j}P_{j+1}. \tag{1}\] Here, the spin at each site consists of two states, \(|0\rangle\) and \(|1\rangle\), which represent a ground state and an excited state, respectively. \(X_{j}=|0_{j}\rangle\langle 1_{j}|+|1_{j}\rangle\langle 0_{j}|\) is the Pauli \(x\) matrix at site \(j\), and \(P_{j}=|0_{j}\rangle\langle 0_{j}|\) is the projection operator onto the ground state at site \(j\). This Hamiltonian describes a system which prohibits a spin flip unless the neighboring sites are in the ground state, i.e. only the transition \(|\cdots 010\cdots\rangle\leftrightarrow|\cdots 000\cdots\rangle\) is allowed. Since this transition never creates excited states on two neighboring sites, the states in which consecutive neighbors are excited can be excluded. Although the PXP model is a non-integrable chaotic system [1], there are certain product states which show non-ergodicity and a decent revival under time evolution, such as \(|\mathbb{Z}_{2}\rangle\equiv|0101\cdots 01\rangle\), called a Neel state. The non-ergodic property of such product states is unique, in the sense that their number increases only linearly with the system size, whereas the number of ergodic product states increases exponentially with the system size.
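The constrained Hilbert space just described is easy to enumerate directly. As an illustration (a minimal Python sketch, not from the paper), the number of allowed configurations of an open chain obeys the Fibonacci recursion, growing exponentially but much more slowly than the full \(2^{L}\) space:

```python
from itertools import product

def pxp_states(L):
    """All spin configurations of an open L-site chain with no two
    adjacent excited sites (the Rydberg-blockade constraint)."""
    return [s for s in product((0, 1), repeat=L)
            if all(not (s[j] and s[j + 1]) for j in range(L - 1))]

# The dimension follows the Fibonacci recursion d(L) = d(L-1) + d(L-2)
dims = [len(pxp_states(L)) for L in range(1, 9)]
print(dims)  # → [2, 3, 5, 8, 13, 21, 34, 55]
```

For \(L=8\) this gives 55 allowed states out of \(2^{8}=256\), which is the dimension of the constrained space used in the graph of Fig. 1(a).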
It has been understood that their fidelity revival originates from quantum scarring, i.e., a product state with short-time revival is a linear combination of scar eigenstates with equidistant energy spacing [7]. The \(|\mathbb{Z}_{2}\rangle\) state is mainly composed of the quantum scar states with almost equal energy spacing. However, because their energy spacing is not perfectly even, it has been pointed out that the fidelity revival of the \(|\mathbb{Z}_{2}\rangle\) state is also imperfect [12]. There have been many suggestions to adjust the system to enhance these imperfect revivals. As one promising way, the addition of a cosine modulation has been studied both theoretically and experimentally. This modulation acts as a controllable chemical potential in the PXP model, and can enhance the fidelity revival of the \(|\mathbb{Z}_{2}\rangle\) state or even induce subharmonic responses under certain driving conditions [10; 15]. However, systematic ways to find such driving have not been studied in detail, which is the focus of this study. The periodically driven PXP model is given as follows. \[H(t)=H_{\text{PXP}}+\Delta_{\text{sq}}(t)\sum_{j}n_{j}, \tag{2}\] where \(H_{\text{PXP}}\) is defined in Equation 1 and the second term represents the periodic driving, where \(n_{j}=|1_{j}\rangle\langle 1_{j}|\) counts the number of excited states on each site. For the periodic driving, we adopt the square pulse driving protocol, as also introduced in earlier studies for analysis. It shares a very similar fidelity profile with the cosine driving case and thus explains the experiments well [15].
The square pulse driving protocol \(\Delta_{\text{sq}}(t)\) within the period \(T\) is defined as, \[\Delta_{\text{sq}}(t)=\begin{cases}\Delta_{0}+\Delta_{m},&0\leq t\leq T/4\\ \Delta_{0}-\Delta_{m},&T/4<t\leq 3T/4\\ \Delta_{0}+\Delta_{m},&3T/4<t<T.\end{cases} \tag{3}\] This corresponds to periodic driving with frequency \(\omega_{0}=2\pi/T\), average chemical potential \(\Delta_{0}\), and driving amplitude \(\Delta_{m}\). Now, we introduce the wave function fidelity \(F(t)\equiv|\langle\psi(t)|\psi(0)\rangle|^{2}\) to measure how the revival of the initial product state, \(|\psi(0)\rangle=|\mathbb{Z}_{2}\rangle\), changes as the driving parameters \((\Delta_{0},\Delta_{m})\) are tuned. As a tool to measure the subharmonic response, we also introduce the Fourier component of the wave function fidelity, defined as \[\overline{F}(\omega)=\left|\frac{1}{nT}\int_{0}^{nT}F(t)e^{-i\omega t}dt \right|. \tag{4}\] Later, we will discuss the three main quantities, \(\overline{F}(0),\overline{F}(\omega_{0})\) and \(\overline{F}(\omega_{0}/2)\), as functions of \((\Delta_{0},\Delta_{m})\): \(\overline{F}(0)\) indicates how much the fidelity revival is enhanced, while \(\overline{F}(\omega_{0})\) and \(\overline{F}(\omega_{0}/2)\) are responsible for harmonic and subharmonic responses, respectively. Note that if \(|\psi(t)\rangle\) is an eigenstate of \(H(t)\) with driving parameters \((\Delta_{0},\Delta_{m})\), then \(\prod_{j}Z_{j}|\psi(t)\rangle\) with Pauli \(z\) matrix \(Z_{j}=|1_{j}\rangle\langle 1_{j}|-|0_{j}\rangle\langle 0_{j}|\) is also an eigenstate of \(H(t)\) with the same driving parameters. Hence, without loss of generality, we only plot the region \(\Delta_{m}\geq 0\).
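The quantities defined above are straightforward to evaluate numerically for small chains. The sketch below (Python with numpy/scipy; the open chain and the period \(T=4.788\) quoted later in the text are illustrative assumptions, since that value was optimized for \(L=12\)) builds the PXP Hamiltonian in the constrained basis, the one-period square-pulse evolution of Eq. (3), and the stroboscopic fidelity of the Neel state:

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

def pxp_hamiltonian(L, Omega=1.0):
    """PXP Hamiltonian of Eq. (1) on an open chain, in the constrained basis."""
    basis = [s for s in product((0, 1), repeat=L)
             if all(not (s[j] and s[j + 1]) for j in range(L - 1))]
    idx = {s: i for i, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for i, s in enumerate(basis):
        for j in range(L):
            # P_{j-1} X_j P_{j+1}: flip site j only if both neighbours are 0
            if (j == 0 or s[j - 1] == 0) and (j == L - 1 or s[j + 1] == 0):
                t = list(s)
                t[j] ^= 1
                H[idx[tuple(t)], i] += Omega
    return basis, H

def floquet_operator(L, T, Delta0, Deltam):
    """One-period evolution operator for the square pulse of Eq. (3)."""
    basis, H = pxp_hamiltonian(L)
    N = np.diag([float(sum(s)) for s in basis])          # sum_j n_j
    Hhi = H + (Delta0 + Deltam) * N                      # first and last quarter
    Hlo = H + (Delta0 - Deltam) * N                      # middle half period
    U = expm(-1j * Hhi * T / 4) @ expm(-1j * Hlo * T / 2) @ expm(-1j * Hhi * T / 4)
    return basis, U

L, T = 8, 4.788
basis, U = floquet_operator(L, T, Delta0=0.0, Deltam=0.0)   # undriven case
psi0 = np.zeros(len(basis))
psi0[basis.index(tuple(j % 2 for j in range(L)))] = 1.0     # |Z2> = |0101...01>
psi = psi0.copy()
for n in range(1, 4):
    psi = U @ psi
    print(f"F({n}T) = {abs(np.vdot(psi0, psi))**2:.3f}")    # stroboscopic fidelity
```

Replacing the stroboscopic loop by a fine time grid gives \(F(t)\), from which \(\overline{F}(\omega)\) of Eq. (4) follows by numerical integration.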
Before investigating the fidelity profile of the \(|\mathbb{Z}_{2}\rangle\) state in the PXP model, let us introduce another model which shows very similar features: the free spin-\(1/2\) chain model with the same driving \(\Delta_{\text{sq}}(t)\), \[H_{\text{free}}(t)=\sum_{j}X_{j}+\Delta_{\text{sq}}(t)\sum_{j}n_{j}. \tag{5}\] For the \(|\mathbb{Z}_{2}\rangle\) state, the free spin-\(1/2\) model and the PXP model share common features from a graph theoretical point of view. To understand this, notice that the PXP model is nothing but the free spin-\(1/2\) model with constraints. Hence, the graph of the length-\(L\) PXP model, for instance, is a subgraph of that of the free spin-\(1/2\) model with the same length \(L\). Conversely, consider the length-\(L\) PXP model with even \(L\). If we impose the stronger constraint on the PXP model and only allow the states with \(|0\rangle\) states at odd sites, then each state \(|0x0y\cdots 0z\rangle\) can be mapped to the state \(|xy\cdots z\rangle\) of the length-\(L/2\) free spin-\(1/2\) model, where \(x,y,\cdots,z\) are either \(0\) or \(1\). This shows that the graph of the length-\(L/2\) free spin-\(1/2\) model is a subgraph of that of the length-\(L\) PXP model. Figure 1(a) shows the graph of the PXP Hamiltonian with \(L=8\), and Figure 1(b) shows the graph of the free spin-\(1/2\) Hamiltonian for \(L=4\). The blue square in Figure 1(a) marks the Neel state \(|01010101\rangle\) of the PXP Hamiltonian, which corresponds to the \(|1111\rangle\) state of the free spin-\(1/2\) model, also marked by a blue square in Figure 1(b). The yellow triangles in Figures 1(a) and (b) represent the polarized states \(|00000000\rangle\) and \(|0000\rangle\), respectively. The vertices and edges colored in red show the difference between the two graphs and show that the graph in Figure 1(b) is indeed a subgraph of Figure 1(a). Despite their difference, they share a common feature, particularly for the \(|\mathbb{Z}_{2}\rangle\) state, marked by the blue square.
This common feature is generally applicable to the graphs of the length-\(L\) PXP model and the length-\(L/2\) free spin-\(1/2\) model. To argue this in detail, let us consider the expansion \(\langle\mathbb{Z}_{2}(t)|\mathbb{Z}_{2}\rangle=\langle\mathbb{Z}_{2}|e^{iHt}| \mathbb{Z}_{2}\rangle=\sum_{n}\frac{(it)^{n}}{n!}\langle H^{n}\rangle\) with \(\langle H^{n}\rangle=\langle\mathbb{Z}_{2}|H^{n}|\mathbb{Z}_{2}\rangle\). From the graph theoretical point of view, \(\langle H^{n}\rangle\) counts the number of walks of length \(n\) which start and end at the \(|\mathbb{Z}_{2}\rangle\) vertex. Because the difference between the PXP graph and the free spin graph mostly occurs for the states with high Hamming distance, their difference only affects the long walks, i.e. \(\langle H^{n}\rangle\) with large \(n\). Hence, we may expect similar behavior of \(F(t)\) on short time scales \(t\) in the PXP model and the free spin model. Later, we will show that this is indeed the case by comparing calculations of the fidelity revival in both the PXP model and the free spin model. For completeness, we note that this argument is not applicable to ergodic initial states: for example, the polarized state \(|0_{p}\rangle\equiv|0000\cdots 00\rangle\), marked by the yellow triangle, shows a large difference between the graphs even for the nearest neighbors, indicating different fidelity profiles in the PXP model and the free spin model. In the presence of driving, one may also expect similarities between the PXP model and the free spin-\(1/2\) model from the graph theoretical approach. Figures 1(c) and 1(d) show the values of \(\overline{F}(0)\) for the PXP model and the free spin-\(1/2\) model respectively, as functions of the driving parameters \(\Delta_{0}\) and \(\Delta_{m}\). As discussed earlier, \(\overline{F}(0)\) indicates the enhancement of the fidelity revival. Indeed, Figures 1(c) and 1(d) show very similar features up to scale.
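The walk-counting argument can be checked directly on small chains. A sketch (Python with numpy; periodic boundary conditions are assumed here for convenience, which the text does not specify) builds both Hamiltonian adjacency graphs and compares the closed-walk counts \(\langle H^{n}\rangle\) from the Neel vertex, which agree for short walks:

```python
import numpy as np
from itertools import product

def adjacency(L, constrained):
    """Hamiltonian adjacency graph: the PBC PXP chain if constrained,
    otherwise the free spin-1/2 model (the L-dimensional hypercube)."""
    if constrained:
        states = [s for s in product((0, 1), repeat=L)
                  if all(not (s[j] and s[(j + 1) % L]) for j in range(L))]
    else:
        states = list(product((0, 1), repeat=L))
    idx = {s: i for i, s in enumerate(states)}
    A = np.zeros((len(states),) * 2, dtype=np.int64)
    for i, s in enumerate(states):
        for j in range(L):
            # the P X P constraint blocks flips next to an excitation
            if constrained and not (s[j - 1] == 0 and s[(j + 1) % L] == 0):
                continue
            t = list(s)
            t[j] ^= 1
            A[idx[tuple(t)], i] = 1
    return states, A

L = 8
states_pxp, A_pxp = adjacency(L, constrained=True)
states_free, A_free = adjacency(L // 2, constrained=False)
u = states_pxp.index(tuple(j % 2 for j in range(L)))   # |01010101>
v = states_free.index((1,) * (L // 2))                 # |1111>

# <H^n> counts closed walks of length n from the Neel vertex
for n in range(2, 9, 2):
    print(n, np.linalg.matrix_power(A_pxp, n)[u, u],
             np.linalg.matrix_power(A_free, n)[v, v])
```

Odd-length walks vanish in both graphs (they are bipartite in the Hamming parity), and the even-length counts coincide for small \(n\) before the constraint makes itself felt.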
For the calculation, we choose the periodicity \(T_{0}=4.788\) for the PXP model and \(T_{f}=\pi\) for the free spin model respectively, which are the optimized values for the fidelity revival observed in the static cases. The system size \(L=12\) is chosen with the time range \(\left[0,10T_{0}\right]\) for the PXP model, and \(L=6\) with the time range \(\left[0,10T_{f}\right]\) for the free spin model. In Figures 1(c) and 1(d), we point out several common features as follows. We first note the "butterfly"-shaped peaks on top of each figure, marked by a red circle, and the high average fidelity regions on the lower left and right sides. Next, there is a wide \(V\)-shaped region having relatively small values of \(\overline{F}(0)\) in between, with the following substructures: On the left side in both Figures 1(c) and (d), the butterfly peaks are connected to the lower left region by some "bridges", marked by magenta lines. Figure 1: (Top) The Hamiltonian adjacency graph of (a) the PXP model for \(L=8\) and (b) the free spin-\(1/2\) model for \(L=4\). The blue square and yellow triangle in Figure (a) represent the states \(|01010101\rangle\) and \(|00000000\rangle\) respectively, and the blue square and yellow triangle in Figure (b) represent the states \(|1111\rangle\) and \(|0000\rangle\) respectively. (Bottom) Average fidelity, \(\overline{F}(0)\), with initial state \(|\mathbb{Z}_{2}\rangle\) over the \(10T\) time domain. Figure (c) shows \(\overline{F}(0)\) of the PXP model with the driving period \(T=4.788\) for system size \(L=12\); Figure (d) shows \(\overline{F}(0)\) of the free spin-\(1/2\) model with driving period \(T=\pi\) for system size \(L=6\). The “butterfly” peaks are encircled by red line, “bridge” peaks are encircled by magenta line, “local” (in Figure (c)) or “steep bridge” (in Figure (d)) peaks are encircled by cyan line, and ”separator” peaks are encircled by blue line. In addition, there are "local" peaks between the bridges, marked by cyan lines in Figure 1(c), while instead there are "steep bridge" peaks in Figure 1(d). Later, we will explain that they indicate the same phenomena. On the right side in both Figures 1(c) and (d), there are long and thin "separator" peaks marked by green lines, which separate the butterfly peaks and the lower right region. To determine the origin of these peaks, the frequency profiles of the fidelity are investigated. Figures 2a and 2b plot the Fourier components of the fidelity for the \(|\mathbb{Z}_{2}\rangle\) state in the PXP model, \(\overline{F}(\omega_{0})\) and \(\overline{F}(\omega_{0}/2)\), with \(\omega_{0}=2\pi/T_{0}\). By comparing them with Figure 1(c), one can conclude that the butterfly peaks (marked by red line) and the local peaks (cyan line) represent harmonic revivals, whereas the bridges (magenta line) and separators (green line) represent subharmonic revivals. Notice that the bridges and the separators also appear in Figure 2a. However, this does not imply that they are harmonic responses, since a subharmonic response with nonzero \(\overline{F}(\omega_{0}/2)\) also has finite values of \(\overline{F}(\omega_{0})\). Therefore, Figure 2b is direct evidence that the subharmonic response indeed occurs due to the driving. For comparison, we also investigate the free spin-\(1/2\) model case. Figures 3a and 3b plot the values of \(\overline{F}(\omega_{f})\) and \(\overline{F}(\omega_{f}/2)\) for the free-spin model, showing harmonic and subharmonic responses respectively, with \(\omega_{f}=2\pi/T_{f}=2\). The butterfly peaks (marked by red line) and the steep bridge peaks (cyan line) in Figure 3a again show the harmonic response, while the bridge peaks (magenta line) and the separator peaks on the right side (green line) in Figure 3b show the subharmonic response.
Indeed, these features are consistent with the case of the PXP model explained earlier (see Figure 2). ## III Perturbative and exact calculations on the models Until now, we have shown that the periodic driving of the PXP model can induce subharmonic responses of the \(|\mathbb{Z}_{2}\rangle\) state fidelity and have interpreted them based on the graph theoretical similarities with the free spin-\(1/2\) model. In the following, for a more rigorous argument, alternative analytic approaches are presented to understand the subharmonic responses and to determine the optimal driving conditions. Since our focus lies on the subharmonic response of the driven PXP model, we work in the appropriate perturbative limit, which covers the \(V\)-shaped region in Figure 1(c) where all the subharmonic peaks lie. We note that a perturbative approach in another limit has already been performed in earlier work [18]. Its perturbative limit explains the "butterfly" peaks, marked by the red circle in Figure 1(c). In contrast, our perturbative limit explains every peak in the \(V\)-shaped region, including the "bridge", "local", and "separator" peaks. Before moving on, we redefine some notations for simplicity. Take \(\Delta_{\pm}\equiv(\Delta_{0}\pm\Delta_{m})/2\), \(\overline{X}_{j}\equiv P_{j-1}X_{j}P_{j+1}\), and \(H^{\pm}\equiv\sum_{j}H^{\pm}_{j}\) with \(H^{\pm}_{j}\equiv\Omega\overline{X}_{j}+\Delta_{\pm}Z_{j}\). Notice that \(H^{\pm}=H_{\text{PXP}}+2\Delta_{\pm}\sum_{j}n_{j}-\Delta_{\pm}\sum_{j}I_{j}\), hence the evolution operator in the presence of square pulse driving, \[U=e^{iH^{+}T_{0}/4}e^{iH^{-}T_{0}/2}e^{iH^{+}T_{0}/4}, \tag{6}\] is equivalent to the evolution operator of \(H(t)\) over one period, up to phase. Consider the limit \(\Delta_{+}\gg\Omega\gg\Delta_{-}\), which is the right side of the \(V\)-shaped region with low values of \(\overline{F}(0)\) in Figure 1. We will show that this limit always gives a subharmonic response.
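Before turning to the perturbative argument, the claimed period doubling in the limit \(\Delta_{+}\gg\Omega\gg\Delta_{-}\) can be illustrated numerically. The sketch below (Python with numpy/scipy; the concrete values \(L=8\), \(\Omega=1\), \(\Delta_{+}=12\), \(\Delta_{-}=0\), \(T=4.788\) and the open chain are illustrative choices, not taken from the paper) computes the Floquet operator \(U\) of Eq. (6) and shows that after one period most of the weight sits on the translated Neel state \(|\mathbb{Z}_{2}^{\prime}\rangle\) rather than on \(|\mathbb{Z}_{2}\rangle\), so the fidelity revives at \(2T\) rather than at \(T\):

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

L, Omega, T = 8, 1.0, 4.788
Dplus, Dminus = 12.0, 0.0              # the limit Delta_+ >> Omega >> Delta_-

# constrained basis and PXP Hamiltonian (open chain)
basis = [s for s in product((0, 1), repeat=L)
         if all(not (s[j] and s[j + 1]) for j in range(L - 1))]
idx = {s: i for i, s in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))
for i, s in enumerate(basis):
    for j in range(L):
        if (j == 0 or s[j - 1] == 0) and (j == L - 1 or s[j + 1] == 0):
            t = list(s)
            t[j] ^= 1
            H[idx[tuple(t)], i] += Omega
N = np.diag([float(sum(s)) for s in basis])

# square pulse values: Delta_0 +- Delta_m = 2*Dplus and 2*Dminus
U = (expm(-1j * (H + 2 * Dplus * N) * T / 4)
     @ expm(-1j * (H + 2 * Dminus * N) * T / 2)
     @ expm(-1j * (H + 2 * Dplus * N) * T / 4))

z2 = idx[tuple(j % 2 for j in range(L))]          # |01010101>
z2p = idx[tuple((j + 1) % 2 for j in range(L))]   # translated Neel |10101010>
e = np.zeros(len(basis))
e[z2] = 1.0
F_T = abs((U @ e)[z2]) ** 2                       # fidelity after one period
F_2T = abs((U @ (U @ e))[z2]) ** 2                # fidelity after two periods
swap = abs((U @ e)[z2p]) ** 2                     # weight on |Z2'> after one period
print(f"F(T) = {F_T:.3f}, F(2T) = {F_2T:.3f}, |<Z2'|U|Z2>|^2 = {swap:.3f}")
```

The output exhibits the \(2T\)-periodic behavior anticipated by Eq. (7) below: the one-period fidelity is suppressed while the two-period fidelity is revived.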
In this limit, we can approximate \(H^{+}\simeq\Delta_{+}\sum_{j}Z_{j}\) and \(H^{-}\simeq H_{\text{PXP}}\), taking the leading terms. Because our product state is an eigenstate of \(H^{+}\), if we calculate \(\langle\mathbb{Z}^{\prime}_{2}|U|\mathbb{Z}_{2}\rangle\), where \(|\mathbb{Z}^{\prime}_{2}\rangle\equiv|1010\cdots 10\rangle\) is the translated Neel state, one can easily show \(\langle\mathbb{Z}^{\prime}_{2}|U|\mathbb{Z}_{2}\rangle\simeq\langle\mathbb{Z} ^{\prime}_{2}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\) up to phase. For the \(L=12\) case, this value is quite large, \(\simeq 0.9658\). This results in the \(2T\)-periodic revival with a high lower bound of \(|\langle\mathbb{Z}_{2}|U^{2}|\mathbb{Z}_{2}\rangle|\), satisfying, \[|\langle\mathbb{Z}_{2}|U^{2}|\mathbb{Z}_{2}\rangle|\geq|\langle\mathbb{Z}_{2} |e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}^{\prime}_{2}\rangle|^{2},\quad\Delta_{+ }\gg\Omega\gg\Delta_{-}. \tag{7}\] See Appendix A for a detailed proof. Thus, one can claim that a persistent subharmonic revival indeed occurs for \(\Delta_{+}\gg\Omega\gg\Delta_{-}\). We also show that this revival is robust even with \(\mathcal{O}(\Omega)\) order terms in Appendix A. Figure 3: Frequency profile of the fidelity \(F(t)\), (a) \(\overline{F}(\omega_{0})\) and (b) \(\overline{F}(\omega_{0}/2)\) with initial state \(|\mathbb{Z}_{2}\rangle\), for the free spin model with driving period \(T=\pi\) and system size \(L=12\). The “butterfly” peaks are encircled by red line, “bridge” peaks are encircled by magenta line, ”steep bridge” peaks are encircled by cyan line, and ”separator” peaks are encircled by blue line. Figure 2: Frequency profile of the fidelity \(F(t)\), (a) \(\overline{F}(\omega_{0})\) and (b) \(\overline{F}(\omega_{0}/2)\) with initial state \(|\mathbb{Z}_{2}\rangle\), for the PXP model with driving period \(T=4.788\) and system size \(L=12\).
The ”butterfly” peaks are encircled by red line, ”bridge” peaks are encircled by magenta line, “local” peaks are encircled by cyan line, and ”separator” peaks are encircled by blue line. Now consider the limit \(\Delta_{-}\gg\Omega\gg\Delta_{+}\), which is the left side of the \(V\)-shaped region with low values of \(\overline{F}(0)\) in Figure 1. An analysis similar to the case \(\Delta_{+}\gg\Omega\gg\Delta_{-}\) leads to \(H^{+}\simeq H_{\text{PXP}}\) and \(H^{-}\simeq\Delta_{-}\sum_{j}Z_{j}\). We focus on the driving conditions with the parameters \(\Delta_{-}=\frac{n\pi}{T_{0}}=\frac{n\omega_{0}}{2}\), and our aim is to show that for even \(n\) the fidelity presents a subharmonic response, while for odd \(n\) it presents a harmonic response. First, let \(n=2k\) be even. In this case, \(e^{iH^{-}T_{0}/2}=1\) up to phase, and thus one again achieves \(\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle\simeq\langle\mathbb{Z}_ {2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\). Thus, we get a result very similar to Equation 7, \[|\langle\mathbb{Z}_{2}|U^{2}|\mathbb{Z}_{2}\rangle|\geq|\langle \mathbb{Z}_{2}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}^{\prime}\rangle|^{2},\] \[\Delta_{-}=k\omega_{0}\gg\Omega\gg\Delta_{+}, \tag{8}\] showing that the subharmonic response mainly occurs at \(\Delta_{-}=k\omega_{0}\) for integer \(k\). On the other hand, for odd \(n=2k+1\), \(e^{iH^{-}T_{0}/2}=\prod_{j}Z_{j}\) up to phase. Using the anti-commutation relation between \(\prod_{j}Z_{j}\) and \(H_{\text{PXP}}\), the evolution operator is represented as, \[U \simeq e^{iH_{\text{PXP}}T_{0}/4}\left(\prod_{j}Z_{j}\right)e^{iH_ {\text{PXP}}T_{0}/4}\] \[=\prod_{j}Z_{j}e^{-iH_{\text{PXP}}T_{0}/4}e^{iH_{\text{PXP}}T_{0 }/4}=\prod_{j}Z_{j}, \tag{9}\] and this results in, \[|\langle\mathbb{Z}_{2}|U|\mathbb{Z}_{2}\rangle|\simeq 1,\quad\Delta_{-}= \left(k+\frac{1}{2}\right)\omega_{0}\gg\Omega\gg\Delta_{+}. 
\tag{10}\] Thus, the harmonic response mainly occurs in the \(\Delta_{-}=(k+1/2)\omega_{0}\) region. We again show that this revival is robust up to \(\mathcal{O}(\Omega)\) order, see Appendix A. In summary of this section, our perturbative analysis provides a reasonable explanation of why the several peaks in the \(V\)-shaped region emerge in Figure 1. Specifically, Equation 7 explains the long diagonal "separator" peaks on the right side of the \(V\)-shaped region, Equation 8 explains the "bridge" peaks on the left side, and Equation 10 explains the "local" peaks between the bridge peaks. It is important to note that the derivation of Equations 7, 8 and 10 is generally applicable. ## IV Discussion and Conclusion In this work, we study the wave function fidelity revival in the periodically driven PXP model. First, we show that the driving of the PXP model induces various interesting responses, including subharmonic responses. Based on the graph theoretical similarities between the PXP model and the free spin-\(1/2\) model, we have claimed and numerically confirmed that the driving condition which induces the subharmonic response in the PXP model can be captured by the free spin-\(1/2\) model. Then, through a perturbative analysis, the generic driving conditions for subharmonic responses in the PXP model are derived. Our work will shed light on Rydberg atom simulators, studying subharmonic responses of driven quantum many-body scarring systems. As interesting future work, one may extend our studies to finite van der Waals interaction, and explore the conditions of the subharmonic revival as the interaction changes. Since the strength of the van der Waals interaction is determined by the distance \(r\) between two Rydberg atoms as \(\sim\frac{1}{r^{6}}\) [9], one could control the atom distance to tune their interaction strength and track the revival property of the \(|\mathbb{Z}_{2}\rangle\) state.
One can also consider the effect of further-neighbor van der Waals interactions, and explore how the fidelity revival condition changes, which we will leave as future work. ###### Acknowledgements. We thank Junmo Jeon for valuable discussions. This work is supported by National Research Foundation Grant (No. 2020R1A4A3079707, No. 2021R1A2C1093060). ## Appendix A Bound of the (sub)harmonic revival on PXP model In this section, we show that the harmonic and subharmonic revivals discussed in Section III are stable under small \(\Omega\) values up to first order. We show that Equations 7, 8 and 10 still hold if we include the \(\mathcal{O}(\Omega)\) terms. We first consider the condition \(\Delta_{+}\gg\Omega\gg\Delta_{-}\). In this case, for different sites \(j\neq k\) we have \[[H_{j}^{+},H_{k}^{+}]\sim\mathcal{O}(\Omega^{2}), \tag{11}\] hence we may write \[e^{iH^{+}T_{0}/4}\simeq\prod_{j}e^{iH_{j}^{+}T_{0}/4}=\prod_{j}e^{i(\Omega \overline{X}_{j}+\Delta_{+}Z_{j})T_{0}/4} \tag{12}\] up to \(\mathcal{O}(\Omega)\) order. This can be expanded in cosine and sine functions to \(\mathcal{O}(\Omega)\) order, giving \[e^{iH_{j}^{+}T_{0}/4}\simeq\cos\frac{\Delta_{+}T_{0}}{4}+i\sin\frac{\Delta_{+ }T_{0}}{4}\left(\frac{\Omega}{\Delta_{+}}\overline{X}_{j}+Z_{j}\right). \tag{13}\] Multiplying all the \(e^{iH_{j}^{+}T_{0}/4}\) factors and keeping only terms up to \(\mathcal{O}(\Omega)\) order, we finally get \[e^{iH^{+}T_{0}/4}\] \[\simeq e^{i\Delta_{+}\sum_{j}Z_{j}T_{0}/4}\left(1+i\sin\frac{ \Delta_{+}T_{0}}{4}\sum_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}\frac{\Omega}{\Delta_{+ }}\overline{X}_{j}\right)\] \[=\left(1+i\sin\frac{\Delta_{+}T_{0}}{4}\sum_{j}\frac{\Omega}{ \Delta_{+}}\overline{X}_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}\right)e^{i\Delta_{+} \sum_{j}Z_{j}T_{0}/4}. 
\tag{14}\] To calculate the subharmonic response, we consider the value \(\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle\): if this value is large enough, it guarantees the \(2T\)-periodic revival with \(|\langle\mathbb{Z}_{2}|U^{2}|\mathbb{Z}_{2}\rangle|\geq|\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle|^{2}\), because \[\langle\mathbb{Z}_{2}|U^{2}|\mathbb{Z}_{2}\rangle=|\langle\mathbb{Z}_{2}|U|\mathbb{Z}_{2}^{\prime}\rangle|^{2}+\sum_{i}|\langle\mathbb{Z}_{2}|U|\psi_{i}\rangle|^{2}\] \[\geq|\langle\mathbb{Z}_{2}|U|\mathbb{Z}_{2}^{\prime}\rangle|^{2}, \tag{10}\] where \(\{\psi_{i}\}\) is a basis of the subspace of the Hilbert space orthogonal to \(|\mathbb{Z}_{2}^{\prime}\rangle\). Now, because we are considering the \(\Omega\gg\Delta_{-}\) limit, we ignore \(\Delta_{-}\), giving \(e^{iH^{-}T_{0}/2}\simeq e^{iH_{\text{PXP}}T_{0}/2}\); then we have \[\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle\simeq\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\] \[+i\frac{\Omega}{\Delta_{+}}\sin\frac{\Delta_{+}T_{0}}{4}\left[\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}\sum_{j}\overline{X}_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}|\mathbb{Z}_{2}\rangle\right]\] \[+i\frac{\Omega}{\Delta_{+}}\sin\frac{\Delta_{+}T_{0}}{4}\left[\langle\mathbb{Z}_{2}^{\prime}|\sum_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}\overline{X}_{j}e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\right]. \tag{11}\] For the second term, observe that \[\sum_{j}\overline{X}_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}|\mathbb{Z}_{2}\rangle=e^{-i\Delta_{+}Z_{j}T_{0}/4}\sum_{j}\overline{X}_{j}|\mathbb{Z}_{2}\rangle, \tag{12}\] because there are always excited states between two ground states. 
Hence we get \[\langle\mathbb{Z}_{2}^{\prime}|\sum_{j}e^{-i\Delta_{+}Z_{j}T_{0}/4}\overline{X}_{j}e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\] \[=\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}H_{\text{PXP}}|\mathbb{Z}_{2}\rangle\] \[=\partial_{t}\left.\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}t}|\mathbb{Z}_{2}\rangle\right|_{t=T_{0}/2}\] We numerically check that \(\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}t}|\mathbb{Z}_{2}\rangle\) is maximized at \(t=T_{0}/2\), and hence conclude that this term vanishes. Arguing similarly for the third term, we get \[\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle\simeq\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle, \tag{13}\] showing the persistent subharmonic revival, because \(|\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle|\) can be made large enough: for example, \(|\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle|\simeq 0.9658\) for \(L=12\). Now we consider the region \(\Delta_{-}\gg\Omega\gg\Delta_{+}\), which is the left side of the \(V\)-shaped low \(\overline{F}(0)\) region. In this case, following the same steps as above, we obtain \[e^{iH_{j}^{-}T_{0}/2}\simeq\cos\frac{\Delta_{-}T_{0}}{2}+i\sin\frac{\Delta_{-}T_{0}}{2}\left(\frac{\Omega}{\Delta_{-}}\overline{X}_{j}+Z_{j}\right). \tag{14}\] Here, we specifically focus on the area where \(\Delta_{-}=\frac{n\pi+2\eta}{T_{0}}\) for integer \(n\), with small \(\eta\ll L^{-1}\). We start with even \(n=2k\), giving \[e^{iH_{j}^{-}T_{0}/2}\simeq\pm 1\pm i\eta\left(\frac{\Omega}{\Delta_{-}}\overline{X}_{j}+Z_{j}\right), \tag{15}\] and \[e^{iH^{-}T_{0}/2}\simeq 1\pm i\eta\sum_{j}\left(\frac{\Omega}{\Delta_{-}}\overline{X}_{j}+Z_{j}\right). 
\tag{16}\] Now we calculate \[\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle\] \[=\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle\] \[\pm i\eta\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/4}\left[\sum_{j}\left(\frac{\Omega}{\Delta_{-}}\overline{X}_{j}+Z_{j}\right)\right]e^{iH_{\text{PXP}}T_{0}/4}|\mathbb{Z}_{2}\rangle. \tag{17}\] Because the second term is again bounded by \(\eta L\left(1+\Omega/\Delta_{-}\right)\), we get \[|\langle\mathbb{Z}_{2}^{\prime}|U|\mathbb{Z}_{2}\rangle|\geq|\langle\mathbb{Z}_{2}^{\prime}|e^{iH_{\text{PXP}}T_{0}/2}|\mathbb{Z}_{2}\rangle|-\eta L\left(1+\frac{\Omega}{\Delta_{-}}\right), \tag{18}\] and since the first term is large enough, it shows that the subharmonic response mainly occurs near \(\Delta_{-}=\frac{2k\pi}{T_{0}}=k\omega_{0}\). Finally, we take odd \(n=2k+1\). In this case, we get \[e^{iH_{j}^{-}T_{0}/2}\simeq\pm\eta\pm i\left(\frac{\Omega}{\Delta_{-}}\overline{X}_{j}+Z_{j}\right) \tag{19}\] and thus \[e^{iH^{-}T_{0}/2}\simeq\pm\prod_{j}Z_{j}\left[1+\sum_{j}Z_{j}\left(\eta\pm i\frac{\Omega}{\Delta_{-}}\overline{X}_{j}\right)\right]. \tag{20}\] Now, by using the fact that \(H_{\text{PXP}}\) and \(\prod_{j}Z_{j}\) anticommute, we get \[U\simeq\pm e^{-iH_{\text{PXP}}T_{0}/4}\left[1+\sum_{j}Z_{j}\left(\eta\pm i\frac{\Omega}{\Delta_{-}}\overline{X}_{j}\right)\right]e^{iH_{\text{PXP}}T_{0}/4}. \tag{21}\] Calculating \(\langle\mathbb{Z}_{2}|U|\mathbb{Z}_{2}\rangle\), the first term gives \(\pm 1\). For the second term, notice that the \(\eta\)-dependent part involves the operator \(\sum_{j}Z_{j}\), whose matrix elements are at most \(L\) in magnitude, so this contribution is bounded by \(\eta L\). 
For the \(\frac{\Omega}{\Delta_{-}}\) term, which involves \(\sum_{j}Z_{j}\overline{X}_{j}=i\sum_{j}\overline{Y}_{j}\), where \(\overline{Y}_{j}=P_{j-1}Y_{j}P_{j+1}\) with \(Y_{j}\) the Pauli \(y\) matrix \(Y_{j}=i|0_{j}\rangle\langle 1_{j}|-i|1_{j}\rangle\langle 0_{j}|\), we numerically check that its contribution is bounded by \(\delta\simeq 0.2\). Therefore, \[|\langle\mathbb{Z}_{2}|U|\mathbb{Z}_{2}\rangle|\geq 1-\eta L-\frac{\Omega}{\Delta_{-}}\delta, \tag{22}\] and because \(\eta L\) and \(\delta\) are small enough, this represents the persistent harmonic revival.
2309.14677
XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection
With the advancement of deep learning (DL) in various fields, there are many attempts to reveal software vulnerabilities by a data-driven approach. Nonetheless, such existing works lack the effective representation that can retain the non-sequential semantic characteristics and contextual relationship of source code attributes. Hence, in this work, we propose XGV-BERT, a framework that combines the pre-trained CodeBERT model and a Graph Convolutional Network (GCN) to detect software vulnerabilities. By jointly training the CodeBERT and GCN modules within XGV-BERT, the proposed model leverages the advantages of large-scale pre-training, harnessing vast raw data, and transfer learning by learning representations for training data through graph convolution. The research results demonstrate that the XGV-BERT method significantly improves vulnerability detection accuracy compared to two existing methods, VulDeePecker and SySeVR. For the VulDeePecker dataset, XGV-BERT achieves an impressive F1-score of 97.5%, significantly outperforming VulDeePecker, which achieved an F1-score of 78.3%. Again, with the SySeVR dataset, XGV-BERT achieves an F1-score of 95.5%, surpassing the results of SySeVR with an F1-score of 83.5%.
Vu Le Anh Quan, Chau Thuan Phat, Kiet Van Nguyen, Phan The Duy, Van-Hau Pham
2023-09-26T05:05:34Z
http://arxiv.org/abs/2309.14677v1
XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection ###### Abstract With the advancement of deep learning (DL) in various fields, there are many attempts to reveal software vulnerabilities by a data-driven approach. Nonetheless, such existing works lack the effective representation that can retain the non-sequential semantic characteristics and contextual relationship of source code attributes. Hence, in this work, we propose XGV-BERT, a framework that combines the pre-trained CodeBERT model and a Graph Convolutional Network (GCN) to detect software vulnerabilities. By jointly training the CodeBERT and GCN modules within XGV-BERT, the proposed model leverages the advantages of large-scale pre-training, harnessing vast raw data, and transfer learning by learning representations for training data through graph convolution. The research results demonstrate that the XGV-BERT method significantly improves vulnerability detection accuracy compared to two existing methods, VulDeePecker and SySeVR. For the VulDeePecker dataset, XGV-BERT achieves an impressive F1-score of 97.5%, significantly outperforming VulDeePecker, which achieved an F1-score of 78.3%. Again, with the SySeVR dataset, XGV-BERT achieves an F1-score of 95.5%, surpassing the results of SySeVR with an F1-score of 83.5%. keywords: Deep Learning, Software Security, Vulnerability Detection, Graph Neural Networks, NLP + Footnote †: journal: Elsevier ## 1 Introduction Recently, as technology continues its rapid evolution, the software development landscape has witnessed an exponential surge. While these software innovations offer unprecedented convenience, they also bring forth a looming specter: the problem of software vulnerabilities. These vulnerabilities are formidable adversaries to the seamless functioning of software systems [1]. 
The global economic toll, both direct and indirect, inflicted by these vulnerabilities has surpassed billions of dollars, making it an issue of paramount concern. It is an undeniable fact that the vast majority of software applications harbor various types of vulnerabilities. Notable among these are buffer overflow vulnerabilities like CVE-2019-8917, library/API function call vulnerabilities like CVE-2016-10033, array usage vulnerabilities like CVE-2020-12345, and many more extensively cataloged within the Common Vulnerabilities and Exposures (CVE) database [2]. The longer a vulnerability persists unaddressed, the more inviting a target it becomes for malicious actors. This, in turn, exposes companies and organizations to the ominous specter of substantial damages [3]. Consequently, the quest for the automated detection of software vulnerabilities within stringent timeframes stands at the vanguard of advanced research endeavors [4], [5], [6]. On the other hand, Deep Learning (DL) technology can provide the capability to achieve more accurate automatic vulnerability detection [7], [8], [9], [10]. With the continuous innovation and development of DL technology, significant advancements have been made in Natural Language Processing (NLP). Models such as GPT [11] and BERT [12] have propelled NLP technology forward. Source code is essentially text in a specific format, making it logically feasible to utilize NLP techniques for code analysis. In fact, models like CodeBERT [13] have been proposed by several researchers, and some code-level tasks have been addressed, yielding promising results. These findings demonstrate the potential of using NLP technology for automated vulnerability detection research. To date, there are various directions of research that employ DL for security vulnerability detection. According to the survey by Zeng et al. [14], there are four main research directions. 
The first involves using DL models to learn the semantic representations of programs, as proposed by Wang et al. [15]. The second direction focuses on end-to-end solutions for detecting Buffer Overflow vulnerabilities, as explored by Choi et al. [16]. The third direction involves extracting vulnerability-containing code patterns to train models, as demonstrated by Li et al. [17]. Finally, the fourth direction addresses vulnerability detection for binary code, as studied by Liu et al. [18]. Each of these research directions has its own advantages and limitations. Based on the previous research outcomes, extracting vulnerability-containing patterns to create a training dataset has achieved promising results, displaying relatively good effectiveness and potential for further development [19]. Notable examples of this approach include the VulDeePecker paper by Li et al. [17] and SySeVR by Li et al. [20], as well as VulDeBERT by Kim et al. [21]. However, both SySeVR and VulDeBERT still exhibit certain shortcomings; namely, the processed data consists solely of isolated code statements extracted from the source code, lacking contextual linkage. This deficiency inherently diminishes the precision of the model. Meanwhile, to retain the non-sequential semantic attributes inherent in the source code of the program, certain graph-based methodologies have been introduced, as documented by [22], [23], [24], [25], [26], [27], [28], [29]. These studies advocate the transformation of program source code into a graph representation, followed by the utilization of graph neural networks for vulnerability detection. These existing works indicate the potential of utilizing graph representations to inspect the relationships among source code components and reveal software vulnerabilities. Specifically, the slices extracted from the source code are transformed into graph representations and subsequently incorporated into the deep learning model for training. 
The conversion of slices into graphs aims to capture relationships between words, as well as between words and slices, enhancing the model's capacity to understand code patterns and relationships among various components. Additionally, in the evolving landscape of software security, Natural Language Processing (NLP) has emerged as a potent tool with the potential to revolutionize the field. Nonetheless, a significant semantic gap exists between Natural Language (NL) and programming languages (PL). To bridge this gap and gain a deeper comprehension of the semantic content within NL, scholars introduced CodeBERT [13], a language model that employs masked language modeling and replaced token detection to pre-train on both NL and PL. CodeBERT has demonstrated remarkable generalization abilities, exhibiting a strong competitive edge in various downstream tasks related to multiple programming languages. The embedding vectors derived from CodeBERT encapsulate a wealth of information, thereby enhancing the DL model's ability to yield optimal results post-training. Therefore, to cope with the above challenges, we propose the XGV-BERT model to better capture the contextual representation of programs. Specifically, we leverage the pre-trained CodeBERT model for code embedding because it is pre-trained on multiple programming languages and therefore better understands source code. Subsequently, the integration of the GNN model with graphs constructed from the extracted data enhances the connections between words and slices in source code to disclose vulnerabilities in the software program. 
In summary, the contributions of our work are as follows: * We leverage the language model to construct a method of representing vulnerable source code for security defect detection, using the CodeBERT embedding model to replace the Word2vec embedding method used in previous studies [17], [20]. * We propose a vulnerability detection system called XGV-BERT that utilizes CodeBERT with a Graph Convolutional Network (GCN) model for analyzing C and C++ source code. * Our experimental results indicate that XGV-BERT outperforms the state-of-the-art methods [17], [20] on the SARD [30] and NVD [31] datasets. The remainder of this article is organized as follows. Section 2 gives an overview of the background knowledge used in our work. Section 3 introduces related work on detecting vulnerabilities in software. Next, the proposed framework and methodology are discussed in Section 4. Section 5 describes the experimental settings and result analysis of vulnerability detection on various datasets. Finally, we conclude the paper in Section 6. ## 2 Background ### Abstract Syntax Tree - AST #### 2.1.1 Definition In terms of software, an Abstract Syntax Tree (AST) [32] embodies a tree-like depiction of the abstract syntax structure inherent in a fragment of text, commonly referred to as source code, authored in a formal language. Each node within this tree serves as a representation of a discernible structure found within the text. More specifically, abstraction in the AST is manifested by not representing every detail present in the actual syntax but focusing on the structure and content-related aspects. For instance, unnecessary single quotes in the syntactic structure are not represented as separate nodes in the tree. Similarly, a syntax structure like the "if" conditional statement can be represented by a single node with three branches. The AST is a vital tool in parsing programming languages. 
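As a language-neutral illustration of this idea (the paper targets C/C++, but the principle is identical), Python's standard `ast` module parses a fragment into exactly such a tree, where an `if` statement becomes one node with three branches:

```python
import ast

# A tiny fragment; ast.parse returns the root of its abstract syntax tree.
source = "if x > 0:\n    y = 1\nelse:\n    y = 2\n"
tree = ast.parse(source)

# The whole conditional is a single `If` node with three branches:
# `test` (the condition), `body`, and `orelse` (the else-branch).
if_node = tree.body[0]
print(type(if_node).__name__)                  # If
print(len(if_node.body), len(if_node.orelse))  # 1 1
```

Analogous trees are produced for C/C++ by compiler front-ends such as Clang; in each case the AST abstracts away punctuation while preserving structure.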
It provides an abstract structural representation of the source code, enabling programs to understand and process code more easily. The abstraction in the AST allows us to concentrate on essential syntax components and overlook irrelevant details. This simplifies language analysis and processing while providing a convenient structure for working with and interacting with the source code. #### 2.1.2 Design The design of the AST is often closely intertwined with the design of the compiler. The core requirements of the design include the following: * Preservation of Variables: Variables must be retained, along with their declaration positions in the source code. * Representation of Execution Order: The order of execution statements must be represented and explicitly determined. * Proper Handling of Binary Operators: The left and right components of binary operators must be stored and accurately determined. * Storage of Identifiers and Assigned Values: The identifiers and their assigned values must be stored within the assignment statements. ### CodeBERT CodeBERT [13] is a pre-trained BERT model that combines both natural language (NL) and programming language (PL) encodings to create a comprehensive model suitable for fine-tuning on source code tasks. The model is trained on a large dataset sourced from code repositories and programming documents, leading to improved effectiveness in software program training and source code analysis. During the pre-training stage, the input data is formed by combining two segments with special separator tokens: \([CLS]\), \(w_{1},w_{2},\ldots,w_{n},[SEP]\), \(c_{1},c_{2},\ldots,c_{m},[EOS]\), where \([CLS]\) is the classification token, \([SEP]\) is the separator token, and \([EOS]\) is the end-of-sequence token. One segment represents natural language text, while the other represents code from a specific programming language. The \([CLS]\) token is a special token placed before the two segments. 
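This bimodal input layout can be sketched in a few lines of plain Python; the token lists below are hypothetical stand-ins (a real pipeline would use CodeBERT's own subword tokenizer rather than whitespace splitting):

```python
# Hypothetical pre-tokenized segments (illustration only).
nl_tokens = "return the larger of two values".split()   # w_1 ... w_n
pl_tokens = "int mx ( int a , int b )".split()          # c_1 ... c_m

# [CLS] w_1 .. w_n [SEP] c_1 .. c_m [EOS]
sequence = ["[CLS]"] + nl_tokens + ["[SEP]"] + pl_tokens + ["[EOS]"]

print(sequence[0], sequence[-1])     # [CLS] [EOS]
print(sequence[1 + len(nl_tokens)])  # [SEP]
```

The model then produces one contextual vector per position, and the vector at the `[CLS]` position serves as the summary representation of the whole pair.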
Following the standard text processing in Transformer, the natural language text is treated as a sequence of words and divided into WordPieces [33]. A code snippet is regarded as a sequence of tokens. The output of CodeBERT includes (1) contextualized vector representations for each token, encompassing both natural language and code, and (2) the representation of \([CLS]\), serving as a summarized representation. ### Graph Neural Network - GNN #### 2.3.1 Overview A graph is a data structure in computer science comprising two components: nodes and edges \(G=(V,E)\). Each node has edges (\(E\)) connecting it to other nodes (\(V\)). A directed graph has arrows on its edges, indicating directional dependencies, while undirected graphs lack such arrows. Graphs have attracted considerable attention in Machine Learning due to their powerful representational capabilities. Each node is embedded into a vector, establishing its position in the data space. Graph Neural Networks (GNNs) are specialized neural network architectures that operate on graphs. The primary goal of GNN architecture is to learn an embedding vector containing information about its local neighborhood. This embedding can be used to address various tasks, such as node labeling, node and edge prediction, and more. In essence, GNNs are a subclass of DL techniques specifically designed for performing inference on graph-structured data. They are applied to graphs and have the ability to perform prediction tasks at the node, edge, and graph levels. #### 2.3.2 Classification GNNs are divided into three types: * Recurrent Graph Neural Network: In this network, the graph is bidirectional, where data flows in both directions. It applies graph message passing over edges to propagate the output from the initial direction back to the graph nodes, but adjusts edge weights based on the previously applied gradients for that node. * Spectral Convolutional Network: This type shares a similar idea with CNNs. 
In CNNs, convolution is performed by summing up the values of neighboring data points around a central data point using learnable filters and weights. Spectral-based networks operate on a similar principle, aggregating the attributes of neighboring nodes for a central node. However, spectral-based methods often have higher computational complexity and have gradually been replaced by spatial-based methods. * Spatial Convolutional Network: This approach provides a simpler and more efficient way to handle data. It embeds nodes based on their neighboring nodes. This spatial-based method has become popular due to its simplicity and effectiveness in processing data. ### Graph Convolutional Network - GCN GCN (Graph Convolutional Network) [34] is a powerful neural network architecture designed for machine learning on graphs. In fact, it is so powerful that even a randomly initialized two-layer GCN can produce meaningful feature representations for nodes in the graph. Specifically, the GCN model takes the graph data \(G=(V,E)\) as input, where: * \(N\times F^{0}\) is the input feature matrix, denoted as \(X\), where \(N\) is the number of nodes, and \(F^{0}\) is the number of input features for each node. * \(N\times N\) is the adjacency matrix \(A\), representing the structural information about the graph. Thus, a hidden layer in GCN can be written as \(H^{i}=f(H^{i-1},A)\), where \(H^{0}=X\), and \(f\) is the propagation rule. Each layer \(H^{i}\) corresponds to a feature matrix \(N\times F^{i}\), where each row represents a feature representation of a node. At each layer, these features are aggregated to create the features for the next layer using the propagation rule \(f\). This way, the features become increasingly abstract at each consecutive layer. Variants of GCN differ only in the choice of propagation rule \(f\). 
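The propagation rule can be made concrete with a small NumPy sketch. One common choice of \(f\) is the symmetrically normalized rule \(H^{i}=\mathrm{ReLU}(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}H^{i-1}W^{i})\), where \(\hat{A}=A+I\) adds self-loops and \(\hat{D}\) is its degree matrix; the graph, features, and weights below are arbitrary placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, undirected edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))   # N x F^0 input feature matrix
W = rng.normal(size=(3, 2))   # F^0 x F^1 weight matrix (random placeholder)

# Add self-loops, then symmetrically normalize the adjacency matrix.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One hidden layer H^1 = f(H^0, A) with H^0 = X and a ReLU nonlinearity:
# each node's new features aggregate its own and its neighbors' features.
H1 = np.maximum(0.0, A_norm @ X @ W)
print(H1.shape)  # (4, 2)
```

Stacking such layers makes each node's representation depend on an increasingly large neighborhood, which is the abstraction the text describes.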
## 3 Related work ### Software Vulnerability #### 3.1.1 Concept Software vulnerabilities represent errors, weaknesses, or imperfections within software or operating systems that are susceptible to the influence of attacks or malevolent actions that may inflict harm upon the system or the information it processes. Software vulnerabilities can be exploited by malicious actors to carry out actions such as unauthorized system access, pilfering sensitive information, impeding the normal functioning of the system, or facilitating other forms of attacks. With the swift development of novel attack techniques, the severity of software vulnerabilities is continuously escalating. All systems inherently harbor latent vulnerabilities; however, the pertinent question remains whether these vulnerabilities are exploited and result in deleterious consequences. #### 3.1.2 The current state An increasing number of cyberattacks originate from software vulnerabilities, resulting in user data breaches and tarnishing the reputation of companies [35]. Despite numerous research efforts proposed to aid in vulnerability detection, vulnerabilities continue to pose a threat to the secure operation of IT infrastructure [36]. The number of disclosed vulnerabilities in the Common Vulnerabilities and Exposures (CVE) and National Vulnerability Database (NVD) repositories has surged from approximately 4,600 in 2010 to 8,000 in 2014 before skyrocketing to over 17,000 in 2017 [17], [37]. These vulnerabilities may have led to potential threats concerning the secure usage of digital products and devices worldwide [38]. #### 3.1.3 Exploitation Mechanisms of Vulnerabilities Upon discovery of a security vulnerability, the attacker can capitalize on it by crafting programs to infiltrate and take control of the targeted device. Once successful in gaining access to the target, attackers may conduct system reconnaissance to familiarize themselves with its workings. 
Consequently, they can execute diverse actions such as accessing critical files or deploying malicious code. Leveraging such control, attackers can hijack the computer and pilfer data from the victim's device. Vulnerabilities are sometimes identified either by software developers themselves or through user and researcher alerts. However, in certain cases, hackers or espionage organizations may uncover intrusion techniques but refrain from notifying the developers, leading to so-called "zero-day" vulnerabilities, as developers have not had an opportunity to patch them. As a result, software or hardware remains exposed to threats until patches or fixes are distributed to users. Software vulnerabilities can lead to grave consequences, granting attackers unauthorized access and control over devices. To obviate such calamities, the detection and remediation of vulnerabilities assume utmost significance. Nevertheless, on certain occasions, vulnerabilities remain latent until they are maliciously leveraged, wreaking considerable havoc upon users. To mitigate risks, regular software and hardware updates to apply patches and fixes are essential. ### Related research works There are myriad research directions employing DL for security vulnerability detection. According to the survey conducted by Peng Zeng and colleagues [14], four primary research avenues are observed: * Utilizing DL Models for Semantic Program Representations: This direction involves the automatic acquisition of semantic program representations using DL models, as proposed by Wang [15]. * Buffer Overflow Vulnerability Prediction from Raw Source Code: Choi's approach [16] entails predicting buffer overflow vulnerabilities directly from raw source code. * Vulnerability Detection for Binary Code: Liu's approach [18] targets vulnerability detection within binary code. 
* Extraction of Vulnerability-Containing Code Patterns from Source Code: Li's methodology [17] revolves around the extraction of vulnerability-containing code patterns from source code to train models. _Direction 1 - Automating Semantic Representation Learning for Vulnerability Prediction_ Wang's research [15] is a pioneering study that employs Deep Belief Networks (DBNs) to delve into the semantic representations of programs. The study's aim is to harness high-level semantic representations learned by neural networks as vulnerability-indicative features. Specifically, it enables the automatic acquisition of features denoting source code vulnerabilities without relying on manual techniques. This approach is not only suited for predicting vulnerabilities within a single project but also for cross-project vulnerability prediction. Abstract Syntax Trees (ASTs) are employed to represent programs as input for DBNs in training data. They proposed a data pre-processing approach, comprising four steps: * Tokenization: The first step involves parsing the source code into tokens. * Token Mapping: The second step maps tokens to integer identifiers. * DBN-based Semantic Feature Generation: The third step employs DBNs to autonomously generate semantic features. * Vulnerability Prediction Model Establishment: The final step utilizes DBNs to establish a vulnerability prediction model. _Direction 2 - End-to-End Buffer Overflow Vulnerability Prediction from Raw Source Code using Neural Networks_ Choi's research [16] stands as the inaugural work providing an end-to-end solution for detecting buffer overflow vulnerabilities. Experimental studies substantiate that neural networks possess the capability to directly learn vulnerability-relevant characteristics from raw source code, obviating the need for code analysis. The proposed neural network is equipped with integrated memory blocks to retain long-range code dependencies. 
Consequently, adapting this network is pivotal in identifying buffer overflow vulnerabilities. Test outcomes demonstrate the method's precision in accurately detecting distinct types of buffer overflow. However, this approach still harbors limitations, necessitating further enhancements. A primary constraint lies in its inability to identify buffer overflow incidents occurring within external functions, as input data excludes code from external files. Another limitation is the requirement for each line to encompass data assignments for the model to function. Applying this method directly to source code containing conditional statements proves intricate, as attention scores are computed to locate the most relevant code positions. _Direction 3 - Vulnerability Detection Solution for Binary Code_ Liu's research [18] introduces a DL-based vulnerability detection tool for binary code. This tool is developed with the intent of expanding the vulnerability detection domain by mitigating the scarcity of source code. To train the data, binary segments are fed into a Bidirectional Long Short-Term Memory network with Attention mechanism (Att-BiLSTM). The data processing involves three steps: * Initially, binary segments are collected by applying the IDA Pro tool on the original binary code. * In the second step, functions are extracted from binary segments and labeled as "vulnerable" or "non-vulnerable". * In the third step, binary segments are used as binary features before feeding them into the embedding layer of Att-BiLSTM. The granularity of detection lies at the function level. Multiple experiments were conducted on open-source project datasets to evaluate the proposed method. The results of these experiments indicate that the proposed approach outperforms other binary code-based vulnerability detection methods. However, this method still has limitations. Notably, the detection accuracy is relatively low, falling below 80% in each dataset. 
_Direction 4 - Extracting Vulnerable Code Patterns for Model Training_ Li's VulDeePecker [17] is the pioneering study that employs the BiLSTM model for vulnerability detection. This research direction aligns with our team's pursuit as well. This study employs BiLSTM to extract and learn long-range dependencies from code sequences. The training data for the tool is derived from code gadgets representing programs, serving as input for the BiLSTM. The processing of code gadgets involves three stages: * The initial stage entails extracting corresponding program slices of library/API function calls. * The second stage revolves around creating and labeling code gadgets. * The final stage focuses on transforming code gadgets into vectors. Experimental outcomes demonstrate VulDeePecker's capacity to address numerous vulnerabilities, and the integration of human expertise can enhance its effectiveness. However, this method exhibits certain limitations that require further improvement. Firstly, VulDeePecker is restricted to processing programs written in C/C++. Secondly, it can solely address vulnerabilities linked to library/API function calls. Lastly, the evaluation dataset is relatively small-scale, as it only encompasses two types of vulnerabilities. We opted for Direction 4 because we assessed that this research direction offers certain advantages over the other three. In the case of _Direction 1 - Automating Semantic Representation Learning for Vulnerability Prediction_, this research is limited to extracting semantic features from source code alone. Conversely, Direction 4 has seen consecutive studies that extract both semantic and syntactic features, as exemplified by SySeVR [20], enhancing the reliability of model training data. Regarding _Direction 2 - End-to-End Buffer Overflow Vulnerability Prediction from Raw Source Code using Neural Networks_, it has a drawback in that it exclusively detects Buffer Overflow vulnerabilities. 
In contrast, Direction 4 employs data extraction methods that can increase the number of covered vulnerabilities to as many as 126 CWEs, divided into four categories, as we will discuss in Section 5. As for _Direction 3 - Vulnerability Detection Solution for Binary Code_, it holds significant potential since it can detect vulnerabilities in various programming languages by operating on binary code, but its accuracy remains lower than that of the other research directions. For these reasons, our team decided to take Direction 4 as a reference and propose XGV-BERT as an improvement. In our proposed XGV-BERT, the fusion of the CodeBERT model and Graph Neural Networks (GNN) represents a compelling strategy within the domain of software vulnerability detection. CodeBERT, an advanced transformer-based model, excels in acquiring meaningful code representations, while GNNs exhibit exceptional prowess in encoding the semantic connections present in code graphs. This combination of CodeBERT and GNNs elevates the precision and efficiency of software vulnerability detection, facilitating the discovery of intricate vulnerabilities that conventional approaches might struggle to uncover.

## 4 Methodology

### The overview architecture

To detect vulnerabilities using DL, we need to represent programs in a way that captures both syntactic and semantic information relevant to the vulnerabilities. Li's research [20] makes two main contributions here. The first is to treat each function in the program as a region proposal [39], similar to image processing. However, this approach alone is overly simplistic, because vulnerability detection tools not only need to determine the existence of vulnerabilities within functions but also need to pinpoint their locations. This means we require more detailed representations of the programs to detect vulnerabilities effectively.
The second is to treat each code line or statement (used interchangeably) as a unit for vulnerability detection. This approach has two drawbacks. More specifically, most statements in a program do not contain any vulnerabilities, resulting in a scarcity of vulnerable samples. In addition, many statements have semantic relationships with each other, but they are not considered as a whole. To combine the advantages of the two proposals above, we divide the program into smaller code segments (i.e., a few lines of code), corresponding to region proposals, and represent the syntactic and semantic characteristics of the vulnerabilities. Building on Li's method [20], our research group proposes the XGV-BERT method for efficiently detecting software vulnerabilities. We extract feature patterns from the source code and embed them as vectors using the CodeBERT model, and then feed them into various DL models, including RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and the proposed GCN model, for comparison and evaluation. Figure 1 illustrates the specific architecture of the steps involved in our proposed software vulnerability detection method. The architecture we propose takes the original source code as input, followed by the extraction of program slices based on syntax features. From the program slices, we further extract lines of code that have semantic relationships with each program slice, creating Semantics-based Vulnerability Candidates (SeVCs) [20] or code gadgets [17], depending on the dataset, and label them accordingly. Then, we tokenize the SeVCs or code gadgets and feed them into the CodeBERT model for vector embedding. Finally, we construct a DL model for training and predicting vulnerable source code using the embedded vectors.

### Embedding Vectors

In this section, we delve into the process of tokenizing and embedding the SeVCs extracted from the source code for training in the DL model.
The following steps outline our approach:

* Symbolic representation of SeVCs: To make SeVCs independent of user-defined variable and function names while capturing the program's semantic information, each SeVC undergoes transformation into a symbolic representation:
  * Removal of non-ASCII characters and comments from the code.
  * Mapping of user-defined variable names to symbolic names (e.g., "V1", "V2") in a one-to-one correspondence.
  * Mapping of user-defined function names to symbolic names (e.g., "F1", "F2") in a one-to-one correspondence.

  It is important to note that different SeVCs may share the same symbolic representation, which enhances the generalizability of the approach.
* Tokenization of the symbolic representations: Li's team [20] proposed splitting the symbolic representation of a SeVC (e.g., "V1=V2-8;") into a sequence of symbols through lexical analysis (e.g., "V1", "=", "V2", "-", "8", and ";"). Each symbol in the resulting sequence is considered a token. This process is performed for each code line in the SeVC, yielding a list of tokens for each SeVC.
* After obtaining the list of tokens, we use the CodeBERT model to embed the data. The CodeBERT model used in this study has been pretrained on source code data and is retrained with the tokenized vectors as input. The model architecture consists of multiple Transformer layers, which extract features from the dataset and enrich the contextual information of the vectors. The output of the model is the embedded vectors, which we use as inputs for the DL models to classify the dataset.

Figure 1: The workflow of XGV-BERT framework for vulnerability detection.

### Training DL Models

In the final part, we utilize DL models for training and classifying the dataset. Specifically, we employ a total of 6 models: RNN, LSTM, Bi-LSTM, GRU, Bi-GRU, and GNN. The input to these models is the embedded vectors.
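The symbolization and lexical tokenization steps described in Section 4.2 can be sketched with a short regex-based helper. This is an illustrative version of ours, not code from [20]; in particular, the keyword set below is deliberately incomplete (a real symbolizer would use the full C/C++ keyword set plus known library/API names):

```python
import re

# Hypothetical, incomplete keyword list used only for this illustration
KEYWORDS = {"int", "char", "if", "else", "for", "while", "return", "sizeof"}

def symbolize(lines, user_funcs):
    """Map user-defined variable/function names to V1, V2, ... / F1, F2, ...
    in a one-to-one fashion, as in the SeVC symbolic representation."""
    var_map, func_map = {}, {}

    def repl(match):
        name = match.group(0)
        if name in KEYWORDS:
            return name
        if name in user_funcs:
            return func_map.setdefault(name, f"F{len(func_map) + 1}")
        return var_map.setdefault(name, f"V{len(var_map) + 1}")

    return [re.sub(r"[A-Za-z_]\w*", repl, line) for line in lines]

def tokenize(line):
    """Lexically split a symbolized statement into identifier/number/operator tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", line)

sevc = symbolize(["int buf_len = data_len - 8;"], user_funcs=set())
tokens = tokenize("V1=V2-8;")
```

Running `tokenize` on the paper's example statement "V1=V2-8;" reproduces the token sequence "V1", "=", "V2", "-", "8", ";" quoted above.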
Among these models, the most significant one we propose is the GNN, named XGV-BERT. For models such as RNN and its variants, including LSTM, GRU, Bi-LSTM, and Bi-GRU, we implement the models with the embedded vectors as inputs. The architecture of these models consists of two hidden layers, accompanied by fully connected layers to converge the output for text classification purposes. For the GNN model, we employ the GCN architecture for training. Our GCN model is designed to take input data in the form of graphs. The embedded vectors obtained in Section 4.2 need to be processed into graph-structured data before being fed into our training model. To accomplish this, we create adjacency matrices for the embedded vectors, and these vectors are transformed into graph nodes. Figure 2 illustrates the architecture using the proposed GCN model for training and predicting results on the dataset. Specifically, we construct a non-homogeneous graph consisting of both word nodes and slice nodes, based on the idea proposed by Yao [40]. In this adjacency matrix, each word or slice is represented as a one-hot vector and used as input for the GCN model. We create edges between nodes based on the occurrence of a word in a slice (slice-word edge) and the co-occurrence of words across the entire dataset (word-word edge). The weight of an edge between two nodes \(i\) and \(j\) is defined as follows: \[A_{i,j}=\begin{cases}\text{PPMI}(i,j),&i,j\text{ are words and }i\neq j\\ \text{TF-IDF}(i,j),&i\text{ is a slice, }j\text{ is a word}\\ 1,&i=j\\ 0,&\text{otherwise}\end{cases}\] The Term Frequency-Inverse Document Frequency (TF-IDF) value of a word in a slice determines the weight of the edge between a slice node and a word node. This value assesses the importance of a word in a slice, where a higher value indicates a higher importance of the word for the slice. Specifically:

* TF (Term Frequency) is the number of times the word appears in the slice.
* IDF (Inverse Document Frequency) is calculated as follows: \[\text{IDF}=\log\frac{N}{n_{t}}\tag{1}\] where \(N\) is the total number of slices in the dataset, and \(n_{t}\) is the number of slices that contain the word. IDF helps evaluate the importance of a word for a slice, assigning lower scores to words that occur frequently across the dataset.
* The TF-IDF weight is computed by multiplying the Term Frequency (TF) and the Inverse Document Frequency (IDF).

The Positive Pointwise Mutual Information (PPMI) is used to determine the weight of word pairs, where a higher PPMI value indicates a stronger relationship and higher co-occurrence frequency between two words. The PPMI value of \(i\) and \(j\) is calculated using the formula: \[\text{PPMI}(i,j)=\max\left(\log\frac{p(i,j)}{p(i)p(j)},0\right)\tag{2}\] where:

* \(p(i)\) is the probability of word \(i\) appearing in a slice.
* \(p(j)\) is the probability of word \(j\) appearing in a slice.
* \(p(i,j)\) is the joint probability of both \(i\) and \(j\) appearing in a slice.

For the nodes in the adjacency matrix, we utilize the output embeddings of the CodeBERT model as input representations for the slice nodes. Once the data, comprising the adjacency matrix and the nodes, is constructed, we feed both into the GCN model. Our GCN model consists of two hidden layers, along with fully connected layers, to converge the output for text classification purposes.

## 5 Experiments and Analysis

In this section, we conduct experiments to compare XGV-BERT's detection accuracy with the state-of-the-art solutions VulDeePecker [17] and SySeVR [20]. Before discussing the effectiveness of XGV-BERT, we first go over the implementation specifics and the datasets used in the trials.

### Dataset and Preprocessing

#### 5.1.1 Benchmark Dataset

For the evaluation of our approach, we use two datasets from two research papers: SySeVR [20] and VulDeePecker [17].
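As a brief aside before turning to the data, the slice-word (TF-IDF) and word-word (PPMI) edge weights defined in Section 4.3 can be sketched as follows, treating each slice as a bag of tokens. The function names are our own; note also that Yao's TextGCN [40] estimates word co-occurrence over sliding windows, whereas this sketch follows the slice-level probabilities of Eqs. (1)-(2):

```python
import math
from collections import Counter

def tfidf_edges(slices):
    """Slice-word edge weights: TF(word in slice) * log(N / n_t).
    slices: list of token lists. Returns {(slice_index, word): weight}."""
    n = len(slices)
    df = Counter()                 # n_t: number of slices containing each word
    for s in slices:
        df.update(set(s))
    weights = {}
    for i, s in enumerate(slices):
        for word, tf in Counter(s).items():
            weights[(i, word)] = tf * math.log(n / df[word])
    return weights

def ppmi(slices, wi, wj):
    """Word-word edge weight: max(log(p(i,j) / (p(i) p(j))), 0),
    with probabilities estimated from slice-level occurrence."""
    n = len(slices)
    p_i = sum(wi in s for s in slices) / n
    p_j = sum(wj in s for s in slices) / n
    p_ij = sum((wi in s) and (wj in s) for s in slices) / n
    if p_ij == 0:
        return 0.0
    return max(math.log(p_ij / (p_i * p_j)), 0.0)
```

Words that always co-occur in the same slices get a positive PPMI edge, while unrelated word pairs are clipped to zero and hence contribute no edge.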
Both datasets were collected from two sources: the National Vulnerability Database (NVD) [31] and the Software Assurance Reference Dataset (SARD) [30]. The NVD dataset provides vulnerable code snippets from various software products, including both vulnerable code and the corresponding patches. The SARD dataset offers a collection of test, synthetic, and academic programs, labeled as "good" (having no vulnerabilities), "bad" (containing vulnerabilities), and "mixed" (having vulnerabilities whose patched versions are also available). The VulDeePecker dataset offers program pieces that concentrate on two CWE categories related to library/API function call vulnerabilities: resource management error vulnerabilities (CWE-399) and buffer error vulnerabilities (CWE-119). We produced 10,440 vulnerable and 29,313 non-vulnerable code gadgets for CWE-119, and 7,285 vulnerable and 14,600 non-vulnerable code gadgets for CWE-399. The number of code gadgets that we extracted from the VulDeePecker dataset is shown in Table 1. The specific steps for code gadget extraction are as follows:

* Extracting library/API function calls and their corresponding program slices:
  * Categorize library/API function calls into two types: forward and backward library/API function calls. The forward type receives inputs directly from external sources, such as a command line, a program, or a file. The backward type does not receive direct inputs from external sources or the program environment.
  * Generate program slices corresponding to the arguments of the library/API function calls extracted in the previous step. Program slices are further classified into forward slices and backward slices. Forward slices contain statements affected by specific arguments, while backward slices comprise statements influencing specific arguments.
* Extracting code gadgets and assigning labels:
  * Extract code gadgets:
    * Construct a part of the code gadget by combining statements containing arguments from the library/API function calls. These statements belong to the same user-defined function and are ordered based on the corresponding program slice. Any duplicate statements are eliminated.
    * Complete the code gadget by incorporating statements from other functions containing the arguments from the library/API function calls, following the order in the corresponding program slice.
  * Label the code gadgets: those without vulnerabilities receive the label "0," while those containing vulnerabilities are labeled "1."

Figure 2: The proposed GCN model in XGV-BERT framework.

The SySeVR dataset provides C/C++ programs covering 126 CWEs related to four types of vulnerabilities: Library/API Function Call (FC-kind), Array Usage (AU-kind), Pointer Usage (PU-kind), and Arithmetic Expression (AE-kind). In total, we have extracted 547,347 SeVCs from the dataset, comprising 52,227 SeVCs containing vulnerabilities and 495,120 SeVCs without vulnerabilities. The distribution of SeVCs by vulnerability type and corresponding CWE identifiers is presented in Table 2. The SeVC extraction proceeds as follows:

* Step 1. Extract Syntax-based Vulnerability Candidates (SyVCs).
  * Represent each function as an Abstract Syntax Tree (AST). The root of the AST corresponds to the function, the leaves represent the tokens of the function, and the intermediate nodes correspond to the statements of the function.
  * Compare the code elements (comprising one or more consecutive tokens, including identifiers, operators, constants, and keywords) in the AST with a set of syntactic vulnerability patterns. If a code element matches any element in this set, it becomes a SyVC.
* Step 2. Transform SyVCs into Semantic-based Vulnerability Candidates (SeVCs).
  * Create control flow graphs (CFGs) for each function in the program. From the CFGs, generate program dependency graphs (PDGs) for each function.
  * Based on the PDG, create a program slice ps\({}_{i}\) for each SyVC:
    * The interprocedural forward slice fs'\({}_{i}\) is formed by merging the forward slice fs\({}_{i}\) of function \(f_{i}\) with the forward slices of the functions called by \(f_{i}\).
    * The interprocedural backward slice bs'\({}_{i}\) is formed by merging the backward slice bs\({}_{i}\) of function \(f_{i}\) with the backward slices of the functions called by \(f_{i}\) and of the functions that call \(f_{i}\).
    * Finally, the program slice ps\({}_{i}\) is created by merging fs'\({}_{i}\) and bs'\({}_{i}\).
  * Transform the program slice into SeVCs with the following steps:
    * Convert the statements belonging to function \(f_{i}\) and appearing in program slice ps\({}_{i}\) as nodes into SeVCs while preserving the original order of statements in function \(f_{i}\).
    * Convert the statements belonging to other functions, which are related to function \(f_{i}\) through function calls, into SeVCs.
* Step 3. Label the SeVCs: To differentiate between vulnerable and safe code patterns, we label the SeVCs, and their corresponding vectors, accordingly. A SeVC containing a known vulnerability is labeled "1," while it is labeled "0" if the SeVC is safe and does not contain any vulnerabilities.

By leveraging these datasets, we were able to comprehensively evaluate the effectiveness and performance of our proposed method in detecting software vulnerabilities.

### Performance Metrics

#### 5.2.1 Detection metrics

To evaluate the model predictions, we define the ground truth values as follows: true positive (TP) is the number of vulnerable samples that are detected as vulnerable; true negative (TN) is the number of samples that are not vulnerable and are detected as not vulnerable; false positive (FP) is the number of samples that are not vulnerable but are detected as vulnerable; false negative (FN) is the number of vulnerable samples that are detected as not vulnerable.
Therefore, we use the following four metrics for our experiments:

* _Accuracy_ is the ratio of correct predictions to total predictions. \[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\tag{3}\]
* _Precision_ is the ratio of truly vulnerable samples among the samples detected as vulnerable. \[Precision=\frac{TP}{TP+FP}\tag{4}\]
* _Recall_ is the proportion of truly vulnerable samples that are correctly detected among all vulnerable samples. \[Recall=\frac{TP}{TP+FN}\tag{5}\]
* _F1-score_ measures the overall effectiveness by calculating the harmonic mean of Precision and Recall. \[\text{F1-score}=2\cdot\frac{Recall\cdot Precision}{Recall+Precision}\tag{6}\]

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Dataset & Total & Vulnerable code gadgets & Non-vulnerable code gadgets \\ \hline CWE-119 & 39,753 & 10,440 & 29,313 \\ \hline CWE-399 & 21,885 & 7,285 & 14,600 \\ \hline All Dataset & 61,637 & 17,725 & 43,913 \\ \hline \end{tabular} \end{table} Table 1: Number of code gadgets extracted from the VulDeePecker dataset

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Dataset & Total & Vulnerable SeVCs & Non-vulnerable SeVCs \\ \hline FC-kind & 141,023 & 17,005 & 124,018 \\ \hline AU-kind & 55,772 & 7,996 & 47,776 \\ \hline PU-kind & 340,324 & 25,377 & 314,946 \\ \hline AE-kind & 10,234 & 1,848 & 8,386 \\ \hline All Dataset & 547,347 & 52,227 & 495,120 \\ \hline \end{tabular} \end{table} Table 2: Number of SeVCs extracted from the SySeVR dataset

### Experimental Settings

We conducted our experiments in a virtual machine environment running Ubuntu 20.04 with an 8-core CPU, 81.5 GB of RAM, 40 GB of GPU memory, and a storage capacity of 100 GB. Tables 3 and 4 show the architectures of the LSTM model and XGV-BERT, respectively.
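The four metrics in Eqs. (3)-(6) follow directly from the confusion-matrix counts; a small helper of our own, for illustration:

```python
def detection_metrics(tp, tn, fp, fn):
    """Compute Accuracy, Precision, Recall, and F1-score
    from confusion-matrix counts (Eqs. (3)-(6))."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical example: 80 vulnerabilities caught, 5 missed,
# 2 false alarms, 913 correct rejections
acc, prec, rec, f1 = detection_metrics(tp=80, tn=913, fp=2, fn=5)
```

Because F1 is the harmonic mean, it is dominated by the weaker of precision and recall, which is why it is reported alongside accuracy on the imbalanced datasets used here.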
To perform our experiments, we trained on the datasets using the following configuration: the _Adam optimizer_ with \(learning\_rate=0.001\), \(epoch=50\), and \(batch\_size=32\) for the RNN, LSTM, BiLSTM, GRU, and BiGRU models, and the same configuration for the XGV-BERT model but with \(epoch=4\). For both the VulDeePecker and SySeVR datasets, we choose 80% of the samples for the training set and the remaining 20% for the test set. Tables 7 and 8 report the test performance of all models on the four detection metrics. Notably, for the VulDeePecker dataset, our proposed XGV-BERT method achieved the highest rating on all four indices, outperforming the remaining five models. Similarly, for the SySeVR dataset, XGV-BERT achieved the highest scores on three of the four metrics: accuracy, precision, and F1-score. Based on these evaluation results, we can see that XGV-BERT gives the best classifier performance among the compared DL models. These results indicate that XGV-BERT, which leverages CodeBERT and a GNN, can represent the contextual data needed to identify vulnerable code in software with high performance. In summary, the integration of the CodeBERT model with a GNN has proven to be a promising approach in the realm of software vulnerability detection. CodeBERT, a pre-trained Transformer-based model, excels at learning representations of source code, while GNNs possess a remarkable capability to capture semantic relationships within code graphs. This synergy between CodeBERT and GNNs enhances the accuracy and efficacy of software vulnerability detection, enabling the identification of complex vulnerabilities that may remain elusive through conventional methods.

## 6 Conclusion

In conclusion, this study introduces a novel method employing contextual embedding and deep learning techniques for the classification of software programs with vulnerabilities.
The newly devised framework, termed XGV-BERT, leverages the sophisticated capabilities of contextualized embeddings through CodeBERT to delineate the intricate interconnections among code attributes essential for identifying security flaws. Within the realm of source code analysis, such embeddings are pivotal in discerning the nuanced relationships between tokens or \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Metrics & RNN & LSTM & BiLSTM & GRU & BGRU & XGV-BERT \\ \hline \multirow{4}{*}{CWE} & Accuracy & 69.3 & 77.4 & 79.4 & 78.5 & 79.1 & **98.4** \\ & Precision & 63.6 & 70.0 & 73.3 & 71.5 & 75.0 & **97.9** \\ 119 & Recall & 90.3 & 96.0 & 92.6 & 94.6 & 87.5 & **98.1** \\ & F1-score & 74.6 & 81.0 & 81.8 & 81.5 & 80.8 & **97.7** \\ \hline \multirow{4}{*}{CWE} & Accuracy & 80.6 & 89.0 & 91.0 & 89.3 & 91.0 & **98.3** \\ & Precision & 72.3 & 84.5 & 92.4 & 86.5 & 91.7 & **97.9** \\ 399 & Recall & 72.3 & 84.5 & 92.4 & 86.5 & 92.4 & **98.1** \\ & F1-score & 82.4 & 88.5 & 91.1 & 89.0 & 91.2 & **98.0** \\ \hline \multirow{4}{*}{All} & Accuracy & 76.8 & 83.5 & 86.0 & 82.4 & 86.0 & **97.8** \\ & Precision & 74.1 & 80.9 & 89.5 & 79.6 & 81.6 & **97.3** \\ \cline{1-1} & Recall & 82.4 & 87.7 & 85.3 & 87.2 & 94.4 & **97.7** \\ \cline{1-1} & F1-score & 78.1 & 84.2 & 87.3 & 83.2 & 87.6 & **97.5** \\ \hline \end{tabular} \end{table} Table 7: Test performance of various models using CodeBERT embedding method on VulDeePecker dataset \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Metrics & RNN & LSTM & BiLSTM & GRU & BGRU & XGV-BERT \\ \hline \multirow{4}{*}{Library/API} & Accuracy & 88.4 & 91.5 & 93.0 & 92.1 & 92.8 & **97.2** \\ & Precision & 81.8 & 85.6 & 87.9 & 86.5 & 87.5 & **94.4** \\ & Recall & **99.8** & 99.7 & 99.7 & 99.7 & 99.7 & 92.5 \\ & F1-score & 89.5 & 92.1 & **93.4** & 92.7 & 93.2 & **93.4** \\ \hline \multirow{4}{*}{Array Usage} & Accuracy & 87.3 & 91.3 & 91.9 & 91.1 & 92.2 & **98.6** \\ & Precision & 79.9 & 85.4 & 86.4 & 94.9 & 
86.6 & **96.7** \\ & Recall & **99.8** & 99.7 & 99.6 & 99.9 & 99.7 & 97.9 \\ & F1-score & 88.7 & 92.0 & 92.5 & 91.8 & 92.7 & **97.3** \\ \hline \multirow{4}{*}{Pointer Usage} & Accuracy & 89.4 & 94.2 & 94.6 & 94.0 & 94.6 & **99.7** \\ & Precision & 82.7 & 89.8 & 90.7 & 89.6 & 90.6 & **98.5** \\ & Recall & 99.6 & **99.7** & 99.4 & 99.6 & 99.5 & 97.5 \\ & F1-score & 90.4 & 94.5 & 94.9 & 94.3 & 94.9 & **98.0** \\ \hline \multirow{4}{*}{Arithmetic Expression} & Accuracy & 78.9 & 84.5 & 88.7 & 85.4 & 87.6 & **95.5** \\ & Precision & 72.1 & 76.5 & 81.6 & 77.5 & 80.6 & **90.1** \\ & Recall & 94.3 & 99.5 & **99.7** & **99.7** & 98.9 & 96.3 \\ & F1-score & 81.7 & 86.5 & 89.8 & 87.2 & 88.8 & **92.8** \\ \hline \multirow{4}{*}{All Dataset} & Accuracy & 88.4 & 93.3 & 93.8 & 93.0 & 93.9 & **97.8** \\ & Precision & 81.4 & 88.3 & 89.1 & 87.8 & 89.4 & **94.8** \\ \cline{1-1} & Recall & 99.7 & **99.8** & 99.7 & 99.7 & 99.7 & 96.2 \\ \cline{1-1} & F1-score & 89.6 & 93.7 & 94.1 & 93.4 & 94.3 & **95.5** \\ \hline \end{tabular} \end{table} Table 8: Test performance of various models using CodeBERT embedding method on SySeVR dataset words present in code fragments. Such embeddings empower the model to represent each variable uniquely, contingent on its positional context. Furthermore, XGV-BERT integrates CodeBERT with the advanced Graph Convolutional Network (GCN) deep learning paradigm. A salient feature of GCNs is their adeptness at assimilating contextual intelligence from elaborate graph formations. These networks intrinsically evolve context-sensitive attributes, obviating the necessity for labor-intensive feature crafting. Significantly, GCNs excel in identifying multi-layered contextual associations by analyzing not only the immediate context of a given entity but also the surrounding environment of its neighboring entities and their interconnections.
This intrinsic property renders GCNs exceptionally well equipped for apprehending multifaceted dependencies within graph-centric data, thereby bolstering their utility across diverse applications. Such an amalgamation augments the depth of learning and information extraction from the many segments inherent in source code. Our framework can help cybersecurity experts detect errors and vulnerabilities in software programs automatically with high accuracy. The experimental results on the two benchmark datasets, VulDeePecker and SySeVR, demonstrate the effectiveness of the proposed framework in improving the performance and detection accuracy of DL-based vulnerability detection systems. In the future, we aim to enhance the source code extraction framework. Our primary objective is to refine the granularity of vulnerability detection. At present, our system operates at the slice level, focusing on multiple semantically interrelated lines of code. Additionally, we aspire to expand our vulnerability detection capabilities to diverse programming languages, as the current framework is limited to extracting information solely from C/C++ source code.

## Acknowledgment

This research was supported by The VNUHCM-University of Information Technology's Scientific Research Support Fund.
2303.18016
Modeling meteorite craters by impacting melted tin on sand
To simulate the heated exterior of a meteorite, we impact a granular bed with melted tin. The morphology of tin remnant and crater is found to be sensitive to the temperature and solidification of tin. By employing deep learning and convolutional neural network, we can quantify and map the complex impact patterns onto network systems based on feature maps and Grad-CAM results. This gives us unprecedented details on how the projectile deforms and interacts with the granules, which information can be used to trace the development of different remnant shapes. Furthermore, full dynamics of granular system is revealed by the use of Particle Image Velocimetry. Kinetic energy, temperature and diameter of the projectile are used to build phase diagrams for the morphology of both crater and tin remnant. In addition to successfully reproducing key features of simple and complex craters, we are able to detect a possible artifact when compiling crater data from field studies. The depth of craters from high-energy impacts in our work is found to be independent of their width. However, when mixing data from different energy, temperature and diameter of projectile, a bogus power-law relationship appears between them. Like other controlled laboratory researches, our conclusions have the potential to benefit the study of paint in industry and asteroid sampling missions on the surface of celestial bodies.
H. Y. Huang, P. R. Tsai, C. Y. Lu, H. Hau, Y. L. Chen, Z. T. Ling, Y. R. Wu, Tzay-Ming Hong
2023-03-31T12:47:54Z
http://arxiv.org/abs/2303.18016v1
# Modeling meteorite craters by impacting melted tin on sand

###### Abstract

To simulate the heated exterior of a meteorite, we impact a granular bed with melted tin. The morphology of the tin remnant and crater is found to be sensitive to the temperature and solidification of the tin. By employing deep learning and a convolutional neural network, we can quantify and map the complex impact patterns onto network systems based on feature maps and Grad-CAM results. This gives us unprecedented details on how the projectile deforms and interacts with the granules, which can be used to trace the development of different remnant shapes. Furthermore, the full dynamics of the granular system is revealed by the use of Particle Image Velocimetry. Kinetic energy, temperature, and diameter of the projectile are used to build phase diagrams for the morphology of both the crater and the tin remnant. In addition to successfully reproducing key features of simple and complex craters, we are able to detect a possible artifact when compiling crater data from field studies. The depth of craters from high-energy impacts in our work is found to be independent of their width. However, when mixing data from different energies, temperatures, and diameters of the projectile, a bogus power-law relationship appears between them. Like other controlled laboratory research, our conclusions have the potential to benefit the study of paint in industry and asteroid sampling missions on the surfaces of celestial bodies.

## I Introduction

Meteorite impact events have appeared in many disaster movies. However, the Chelyabinsk meteor impact event and the 2019 OK asteroid sweeping past the Earth remind us that meteorite impacts remain a menace to life on Earth. Since the days of Galileo, geologists and astronomers have accumulated a great deal of knowledge of meteorite craters via telescopes and field studies.
This information can help us not only prepare for and mitigate potential risks posed by future impacts, but also understand the history and geology of habitable exoplanets. Besides, impact craters provide valuable insights into the formation and location of recoverable mineral resources both on Earth and beyond. Starting roughly 30 years ago, scientists began to smash various projectiles onto a granular bed to simulate the impact event [1; 2; 3]. This yields precious information, especially on how the impact energy \(E\), proportional to the drop height \(h\), affects the width \(w\) and depth \(d\) of the crater. Roughly proportional to \(d\), \(w\) scales as \(E^{\alpha}\) with \(\alpha=0.25\)[4] or 0.17 [5], depending on whether a hard (e.g., steel ball) or deformable (water droplet) bullet is used. Meanwhile, it was found that some specific features, e.g., the central peak in complex craters, are better reproduced by adopting a granular projectile [6]. As opposed to field studies, knowledge accumulated from controlled laboratory research like this also benefits the study of soil erosion in agriculture [7; 8], paint in industry [9], and asteroid sampling missions on the surfaces of celestial bodies [10; 11]. It is thus important to continue laboratory experiments of impacting granular beds. The important role of heat transfer and solidification for melted tin impacting a solid surface has been discussed [12] to gain insight into thermal spraying and additive manufacturing, and extreme ultraviolet lithography in chip production. In this Letter we adopt melted tin droplets as the bullet to better simulate the high-temperature environment typical of meteorite impact events. Not limited to \(h\) in Fig. 1, the tin remnant also changes its shape with temperature \(T\). This information will be summarized in diagrams and shown to be correlated with the morphology of the crater and tin remnant.
As in previous studies [13], we also investigate \(w(E)\) and \(d(E)\) to see whether the solidification of the melted tin creates any new features. Particle image velocimetry (PIV) and image processing are employed to analyze the flow field of the granules, while image processing enables us to project the tin shape onto a 2-D plane and witness its deformation in real time. Additionally, we use deep learning, namely a convolutional neural network (CNN) and reverse back-propagation Grad-CAM, to detect features of the tin during the impact process and discuss their network properties.

## II Experimental setup

As shown in Fig. 2(a), our setup is typical of impact experiments [14], except that it comprises an elevated station with two soldering irons controlled at \(T=300\), \(340\), \(380\), \(420\), and \(460\)\({}^{\circ}\)C, respectively. The spacing and relative orientation between the irons can be adjusted to produce tin droplets of mass 0.30, 0.32, 0.34, 0.36, and 0.38 g, or equivalently 6.4, 6.5, 6.6, 6.8, and 6.9 mm in diameter \(R\). For heights \(h\) of release from 0.05 to 1 m, the decrease in temperature during free fall is negligible. To eliminate the effect of granular compaction, we follow the practice of Walsh by reloading the sand and scraping off excess sand with a ruler to flatten its surface before each trial [1]. The impacts are characterized by the inverse Froude number \(\mathrm{Fr}^{-1}=gD_{b}/2v^{2}\), the ratio of gravitational to dynamic pressures [1]. Here \(g\) denotes the gravitational acceleration, \(D_{b}\) the projectile diameter, and \(v\) the impact velocity. In our system, \(10^{-3}\leq\mathrm{Fr}^{-1}\leq 10^{-2}\) overlaps with \(10^{-6}\leq\mathrm{Fr}^{-1}\leq 10^{-2}\) for meteorite craters, which justifies the analogy to astronomical impact events [3].

Figure 1: (a\(\sim\)d) Snapshots of melted tin of \(R=6.4\) mm at 380 °C impacting sand for \(h=100\), 250, 400, and 700 mm. Scale bars: 6.0 mm.
False colors are used as an aid in visualizing features. The shape of the tin remnant, highlighted by dark solid lines, is categorized into sphere, disk, shotgun, and snowflake, as labelled by red circle, black rhombus, green square, and blue triangle.

Figure 2: (a) Schematic of experimental setup. (b) Phase diagram with \(h\) and \(T\) as axes shows the morphology of impact crater created by melted tin of \(R=6.4\) mm can mimic either letter U (in blue) or V (brown). Green area denotes an intermediate phase, and labels for the remnant shape follow those in Fig. 1. (c) Similar phase diagram with \(h\) and \(R\) at 380 °C. (d) Schematic profile of U- and V-shape craters. (e) Shape ratio \(r_{s}\) vs \(h\) at 300 and 420 °C, represented by red square and black rhombus.

## III Experimental results

The morphology of melted tin in Fig. 1 is recorded by a high-speed camera at 8146 fps. Irrespective of \(h\), the process can be roughly separated into four stages based on the configuration of the tin droplet: (1) the circumference increases with time but remains approximately circular, (2) an irregular shape emerges, (3) the boundary retracts, and (4) a static configuration is reached. As the impact energy increases, the final shape of the tin remnant can roughly be categorized into four shapes: sphere, disk, shotgun, and snowflake, as illustrated in Fig. 1. The sequence of appearance of these shapes can be understood physically. Since the distortion of the tin melt raises its surface energy, we expect the aspect ratio of the disk to increase with impact energy. As the thickness of the disk reaches the threshold of the Rayleigh-Plateau (RP) instability [15], clumping ensues and produces many small and smooth tin balls in the stage-4 photo of Fig. 1(c), which mimics the shotgun pattern. As more kinetic energy is fueled in, the fast spreading of the melted tin hastens the conduction of its heat to the sand bed. As a result, the transition from the liquid to the solid phase precedes and prohibits the RP instability.
This creates a splash-like [16] irregular configuration in one piece, which we term snowflake in Fig. 1(d). ### Phase Diagrams for Morphology #### iii.1.1 Tin Remnants Figures 2(b, c) show the phase diagrams for the morphology of both the crater and tin remnant as a function of parameters (\(T\), \(h\)) and (\(R\), \(h\)). At low impact energies or for \(h=50\)\(\sim\)250 mm, a higher \(T\) renders the melted tin more susceptible to flattening upon impact. At slightly higher energies or \(h=400\)\(\sim\)500 mm, \(h_{c}\) roughly marks the emergence of the shotgun pattern at high temperatures, again because the fluid behavior is less restricted, which allows the RP instability to enter the picture. Within the same scenario that high temperature promotes fluidity, the transition from shotgun to snowflake around \(h_{s}\) is analogous to the effect of increasing \(h\) at high energies. We do not understand why two shapes can be observed in some intervals of \(R\) in Fig. 2(c), as marked by crosses, daggers, and stars. The trend of the phase boundary in Fig. 2(c) for remnant shape at low energies mimics that in Fig. 2(b). It is because the dissipation rate per unit mass is proportional to the surface-area-to-volume ratio, which for a sphere drops as \(R\) increases. A larger melted tin thus gets to retain its malleability longer and better. This effect of preferring disk over sphere at low impact energy is similar to raising \(T\). Ostensibly, the trends of \(h_{c}\) and \(h_{s}\) in Fig. 2(b) are both reversed in Fig. 2(c). Their physical explanations are in fact similar and consistent. The reduction in thickness as we decrease \(R\) favors the occurrence of RP instability at medium \(h\), while making it easier to lose heat and solidify before RP occurs at high \(h\) - both effects are similar to those by increasing \(T\). For complex meteorite craters, due to extremely large mass, more energy will transform into heat during the impact process [17]. 
As a result, the material on the surface of celestial bodies will change to liquid or gas phases. It was observed that the edge of these large craters is so steep that it often collapses under its own weight and concentrates at the bottom of craters [18]. This is one of several mechanisms of the central peak in the majority of complex craters [19]. Note that the spherical shape reappears at higher \(h\) in Fig. 2(b, c), which is associated with a steeper impact crater in stage 2 and will be denoted by sphere-II. The appearance of this shape is always accompanied by a quasi-bump that forms in the middle of the crater as the sand on the wall and rim of the transient cavity collapses and collides at the center in stage 3. This mimics the central-peak structure in complex craters. #### iii.1.2 Crater We first try to reproduce the general observation that the complex craters tend to be more plate-like with \(d/w=0.05\sim 0.1\), as opposed to bowl-shaped with \(d/w=0.14\sim 0.2\) for simple ones [19]. With laser displacement sensors and a homemade translation stage, we can obtain detailed information on the profile of our impact crater. In Fig. 2(b, c), two critical heights, \(h_{c}\) and \(h_{s}\), are defined to manifest the shape change of tin remnant. Physically we expect the transition to leave its mark on the morphology of crater and, thus, introduce the shape ratio \(r_{s}\equiv w_{r}/w\) to quantify the profile, where \(w_{r}\) is the distance between inflection points as shown in Fig. 2(d). According to Fig. 2(e), it turns out that \(h<h_{c}\) indeed corresponds to a smaller \(r_{s}\), i.e., resembling a V-shape or a bowl. On the other hand, U-shape or plate-like craters are more likely to occur at \(h>h_{s}\). This is consistent with the distinctive shapes for simple and complex craters. 
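The shape ratio defined above can be extracted programmatically from a measured profile. The sketch below uses a made-up Gaussian crater profile and a 5% rim criterion of our own choosing (the paper does not state its exact rim definition); the inflection points are located through sign changes of a numerical second derivative:

```python
import numpy as np

def shape_ratio(x, depth):
    """r_s = w_r / w for a crater profile depth(x): w_r is the distance
    between the two inflection points (sign changes of the numerical
    second derivative); w is the crater width, taken here as the span
    where the depth exceeds 5% of its maximum (an assumed rim criterion)."""
    d2 = np.gradient(np.gradient(depth, x), x)
    flips = np.where(np.diff(np.sign(d2)) != 0)[0]
    w_r = x[flips[-1]] - x[flips[0]]
    inside = np.where(depth > 0.05 * depth.max())[0]
    w = x[inside[-1]] - x[inside[0]]
    return w_r / w

x = np.linspace(-20, 20, 2001)                  # lateral position (mm)
depth = 8.0 * np.exp(-x**2 / (2 * 5.0**2))      # synthetic bowl-like profile
r_s_val = shape_ratio(x, depth)
print(r_s_val)   # smaller r_s -> more V-like (bowl-shaped) crater
```

For this Gaussian test profile the inflection points sit one standard deviation from the center, so the returned value can be checked analytically.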
We speculate that this dichotomy in shape has to do with the viscoelastic property of sand and soil [2] - a high-energy collision triggers more elastic response that is more effective at halting the forward motion of tin and forcing it to spread out horizontally. ### Deep-learning-augmented Analysis #### iii.2.1 Expanded Area Projection and PIV In order to confirm our speculations in Sec. III.A.1 regarding the different shapes of tin remnant, we employ image processing techniques and a high-speed camera to measure the top-view area \(A\) of tin at successive times in Fig. 3(a). Note that the exhibition of a peak value for all lines signals the time beyond which the melted tin starts to retract. Two things are worth mentioning. First, the maximum \(A\) value of sphere-II is smaller than that of sphere. This is against our intuition because the former is dropped from a higher \(h\). It turns out that sphere-II causes a steeper cavity in stage 2, which hinders the spread of tin. Second, there exists a plateau immediately following the peak for shotgun and snowflake. This is ascribed to the faster solidification due to their lesser thickness when spread out to a larger area. As for the sudden drop following the plateau for shotgun, it is due to the breakup of melted tin into several small spheres. Figure 4: (a) Grad-CAM results for different shapes of tin remnant. Network density \(D\) vs \(L\) are shown in (b\(\sim\)e) for stage 1\(\sim\)4 respectively where the symbols are defined in (a). Figure 3: (a) The top-view area \(A\) of tin remnant versus time \(t\) for sphere, sphere-II, disk, shotgun, and snowflake shapes is denoted by red circle, orange triangle, black rhombus, green square, and blue triangle, respectively. (b) The total kinetic energy \(K\) of sand for different tin shapes is plotted as a function of \(t\). 
(c) PIV results for snapshots show the evolution of melted tin with \(R=6.4\) mm, 380 \({}^{\circ}\)C and \(h=50\) mm, with velocity colorbars for sand on the left and tin on the right. Image of tin remnant after texture segmentation [20] is inserted on the upper right corner of each photo. Using PIV analysis and image processing, we can measure the speed of moving granules and calculate their total kinetic energy \(K\), as shown in Fig. 3(b). Physically \(K\) has to compete with the surface energy of melted tin for the energy converted from the gravitational potential during the impact. The reason why shotgun allocates less \(K\) to sand than other shapes from a lower height is that the surface energy already gets the lion's share. Aside from the leftmost image that illustrates the initial impact, the successive images in Fig. 3(c) correspond to the four stages. #### iii.2.2 Network of Tin Remnants Creating the equivalent network for the tin remnant allows us to distinguish the highly complex pattern of tin remnants quantitatively and more objectively. This is made possible by employing CNN and Grad-CAM [21]. The structure and learning curve of CNN can be found in Ref. [20]. We adopt the features, used by Grad-CAM in distinguishing remnant shapes, to create distance-based networks [20]. The Grad-CAM results are shown in Fig. 4(a). Based on the network density \(D\) and the average distance \(L\) between nodes, we are able to not only pinpoint when the morphology starts to develop different shapes, but also quantify their relative proximity. According to Fig. 4(b\(\sim\)e), we learn that the shape branches off in stage 3 or 4. In contrast to Fig. 2(b,c) using \(h\), \(T\) and \(R\) as parameters, the relative position of shapes in Fig. 4(e) is based on the network properties. The proximity of shapes, say, disk and sphere-II, disk and shotgun, and shotgun and snowflake, revealed by both figures is consistent. 
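The two network observables used here, density \(D\) and average distance \(L\), are standard graph quantities and can be sketched independently of the CNN pipeline. The node coordinates and distance cutoff below are invented for illustration (the paper's nodes come from Grad-CAM features):

```python
import numpy as np
from collections import deque

def network_metrics(points, cutoff):
    """Distance-based network: an edge joins two nodes closer than `cutoff`.
    Returns the density D = 2E / (N(N-1)) and the mean shortest-path
    length over connected node pairs (BFS, unit edge weights)."""
    pts = np.asarray(points, float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    adj = (dist < cutoff) & ~np.eye(n, dtype=bool)
    density = adj.sum() / (n * (n - 1))        # directed count equals 2E
    total = pairs = 0
    for s in range(n):                          # BFS from every node
        hops = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in hops:
                    hops[v] = hops[u] + 1
                    queue.append(v)
        total += sum(hops.values())
        pairs += len(hops) - 1
    return density, total / pairs

# four hypothetical feature points; edges: pairs closer than 1.5
D, L_avg = network_metrics([(0, 0), (1, 0), (2, 0), (0, 1)], cutoff=1.5)
print(D, L_avg)
```

With these toy coordinates the graph has 4 of the 6 possible edges, so \(D=2/3\), and the mean pairwise hop count is \(4/3\).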
What is surprising is that snowflake and sphere-II that are separated by a third shape in Fig. 2(b,c) turn out to share a very similar network, while the opposite happens to sphere and disk, i.e., bordering shapes in Fig. 2(b,c) turn out to drift apart in Fig. 4(e). ### Diameter and Depth of Crater Figure 5(a\(\sim\)d) reveals how the width \(w\) and depth \(d\) of craters are determined by the impact energy or equivalently the height \(h\). Consistent with previous studies, \(w(h)\) is found to obey a power-law relation, i.e., \(w\propto h^{\alpha}\)[22]. However, in contrast to \(\alpha\)=0.17 for water droplet [23] and 0.25 for steel ball [4], both values occur in Fig. 5(a, c), depending on whether \(h\) is smaller or greater than \(h_{c}\) as defined in Fig. 2(b, c). This change of behavior is reminiscent of the observation for granular ball [6], except that the latter exhibits a discontinuous increase and the \(\alpha\) on both sides are close to 0.25. The \(d\propto h^{\beta}\) is monotonically increasing with \(\beta=1/3\)[24] or 0.25 [4] for steel ball and \(\beta=0.17\)[23] for water droplet, but exhibits a discontinuous drop for granular ball [6]. In Fig. 5(b, d) for melted tin, \(d(h)\) is not monotonic - it increases at \(h<h_{c}\), decreases continuously when \(h_{c}<h<h_{s}\), and finally saturates at \(h>h_{s}\). Note that the data points in Fig. 5(b, d) for melted tin at \(h<h_{c}\) are higher than those for steel ball with the same weight. Figure 5: (a) Crater width \(w\) vs \(h\) for steel balls (in black circle) and melted tin with 300, 340, 380, 420 and 460 °C (red star, orange triangle, yellow square, green rhombus and cyan triangle). (b) Crater depth \(d\) vs \(h\) with the same symbols as (a). (c) \(w\) vs \(h\) for steel balls (black circle) and melted tin with 6.36, 6.5, 6.63, 6.76 and 6.88 mm (red star, orange triangle, yellow square, green rhombus and cyan triangle). (d) \(d\) vs \(h\) with the same symbols as (c). The insets in (a\(\sim\)d) show the same data in full-log plots. (e) \(d\) vs \(w\) for V- (red square) and U-shape craters (black rhombus). At first glance, this is counter-intuitive because melted tin ought to have less energy to dig the crater while expanding its area at the same time. The answer to this puzzle is that the crater is largely formed in stage 2 before melted tin has fully retracted to become a sphere or disk. It was checked [20] that while the bottom of this V-shape tin sheet or, equivalently, \(d\) for the crater is deeper than for steel ball, their centers of mass are in fact at roughly the same depth in stage 2. This explanation is consistent with the fact that the discrepancy of \(d\) for melted tin and steel ball increases with temperature. By compiling the data in Fig. 5(a\(\sim\)d), we are able to obtain \(d\propto w^{0.61}\) for \(h>h_{s}\) in Fig. 5(e), which is in line with the consensus that there exists a power law and the exponent is greater than 0, but less than 1 for complex meteorite craters [25]. However, this consistency is alarming because our data points for \(h>h_{s}\) in fact consist of several constant functions of \(w\). In other words, the non-zero exponent in Fig. 5(e) is spurious because it is an artifact of compiling data from different \(T\) and \(R\), which inevitably is the case for meteorite scholars. ## IV Conclusion and Discussions In contrast to solid and liquid projectiles used in previous studies of impact experiments, we adopt melted tin to better simulate the high temperature involved in real meteorite events. Modifying tin temperature enables us to discern V- and U-shape craters, analogous to the distinction of simple and complex craters in real events. Similar to previous research, we also found the crater width to follow a power-law relation with the impact energy, except that low and high-energy impacts turn out to exhibit different exponents. 
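The pooling artifact discussed above is easy to reproduce with synthetic numbers (all values below are made up, not the measured data): within each group the depth \(d\) is exactly constant in \(w\), yet pooling groups whose constant levels and \(w\) ranges shift together produces a nonzero log-log slope:

```python
import numpy as np

rng = np.random.default_rng(0)
# each tuple mimics one fixed (T, R): depth saturates at d0 while the
# width w still spans its own range (illustrative numbers, in mm)
groups = [(8.0, 30, 45), (10.0, 40, 60), (13.0, 55, 80)]
data = [(rng.uniform(lo, hi, 20), np.full(20, d0)) for d0, lo, hi in groups]

# per-group fit: d really is constant, so every slope is (numerically) zero
group_slopes = [np.polyfit(np.log(w), np.log(d), 1)[0] for w, d in data]
print(group_slopes)

# pooled fit: a spurious "power-law" exponent between 0 and 1 appears
w_all = np.concatenate([w for w, _ in data])
d_all = np.concatenate([d for _, d in data])
slope = np.polyfit(np.log(w_all), np.log(d_all), 1)[0]
print(slope)
```

The pooled exponent depends only on how the constant levels co-vary with the \(w\) ranges, not on any genuine \(d(w)\) relation, which is the misinterpretation the paragraph warns against.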
The fact that the correlation of depth and width is lost as the depth saturates at high energies allows us to uncover a possible misinterpretation when combining data from different parameters, as is inevitable in field studies. Specifically, we were misled to conclude that depth vs width follows a power law when data from different tin diameters and temperatures are plotted on the same graph, which in fact consists of several constant functions of \(d(w)\). To aid the analysis and visualization of the complex interactions between melted tin and granules, we employed a couple of technical methods, including deep learning and image processing. Mapping the complex patterns generated by tin in its phase transition to a network allows us to quantitatively compare and elucidate the solidification process. We expect our conclusions are not unique to the use of melted tin, but general to other substances that experience a similar phase transition during the impact process, such as thermoplastic materials. It is our hope that the advantages of recruiting such a material will encourage more applications. For instance, the fluid-solidification mechanisms in porous granules are highly applicable to additive manufacturing or metal production processes [26] and to extreme ultraviolet (EUV) lithography in chip production [27], where molten tin is used to generate EUV radiation. In a nutshell, our work is not limited to academic interests, but also of potential industrial applications.
2309.12206
BOMs Away! Inside the Minds of Stakeholders: A Comprehensive Study of Bills of Materials for Software Systems
Software Bills of Materials (SBOMs) have emerged as tools to facilitate the management of software dependencies, vulnerabilities, licenses, and the supply chain. While significant effort has been devoted to increasing SBOM awareness and developing SBOM formats and tools, recent studies have shown that SBOMs are still an early technology not yet adequately adopted in practice. Expanding on previous research, this paper reports a comprehensive study that investigates the current challenges stakeholders encounter when creating and using SBOMs. The study surveyed 138 practitioners belonging to five stakeholder groups (practitioners familiar with SBOMs, members of critical open source projects, AI/ML, cyber-physical systems, and legal practitioners) using differentiated questionnaires, and interviewed 8 survey respondents to gather further insights about their experience. We identified 12 major challenges facing the creation and use of SBOMs, including those related to the SBOM content, deficiencies in SBOM tools, SBOM maintenance and verification, and domain-specific challenges. We propose and discuss 4 actionable solutions to the identified challenges and present the major avenues for future research and development.
Trevor Stalnaker, Nathan Wintersgill, Oscar Chaparro, Massimiliano Di Penta, Daniel M German, Denys Poshyvanyk
2023-09-21T16:11:00Z
http://arxiv.org/abs/2309.12206v2
# BOMs Away! Inside the Minds of Stakeholders: A Comprehensive Study of Bills of Materials for Software Systems ###### Abstract Software Bills of Materials (SBOMs) have emerged as tools to facilitate the management of software dependencies, vulnerabilities, licenses, and the supply chain. While significant effort has been devoted to increasing SBOM awareness and developing SBOM formats and tools, recent studies have shown that SBOMs are still an early technology not yet adequately adopted in practice. Expanding on previous research, this paper reports a comprehensive study that investigates the current challenges stakeholders encounter when creating and using SBOMs. The study surveyed 138 practitioners belonging to five stakeholder groups (practitioners familiar with SBOMs, members of critical open source projects, AI/ML, cyber-physical systems, and legal practitioners) using differentiated questionnaires, and interviewed 8 survey respondents to gather further insights about their experience. We identified 12 major challenges facing the creation and use of SBOMs, including those related to the SBOM content, deficiencies in SBOM tools, SBOM maintenance and verification, and domain-specific challenges. We propose and discuss 4 actionable solutions to the identified challenges and present the major avenues for future research and development. Software Bill of Materials, Survey, Interviews, Software Supply Chain, Open Source Software + Footnote †: journal: Software Engineering (ICSE ’24), April 14–20, 2024, Lisbon, Portugal.
2309.14111
Non-Hermitian Mott Skin Effect
We propose a novel type of skin effects in non-Hermitian quantum many-body systems which we dub a non-Hermitian Mott skin effect. This phenomenon is induced by the interplay between strong correlations and the non-Hermitian point-gap topology. The Mott skin effect induces extreme sensitivity to the boundary conditions only in the spin degree of freedom (i.e., the charge distribution is not sensitive to boundary conditions), which is in sharp contrast to the ordinary non-Hermitian skin effect in non-interacting systems. Concretely, we elucidate that a bosonic non-Hermitian chain exhibits the Mott skin effect in the strongly correlated regime by closely examining an effective Hamiltonian. The emergence of the Mott skin effect is also supported by numerical diagonalization of the bosonic chain. The difference between the ordinary non-Hermitian skin effect and the Mott skin effect is also reflected in the time-evolution of physical quantities; under the time-evolution spin accumulation is observed while the charge distribution remains spatially uniform.
Tsuneya Yoshida, Song-Bo Zhang, Titus Neupert, Norio Kawakami
2023-09-25T13:10:07Z
http://arxiv.org/abs/2309.14111v3
# Non-Hermitian Mott Skin Effect ###### Abstract We propose a novel type of skin effects in non-Hermitian quantum many-body systems which we dub a non-Hermitian Mott skin effect. This phenomenon is induced by the interplay between strong correlations and the non-Hermitian point-gap topology. The Mott skin effect induces extreme sensitivity to the boundary conditions only in the spin degree of freedom (i.e., the charge distribution is not sensitive to boundary conditions), which is in sharp contrast to the ordinary non-Hermitian skin effect in non-interacting systems. Concretely, we elucidate that a bosonic non-Hermitian chain exhibits the Mott skin effect in the strongly correlated regime by closely examining an effective Hamiltonian. The emergence of the Mott skin effect is also supported by numerical diagonalization of the bosonic chain. The difference between the ordinary non-Hermitian skin effect and the Mott skin effect is also reflected in the time-evolution of physical quantities; under the time-evolution spin accumulation is observed while the charge distribution remains spatially uniform. _Introduction-._ Since the discovery of topological insulators, extensive efforts have been devoted to understanding topological aspects of condensed matter systems [1; 2; 3; 4; 5; 6; 7; 8; 9]. While topological insulators are originally reported for free fermions, it has turned out that the interplay between strong correlations and non-trivial topology triggers further exotic phenomena [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. For instance, strong correlations can induce fractional quantum Hall states [10; 11; 15; 21; 22; 23; 24; 25; 26]. Furthermore, topological Mott insulators exhibit the unique bulk-edge correspondence due to the interplay between correlations and the non-trivial topology [34; 35; 36]. 
Namely, corresponding to the non-trivial topology in the bulk, gapless edge modes emerge only in the spin excitation spectrum (i.e., the charge excitation spectrum is gapped even around edges). Along with the above progress, the topological band theory of non-Hermitian systems has been developed [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67] and revealed unique phenomena due to the non-Hermitian point-gap topology which do not have Hermitian counterparts [68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. A representative example is the non-Hermitian skin effect [83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105] which results in the novel bulk-edge correspondence unique to non-Hermitian systems; because of the non-trivial point-gap topology in the bulk, the eigenstates and eigenvalues exhibit extreme sensitivity to the presence or absence of boundaries [91; 92]. Especially, in one-dimensional systems, most of the eigenstates are localized only around one of the edges under open boundary conditions (OBC), while eigenstates extend to the bulk under periodic boundary conditions (PBC). The above localized eigenstates are known as skin modes. The above progress on correlated systems and the non-interacting non-Hermitian topology naturally leads us to the following crucial question: _how do strong correlations affect the non-Hermitian skin effects?_ The significance of this issue is further enhanced by recent advances in experiments with cold atoms [106; 107; 108] and quantum circuits [109] where both dissipation and correlations can be introduced. 
Correlation effects on the non-Hermitian topological properties have been studied extensively [110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140], but their interplay with non-Hermitian skin effects remains largely unexplored, particularly for bosonic systems. In this paper, we address this question for a one-dimensional bosonic system and discover a novel type of skin effects, a _non-Hermitian Mott skin effect_, which induces striking skin modes. Namely, the non-trivial point-gap topology results in the skin modes in which only the spin degree of freedom is involved (i.e., charges are distributed uniformly even under OBC). This behavior is in sharp contrast to non-Hermitian skin effects in non-interacting systems where bosons are localized around the edge. We elucidate the emergence of the Mott skin effect by examining an effective spin model in the strong correlation regime, which hosts skin modes and possesses the non-trivial point-gap topology characterized by the spin winding number. We also support the emergence of the Mott skin effect by employing numerical diagonalization. Our numerical analysis also elucidates unique real-time dynamics induced by the Mott skin effect; dynamical spin accumulation is observed while the charge distribution remains spatially uniform. _Model-._ Let us consider a one-dimensional chain of interacting bosons. 
The Hamiltonian reads \[\hat{H} = \hat{H}_{0}+\hat{H}_{\rm int}, \tag{1}\] \[\hat{H}_{0} = \sum_{j\sigma}(-t_{{\rm R}\sigma}\hat{b}^{\dagger}_{j+1\sigma}\hat{b}_{j\sigma}-t_{{\rm L}\sigma}\hat{b}^{\dagger}_{j\sigma}\hat{b}_{j+1\sigma}), \tag{2}\] \[\hat{H}_{\rm int} = -iV\sum_{j\sigma}\hat{n}_{j\sigma}(\hat{n}_{j\sigma}-1)-iU\sum_{j}\hat{n}_{j\uparrow}\hat{n}_{j\downarrow}, \tag{3}\] where \(\hat{b}^{\dagger}_{j\sigma}\) (\(\hat{b}_{j\sigma}\)) creates (annihilates) a boson at site \(j=0,1,2,\ldots,L-1\) and in the spin state \(\sigma=\uparrow,\downarrow\)[141]. The non-reciprocal hopping integrals are denoted by \(t_{{\rm R}\uparrow}=t_{{\rm L}\downarrow}=t_{+}\) and \(t_{{\rm L}\uparrow}=t_{{\rm R}\downarrow}=t_{-}\) with real numbers \(t_{+}\) and \(t_{-}\). The operator \(\hat{n}_{j\sigma}\) denotes the number operator of bosons of spin \(\sigma\) at site \(j\); \(\hat{n}_{j\sigma}:=\hat{b}^{\dagger}_{j\sigma}\hat{b}_{j\sigma}\). The first (second) term of \(\hat{H}_{\rm int}\) describes the on-site interaction between bosons with the same (opposite) spin. The parameters \(V\) and \(U\) are non-negative numbers. Throughout this paper, we suppose the relation \(t_{+}>t_{-}\). _Symmetry and a topological invariant-._ The above model preserves charge U(1) symmetry and spin U(1) symmetry \[[\hat{H},\hat{N}] = 0, \tag{4}\] \[[\hat{H},\hat{S}^{z}] = 0, \tag{5}\] with \(\hat{N}=\hat{N}_{\uparrow}+\hat{N}_{\downarrow}\) and \(2\hat{S}^{z}=\hat{N}_{\uparrow}-\hat{N}_{\downarrow}\). Here, \(\hat{N}_{\sigma}\) denotes the number operator of bosons in the spin state \(\sigma\); \(\hat{N}_{\sigma}=\sum_{j}\hat{n}_{j\sigma}\) (\(\sigma=\uparrow,\downarrow\)). Therefore, the Hamiltonian can be block-diagonalized into \(N_{\uparrow}\) and \(N_{\downarrow}\) sectors. We denote eigenvalues of \(\hat{N}_{\uparrow}\) and \(\hat{N}_{\downarrow}\) as \(N_{\uparrow}\) and \(N_{\downarrow}\), respectively. 
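The block structure implied by Eqs. (4) and (5) can be made explicit by brute force. The sketch below builds the Hamiltonian of Eqs. (1)-(3) inside the single sector \((N_{\uparrow},N_{\downarrow})=(1,2)\) of a short open chain; the chain length and coupling values are illustrative only, and the twisted boundary term is omitted:

```python
import numpy as np

L, t_plus, t_minus, V, U = 3, 1.0, 0.1, 100.0, 20.0

def configs(L, N):
    """All bosonic occupation tuples of N bosons on L sites."""
    if L == 1:
        return [(N,)]
    return [(n,) + rest for n in range(N + 1) for rest in configs(L - 1, N - n)]

# basis of the (N_up, N_dn) = (1, 2) sector; by the two U(1) symmetries
# the full Hamiltonian never leaves this sector
basis = [(u, d) for u in configs(L, 1) for d in configs(L, 2)]
index = {c: i for i, c in enumerate(basis)}
dim = len(basis)
H = np.zeros((dim, dim), complex)

hops = {'up': (t_plus, t_minus), 'dn': (t_minus, t_plus)}   # (t_R, t_L) per spin
for i, (u, d) in enumerate(basis):
    # diagonal interaction: -iV n(n-1) for equal spins, -iU n_up n_dn
    H[i, i] = sum(-1j * V * n * (n - 1) for n in u + d) \
              - 1j * U * sum(nu * nd for nu, nd in zip(u, d))
    for spin, occ in (('up', u), ('dn', d)):
        tR, tL = hops[spin]
        for j in range(L - 1):                   # OBC bonds (j, j+1)
            for src, dst, amp in ((j, j + 1, tR), (j + 1, j, tL)):
                if occ[src] == 0:
                    continue
                new = list(occ); new[src] -= 1; new[dst] += 1
                k = index[(tuple(new), d) if spin == 'up' else (u, tuple(new))]
                H[k, i] += -amp * np.sqrt(occ[src] * (occ[dst] + 1))

print(dim)   # C(3,1) * C(4,2) = 3 * 6 = 18 states in this sector
```

The resulting matrix is manifestly non-Hermitian, both through the imaginary on-site interactions and through the asymmetry \(t_{+}\neq t_{-}\).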
In order to characterize the point-gap topology [142], we introduce the many-body spin winding number \[W_{\rm s} = \int_{0}^{2\pi}\frac{d\theta_{\rm s}}{2\pi i}\frac{\partial}{ \partial\theta_{\rm s}}\log\left[\det\left(\hat{H}_{[N_{\uparrow},N_{\downarrow }]}(\theta_{\rm s})-E_{\rm ref}\right)\right], \tag{6}\] which is a variant of the previously introduced many-body winding number [125; 131; 132]. Here, in order to compute \(W_{\rm s}\), we have imposed twisted boundary conditions where the twist angle of the down-spin state \(\theta_{\downarrow}\) is opposite to that of the up-spin state \(\theta_{\uparrow}\)[143]; \(\hat{b}^{\dagger}_{0\sigma}\hat{b}_{L-1\sigma}\to e^{i\theta_{\rm s}}\hat{b} ^{\dagger}_{0\sigma}\hat{b}_{L-1\sigma}\) with \((\theta_{\uparrow},\theta_{\downarrow})=(\theta_{\rm s},-\theta_{\rm s})\) and the twist angle \(\theta_{\rm s}\). For the Fock space specified by \([N_{\uparrow},N_{\downarrow}]\), the block-diagonalized Hamiltonian is denoted by \(\hat{H}_{[N_{\uparrow},N_{\downarrow}]}(\theta_{\rm s})\). _Analysis in the strong coupling region-._ Before presenting numerical results, we derive an effective model which provides an intuition for the Mott skin effect. In the strong coupling region (i.e., \(V,U\gg t_{+},t_{-}\)), longest lived states of the system are described by an effective spin model, i.e., eigenstates shift in the negative direction of the imaginary axis if bosons occupy the same site. 
Applying the second order perturbation theory yields the following effective spin model for the states where each site is occupied by one boson \[\hat{H}_{\rm spin}(\theta_{\rm s}) = \sum_{j=0}^{L-2}\left[J_{z}\hat{S}^{z}_{j+1}\hat{S}^{z}_{j}+J_{+}\hat{S}^{+}_{j+1}\hat{S}^{-}_{j}+J_{-}\hat{S}^{-}_{j+1}\hat{S}^{+}_{j}\right] \tag{7}\] \[+J_{+}e^{2i\theta_{\rm s}}\hat{S}^{+}_{0}\hat{S}^{-}_{L-1}+J_{-}e^{-2i\theta_{\rm s}}\hat{S}^{-}_{0}\hat{S}^{+}_{L-1}+E_{0},\] with \(J_{z}=-4i(t_{+}t_{-})(\frac{1}{V}-\frac{1}{U})\), \(J_{\pm}=-2i\frac{t_{\pm}^{2}}{U}\), and \(E_{0}=-iL(t_{+}t_{-})(\frac{1}{V}+\frac{1}{U})\). Here, \(\hat{S}^{\mu}_{j}\) (\(\mu=x,y,z\)) denotes spin operators at site \(j\). The spin raising and lowering operators are defined as \(S^{\pm}_{j}=S^{x}_{j}\pm iS^{y}_{j}\). In terms of creation and annihilation operators of bosons, the spin operators are written as \(\hat{S}^{z}_{j}=(\hat{n}_{j\uparrow}-\hat{n}_{j\downarrow})/2\), \(\hat{S}^{+}_{j}=\hat{b}^{\dagger}_{j\uparrow}\hat{b}_{j\downarrow}\) and \(\hat{S}^{-}_{j}=\hat{b}^{\dagger}_{j\downarrow}\hat{b}_{j\uparrow}\) for the subspace where each site is occupied by one boson. Details of the derivation are provided in Sec. S1 of Supplemental Material [144]. The above Hamiltonian (7) preserves spin U(1) symmetry and can be block-diagonalized with \(S^{z}\), the eigenvalue of \(\hat{S}^{z}=\sum_{j}\hat{S}^{z}_{j}\). Applying the Jordan-Wigner transformation elucidates that the effective model (7) gives rise to the Mott skin effect (for the detailed derivation, see Sec. S1 of Supplemental Material [144]); the spin model [Eq. 
(7)] can be mapped to the following spinless fermion model \[\hat{H}_{\rm spin}(\theta_{\rm s}) = \sum_{j=0}^{L-2}\left(J_{+}\hat{f}^{\dagger}_{j+1}\hat{f}_{j}+J_{-}\hat{f}^{\dagger}_{j}\hat{f}_{j+1}\right) \tag{8}\] \[-(-1)^{\hat{N}^{\rm f}}\left(J_{+}e^{2i\theta_{\rm s}}\hat{f}^{\dagger}_{0}\hat{f}_{L-1}+J_{-}e^{-2i\theta_{\rm s}}\hat{f}^{\dagger}_{L-1}\hat{f}_{0}\right)\] \[+J_{z}\sum_{j=0}^{L-1}\hat{n}^{\rm f}_{j+1}\hat{n}^{\rm f}_{j}-J_{z}\left(\hat{N}^{\rm f}-\frac{L}{4}\right)+E_{0},\] with \(\hat{N}^{\rm f}=\sum_{j}\hat{n}^{\rm f}_{j}\), \(\hat{S}^{+}_{j}=e^{i\pi\hat{N}^{<}_{j}}\hat{f}^{\dagger}_{j}\), and \(\hat{S}^{z}_{j}=(\hat{n}^{\rm f}_{j}-\frac{1}{2})\). Here, \(\hat{N}^{<}_{j}\) and \(\hat{n}^{\rm f}_{j}\) are defined as \(\hat{N}^{<}_{j}=\sum_{i=0}^{j-1}\hat{n}^{\rm f}_{i}\) and \(\hat{n}^{\rm f}_{j}=\hat{f}^{\dagger}_{j}\hat{f}_{j}\). Operators \(\hat{f}^{\dagger}_{j}\) (\(\hat{f}_{j}\)) create (annihilate) a spinless fermion at site \(j\). In particular, for the subspace with \(S^{z}=1-\frac{L}{2}\) (i.e., \([N_{\uparrow},N_{\downarrow}]=[1,L-1]\)), there exists only one fermion created by \(\hat{f}^{\dagger}_{j}\). Therefore, the above Hamiltonian is simplified as \[\hat{H}_{\rm spin}(\theta_{\rm s}) = \sum_{j=0}^{L-2}\left(J_{+}\hat{f}^{\dagger}_{j+1}\hat{f}_{j}+J_{-}\hat{f}^{\dagger}_{j}\hat{f}_{j+1}\right) \tag{9}\] \[+\left(J_{+}e^{2i\theta_{\rm s}}\hat{f}^{\dagger}_{0}\hat{f}_{L-1}+J_{-}e^{-2i\theta_{\rm s}}\hat{f}^{\dagger}_{L-1}\hat{f}_{0}\right)\] \[+J_{z}\left(\frac{L}{4}-1\right)+E_{0}.\] This model is nothing but the Hatano-Nelson chain [44; 45] which exhibits the skin effect. Specifically, substituting the Hamiltonian (9) into \(\hat{H}_{[1,L-1]}(\theta_{\rm s})\), we obtain the winding number \(W_{\rm s}=2\) for the subspace with \([N_{\uparrow},N_{\downarrow}]=[1,L-1]\). Here, we have set the reference energy \(E_{\rm ref}\) located inside of the loop formed by the eigenvalues. We recall that \(iJ_{+}>iJ_{-}\) holds for \(t_{+}>t_{-}\) [see below Eq. 
(7)]. Corresponding to this non-trivial point-gap topology, skin modes described by fermions created by \(\hat{f}_{j}^{\dagger}\) emerge only around the right edge. We note that the eigenvalues are aligned on the imaginary axis because \(J_{+}\) and \(J_{-}\) are purely imaginary [145]. Recalling the relation between spin operators and operators \(\hat{f}_{j}^{\dagger}\), we can conclude that the system exhibits the Mott skin effect. Namely, only the spin degree of freedom is involved in the skin modes induced by the non-trivial point-gap topology. _Numerical results: non-interacting case-._ In order to numerically analyze this model, we employ exact diagonalization. Unless otherwise noted, we set \((t_{+},t_{-})=(1,0.1)\). In the non-interacting case, the system is decomposed into two bosonic Hatano-Nelson models where the hopping in the right (left) direction is dominant for \(\sigma=\uparrow\) (\(\sigma=\downarrow\)). Such non-reciprocal hoppings result in the ordinary non-Hermitian skin effect as discussed below. In the following, we analyze the many-body Hamiltonian (1) for the Fock space with \((N_{\uparrow},N_{\downarrow})=(1,L-1)\) and \(L=6\) (for analysis of the one-body Hamiltonian, see Sec. S2 of Supplemental Material [144]). Figure 1(a) displays the spectral flow of \(\hat{H}(\theta_{\rm s})\) for \(0\leq\theta_{\rm s}\leq 2\pi\), where \(E_{m}(\theta_{\rm s})\) (\(m=0,1,2,\ldots\)) are eigenvalues of \(\hat{H}(\theta_{\rm s})\). In this figure, we can see that the spectral flow forms the loop structure, which indicates the point-gap topology characterized by the many-body spin winding number taking a non-zero value for \(E_{\rm ref}=-0.01i\) (for more details, see Sec. S3 of Supplemental Material [144]). Corresponding to this non-trivial point-gap topology, the spectrum shows extreme sensitivity to the boundary conditions. Imposing OBC significantly changes the spectrum of \(\hat{H}\); as shown in Fig. 
1(a) the eigenvalues under OBC are aligned on the real axis in contrast to the eigenvalues under PBC. Eigenstates also show such sensitivity to the boundary conditions [146]. In order to show this, we compute the expectation values of \(\hat{n}_{j}=\sum_{\sigma}\hat{n}_{j\sigma}\) \[\langle\hat{n}_{j}\rangle = \tfrac{{\rm R}\langle\Psi_{m}|\hat{n}_{j}|\Psi_{m}\rangle_{\rm R}}{{\rm R}\langle\Psi_{m}|\Psi_{m}\rangle_{\rm R}}, \tag{10}\] with \(|\Psi_{m}\rangle_{\rm R}\) (\(m=0,1,2,\ldots\)) being right eigenvectors of the many-body Hamiltonian, \(\hat{H}|\Psi_{m}\rangle_{\rm R}=E_{m}|\Psi_{m}\rangle_{\rm R}\). As displayed in Fig. 1(c), bosons are localized around edges under OBC in contrast to the case of PBC (the data for PBC are provided in Sec. S4 of Supplemental Material [144]). Due to the localization of bosons, spin polarization is observed only in the presence of boundaries. Figure 1(d) displays the expectation values of \(\hat{S}_{j}^{z}=(\hat{n}_{j\uparrow}-\hat{n}_{j\downarrow})/2\). Spin polarization is observed under OBC in contrast to the case of PBC. As discussed above, in the non-interacting case, the system shows the non-Hermitian skin effect which results in extreme sensitivity of the eigenvalues and eigenstates to the presence or absence of the boundaries. Accordingly, the localization of bosons is observed only under OBC. _Numerical results: interacting case-._ Now, we demonstrate that the interplay between strong correlations and non-reciprocal hoppings induces the Mott skin effect. Namely, the non-trivial point-gap topology results in extreme sensitivity to the boundary conditions only in the spin degree of freedom. The spin-charge separation plays an essential role in the emergence of the Mott skin effect. In the following, we focus on the Fock space with \((N_{\uparrow},N_{\downarrow})=(1,L-1)\) [147]. 
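The single-particle mechanism behind both the non-interacting results above and the effective model (9) can be reproduced in a few lines. The sketch below is a one-body analogue with illustrative parameters (the many-body invariant of Eq. (6) counts the twist doubly and gives \(W_{\rm s}=2\)): right eigenvectors pile up at the right edge under OBC, and \(\det[\hat{H}(\theta)-E_{\rm ref}]\) winds as the boundary twist is swept:

```python
import numpy as np

def hatano_nelson(L, jr, jl, theta=None):
    """Single-particle Hatano-Nelson chain: hopping jr to the right,
    jl to the left. theta=None gives OBC; otherwise the boundary link
    carries a twist e^{i theta} (theta=0 is PBC)."""
    H = np.zeros((L, L), complex)
    for j in range(L - 1):
        H[j + 1, j], H[j, j + 1] = jr, jl
    if theta is not None:
        H[0, L - 1] = jr * np.exp(1j * theta)
        H[L - 1, 0] = jl * np.exp(-1j * theta)
    return H

L, jr, jl = 16, 1.0, 0.1

# OBC: right eigenvectors are skin modes concentrated at the right edge
_, Vmat = np.linalg.eig(hatano_nelson(L, jr, jl))
prob = np.abs(Vmat) ** 2 / np.sum(np.abs(Vmat) ** 2, axis=0)
right_weight = prob[L // 2:, :].sum(axis=0).mean()
print(right_weight)   # close to 1

# winding of det(H(theta) - E_ref) for E_ref = 0 as theta sweeps 0 -> 2*pi
thetas = np.linspace(0.0, 2.0 * np.pi, 201)
angles = np.unwrap([np.angle(np.linalg.det(hatano_nelson(L, jr, jl, t)))
                    for t in thetas])
winding = round((angles[-1] - angles[0]) / (2.0 * np.pi))
print(winding)
```

Swapping `jr` and `jl` reverses both the edge at which the skin modes accumulate and the sign of the winding.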
Figure 1: Energy eigenvalues and expectation values for \(V=U=0\). (a): Energy eigenvalues under twisted boundary conditions and OBC (red), respectively. (b): A magnified version of the range \(-1.2\leq{\rm Im}E\leq 1.2\). The data obtained under OBC are represented by red dots. With increasing \(\theta_{\rm s}\) from \(0\) to \(2\pi\), the eigenvalues wind the origin of the complex plane, which indicates that \(W_{s}\) takes a non-zero value for \(E_{\rm ref}=-0.01i\). (c) [(d)]: Expectation values \(\langle\hat{n}_{j}\rangle\) [\(\langle\hat{S}_{j}^{z}\rangle\)] at each site under OBC. Here, the expectation values are computed from right eigenvectors of the many-body Hamiltonian labeled by \(m\) [see Eq. (10)]. These data are obtained for the subspace with \((N_{\uparrow},N_{\downarrow})=(1,5)\) and for \((t_{+},t_{-})=(1,0.1)\) and \(L=6\).

Figures 2(a) and 2(b) display the spectral flow of \(\hat{H}(\theta_{\rm s})\) for \((V,U)=(100,20)\) with increasing \(\theta_{\rm s}\) from \(0\) to \(2\pi\). The interactions \(V\) and \(U\) shift most of the loops in the negative direction of the imaginary axis [see Fig. 2(a)]. However, one of the loops remains around the origin of the complex plane [see Fig. 2(b)]. This is because bosons do not feel the on-site interactions unless multiple bosons occupy the same site. The states remaining around the origin of the complex plane exhibit the Mott skin effect. Because of the loop structure, the many-body spin winding number takes \(W_{s}=2\) for \(E_{\rm ref}=-0.01i\) (for more details, see Sec. S3 of Supplemental Material [144]). Corresponding to the non-trivial point-gap topology, the eigenvalues exhibit extreme sensitivity to the presence or absence of the boundaries. As shown in Fig. 2(b), the eigenvalues are aligned on the imaginary axis under OBC, in contrast to the case of PBC where the eigenvalues are distributed on the complex plane [148]. Such sensitivity to the boundary conditions is also observed for eigenstates.
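As an aside, for the single-particle Hatano-Nelson chain (a toy analogue; this is not the many-body spin winding number \(W_s\) of the main text, which is defined in the Supplemental Material), the point-gap winding number at a reference energy can be sketched numerically as the phase winding of \(\det[H(\theta)-E_{\rm ref}]\) while a twist \(\theta\) is threaded through the ring; the model and parameters below are illustrative:

```python
import numpy as np

def hatano_nelson_flux(L, t_plus, t_minus, theta):
    """PBC Hatano-Nelson chain with a twist theta threaded through the ring."""
    H = np.zeros((L, L), dtype=complex)
    for j in range(L):
        H[(j + 1) % L, j] = t_plus
        H[j, (j + 1) % L] = t_minus
    H[0, L - 1] *= np.exp(1j * theta)   # Peierls phase on the boundary bond
    H[L - 1, 0] *= np.exp(-1j * theta)
    return H

def winding_number(E_ref, L=8, t_plus=1.0, t_minus=0.1, n_theta=400):
    """Phase winding of det(H(theta) - E_ref) as theta runs from 0 to 2*pi."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    dets = np.array([
        np.linalg.det(hatano_nelson_flux(L, t_plus, t_minus, th)
                      - E_ref * np.eye(L))
        for th in thetas
    ])
    phases = np.angle(dets)
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi  # map increments to (-pi, pi]
    return int(round(dphi.sum() / (2.0 * np.pi)))

# Nontrivial winding inside the PBC spectral loop, trivial far outside it.
print(winding_number(E_ref=0.0), winding_number(E_ref=5.0))
```

A reference energy enclosed by the PBC spectral loop yields a non-zero winding, while one far outside the spectrum yields zero.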
However, only the spin degree of freedom is involved in the sensitivity to the boundary conditions, which is in sharp contrast to the non-interacting case. Figure 2(c) displays the expectation values \(\langle\hat{n}_{j}\rangle\). As shown in this figure, strong correlations prevent bosons from localizing around the edges, which suppresses the sensitivity to the boundary conditions for the charge degree of freedom. Instead, the spin degree of freedom exhibits extreme sensitivity to the boundary conditions. Figure 2(d) displays the expectation values \(\langle\hat{S}_{j}^{z}\rangle\). As shown in this figure, spin polarization is observed under OBC in contrast to the case of PBC (the data for PBC are provided in Sec. S4 of Supplemental Material [144]). The above results reveal that the system exhibits the Mott skin effect, resulting in extreme sensitivity to the boundary conditions only in the spin degree of freedom (i.e., such sensitivity is not observed in the charge degree of freedom). The emergence of the Mott skin effect is due to the interplay between strong correlations and non-Hermitian point-gap topology. We note that the Mott skin effect is also observed for the subspace with \((N_{\uparrow},N_{\downarrow})=(2,4)\) and \(L=6\) (for more details, see Sec. S5 of Supplemental Material [144]). _Real-time dynamics-_. As seen above, the Mott skin effect involves only the spin degree of freedom, in contrast to the skin effect in the non-interacting case. This difference is also reflected in the time evolution of physical quantities. Let us start with the non-interacting case. Figure 3(a) displays the expectation values \[\langle\hat{n}_{j}(t)\rangle = \frac{\langle\Phi(t)|\hat{n}_{j}|\Phi(t)\rangle}{\langle\Phi(t)| \Phi(t)\rangle}, \tag{11}\] with \(|\Phi(t)\rangle=e^{-i\hat{H}t}|\Phi(0)\rangle\) and \(t\) being time [149].
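A minimal numerical sketch of an evolution like Eq. (11) for a toy single-particle non-Hermitian Hamiltonian (again a Hatano-Nelson chain as a stand-in; the many-body evolution in the text is performed with exact diagonalization):

```python
import numpy as np

# Toy non-Hermitian Hamiltonian: single-particle Hatano-Nelson chain, OBC.
L, t_plus, t_minus = 6, 1.0, 0.1
H = np.zeros((L, L), dtype=complex)
for j in range(L - 1):
    H[j + 1, j] = t_plus   # rightward hopping
    H[j, j + 1] = t_minus  # leftward hopping

def evolve(psi0, t):
    """|Phi(t)> = exp(-i H t) |Phi(0)> via the eigendecomposition of H."""
    evals, V = np.linalg.eig(H)
    c = np.linalg.solve(V, psi0)          # expansion coefficients of psi0
    return V @ (np.exp(-1j * evals * t) * c)

def density(psi):
    """Normalized site occupation, analogous to Eq. (11)."""
    w = np.abs(psi) ** 2
    return w / w.sum()

psi0 = np.zeros(L, dtype=complex)
psi0[0] = 1.0                             # particle starts at the left edge
print(density(evolve(psi0, 20.0)))        # weight accumulates toward the right
```

Note that the state must be renormalized at every time, exactly as in Eq. (11), because the evolution generated by a non-Hermitian \(\hat{H}\) does not preserve the norm.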
The initial state is chosen so that all of the bosons occupy site \(j=0\); \(\sqrt{(L-1)!}|\Phi(0)\rangle=\hat{b}_{0\uparrow}^{\dagger}(\hat{b}_{0\downarrow }^{\dagger})^{L-1}|0\rangle\). Figure 3(a) indicates that the bosons remain localized around the left edge due to the skin effect [150]. Figure 3(b) indicates that the above localization is also observed in the expectation values \(\langle\hat{S}_{j}^{z}(t)\rangle=\langle\Phi(t)|\hat{S}_{j}^{z}|\Phi(t)\rangle/ \langle\Phi(t)|\Phi(t)\rangle\). This is because bosons with spin up (spin down) are localized around the right (left) edge (see Sec. S2 of Supplemental Material [144]). The above results demonstrate that in the non-interacting case, the dynamical properties of the spin degree of freedom are tied to those of the charge degree of freedom.

Figure 2: Energy eigenvalues and expectation values for \((V,U)=(100,20)\). (a): Energy eigenvalues under twisted boundary conditions and OBC, respectively. (b): A magnified version of the range \(-0.14\leq\text{Im}E\leq 0.1\). In these figures the data obtained under OBC are represented by red dots. With increasing \(\theta_{\text{s}}\) from \(0\) to \(2\pi\), the eigenvalues wind the origin of the complex plane, which indicates \(W_{\text{s}}=2\) for \(E_{\text{ref}}=-0.01i\). (c) [(d)]: Expectation values \(\langle\hat{n}_{j}\rangle\) [\(\langle\hat{S}_{j}^{z}\rangle\)] at each site under OBC. As in the case of \(V=U=0\), the expectation values are computed from the right eigenvectors (see Fig. 1). These data are obtained for \((t_{+},t_{-})=(1,0.1)\) and \(L=6\).

Now, we turn to the real-time dynamics in the correlated case. In contrast to the non-interacting case, the dynamics of the spin degree of freedom exhibit an essential difference from those of the charge degree of freedom. Figure 3(c) displays the time evolution of \(\langle\hat{n}_{j}(t)\rangle\) for \((V,U)=(100,20)\). As seen in Fig. 3(c), bosons immediately extend into the bulk. In particular, for \(0.125\lesssim t\)
each site is occupied by one boson. This is because the states where each site is occupied by one boson have a longer lifetime \(\tau\sim 1/\text{Im}E_{m}\) than others. Contrary to the sudden change in the charge degree of freedom, the dynamics of the spin degree of freedom show gradual changes. The time evolution of \(\langle\hat{S}_{j}^{z}\rangle\) is plotted in Fig. 3(d). In this figure, dynamical spin accumulation is observed for \(20\lesssim t\). This behavior is observed only under OBC (for data under PBC, see Sec. S6 of Supplemental Material [144]). As seen above, the Mott skin effect induces dynamical spin accumulation while the charge degree of freedom remains spatially uniform. These dynamical properties in the strongly correlated case are in sharp contrast to those in the non-interacting case. The above dynamical properties are expected to be observable in open quantum systems even in the presence of quantum jumps. _Conclusion-_. We have proposed a non-Hermitian Mott skin effect induced by the interplay between strong correlations and non-Hermitian point-gap topology. In contrast to the ordinary non-Hermitian skin effect, the Mott skin effect results in extreme sensitivity to the boundary conditions only in the spin degree of freedom. We have demonstrated the emergence of the Mott skin effect by analyzing a bosonic non-Hermitian Hamiltonian with non-reciprocal hoppings and on-site interactions. The difference from the ordinary skin effect is also reflected in the dynamical properties; spin accumulation is observed while the charge distribution remains spatially uniform. We finish this paper with a remark on the relevance to open quantum systems described by the Lindblad equation. In this paper, we have analyzed the non-Hermitian Hamiltonian as a toy model. However, we argue that our Hamiltonian is relevant to open quantum systems, and that the above dynamical spin accumulation can be observed even in the presence of the jump term.
_Note added-_. While finishing this paper, we noticed Ref. [151] posted on arXiv, which has some overlap with our results. _Acknowledgments-_. T.Y. thanks Manfred Sigrist for fruitful discussions. T.Y. particularly thanks Shunsuke Furukawa and Yoshihito Kuno for fruitful discussions on the technical details of exact diagonalization. T.Y. is grateful to the long-term workshop YITP-T-23-01 held at YITP, Kyoto University, where a part of this work was done. This work is supported by JSPS KAKENHI Grants No. JP21K13850 and No. JP22H05247.
2309.04302
Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes
In the life cycle of highly automated systems operating in an open and dynamic environment, the ability to adjust to emerging challenges is crucial. For systems integrating data-driven AI-based components, rapid responses to deployment issues require fast access to related data for testing and reconfiguration. In the context of automated driving, this especially applies to road obstacles that were not included in the training data, commonly referred to as out-of-distribution (OoD) road obstacles. Given the availability of large uncurated recordings of driving scenes, a pragmatic approach is to query a database to retrieve similar scenarios featuring the same safety concerns due to OoD road obstacles. In this work, we extend beyond identifying OoD road obstacles in video streams and offer a comprehensive approach to extract sequences of OoD road obstacles using text queries, thereby proposing a way of curating a collection of OoD data for subsequent analysis. Our proposed method leverages the recent advances in OoD segmentation and multi-modal foundation models to identify and efficiently extract safety-relevant scenes from unlabeled videos. We present a first approach for the novel task of text-based OoD object retrieval, which addresses the question ''Have we ever encountered this before?''.
Youssef Shoeb, Robin Chan, Gesina Schwalbe, Azarm Nowzard, Fatma Güney, Hanno Gottschalk
2023-09-08T13:02:36Z
http://arxiv.org/abs/2309.04302v1
# Have We Ever Encountered This Before? ###### Abstract In the life cycle of highly automated systems operating in an open and dynamic environment, the ability to adjust to emerging challenges is crucial. For systems integrating data-driven AI-based components, rapid responses to deployment issues require fast access to related data for testing and reconfiguration. In the context of automated driving, this especially applies to road obstacles that were not included in the training data, commonly referred to as out-of-distribution (OoD) road obstacles. Given the availability of large uncurated recordings of driving scenes, a pragmatic approach is to query a database to retrieve similar scenarios featuring the same safety concerns due to OoD road obstacles. In this work, we extend beyond identifying OoD road obstacles in video streams and offer a comprehensive approach to extract sequences of OoD road obstacles using text queries, thereby proposing a way of curating a collection of OoD data for subsequent analysis. Our proposed method leverages the recent advances in OoD segmentation and multi-modal foundation models to identify and efficiently extract safety-relevant scenes from unlabeled videos. We present a first approach for the novel task of text-based OoD object retrieval, which addresses the question "Have we ever encountered this before?". ## 1 Introduction Hypothetically, consider a scenario where a self-driving vehicle is involved in a collision with a dog. Following the incident, an investigation team of experts is set up to find the root cause. For data-driven AI-based components, the investigation team would prioritize the acquisition of sensory data, such as videos similar to other encounters with dogs, to reproduce the error and conduct an in-depth assessment of the perception system. 
Having awareness of the full situation, which also includes the perception of the environment before the actual incident, could prevent future road hazards by adjusting the self-driving car's driving policy. The previous example illustrates the need to acquire targeted video data in future life cycles of perception components in self-driving cars to enable anticipatory driving. Promptable video synthesis using generative models [25, 55, 64] could be a suitable way to acquire such data. However, similar examples may exist in already collected data, and questions about the coverage of the generated distribution and the extent of the domain gap would still prevail [43, 53]. An alternative that we follow here is to retrieve relevant data from real-world recordings. However, the usage of existing video retrieval approaches [40, 46, 54] requires processing up to millions of hours of recorded data, which is highly resource-intensive and slow when applied. This is why an efficient screening and preselection of relevant scenes is key.

Figure 1: **Overview of Our Method**: We extract specific safety-critical driving scenes due to out-of-distribution (OoD) road obstacles from unlabeled videos based on a text query, such as "dog". The approach leverages single-frame OoD segmentation, object tracking, and multi-modal feature encoding of OoD images to enable _text-to-video retrieval_ of _OoD road obstacles_.

In this work, we focus on safety-critical driving scenes containing unknown road obstacles. Typically, deep neural networks (DNNs) are employed in self-driving cars for perception tasks, and they are trained to identify and locate objects within images given a predefined set of object categories [22, 39, 50]. The number of these predefined classes for standard automated driving datasets ranges from 11 classes in KITTI [35] to 19 classes in BDD100K [65] and Cityscapes [15]. Those classes include common semantic categories such as pedestrian, road, or sidewalk.
However, the diversity of the real world offers a boundless set of possible object categories, making DNNs particularly error-prone when processing previously unseen and semantically unknown objects, commonly known as out-of-distribution (OoD) objects. A particular and safety-critical OoD subset in automated driving consists of OoD road obstacles, which are unknown objects present within the drivable area of a self-driving car [29, 37, 47]. Identifying those objects is a crucial prerequisite for building up an OoD database for further analysis and subsequent adjustment of the perception system [2, 18]. Overall, our target is to enable the perception stack to efficiently retrieve safety-relevant video sequences of OoD road obstacles from prior recordings using text queries. For the general related task of image retrieval, one primary challenge is aligning image and query features into a joint embedding space for fast retrieval. Additionally, in the context of OoD road obstacle retrieval, the absence of existing OoD video segmentation approaches (instead of per-frame segmentation) poses another methodological challenge of identifying the same OoD road obstacles over multiple consecutive frames. A parallel line of research in image retrieval explores the construction of feature embeddings based on visual similarities by utilizing DenseNet feature encodings [27] to cluster OoD objects [57, 41, 58, 45]. 
Those approaches, however, come with limitations that constrain the application of targeted retrieval of objects in the use case of automated driving: (1) the process of clustering in the embedding space is driven by visual similarities, so distinct instances may be assigned to separate clusters even if they belong to the same semantic category; (2) retrieval from already formed clusters can only be performed content-based, which requires an image query or manually assigned cluster labels provided by human annotators; and (3) all current approaches only retrieve single frames and not complete sequences. In this work, we propose a method for processing unlabeled video data from commonly available in-vehicle cameras and extracting driving scenes that contain OoD road obstacles. In the first step, our approach provides detailed information about the presence and trajectory of a singular OoD road obstacle within a video _sequence_, thereby extending beyond the conventional task of identifying any OoD object in a single frame to a set of consecutive frames. Next, we offer a method to retrieve sequences that contain the same or similar OoD road obstacles matching a _textual description_ provided by a user. The combination of the two steps leads to a novel approach for resource-efficient and fast text-to-video retrieval of safety-critical driving scenes that leverages the most recent advances both in single-frame OoD segmentation and multi-modal feature encoding. In particular, we perform single-frame OoD segmentation first and track identified OoD road obstacles through frames using a lightweight object tracker. The single frames are then embedded in a multi-modal embedding space. This use of a semantically meaningful embedding space enables the retrieval of frames containing OoD road obstacles that match the given text query, while the tracking information allows for the retrieval of the complete sequence of frames where the OoD road obstacle was present.
This is the first work to leverage the recent progress in single-frame OoD segmentation [44] and the power of the recently established multi-modal foundation models [48] for combined image and language understanding for OoD retrieval with application in automated driving. An overview of our method is shown in Figure 1. We summarize our contributions as follows: * We propose a novel modular approach for efficient text-to-video retrieval of safety-critical driving scenes containing OoD road obstacles. Our framework implements and validates the key ideas: (1) leveraging a multi-modal embedding space for text-to-image retrieval, (2) utilizing temporal information and object persistency by making use of tracking to extend single-frame retrieval to video data, and (3) using meta classification for segment-wise false positive removal to refine the OoD segmentations for better accuracy. * By using CLIP's multi-modal embedding space [48], we generate clusters of OoD road obstacle sequences in a low-dimensional feature space that enable proper text-to-video retrieval based on semantic similarities rather than visual similarities. * Through extensive experiments, we investigate the interaction of each component within our framework and their impact on the overall retrieval performance. Our findings underline the significance as well as the apparent positive effect of each module on the OoD road obstacle retrieval performance: object-level processing, object tracking, prediction of the region of interest, and meta classification. ## 2 Related Work This section reviews existing work related to the individual components that constitute our proposed method. **OoD Segmentation**: Semantic segmentation models group pixels in an image into segments that adhere to specific predefined semantic classes. Segments that belong to a semantic class not included in the predefined set are referred to as out-of-distribution (OoD) segments.
Typically, semantic segmentation models struggle to detect OoD segments [10]. One approach to overcome this limitation is to leverage sampling-based uncertainty estimation approaches, such as Monte-Carlo dropout [19], ensembles [33], or variational inference [20]. Those quantify the predictive uncertainty of the model and use it to identify OoD objects as unknown. However, these methods are computationally expensive and suffer from numerous false positives in boundary regions between objects, as these exhibit natural uncertainties [32]. A more effective approach is to include auxiliary training data as a proxy for unknown objects to either maximize the softmax entropy [11] or minimize the maximum logits score [44] of unknown objects. Other methods have achieved promising results by aggregating pixel-level uncertainty information into mask-level predictions [23, 51, 44]. These approaches use segmentation models that perform mask-level segmentation to make predictions about unknown objects. More recent techniques [61] detect road obstacles by explicitly learning a feature embedding space that models the multi-modal appearance of road surfaces. The approach we consider most promising is the one proposed by [44], as indicated by their results on the SegmentMeIfYouCan road obstacle segmentation benchmark [10]. Their approach succeeds due to the use of mask classification to preserve objectness and a scoring function that eliminates irrelevant sources of uncertainty. We follow the same method for segmenting OoD road obstacles; however, we enhance the frame-based detection module of [44] by incorporating segment tracking on videos to eliminate some of the false positive detections. **Multiple Object Tracking**: Multi-object tracking (MOT) is the task of determining the spatial and temporal location of multiple objects in a sequence of images.
Two possible approaches for MOT are: (a) Converting existing detectors into trackers and combining both tasks in the same framework. These methods either use 3D convolutions on consecutive frames to incorporate temporal information [30, 31, 60] or propagate frame-level information to subsequent frames [3, 67, 5]. However, combining tracking and detection into one model sacrifices the modularity of the tasks, which is desirable for reuse and inspectability in safety-relevant applications [28, 5.4.2.c)]. (b) Tracking-by-detection methods, which first utilize a pre-trained object detector to detect objects and then track them through a sequence of frames, for example, via data association [8, 34], visual cues [63, 21], or motion patterns [6, 9]. [38] proposed an approach for tracking developed explicitly for use in open-world conditions. Their method uses optical flow and an appearance-based similarity score to detect and track moving objects in an open-world setting. In this work, we use the lightweight tracking-by-detection approach proposed in [42] to track OoD objects in a sequence of images. This tracking method is a post-processing method based on the overlap of detections between consecutive frames. **Retrieval Methods**: Retrieval methods are generally designed to identify and recover samples from a large database that correspond to a given query. For image retrieval, methods can be classified into two categories: content-based image retrieval and text-based image retrieval [12, 66]. Content-based image retrieval methods are based on a query image. These methods aim to select images from a database representing similar content to the query image. Content-based retrieval techniques analyze visual features of images, including color, texture, or shape, to establish similarity between the images in the database and the given query image [13, 17].
Text-based image retrieval methods focus on selecting images that exhibit the highest level of relevance to a given text query. These systems utilize textual information, such as keywords or natural language descriptions, to retrieve images from a database that best align with the provided text [24, 26, 1]. For the task of text-video retrieval, a rich line of research has evolved from the global matching of features via video-sentence alignment [40] to more fine-grained matching via frame-word alignment [62]. These studies have demonstrated remarkable performance and significantly outperformed previous models on the task of text-video retrieval. This is mainly due to the powerful pre-aligned visual and textual representation offered by open-source models like CLIP [48]. In [40], the authors utilize a temporal transformer on top of CLIP to fuse sequential features into a single high-dimensional representation and directly retrieve video segments. However, for the automotive use case, hardware constraints have to be fulfilled. Therefore, in this work, we use a lightweight tracking module on top of CLIP to perform text-video retrieval. ## 3 Methodology This work focuses on retrieving OoD road obstacle sequences from unlabeled videos based on a text query. Our method consists of three key steps. First, we identify the occurrence of OoD road obstacles in single frames. Second, we track the OoD road obstacles through consecutive frames, creating sequences of frames where one and the same road obstacle appears. Third, we enable user interaction via text-based retrieval of sequences. Our method is set up such that after the first and second steps, a database with driving scenes containing OoD road obstacles can be established. Considering that such scenes are substantially less prevalent, this approach resolves problems with the bandwidth constraints of autonomous vehicles and potential storage limitations within cloud-based systems.
Afterward, the crops of each OoD road obstacle can be embedded once in a vision-text embedding space; this embedding space allows for retrieving sequences when a user provides a text query. Since each crop can be associated with its respective video sequence, fast retrieval of sequences containing OoD road obstacles is enabled. To accomplish this, our proposed method integrates various auxiliary tasks, including OoD segmentation, multi-object tracking, and text-based image retrieval. These tasks collectively constitute the overall framework. The following sections present a detailed description of each task. ### OoD Road Obstacle Segmentation A complete overview of the OoD road obstacle segmentation method is shown in Figure 2. The initial phase of the OoD road obstacle segmentation module involves using a semantic segmentation network. In our experiments, we use the Mask2Former model [14] initially trained on the Cityscapes dataset [15]. Mask2Former decouples localization and classification of objects in semantic segmentation by splitting the task into two steps. Given an \(H\times W\) sized image, Mask2Former computes \(N\) pairs \(\{(\mathbf{m}_{i},\mathbf{p}_{i})\}_{i=1}^{N}\), where \(\mathbf{m}_{i}\in[0,1]^{H\times W}\) are mask predictions associated with some semantically related regions in the input image and \(\mathbf{p}_{i}\in[0,1]^{K+1}\) are class probabilities classifying to which semantic category the mask \(\mathbf{m}_{i}\) belongs. Here, the masks can be assigned to one of the \(K\) known Cityscapes classes or to one auxiliary void class. The final semantic segmentation inference is carried out by an ensemble-like approach over the pairs \(\{(\mathbf{m}_{i},\mathbf{p}_{i})\}_{i=1}^{N}\), yielding pixel-wise class scores \[\mathbf{q}[h,w,k]=\sum_{i=1}^{N}\mathbf{p}_{i}(k)\cdot\mathbf{m}_{i}[h,w]\ \ \in[0,N] \tag{1}\] for image pixel locations \(h=1,\ldots,H,w=1,\ldots,W\) and classes \(k=1,\ldots,K\).
Then, OoD detection is performed via the anomaly score defined by \[\mathbf{RbA}[h,w]=-\sum_{k=1}^{K}\phi(\mathbf{q}[h,w,k])\ \ \in[0,K] \tag{2}\] with \(\phi\) being the \(\tanh\) activation function. Intuitively, \(\mathbf{RbA}\) in Equation (2) is a measure of whether a pixel cannot be associated with any known class and is thus "Rejected by All" (RbA) of the \(K\) known classes. This scoring function has been introduced in [44]. In the same work, the authors additionally fine-tune Mask2Former for OoD detection by training for low class scores of the known classes on OoD instances from COCO [36], which has been shown to further enhance OoD segmentation performance. This fine-tuned Mask2Former serves as our method for OoD road obstacle segmentation in this work. **Post-processing OoD predictions**: To reduce false positive predictions, meta-classification [51, 11, 52] is used to obtain quality ratings for the OoD predictions. Meta-classification uses hand-crafted metrics like entropy, geometry, and location information of predicted instances to learn the features of false positive predictions on the training set. During runtime, the meta-classification model, in our case a logistic regression, can remove false positives without any ground truth information. We refer the reader to [11] for a detailed description of the approach. **Post-processing road segmentation**: By definition, road obstacles are objects on the road. Consequently, we can restrict our predictions exclusively to objects on the road by establishing a region of interest mask that encompasses the road area. The region of interest mask can be obtained by extracting the road predictions from the Mask2Former segmentation model and morphologically closing [56] the prediction to fill gaps where potential road obstacles might be present. The final OoD road obstacle predictions are obtained by multiplying the region of interest with the OoD predictions.
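A minimal sketch of Eqs. (1) and (2), assuming (as an illustration) mask predictions of shape \((N,H,W)\) and class probabilities of shape \((N,K+1)\) with the void class stored last and excluded from the sum over known classes:

```python
import numpy as np

def class_scores(m, p):
    """Eq. (1): pixel-wise class scores q[h, w, k] from N (mask, prob) pairs.

    m: (N, H, W) mask predictions in [0, 1]
    p: (N, K+1) class probabilities; the last entry (void) is dropped,
       so the sum runs over the K known classes only."""
    return np.einsum("nk,nhw->hwk", p[:, :-1], m)

def rba_score(q):
    """Eq. (2): RbA[h, w] = -sum_k tanh(q[h, w, k])."""
    return -np.tanh(q).sum(axis=-1)

# Tiny toy example: N=2 mask/probability pairs, K=3 known classes, 4x4 image.
rng = np.random.default_rng(0)
m = rng.uniform(size=(2, 4, 4))
p = rng.dirichlet(np.ones(4), size=2)
q = class_scores(m, p)
print(q.shape, rba_score(q).shape)
```

The shapes and the aggregation order are illustrative choices; pixels for which all class scores stay low receive the highest anomaly rank.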
Figure 2: **OoD Road Obstacle Segmentation Overview: Our method segments OoD objects and the road, refines OoD objects using meta classification, generates a region of interest through road mask dilation and erosion, and obtains final OoD road obstacle predictions by combining OoD predictions and the region of interest mask.** ### Tracking Given predictions of OoD road obstacles in each frame, as described in Section 3.1, we match subsequent predictions through consecutive frames by measuring the segment-wise intersection over union (IoU) and the geometric centers between consecutive detections. The first step of the tracking approach assigns random identifiers to all the predicted segments in the first frame. For the subsequent frames, each segment is matched with segments in the previous frames if their overlap is sufficiently large and their geometric centers are close enough. Over consecutive frames, linear regression is applied to account for misdetections and temporal occlusions. Segments that do not match with previous detections are assigned new identifiers, and then the process is iterated. We note that this lightweight tracker does not apply any motion models to anticipate the shifted center points of the detections. Hence, the assumption is that the differences between consecutive frames are minimal, leading to a substantial IoU across frames. To reduce the number of false positive detections, tracked segments in sequences of frames that have a length of less than ten frames are filtered out. The assumption is that, in the context of automated driving, informative OoD road obstacles persist in the field of view of the vehicle for a couple of frames. The final output of the tracking module is a sequence of cropped segments that belong to a single instance of an OoD road obstacle. An overview of the OoD tracking module is shown in Figure 3.
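The matching step just described can be sketched roughly as follows (an illustrative helper, not the exact tracker of [42]; the thresholds are placeholders, and the linear-regression handling of misdetections is omitted). Segments are represented as boolean masks:

```python
import numpy as np

def iou(a, b):
    """IoU between two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def center(mask):
    """Geometric center (row, col) of a non-empty boolean mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def match_segments(prev, curr, next_id, iou_min=0.3, dist_max=20.0):
    """Greedily match current segments to previous ones by IoU and center
    distance; unmatched segments receive fresh identifiers.

    prev: dict {track_id: mask}; curr: list of masks.
    Returns (dict {track_id: mask}, next unused id)."""
    assigned, used = {}, set()
    for mask in curr:
        best_id, best_iou = None, iou_min
        for tid, pmask in prev.items():
            if tid in used:
                continue
            o = iou(mask, pmask)
            d = np.linalg.norm(center(mask) - center(pmask))
            if o >= best_iou and d <= dist_max:
                best_id, best_iou = tid, o
        if best_id is None:
            best_id, next_id = next_id, next_id + 1
        assigned[best_id] = mask
        used.add(best_id)
    return assigned, next_id

# Toy example: one segment shifts slightly between frames; a new one appears.
f0 = np.zeros((30, 30), bool); f0[5:10, 5:10] = True
f1 = np.zeros((30, 30), bool); f1[6:11, 5:10] = True   # same obstacle, shifted
f2 = np.zeros((30, 30), bool); f2[20:25, 20:25] = True  # new obstacle
tracks, nid = match_segments({0: f0}, [f1, f2], next_id=1)
print(sorted(tracks.keys()))
```

The slightly shifted segment keeps its identifier because both the IoU and the center-distance criteria are satisfied, while the new segment is assigned a fresh one.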
### Retrieval of Road Obstacle Sequences OoD road obstacle segmentation and tracking allow for the creation of a database of video sequences, with each sequence consisting of consecutive crops of an OoD road obstacle from a video recording. Given a textual query, the goal of OoD road obstacle retrieval now is to find those video sequences that best match the query. For this, we utilize CLIP [48] to align image and text features into a joint embedding space where their similarity can be quantified. Using this approach, natural language supervision guides the model to understand that latent representations of semantically similar contents of images should be close in the embedding space. We retrieve video sequences of OoD objects similar to a textual query as follows: In our database, each element comprises consecutive image crops of OoD road obstacles, which we identify and associate during the OoD detection and tracking steps. We then compare the embedding space representation of the given text query to the per-frame OoD road obstacle crops. To determine the similarity between a video sequence and the text query, we aggregate the similarities of the sequence's individual OoD crops. We retrieve and present the most similar sequences to the user. Specifically, we use cosine similarity to measure the similarity between the embedding representation of the query text and the individually cropped OoD road obstacle detections. For each uniquely detected object in a given sequence, we measure the frame-to-text similarity. The highest similarity score among all the crops in a sequence determines the overall similarity score for a sequence of crops of an OoD road obstacle.
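The sequence scoring just described can be sketched as follows, assuming the crop and text embeddings have already been produced (e.g., by a CLIP image/text encoder, whose call is omitted here); the threshold value is a placeholder:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sequence_score(crop_embeddings, text_embedding):
    """Sequence-to-text score: max frame-to-text similarity over all crops."""
    return max(cosine(g, text_embedding) for g in crop_embeddings)

def retrieve(sequences, text_embedding, tau=0.25):
    """Indices of sequences whose score exceeds tau, best match first."""
    scores = [sequence_score(seq, text_embedding) for seq in sequences]
    return [int(k) for k in np.argsort(scores)[::-1] if scores[k] >= tau]

# Toy example with 4-dimensional stand-in embeddings.
rng = np.random.default_rng(1)
query = np.array([1.0, 0.0, 0.0, 0.0])
seq_a = [query + 0.1 * rng.normal(size=4) for _ in range(3)]  # similar crops
seq_b = [np.array([0.0, 1.0, 0.0, 0.0]) for _ in range(3)]    # dissimilar
print(retrieve([seq_a, seq_b], query))
```

Taking the maximum over crops means a single well-matching frame suffices to retrieve the whole sequence, which matches the retrieval criterion formalized below.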
In more detail, for every cropped OoD road obstacle detection \(\mathbf{x}_{j}\) in a detected sequence of crops \(\mathbf{S}_{k}=\{\mathbf{x}_{j}\}_{j=1}^{n_{k}}\), the image embedding \(\mathbf{g}_{j}\) is computed as

\[\mathbf{g}_{j}=\mathcal{E}_{\text{image}}(\mathbf{x}_{j})\;\in\mathbb{R}^{d} \tag{3}\]

where \(\mathcal{E}_{\text{image}}\) is a Vision Transformer ViT-B/32 [16] image encoder. Then, given a text query \(t\), a text embedding \(\mathbf{f}\) is computed as

\[\mathbf{f}=\mathcal{E}_{\text{text}}(t)\;\in\mathbb{R}^{d} \tag{4}\]

where \(\mathcal{E}_{\text{text}}\) is a Transformer text encoder [59] with modifications described in [49].

Figure 3: **OoD Tracking Overview: Given subsequent frames of OoD obstacles, we use a lightweight tracker that assigns visually and spatially similar segments to the same tracking ID.**

Figure 4: **OoD Road Obstacle Retrieval Overview: Given detections of OoD road obstacles and a text query, both text and images are embedded in a single multi-modal embedding space. In this space, all OoD road obstacles within a given threshold \(\tau\) from the text query embedding are retrieved.**

To quantify the _semantic similarity_ between an image-text pair, we measure the pairwise cosine similarity between their embeddings. Cosine similarity quantifies the angle between the representation vectors and is a typical similarity measure for text embedding spaces; it is calculated as

\[s(\mathbf{g}_{j},\mathbf{f})=\frac{\mathbf{g}_{j}{}^{\top}\mathbf{f}}{\|\mathbf{g}_{j}\|_{2}\;\|\mathbf{f}\|_{2}}\quad\in[-1,1] \tag{5}\]

A sequence \(\mathbf{S}_{k}\) is considered a _positive_ match to a text query \(\mathbf{f}\) if, for any of the frames in the sequence, the similarity score of its embeddings exceeds a chosen similarity threshold \(\tau\in[-1,1]\), _i.e_. if

\[\exists\;\mathbf{g}_{j}\in\mathbf{S}_{k}:s(\mathbf{g}_{j},\mathbf{f})\geq \tau\;.
\tag{6}\]

Note that retrieving the image with the highest similarity to the text query is sufficient to retrieve the entire corresponding OoD road obstacle sequence, as the remaining images of the sequence are associated by tracking information, _cf_. Figure 4.

## 4 Experiments

This section presents our experimental findings and setup. Since this specific task has not been addressed in previous literature, there are no standard baselines available to compare against. Therefore, we present two main experiments: (1) an investigation into the importance of object-level processing, as opposed to direct image-level processing, for retrieval, and (2) an ablation study of the single components of our proposed method. The investigation into object-level processing compares the approach of segmenting, tracking, and retrieving based on cut-outs of OoD road obstacles against direct retrieval on entire images; additionally, the effects of tracking are evaluated. The ablation study consists of three experiments. In the first experiment, we evaluate the efficacy of our proposed method for the task of OoD retrieval: we report the results of our method for segmenting, tracking, and retrieving OoD road obstacles using two different OoD segmentation networks, and we compare the retrieval performance against the same approach with perfect detections. The second and third experiments evaluate the effects of the region of interest segmentation and of meta classification on the detection, tracking, and retrieval performance, respectively.

**Datasets:** We perform experiments on the publicly available Street Obstacle Sequences (SOS), Carla-WildLife (CWL), and Wuppertal Obstacle Sequences (WOS) datasets [41]. The SOS dataset contains 20 real-world video sequences with 13 different OoD objects, the CWL dataset contains 26 synthetic video sequences with 18 different OoD objects, and WOS contains 44 real-world video sequences with seven different OoD objects.
Note that WOS originally did not contain any segmentation annotations; as part of our research effort, we label the dataset and make these pixel-accurate annotations publicly available. In all of the above, we consider OoD objects to be objects not included in the Cityscapes labels. We target retrieving all occurrences of the different OoD objects from the three datasets.

**Segmentation Evaluation:** We follow the standard evaluation protocol for the pixel-level performance measures adopted from [7, 47], namely the Area Under the Precision-Recall Curve (AUPRC) and the False Positive Rate at 95% True Positive Rate (FPR\({}_{95}\)). From a practitioner's perspective, it is often sufficient to recognize only a fraction of the pixels of an OoD object in order to detect and localize it. For evaluating the component-level performance of the OoD segmentation model, the averaged component-wise score \(\overline{F}_{1}\) [10] serves as our main evaluation metric. We note that the standard evaluation protocol for OoD segmentation only evaluates predictions that fall into the ground truth road regions [10, 47]. Since expensive ground truth segmentation labels cannot be assumed to be available for large-scale OoD analysis, this assumption must be relaxed for applications that utilize the OoD predictions for downstream tasks. Therefore, we report the \(\overline{F}_{1}\) score on the predicted road regions instead of the ground truth road regions.

**Tracking Evaluation:** We evaluate the object tracker performance using the common multiple object tracking (MOT) metrics [4]. These metrics quantify the algorithm's ability to accurately detect the number of objects present and determine the position of each object. The Multiple Object Tracking Accuracy (MOTA) evaluates the tracking algorithm's performance in detecting objects and maintaining their trajectories, regardless of the precision with which the object positions are estimated.
On the other hand, the Multiple Object Tracking Precision (MOTP) assesses the tracker's ability to accurately estimate the positions of objects, irrespective of its detection capabilities.

**Retrieval Evaluation:** To evaluate our retrieval performance, we provide a textual query (in our case, the name of a ground truth class from the OoD datasets) and evaluate how well our method succeeds in retrieving the matching OoD road obstacles. We use instance-based precision and recall as metrics. A retrieved instance (such as an image crop supposed to contain an OoD road obstacle) is counted as a true positive if the majority of the pixels within the corresponding image bounding box semantically belong to the query. Consequently, precision is the fraction of retrieved instances that match the query, and recall is the fraction of all instances in the dataset matching the query that are correctly retrieved. As the retrieval performance depends on a similarity threshold, we report the precision-recall curve for all queries in each dataset.

### Object-level vs Image-level Processing

In the first experiment, we present a comprehensive evaluation of our proposed method for object retrieval. We compare our approach of segmenting, tracking, and embedding cut-outs of OoD road obstacles against the conventional approach of retrieving directly on embeddings of the full image. Our approach is rooted in the observation that object-level information is necessary for retrieving OoD road obstacles in complex driving scenes, where OoD road obstacles make up only a minor part of the full scene. Furthermore, we examine the impact of tracking on retrieval performance. For this experiment, we assume optimal conditions where all OoD road obstacles are detected and tracked correctly. The precision-recall curves for each of the methods and datasets are shown in Figure 5.
The results demonstrate our method's significant advantage in performance over the baseline approach. This is primarily attributed to the fact that OoD road obstacles typically occupy only a minor portion of the overall driving scene; relying solely on full-frame retrieval therefore leads to inferior performance. The results also show that tracking plays a role in improving the retrieval results. Far-away detections are more challenging to retrieve than closer ones, so creating a link between detections closer to the camera and far-away detections via tracking improves the retrieval performance.

Figure 5: **Precision-Recall curve for each dataset**: Each curve illustrates the trade-off between precision and recall for varying thresholds. Dashed curves represent the baseline approach of retrieving based on full-scale driving scenes. Solid curves represent our method of retrieving based only on a cut-out of the OoD road obstacle with tracking information, and dotted curves represent our method but without tracking information.

### Ablation Study

We conduct an ablation study to understand the contribution and significance of the individual components of our method. In the first experiment, we evaluate the efficacy of our proposed method for the task of OoD road obstacle retrieval. Figure 6 shows our retrieval performance measured by the area under the precision-recall curve for the different datasets, compared to the setting with perfect OoD detections. We note that the tracking performance of the proposed lightweight algorithm is almost perfect when evaluated on ground truth detections. Therefore, to achieve better trajectories of objects, we require either a more robust tracking method that can compensate for the errors in detections or an enhanced segmentation model that reduces false positives. We found that, although the performance is reasonable when we assume that all OoD road obstacles are detected, there is still room for improvement. Regarding our OoD segmentation method, despite utilizing a state-of-the-art network, it falls short of capturing all instances of OoD objects (as indicated by the dashed curves not reaching a recall value of one).

Figure 6: **Average precision-recall curve for each dataset**: The area under the curve (AUPRC) provides a comprehensive measure of the retrieval performance for each dataset. Solid curves represent our method, where we segment and track the video streams, and dashed curves represent the retrieval performance on ground truth detections and tracking.

Table 1 summarizes our evaluation results for OoD segmentation, tracking, and retrieval across the three video datasets using different segmentation networks. The table shows a strong correlation between the OoD segmentation network performance (\(\overline{F}_{1}\) score) and the tracking and retrieval performance, which signifies the importance of OoD segmentation for retrieval. Figure 7 shows qualitative examples of retrieved video sequences.

Figure 7: Examples of retrieved video sequences with the corresponding query, sequence length (\(n\)), and similarity score (\(s\)). From left to right, the images correspond to the first and last frame of the sequence.

**Detection and Retrieval under Perfect Regions of Interest**: After analyzing our method, we identify a pattern of multiple false positives occurring on the sidewalk. This can be attributed to the fact that the sidewalk often contains objects that are considered OoD, and our road segmentation and region of interest generation methods are not perfect, sometimes resulting in a region of interest that includes the sidewalk. Therefore, we evaluate our proposed method under perfect region of interest masks obtained from the ground truth road and OoD road obstacle masks.
We evaluate the performance gain in segmentation, tracking, and retrieval. Table 2 shows the results of this experiment and highlights the performance gains in comparison to our method. We note that the improvement is due to the decrease in the number of false positive predictions, which leads to better tracking and, therefore, better retrieval scores. We remove the pixel-wise evaluation metrics (AUPRC and FPR\({}_{95}\)) from this evaluation, since they were evaluated using the standard evaluation protocol of [10], which is limited to the ground truth region of interest.

**The Effect of Meta Classification**: Meta classification [11, 52] poses an additional but negligible computational overhead to the OoD prediction pipeline and significantly reduces false positives. We evaluate the impact of removing meta classification from our pipeline. Table 3 presents our findings on the impact of meta classification on segmentation, tracking, and retrieval performance. As expected, removing meta classification reduces the \(\overline{F}_{1}\) score due to increased false positives, leading to worse tracking and retrieval performance.

## 5 Conclusion

This work presents a first approach for the novel task of text-to-video OoD road obstacle retrieval. Our primary aim is to address the question "_Have we ever encountered this before?_", a critical question arising during the life cycle of AI components in real-world scenarios of automated driving. Answering it would advance the development of automated driving systems by enabling the adaptation of driving policies in constantly changing environments. By leveraging single-frame OoD segmentation, object tracking, and multi-modal embedding of OoD road obstacles, our method provides an effective and efficient solution to retrieve relevant video data in response to AI-related field issues.
The empirical results showcase the clear advantage of our object-level processing approach over the straightforward baseline that relies solely on complete image information. Through exploration of the retrieval task's dependence on segmentation and tracking, we uncover valuable insights into enhancing performance. Specifically, we note the need for better post-segmentation methods to eliminate false-positive predictions, for the prediction of the drivable area as a region of interest for OoD road obstacles, and for meta classification for automated segment-wise false-positive removal. We believe this work lays the groundwork for further research into OoD road obstacle retrieval for a fast response to AI-related safety concerns during deployment. Thereby, we contribute towards resolving real-world challenges arising in the life cycle of data-driven AI components in automated driving perception systems.

\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c}
\hline \hline
 & & \multicolumn{3}{c|}{Segmentation} & \multicolumn{2}{c|}{Tracking} & Retrieval \\
\hline
Dataset & Method & AUPRC \(\uparrow\) & FPR\({}_{95}\) \(\downarrow\) & \(\overline{F}_{1}\) \(\uparrow\) & MOTA \(\uparrow\) & MOTP \(\downarrow\) & AUPRC \(\uparrow\) \\
\hline
SOS & Entropy max & 85.20 & 1.30 & 50.40 & 0.32 & 12.45 & 37.53 \\
 & RbA & **89.47** & **0.33** & **53.08** & **0.36** & **5.93** & **52.44** \\
\hline
WOS & Entropy max & **94.92** & **0.99** & 30.13 & 0.13 & 51.17 & 26.03 \\
 & RbA & 93.76 & 0.81 & **48.52** & **0.23** & **16.88** & **58.83** \\
\hline
CWL & Entropy max & 79.54 & 1.38 & 47.64 & 0.28 & 18.91 & 26.33 \\
 & RbA & **86.93** & **0.99** & **60.17** & **0.52** & **7.01** & **37.48** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: OoD object segmentation, tracking, and retrieval results.
\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline \hline
 & Segmentation & \multicolumn{2}{c|}{Tracking} & Retrieval \\
\hline
Dataset & \(\overline{F}_{1}\) \(\uparrow\) & MOTA \(\uparrow\) & MOTP \(\downarrow\) & AUPRC \(\uparrow\) \\
\hline
SOS & 19.68 (-33.90) & -1.57 (-1.93) & 30.41 (+24.48) & 24.29 (-28.15) \\
WOS & 28.31 (-20.21) & -0.99 (-1.22) & 16.66 (+0.22) & 32.55 (-26.28) \\
CWL & 22.98 (-37.19) & -0.69 (-1.21) & 14.21 (+7.20) & 35.95 (-1.53) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: OoD object segmentation, tracking, and retrieval results without meta classification, with comparative performance loss in comparison to the RbA method in Table 1.

\begin{table}
\begin{tabular}{l|c|c|c|c}
\hline \hline
 & Segmentation & \multicolumn{2}{c|}{Tracking} & Retrieval \\
\hline
Dataset & \(\overline{F}_{1}\) \(\uparrow\) & MOTA \(\uparrow\) & MOTP \(\downarrow\) & AUPRC \(\uparrow\) \\
\hline
SOS & 19.68 (-33.90) & -1.57 (-1.93) & 30.41 (+24.48) & 24.29 (-28.15) \\
WOS & 28.31 (-20.21) & -0.99 (-1.22) & 16.66 (+0.22) & 32.55 (-26.28) \\
CWL & 22.98 (-37.19) & -0.69 (-1.21) & 14.21 (+7.20) & 35.95 (-1.53) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: OoD object segmentation, tracking, and retrieval results under perfect region of interest, with comparative performance gains in comparison to RbA in Table 1.
2309.08538
Jittering and Clustering: Strategies for the Construction of Robust Designs
We discuss, and give examples of, methods for randomly implementing some minimax robust designs from the literature. These have the advantage, over their deterministic counterparts, of having bounded maximum loss in large and very rich neighbourhoods of the, almost certainly inexact, response model fitted by the experimenter. Their maximum loss rivals that of the theoretically best possible, but not implementable, minimax designs. The procedures are then extended to more general robust designs. For two-dimensional designs we sample from contractions of Voronoi tessellations, generated by selected basis points, which partition the design space. These ideas are then extended to $k$-dimensional designs for general k.
Douglas Wiens
2023-09-15T17:02:41Z
http://arxiv.org/abs/2309.08538v5
# Jittering and Clustering: Strategies for the Construction of Robust Designs

###### Abstract

We first discuss, and give examples of, methods for randomly implementing some minimax robust designs from the literature. These have the advantage, over their deterministic counterparts, of having bounded maximum loss in large and very rich neighbourhoods of the, almost certainly inexact, response model fitted by the experimenter. Their maximum loss rivals that of the theoretically best possible, but not implementable, minimax design. The procedures are then extended to more general robust designs. For two-dimensional designs we sample from contractions of Voronoi tessellations, generated by selected basis points, which partition the design space.

These ideas are then extended to \(k\)-dimensional designs for general \(k\).

keywords: central composite design, deterministic design, minimax, random design, robustness, Voronoi

MSC: Primary 62F35, Secondary 62K05

Journal: Journal of Optimization

## 1 Introduction and summary

In this article we investigate various methods of implementing experimental designs robust against model inadequacies. We begin with a review of the 'minimax' theory of robustness of design, and of some minimax designs from the literature. It will be seen that the designs which protect against a large class of alternative response models are necessarily absolutely continuous, and so lose their optimality when approximated by implementable, discrete (deterministic) designs. Two remedies for this and other issues are proposed, suggested by the work of Waite and Woods (2022), who propose and study _random design_ strategies.

The first remedy is a random design strategy termed _jittering_. The designs are obtained by uniform sampling from small neighbourhoods of an optimal set \(t^{*}=\{t_{i}|i=1,...,n\}\) of points, chosen to approximate the minimax design density. Both completely random and stratified random sampling - i.e. random within each neighbourhood - are considered.
We assess these designs by looking at the sample distributions of the mean squared prediction errors incurred; with respect to these measures both sampling strategies typically lead to designs that are very nearly optimal, with the stratification strategy clearly outperforming its completely random counterpart.

We then investigate a strategy leading to _cluster_ designs, motivated by the observation that robust designs for a particular response model tend to place their mass near those points \(t_{i}\) at which classically optimal designs, focussed solely on variance minimization, are replicated - but with their support points spread out in clusters of nearby points, rather than being replicated. In clustering the idea is to sample from densities concentrated near the \(t_{i}\). An advantage of this method over jittering is that there is no need for the minimax design to have already been derived.

Both these approaches parallel the 'random translation design strategy' of Waite and Woods (2022), who sample uniformly in small neighbourhoods of a chosen set of points, but with some significant differences. The choice of \(t^{*}\) in jittering allows for designs whose maximum expected loss rivals that of the minimax, absolutely continuous design. In clustering, both the support of the non-uniform densities from which we sample, and the extent of their concentration near the \(t_{i}\), are governed by a user-chosen parameter \(\nu\), representing the bias/variance trade-off desired by a user seeking robustness against model misspecifications.

We start by applying these ideas in several one-dimensional cases for which the minimax designs have been derived. We then consider two-dimensional applications in which the intervals containing the \(t_{i}\) are replaced by less regular regions, formed by shrinking Voronoi tessellations generated by \(t^{*}\). We finish with recommendations for the construction of \(k\)-dimensional designs for \(k\geq 3\).
The examples were prepared using matlab; the code is available on the author's website.

## 2 Minimax robustness of design

The theory of robustness of design was largely initiated by Box and Draper (1959), who investigated the robustness of some classical experimental designs in the presence of certain model inadequacies, e.g. designs optimal for a low order polynomial response when the true response was a polynomial of higher order. Huber (1975) derived _minimax_ designs for straight line regression; these minimize the maximum integrated mean squared error, with the maximum taken over a large class of alternative responses. Wiens (1990, 1992) extended these results to multiple regression responses and in a variety of other directions - see Wiens (2015) for a summary of these and other approaches to robustness of design.

Specifically, the general problem is phrased in terms of an approximate regression response

\[E\left[Y\left(\boldsymbol{x}\right)\right]\approx\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{\theta}, \tag{1}\]

for \(p\) regressors \(\boldsymbol{f}\), each a function of \(q\) independent variables \(\boldsymbol{x}\), and a parameter \(\boldsymbol{\theta}\). Since (1) is an approximation the interpretation of \(\boldsymbol{\theta}\) is unclear; we _define_ this target parameter by

\[\boldsymbol{\theta}=\arg\min_{\boldsymbol{\eta}}\int_{\mathcal{X}}\left(E\left[Y\left(\boldsymbol{x}\right)\right]-\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{\eta}\right)^{2}\mu\left(d\boldsymbol{x}\right), \tag{2}\]

where \(\mu\left(d\boldsymbol{x}\right)\) represents either Lebesgue measure or counting measure, depending upon the nature of the _design space_ \(\mathcal{X}\). We then define \(\psi\left(\boldsymbol{x}\right)=E\left[Y\left(\boldsymbol{x}\right)\right]-\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{\theta}\).
This results in the class of responses \(E\left[Y\left(\boldsymbol{x}\right)\right]=\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{\theta}+\psi\left(\boldsymbol{x}\right)\), with - by virtue of (2) - \(\psi\) satisfying the orthogonality requirement

\[\int_{\mathcal{X}}\boldsymbol{f}\left(\boldsymbol{x}\right)\psi\left(\boldsymbol{x}\right)\mu\left(d\boldsymbol{x}\right)=\boldsymbol{0}. \tag{3}\]

Assuming that \(\mathcal{X}\) is large enough that the matrix \(\boldsymbol{A}=\int_{\mathcal{X}}\boldsymbol{f}\left(\boldsymbol{x}\right)\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\mu\left(d\boldsymbol{x}\right)\) is invertible, the parameter defined by (2) and (3) is unique.

We identify a design with its design measure - a probability measure \(\xi\left(d\boldsymbol{x}\right)\) on \(\mathcal{X}\). Define

\[\boldsymbol{M}_{\xi}=\int_{\mathcal{X}}\boldsymbol{f}\left(\boldsymbol{x}\right)\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\xi\left(d\boldsymbol{x}\right),\ \ \boldsymbol{b}_{\psi,\xi}=\int_{\mathcal{X}}\boldsymbol{f}\left(\boldsymbol{x}\right)\psi\left(\boldsymbol{x}\right)\xi\left(d\boldsymbol{x}\right),\]

and assume \(\xi\) is such that \(\boldsymbol{M}_{\xi}\) is invertible.
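As a concrete numerical illustration (not taken from the paper) of the target parameter (2) and the orthogonality property (3): fitting the straight-line regressors \(\boldsymbol{f}(x)=(1,x)^{\prime}\) to the true response \(E[Y(x)]=x^{2}\) on \(\mathcal{X}=[-1,1]\), with \(\mu\) Lebesgue measure, gives \(\boldsymbol{\theta}=(1/3,0)^{\prime}\) and \(\psi(x)=x^{2}-1/3\); this can be checked on a grid:

```python
import numpy as np

# Illustrative check: f(x) = (1, x)', true response E[Y(x)] = x^2 on [-1, 1],
# with the integrals in (2) and (3) approximated by Riemann sums on a fine grid.
xs = np.linspace(-1.0, 1.0, 20001)
dx = xs[1] - xs[0]
F = np.stack([np.ones_like(xs), xs])          # rows: f(x) evaluated on the grid

A = (F * dx) @ F.T                            # A = int f f' dx  (about diag(2, 2/3))
truth = xs**2
theta = np.linalg.solve(A, (F * dx) @ truth)  # the minimizer in (2): theta = (1/3, 0)'
psi = truth - F.T @ theta                     # psi(x) = x^2 - 1/3

orth = (F * dx) @ psi                         # orthogonality (3): int f psi dx = 0
```

The vector `orth` vanishes by construction, since `theta` solves the (discretized) normal equations exactly.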
The covariance matrix of the least squares estimator \(\boldsymbol{\hat{\theta}}\), assuming homoscedastic errors with variance \(\sigma_{\varepsilon}^{2}\), is \(\left(\sigma_{\varepsilon}^{2}/n\right)\boldsymbol{M}_{\xi}^{-1}\), and the bias is \(E\left[\boldsymbol{\hat{\theta}}-\boldsymbol{\theta}\right]=\boldsymbol{M}_{\xi}^{-1}\boldsymbol{b}_{\psi,\xi}\); together these yield the mean squared error (_mse_) matrix

\[\textsc{mse}\left[\boldsymbol{\hat{\theta}}\right]=\frac{\sigma_{\varepsilon}^{2}}{n}\boldsymbol{M}_{\xi}^{-1}+\boldsymbol{M}_{\xi}^{-1}\boldsymbol{b}_{\psi,\xi}\boldsymbol{b}_{\psi,\xi}^{\prime}\boldsymbol{M}_{\xi}^{-1}\]

of the parameter estimates, whence the _mse_ of the fitted values \(\hat{Y}\left(\boldsymbol{x}\right)=\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{\hat{\theta}}\) is

\[\textsc{mse}\left[\hat{Y}\left(\boldsymbol{x}\right)\right]=\frac{\sigma_{\varepsilon}^{2}}{n}\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{M}_{\xi}^{-1}\boldsymbol{f}\left(\boldsymbol{x}\right)+\left(\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)\boldsymbol{M}_{\xi}^{-1}\boldsymbol{b}_{\psi,\xi}\right)^{2}.\]

A loss function that is commonly employed is the _integrated mse_ of the predictions:

\[\textsc{imse}\left(\xi|\psi\right)=\int_{\mathcal{X}}\textsc{mse}\left[\hat{Y}\left(\boldsymbol{x}\right)\right]d\boldsymbol{x}=\frac{\sigma_{\varepsilon}^{2}}{n}tr\left(\boldsymbol{A}\boldsymbol{M}_{\xi}^{-1}\right)+\boldsymbol{b}_{\psi,\xi}^{\prime}\boldsymbol{M}_{\xi}^{-1}\boldsymbol{A}\boldsymbol{M}_{\xi}^{-1}\boldsymbol{b}_{\psi,\xi}+\int_{\mathcal{X}}\psi^{2}\left(\boldsymbol{x}\right)\mu\left(d\boldsymbol{x}\right). \tag{4}\]

The dependence on \(\psi\) is eliminated by adopting a _minimax_ approach, according to which one first maximizes (4) over a neighbourhood of the assumed response.
This neighbourhood is constrained by (3) and by a bound \(\int_{\mathcal{X}}\psi^{2}\left(\boldsymbol{x}\right)\mu\left(d\boldsymbol{x}\right)\leq\tau^{2}/n\), required so that errors due to bias and to variation remain of the same order, asymptotically. Huber (1975) took \(\mathcal{X}\) to be an interval of the real line and assumed that the minimax design measure had a density \(m\left(x\right)\); Wiens (1992) justified this assumption by proving that any design whose design space \(\mathcal{X}\) has positive Lebesgue measure, and which places positive mass on a set of Lebesgue measure zero, necessarily has \(\sup_{\psi}\textsc{imse}\left(\xi|\psi\right)=\infty\). Thus in order that a design on an interval, hypercube, etc. have finite maximum loss, it must be absolutely continuous. For such a design \(\max_{\psi}\textsc{imse}\left(\xi|\psi\right)\) is \(\left(\sigma_{\varepsilon}^{2}+\tau^{2}\right)/n\) times

\[I_{\nu}\left(\xi\right)=\left(1-\nu\right)tr\left(\boldsymbol{A}\boldsymbol{M}_{\xi}^{-1}\right)+\nu\,ch_{\max}\left(\boldsymbol{K}_{\xi}\boldsymbol{H}_{\xi}^{-1}\right), \tag{5}\]

where

\[\boldsymbol{H}_{\xi}=\boldsymbol{M}_{\xi}\boldsymbol{A}^{-1}\boldsymbol{M}_{\xi},\ \boldsymbol{K}_{\xi}=\int_{\mathcal{X}}\boldsymbol{f}\left(\boldsymbol{x}\right)\boldsymbol{f}^{\prime}\left(\boldsymbol{x}\right)m^{2}\left(\boldsymbol{x}\right)d\boldsymbol{x},\]

\(ch_{\max}\) denotes the maximum eigenvalue, and \(\nu=\tau^{2}/\left(\sigma_{\varepsilon}^{2}+\tau^{2}\right)\in\left[0,1\right]\) represents the relative importance, to the experimenter, of errors due to bias rather than to variance.
With

\[\boldsymbol{G}_{\xi}=\boldsymbol{K}_{\xi}-\boldsymbol{H}_{\xi},\]

\[\mathbf{r}_{\xi}\left(\boldsymbol{x}\right)=\frac{\tau}{\sqrt{n}}\boldsymbol{G}_{\xi}^{-1/2}\left(m\left(\boldsymbol{x}\right)\boldsymbol{I}_{p}-\boldsymbol{M}_{\xi}\boldsymbol{A}^{-1}\right)\boldsymbol{f}\left(\boldsymbol{x}\right), \tag{6}\]

the least favourable contaminant is

\[\psi_{\xi}\left(\boldsymbol{x}\right)=\mathbf{r}_{\xi}^{\prime}\left(\boldsymbol{x}\right)\beta_{\xi}, \tag{7}\]

where \(\beta_{\xi}\) is the unit eigenvector belonging to the maximum eigenvalue of \(\boldsymbol{G}_{\xi}^{1/2}\boldsymbol{H}_{\xi}^{-1}\boldsymbol{G}_{\xi}^{1/2}+\boldsymbol{I}_{p}\). See Wiens (2015) for details and further references.

### Random designs

In the following sections we construct distributions \(\Phi\left(\boldsymbol{x}\right)\), with densities \(\phi\left(\boldsymbol{x}\right)\), and propose randomly choosing design points from \(\Phi\). An \(n\)-point design \(D=\{\boldsymbol{x}_{i}\}_{i=1}^{n}\) chosen in this way has design measure \(\delta=n^{-1}\sum\delta_{\boldsymbol{x}_{i}}\), where \(\delta_{\boldsymbol{x}_{i}}\) is point mass at \(\boldsymbol{x}_{i}\sim\Phi\). By the preceding, any such design has unbounded imse once it is chosen. Of interest however is the _expected_ imse against a common alternative; for this we evaluate at the least favourable contaminant \(\psi_{\Phi}\), given by (6) and (7) but with \(\xi\) replaced by \(\Phi\).
It is shown in the Appendix that

\[E_{\Phi}\left[\text{imse}\left(\delta|\psi_{\Phi}\right)\right]=\left(\sigma_{\varepsilon}^{2}+\tau^{2}\right)/n\times J_{\nu}\left(\Phi\right), \tag{8}\]

where

\[J_{\nu}\left(\Phi\right)=\left(1-\nu\right)E_{\Phi}\left[tr\boldsymbol{A}\boldsymbol{M}_{\delta}^{-1}\right]+\nu E_{\Phi}\left[\gamma_{\delta}\right],\ \text{for} \tag{9}\]

\[\gamma_{\delta}=\beta_{\Phi}^{\prime}\boldsymbol{G}_{\Phi}^{-1/2}\left(\boldsymbol{M}_{\phi}\boldsymbol{M}_{\delta}^{-1}-\boldsymbol{M}_{\Phi}\boldsymbol{A}^{-1}\right)\boldsymbol{A}\left(\boldsymbol{M}_{\delta}^{-1}\boldsymbol{M}_{\phi}-\boldsymbol{A}^{-1}\boldsymbol{M}_{\Phi}\right)\boldsymbol{G}_{\Phi}^{-1/2}\beta_{\Phi}+1.\]

Here \(\boldsymbol{M}_{\delta}\stackrel{\text{def}}{=}\frac{1}{n}\sum_{\boldsymbol{x}_{i}\in D}\boldsymbol{f}\left(\boldsymbol{x}_{i}\right)\boldsymbol{f}^{\prime}\left(\boldsymbol{x}_{i}\right)\) and \(\boldsymbol{M}_{\phi}\stackrel{\text{def}}{=}\frac{1}{n}\sum_{\boldsymbol{x}_{i}\in D}\boldsymbol{f}\left(\boldsymbol{x}_{i}\right)\phi\left(\boldsymbol{x}_{i}\right)\boldsymbol{f}^{\prime}\left(\boldsymbol{x}_{i}\right)\); \(\beta_{\Phi}\) is the unit eigenvector belonging to the maximum eigenvalue of \(\boldsymbol{G}_{\Phi}^{1/2}\boldsymbol{H}_{\Phi}^{-1}\boldsymbol{G}_{\Phi}^{1/2}+\boldsymbol{I}_{p}\). Note that both \(\boldsymbol{M}_{\delta}\) and \(\boldsymbol{M}_{\phi}\) are random. The expectations in (9) can be estimated by averaging over a large number of realizations of \(\delta\) - see §4.1 for an example of this. In the special case that \(\phi\left(\boldsymbol{x}\right)\) is constant on its support - as is the case in §3 - \(\boldsymbol{M}_{\delta}^{-1}\boldsymbol{M}_{\phi}\) is a constant multiple of \(\boldsymbol{I}_{p}\), \(\gamma_{\delta}\) is non-random, and these formulas simplify considerably - see (12).
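The averaging just mentioned can be sketched for the variance term \(E_{\Phi}\left[tr\boldsymbol{A}\boldsymbol{M}_{\delta}^{-1}\right]\) of (9), for the straight-line model on \([-1,1]\); here \(\Phi\) is taken to be uniform on \([-1,1]\) purely as an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 2000
A = np.array([[2.0, 0.0], [0.0, 2.0 / 3.0]])  # A = int_{-1}^{1} f f' dx, f(x) = (1, x)'

def trace_term(x):
    """tr(A M_delta^{-1}) for the realized design delta = n^{-1} sum of point masses at x_i."""
    F = np.stack([np.ones_like(x), x])
    M = (F @ F.T) / len(x)
    return float(np.trace(A @ np.linalg.inv(M)))

# Estimate E_Phi[tr A M_delta^{-1}] by averaging over many realizations of delta,
# each delta consisting of n points drawn from Phi (uniform, for illustration).
estimate = np.mean([trace_term(rng.uniform(-1.0, 1.0, n)) for _ in range(reps)])
```

For uniform \(\Phi\) one has \(tr\left(\boldsymbol{A}\boldsymbol{M}_{\Phi}^{-1}\right)=4\); the Monte Carlo average exceeds this slightly, since \(tr\left(\boldsymbol{A}\boldsymbol{M}^{-1}\right)\) is convex in \(\boldsymbol{M}\).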
## 3 Jittering

There are obvious issues in implementing an absolutely continuous design measure within this framework, since any discrete approximation necessarily suffers from the drawback, as above, that the maximum loss is infinite. Noting that in this case the least favourable contaminating function \(\psi\) is largely concentrated on a set of measure zero - an unlikely eventuality against which to seek protection - Wiens (1992, p. 355) states that "Our attitude is that an approximation to a design which is robust against more realistic alternatives is preferable to an exact solution in a neighbourhood which is unrealistically sparse." He places one observation at each of the quantiles

\[t_{i}=\xi^{-1}\left(\frac{i-1/2}{n}\right),\ \ i=1,...,n, \tag{10}\]

which is the \(n\)-point design closest to \(\xi\) in Kolmogorov distance (Fang and Wang 1994; see Xu and Yuen 2011 for other possibilities). Despite the disclaimer above, such discrete implementations have become controversial; see in particular Bischoff (2010).

In this article we investigate a resolution to these difficulties offered by Waite and Woods (2022), who propose randomly sampling the design points from uniform densities highly concentrated in small neighbourhoods of an optimally chosen set of deterministic points. In our case we propose random sampling from the piecewise uniform density

\[\phi\left(x;c\right)=\frac{1}{2c}\sum_{i=1}^{n}I\left[t_{i}-\frac{c}{n}\leq x\leq t_{i}+\frac{c}{n}\right], \tag{11}\]

for chosen \(c\in\left(0,1\right)\).
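Sampling a design from (11) amounts to drawing points uniformly from the intervals \([t_{i}-c/n,\,t_{i}+c/n]\): one point per interval in the stratified version, or with the interval itself chosen uniformly at random in the completely random version. A sketch, with illustrative \(t_{i}\) (equally spaced here, rather than the minimax quantiles):

```python
import numpy as np

def jitter_stratified(t, c, rng):
    """One point drawn uniformly from each interval [t_i - c/n, t_i + c/n]."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    return t + rng.uniform(-c / n, c / n, size=n)

def jitter_completely_random(t, c, n_points, rng):
    """n_points independent draws from the piecewise uniform density (11)."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    centres = rng.choice(t, size=n_points)     # intervals chosen uniformly at random
    return centres + rng.uniform(-c / n, c / n, size=n_points)

rng = np.random.default_rng(1)
t = np.linspace(-0.9, 0.9, 10)                 # illustrative t_i, not from a minimax design
x = jitter_stratified(t, c=0.5, rng=rng)
```

With these \(t_{i}\), the intervals have half-width \(c/n=0.05\) and spacing \(0.2\), so the supports are non-overlapping and contained in \([-1,1]\).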
We illustrate the method in the context of straight line regression - \(\mathcal{X}=\left[-1,1\right]\) and \(\boldsymbol{f}\left(x\right)=\left(1,x\right)^{\prime}\) - for which Huber (1975) obtained the minimax density \[m\left(x\right)=3\left(x^{2}-\alpha\right)^{+}/d\left(\alpha\right),\] with \(\alpha\) chosen to minimize (5), which in terms of \[\mu_{2}(\alpha)=\int_{-1}^{1}x^{2}m\left(x\right)dx,\ \kappa_{0}(\alpha)=\int_{-1}^ {1}m^{2}\left(x\right)dx,\ \kappa_{2}(\alpha)=\int_{-1}^{1}x^{2}m^{2}\left(x\right)dx\] is \[K_{\nu}(\alpha)=2\left(1-\nu\right)\left(1+\frac{1}{3\mu_{2}}\right)+2\nu\max \left(\kappa_{0},\frac{\kappa_{2}}{3\mu_{2}^{2}}\right).\] Apart from minor modifications resulting from the change in the support to \(\left[-1,1\right]\) from \(\left[-1/2,1/2\right]\), the details of the construction of \(m\) are as in Huber (1975). We assume that \(\max\left(\kappa_{0},\kappa_{2}/3\mu_{2}^{2}\right)=\kappa_{0}\) and check this once \(m\) is obtained. We find \[d\left(\alpha\right)=\left\{\begin{array}{cc}2\left(1-3\alpha\right),&\alpha \leq 0,\\ 2\left(1-\sqrt{\alpha}\right)^{2}\left(1+2\sqrt{\alpha}\right),&\alpha\geq 0,\end{array}\right.\] with \(\alpha\) and \(\nu\) related by \[\nu^{-1}=\left\{\begin{array}{cc}1+\frac{9\left(3-5\alpha\right)^{2}}{25 \left(1-3\alpha\right)^{3}},&\alpha\leq 0,\\ 1+\frac{9\left(3+6\sqrt{\alpha}+4\alpha+2\alpha^{3/2}\right)^{2}}{25\left(1- \sqrt{\alpha}\right)^{2}\left(1+2\sqrt{\alpha}\right)^{3}},&\alpha\geq 0. \end{array}\right.\] The limiting cases are (i) \(\alpha\rightarrow-\infty\), \(\nu\to 1\), \(m\left(x\right)\rightarrow.5\) (the uniform density), (ii) \(\alpha=0\), \(\nu=25/106\), \(m\left(x\right)=3x^{2}/2\), and (iii) \(\alpha\rightarrow\infty\), \(\nu\to 0\), \(m\left(x\right)\rightarrow\) point masses of \(1/2\) at \(\pm 1\). 
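The two branches of \(d(\alpha)\) can be checked numerically; the sketch below verifies by trapezoidal quadrature that \(m\) integrates to one on \([-1,1]\) for one \(\alpha\) on each branch:

```python
import numpy as np

def d_of_alpha(alpha):
    """Normalizing constant d(alpha) of Huber's minimax density on [-1, 1]."""
    if alpha <= 0:
        return 2 * (1 - 3 * alpha)
    r = np.sqrt(alpha)
    return 2 * (1 - r) ** 2 * (1 + 2 * r)

def m(x, alpha):
    # m(x) = 3 (x^2 - alpha)^+ / d(alpha)
    return 3 * np.maximum(x ** 2 - alpha, 0.0) / d_of_alpha(alpha)

def trap(y, x):
    # simple trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

x = np.linspace(-1, 1, 200001)
mass_neg = trap(m(x, -0.7), x)   # alpha <= 0 branch
mass_pos = trap(m(x, 0.3), x)    # alpha >= 0 branch
```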
It is a fortuitous consequence of the choice of _imse_ as loss that for all \(\nu\in\left[0,1\right]\), \(\max\left(\kappa_{0},\kappa_{2}/3\mu_{2}^{2}\right)=\kappa_{0}\), the choice used in the derivation of the minimizing density \(m\). For other common choices - D-, A- and E-optimality for instance - the situation is far more complicated. See Daemi and Wiens (2013).

### Jittered designs for SLR

In the construction of the sampling density (11) for this example we will take \(\alpha\leq 0\) - the case of most interest from a robustness standpoint - and then for \(m\) as above, the symmetrically placed points \(t_{i}\) are determined by \[t_{i}^{3}-3\alpha t_{i}=\left(1-3\alpha\right)\left(\frac{2i-1-n}{n}\right),\ i=1,...,n.\] This equation has an explicit solution furnished by Cardano's formula: \[t_{i}=\left(-s/2+\sqrt{\Delta}\right)^{1/3}+\left(-s/2-\sqrt{\Delta}\right)^{1/3},\] for \[s=-\left(1-3\alpha\right)\left(\frac{2i-1-n}{n}\right),\ \Delta=s^{2}/4-\alpha^{3}>0.\] From (10), and the bowl-shape of \(m(x)\), one infers that the distances between adjacent \(t_{i}\) are smallest near \(\pm 1\), largest near \(0\). Thus the intervals of support of \(\phi\) will be non-overlapping, and within \(\left[-1,1\right]\), as long as \(t_{1}-c/n\geq-1\), i.e. \(c\leq n\left(1+t_{1}\right)\). Note that the interpretation of \(c\) is that it is the proportion of the design space being randomly sampled. 
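Cardano's formula codes directly; the sketch below computes the \(t_{i}\) for the illustrative values \(n=10\), \(\alpha=-1\) and checks them against the defining cubic:

```python
import numpy as np

def design_points(n, alpha):
    """Solve t_i^3 - 3*alpha*t_i = (1 - 3*alpha)*(2i - 1 - n)/n by
    Cardano's formula; for alpha <= 0 the discriminant is positive."""
    i = np.arange(1, n + 1)
    s = -(1 - 3 * alpha) * (2 * i - 1 - n) / n
    root = np.sqrt(s ** 2 / 4 - alpha ** 3)
    # np.cbrt takes real cube roots of negative arguments as required
    return np.cbrt(-s / 2 + root) + np.cbrt(-s / 2 - root)

n, alpha = 10, -1.0
t = design_points(n, alpha)
i = np.arange(1, n + 1)
resid = t ** 3 - 3 * alpha * t - (1 - 3 * alpha) * (2 * i - 1 - n) / n
```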
A comparison of the maximum loss (5) of \(\xi\) versus that of the design measure \(\Phi\) corresponding to \(\phi\) is obtained from \[I_{\nu}\left(\xi\right) = 2\left(1-\nu\right)\left(1+\frac{1}{3\mu_{2}(\alpha)}\right)+ \nu\left(1+\frac{5}{4}\left(3\mu_{2}(\alpha)-1\right)^{2}\right),\] \[I_{\nu}\left(\Phi\right) = 2\left(1-\nu\right)\left(1+\frac{1}{3\lambda_{2}(c)}\right)+ \frac{\nu}{c}\max\left(1,\frac{1}{3\lambda_{2}(c)}\right),\] where \[\mu_{2}(\alpha)=\frac{3-5\alpha}{5\left(1-3\alpha\right)},\ \text{and}\ \lambda_{2}(c)=\int_{-1}^{1}x^{2}\phi\left(x;c\right)dx=\frac{1}{n}\sum_{i=1 }^{n}t_{i}^{2}+\frac{c^{2}}{3n^{2}}.\] Note that \(I_{\nu}\left(\Phi\right)\) is evaluated at the least favourable contaminant \(\psi_{\Phi}\), determined as at (7) with \(\xi\) replaced by \(\Phi\). See Figure 1, where we present plots and comparative values, when placing equal weight on protection against bias versus variance (\(\nu=.5\)), \(50\%\) of the design space to be sampled from \(\Phi\) (\(c=.5\)) and \(n=10\). In Figure 1(d) we give values of the loss (4) against a particular \(\psi\) of interest. With these designs the alternative of most concern is probably quadratic, and so we take \(\psi(x)=\left(\tau/\sqrt{n}\right)\psi_{*}(x)\), with \(\psi_{*}(x)=\sqrt{45/8}\left(x^{2}-1/3\right)\), orthogonal to \(\left(1,x\right)\) and having \(\int_{-1}^{1}\psi_{*}^{2}(x)dx=1\). From (7) we see that this in fact gives the least favourable contaminant for the design \(\xi\). 
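A sketch of this comparison: first invert the \(\alpha\leq 0\) branch of the \(\nu\)-\(\alpha\) relation by bisection, then evaluate \(I_{\nu}(\xi)\) and \(I_{\nu}(\Phi)\) at the illustrative values \(\nu=.5\), \(c=.5\), \(n=10\), with the \(t_{i}\) obtained from Cardano's formula as in §3.1:

```python
import numpy as np

def alpha_for_nu(nu):
    """Invert the alpha <= 0 branch of the nu-alpha relation by bisection."""
    h = lambda a: 1 + 9 * (3 - 5 * a) ** 2 / (25 * (1 - 3 * a) ** 3) - 1 / nu
    lo, hi = -50.0, 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

nu, c, n = 0.5, 0.5, 10
a = alpha_for_nu(nu)
mu2 = (3 - 5 * a) / (5 * (1 - 3 * a))
I_xi = 2 * (1 - nu) * (1 + 1 / (3 * mu2)) + nu * (1 + 1.25 * (3 * mu2 - 1) ** 2)

# design points t_i of (10) via Cardano's formula
i = np.arange(1, n + 1)
s = -(1 - 3 * a) * (2 * i - 1 - n) / n
root = np.sqrt(s ** 2 / 4 - a ** 3)
t = np.cbrt(-s / 2 + root) + np.cbrt(-s / 2 - root)

lam2 = np.mean(t ** 2) + c ** 2 / (3 * n ** 2)
I_phi = 2 * (1 - nu) * (1 + 1 / (3 * lam2)) + (nu / c) * max(1, 1 / (3 * lam2))
```

Since \(\xi\) is minimax, \(I_{\nu}\left(\Phi\right)\) exceeds \(I_{\nu}\left(\xi\right)\), here by a modest margin.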
For any symmetric design \(\pi\) with \(\mu_{2}\left(\pi\right)=\int_{-1}^{1}x^{2}\pi\left(dx\right)\), the loss (4) applied to this quadratic alternative is \(\left(\sigma_{\varepsilon}^{2}+\tau^{2}\right)/n\) times \[\textsc{qimse}\left(\pi|\psi_{*}\right) = \left(1-\nu\right)tr\left(\boldsymbol{A}\boldsymbol{M}_{\pi}^{-1}\right)+\nu\left(\boldsymbol{b}_{\psi_{*,\pi}}^{\prime}\boldsymbol{M}_{\pi}^{-1}\boldsymbol{A}\boldsymbol{M}_{\pi}^{-1}\boldsymbol{b}_{\psi_{*,\pi}}+1\right)\] \[= 2\left(1-\nu\right)\left(1+\frac{1}{3\mu_{2}\left(\pi\right)}\right)+\nu\left(\frac{5}{4}\left(3\mu_{2}\left(\pi\right)-1\right)^{2}+1\right).\] In Figure 1(d) we plot \(\textsc{qimse}(\Phi|\psi_{*})\) and highlight \(\textsc{qimse}(\xi|\psi_{*})\), which by virtue of the observation above coincides with \(I_{\nu}\left(\xi\right)\). As noted in §2.1, \(E_{\Phi}\left[\textsc{imse}\left(\delta|\psi_{\Phi}\right)\right]\) simplifies considerably for these jittered designs and then \(J_{\nu}\left(\Phi\right)\) is very similar to \(I_{\nu}\left(\Phi\right)\). We show in the Appendix that in this case (9) becomes \[J_{\nu}\left(\Phi\right)=\left(1-\nu\right)E_{\Phi}\left[tr\boldsymbol{AM}_{\delta}^{-1}\right]+\frac{\nu}{c}\max\left(1,\frac{1}{3\lambda_{2}\left(c\right)}\right). \tag{12}\] We note that \[E_{\Phi}\left[tr\boldsymbol{AM}_{\delta}^{-1}\right]=2\left\{1+E_{\Phi}\left[\frac{\mu_{\delta}^{2}+\frac{1}{3}}{\sigma_{\delta}^{2}}\right]\right\},\] where \(\mu_{\delta}\) and \(\sigma_{\delta}^{2}\) are the mean and variance of the design. From both (c) and (d) of Figure 1 we see that the loss associated with the design \(\Phi\) decreases with \(c\), i.e. as the design becomes closer to the uniform design on all of \(\chi\), for which the bias vanishes. 
This is in line with the remark of Box and Draper (1959): "The optimal design in typical situations in which both variance and bias occur is very nearly the same as would be obtained if _variance were ignored completely_ and the experiment designed so as to _minimize bias alone_."

Figure 2: Top: Values of qimse from 1000 simulated random designs. Bottom: Values of \(\textsc{imse}(\delta|\psi_{\Phi})\) and their average, estimating \(E\left[\textsc{imse}\left(\delta|\psi_{\Phi}\right)\right]\). These use the same inputs as in Figure 1. (a), (c): Completely random sampling. (b), (d): Stratified random sampling.

#### 3.1.1 Sampling methods

We simulated \(M=1000\) completely random and stratified random designs, in order to assess their performance. A completely random design consisted of \(n=10\) points chosen from \(\phi\left(x;c\right)\). Stratification consisted of choosing one design point at random from each bin. In each case we plotted qimse(\(\pi|\psi_{*}\)) (calculated using \(\mu_{2}\left(\pi\right)=\sum_{i=1}^{n}x_{i}^{2}/n\)), and then evaluated various descriptive measures. See Figure 2(a),(b). As was seen in Figure 1 and Table 1 there is a negligible difference in qimse when going from the discrete design with \(c=0\), to its randomization with \(c=.5\), even though the former has infinite maximum loss. The sample averages of the losses from the randomized designs were smaller and closer to their expectation qimse(\(\Phi|\psi_{*}\)) under the stratified sampling scheme, and the losses themselves were much more concentrated near qimse(\(\Phi|\psi_{*}\)), as exhibited by the much shorter tail in (b). Similar comments apply to the values of imse(\(\delta|\psi_{\Phi}\)) in Figure 2(c),(d). In a further calculation, for which the output is not displayed here, we estimated \(J_{\nu}\left(\xi\right)\), as at (9), by drawing 1000 samples from the minimax density \(m\left(x\right)\) and computing imse(\(\delta|\psi_{\xi}\)) for each. 
The values showed even more variation than those plotted in Figure 5(c), but with an average imse of 2.72 - very close to that in Figure 5(d). From this we infer that jittering combined with stratification gives an efficient, structured implementation of the minimax solution. Simulations using other inputs also resulted in these same conclusions - that our random design strategies typically yield designs very close to optimal with respect to our robustness and efficiency requirements, and that do not suffer from the drawback of deterministic designs of having infinite maximum loss.

## 4 Cluster designs in one dimension

Working in discrete design spaces, Wiens (2018) obtained minimax robust designs for a variety of approximate responses. Those shown in Figure 3 are for cubic regression. The classically I-optimal design (\(\nu=0\)) minimizing integrated variance alone was derived by Studden (1977) and places masses of .1545 and .3455 at \(\pm 1\) and \(\pm.4472\). The robust designs can thus be described as taking the replicates of the classical design and spreading their mass out ('clustering') over nearby regions. This same phenomenon has frequently been noticed in other situations (Fang and Wiens 2000, Heo, Schmuland and Wiens 2001 for instance). In this section we aim to formalize this notion in order to obtain designs competing with the minimax designs, but with finite maximum loss even in continuous design spaces, and having the advantage of being much more easily derived - there is no need for the minimax designs to be known. We consider only one-dimensional designs in this section, and will illustrate the methods in polynomial response models of degrees \(p-1=1,2,3\). Suppose that a given static design has \(p\) support points \(t_{1}<\cdots<t_{p}\) in \([-1,1]\). Define midpoints \(s_{i}=\left(t_{i}+t_{i+1}\right)/2\), \(i=1,\cdots,p-1\). Put \(s_{0}=\min\left(-1,t_{1}\right)\) and \(s_{p}=\max\left(1,t_{p}\right)\). 
Then the \(p\) intervals \(I_{i}=\left\{\left[s_{i-1},s_{i}\right],\ i=1,...,p\right\}\) cover \([-1,1]\) and have the properties that \(t_{i}\in I_{i}\) and that any point in \(I_{i}\) is closer to \(t_{i}\) than to any \(t_{j}\), \(j\neq i\). We note that this is then a trivial example of a _Voronoi tessellation_, to be considered when we pass to higher dimensions. We propose designs consisting of points sampled from Beta densities on subintervals of the \(I_{i}\). Specifically, for \(i=1,...,p\) let \(c=c\left(\nu\right)\in[0,1]\) satisfy \(c\left(0\right)=0\) and \(c\left(1\right)=1\). Put \(J_{i}\left(c\right)=\left[t_{i}-c\left(t_{i}-s_{i-1}\right),t_{i}+c\left(s_{i}-t_{i}\right)\right]\equiv\left[k_{i},l_{i}\right]\), with length \(\left|J_{i}\right|=\left(l_{i}-k_{i}\right)=c\left(s_{i}-s_{i-1}\right)=c\times\left|I_{i}\right|\). Let \(\beta_{a,b}\left(x\right)\) be the Beta\(\left(a,b\right)\) density on \(\left[0,1\right]\). Then \[\frac{1}{\left|J_{i}\right|}\beta_{a,b}\left(\frac{x-k_{i}}{\left|J_{i}\right|}\right),\ x\in J_{i}\left(c\right) \tag{13}\] is this density, translated and scaled to \(J_{i}\left(c\right)\). The interpretation of '\(c\)' is as before - it is the fraction of the design space to be sampled. The parameters \(\left(a_{i},b_{i}\right)\) are chosen so that the mode of (13) is at \(t_{i}\in J_{i}\left(c\right)\), hence the mode \(\delta_{i}\in\left[0,1\right]\) of \(\beta_{a,b}\left(x\right)\) is given by \[\delta_{i}\equiv\frac{t_{i}-k_{i}}{l_{i}-k_{i}}=\left\{\begin{array}{cc}\frac{a_{i}-1}{a_{i}-1+b_{i}-1},&a_{i},b_{i}>1,\\ 0,&a_{i}\leq 1<b_{i},\\ 1,&b_{i}\leq 1<a_{i}.\end{array}\right.\] Then \[\left(a_{i}-1\right)\left(1-\delta_{i}\right)=\left(b_{i}-1\right)\delta_{i}. \tag{14}\] If \(\delta_{i}\neq 0,1\) we determine one of \(\left(a_{i},b_{i}\right)\) in terms of the other through (14). If \(-1=t_{1}\) then \(\delta_{1}=0\) and we set \(a_{1}=1\). 
If \(t_{p}=1\) then \(\delta_{p}=1\) and we set \(b_{p}=1\). In each case the remaining parameter is set equal to \(1/\nu\), so that the density tends to a point mass at \(t_{i}\) as \(\nu\to 0\) and to uniformity as \(\nu\to 1\). Correspondingly we choose \(c\left(\nu\right)=\nu\). The final density \(\phi\left(\cdot\right)\) from which the design points are to be sampled is a weighted average of those at (13), with weights proportional to the lengths \(\left|I_{i}\right|\) of the \(I_{i}\). Since \(\left|J_{i}\right|=c\left|I_{i}\right|\) we obtain \[\phi\left(x;\nu\right)=\frac{1}{2c}\sum_{i=1}^{p}\beta_{a_{i},b_{i}}\left(\frac{x-k_{i}}{\left|J_{i}\right|}\right)I\left(x\in J_{i}\right). \tag{15}\] Motivated by the designs of §3.1 we recommend stratified sampling, by which the sample consists of \(\approx n\left|I_{i}\right|/2\) points drawn from (13), subject to an appropriate rounding procedure.

### Polynomial regression

|  | variance (\(\nu=.5\)) | variance (\(\nu=.04\)) | max sqd. bias (\(\nu=.5\)) | max sqd. bias (\(\nu=.04\)) | \(I_{\nu}\) (\(\nu=.5\)) | \(I_{\nu}\) (\(\nu=.04\)) |
|---|---|---|---|---|---|---|
| \(p=2\) | 2.94 | 2.67 | 2.67 | 319 | 2.80 | 15.3 |
| \(p=3\) | 4.65 | 4.27 | 2.62 | 213 | 3.64 | 12.6 |
| \(p=4\) | 6.49 | 6.02 | 2.54 | 193 | 4.51 | 13.5 |

Table 2: Performance measures for the designs of Figure 4.

We illustrate these proposals in the context of approximate polynomial responses of degrees \(p-1=1,2,3\). As also suggested in 'Heuristic 5.1' of Waite and Woods (2022, p. 1462), \(t^{*}\) will consist of the support points of the classical I-optimal designs. These I-optimal designs \(\xi^{*}\) are obtained from Lemma 3.2 of Studden (1977), and are as follows. \(p=2\): \(\xi^{*}\left(\pm 1\right)=.5\), 
\(p=3\): \(\xi^{*}\left(\pm 1\right)=.25\), \(\xi^{*}\left(0\right)=.5\), \(p=4\): \(\xi^{*}\left(\pm 1\right)=\frac{1}{2\left(1+\sqrt{5}\right)}\approx.1545\), \(\xi^{*}\left(\pm\frac{1}{\sqrt{5}}\approx\pm.4472\right)=\frac{\sqrt{5}}{2\left(1+\sqrt{5}\right)}\approx.3455\).

Figure 4: Cluster design densities \(\phi\left(x;\nu=.5\right)\); typical stratified samples using weights (a) \(\{.5,.5\}\), (b) \(\{.25,.5,.25\}\), (c) \(\{.14,.36,.36,.14\}\).

Figure 5: Values of \(\textsc{imse}(\delta|\psi_{\Phi})\) from 1000 simulated random designs for polynomial regression and their average, estimating \(E\left[\textsc{imse}\left(\delta|\psi_{\Phi}\right)\right]\).

Figure 4 gives the sampling densities (15), together with the subsample sizes when \(n=10\). Figure 4(a) gives output for the approximate linear model, with a maximum imse, as at (5), of \(I_{\nu}\left(\Phi\right)=2.8039\). This compares very favourably with the design of Figure 1, especially given that its construction does not require the minimax design to be given. This latter point is especially germane for the design of Figure 4(b), since it is the analogue of the absolutely continuous minimax designs for approximate quadratic regression derived - with substantial theoretical and computational difficulty - by Shi, Ye and Zhou (2003) using methods of non-smooth optimization and by Daemi and Wiens (2013) using completely different methods. Figure 5 gives simulated values of imse\(\left(\delta|\psi_{\Phi}\right)\) from 1000 simulated random designs, together with their average, estimating \(E\left[\textsc{imse}\left(\delta|\psi_{\Phi}\right)\right]\). On average the random designs perform almost as well against \(\psi_{\Phi}\) as the continuous design \(\Phi\). It is interesting to note - especially for the design of Figure 4(c) - the close agreement between the I-optimal design weights above, and the weights used in the computation of \(\phi\). 
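The construction (13)-(15) can be sketched as follows for the quadratic case \(p=3\), with support \(t^{*}=\{-1,0,1\}\) and weights \(\{.25,.5,.25\}\); the largest-remainder rounding of the subsample sizes is only one of several 'appropriate rounding procedures':

```python
import numpy as np

rng = np.random.default_rng(3)
nu = 0.5
t = np.array([-1.0, 0.0, 1.0])          # I-optimal support points, p = 3
p = len(t)
s = np.concatenate(([min(-1.0, t[0])], (t[:-1] + t[1:]) / 2, [max(1.0, t[-1])]))
c = nu                                   # c(nu) = nu
k = t - c * (t - s[:-1])                 # left endpoints k_i of J_i(c)
l = t + c * (s[1:] - t)                  # right endpoints l_i of J_i(c)

# Beta parameters: mode at t_i.  Endpoint supports get a = 1 (left) or
# b = 1 (right); the remaining parameter is 1/nu; otherwise solve (14).
a = np.ones(p)
b = np.ones(p)
for j in range(p):
    delta = (t[j] - k[j]) / (l[j] - k[j])
    if delta == 0:
        a[j], b[j] = 1.0, 1.0 / nu
    elif delta == 1:
        a[j], b[j] = 1.0 / nu, 1.0
    else:
        b[j] = 1.0 / nu
        a[j] = 1.0 + (b[j] - 1.0) * delta / (1.0 - delta)

# stratified sampling: about n*|I_i|/2 points per subinterval, rounded
n = 10
w = np.diff(s) / np.sum(np.diff(s))      # weights {.25, .5, .25}
sizes = np.floor(n * w).astype(int)
short = n - sizes.sum()
sizes[np.argsort(n * w - sizes)[::-1][:short]] += 1   # largest remainders
segs = [k[j] + (l[j] - k[j]) * rng.beta(a[j], b[j], sizes[j]) for j in range(p)]
x = np.concatenate(segs)
```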
See Table 2, where the variance and maximum squared bias components of \(I_{\nu}\) are presented for the designs of Figure 4 (\(\nu=.5\)) and for the corresponding designs with \(\nu=.04\), very closely approximating the I-optimal design (\(\nu=0\)) with maximum loss \(I_{0}=\infty\). That the robustness of the cluster designs is achieved for such a modest premium in terms of increased variance is both startling and encouraging.

## 5 Multidimensional cluster designs

See Figure 6, where a robust design, derived for fitting a full second order bivariate model - intercept, linear, quadratic and interaction terms - is depicted. It is a discrete implementation of a design density, optimally robust against model misspecifications in a certain parametric class of densities - see Heo, Schmuland and Wiens (2001) for details. This design can roughly be described as an inscribed Central Composite Design (CCD) with 'clustering' in place of replication. It serves as motivation for the ideas of this section, which we illustrate in the context of \(k\)-dimensional, spherical CCD designs as are often used to fit second order models. Such designs utilize \(2^{k}+2k+1\) points \(\{t_{i}\}\) consisting of \(2^{k}\) corner points with \(\mathbf{t}_{i,j}=\pm 1\) (\(j=1,...,k\)), \(2k\) axial points \(\mathbf{t}_{i}=\left(0,...,\pm\sqrt{k},...,0\right)\) and a centre point \(\mathbf{t}_{i}=\left(0,...,0,...,0\right)\). In this and other multidimensional cases we propose choosing design points from spherical densities concentrated on neighbourhoods of the \(\mathbf{t}_{i}\). 
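Generating the CCD support points is immediate; a minimal sketch for general \(k\):

```python
import numpy as np
from itertools import product

def ccd_points(k):
    """Support points of a k-dimensional spherical CCD:
    2^k corners (+/-1, ..., +/-1), 2k axial points at +/- sqrt(k)
    on each coordinate axis, and the centre -- 2^k + 2k + 1 in all."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.concatenate([np.sqrt(k) * np.eye(k), -np.sqrt(k) * np.eye(k)])
    centre = np.zeros((1, k))
    return np.vstack([corners, axial, centre])

pts = ccd_points(3)
```

All non-centre points lie on the sphere of radius \(\sqrt{k}\), which is why the design is called spherical.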
A spherical density on a \(k\)-dimensional hypersphere \[\mathcal{S}^{\left(k\right)}\left(\mathbf{t},R\right)=\left\{\mathbf{x}:\left\|\mathbf{x}-\mathbf{t}\right\|\leq R\right\}\] with centre \(\mathbf{t}\) and radius \(R\), in which the norm \(\left\|\mathbf{x}-\mathbf{t}\right\|\) has a scaled \(\mathit{Beta}\left(1,b\right)\) density, is given by \[f^{\left(k\right)}\left(\mathbf{x};\mathbf{t},R,b\right)=\frac{\Gamma\left(\frac{k}{2}\right)}{2\pi^{k/2}R^{k}}\cdot\frac{b\left(1-\frac{\left\|\mathbf{x}-\mathbf{t}\right\|}{R}\right)^{b-1}}{\left(\frac{\left\|\mathbf{x}-\mathbf{t}\right\|}{R}\right)^{k-1}}I\left(\mathbf{x}\in\mathcal{S}^{\left(k\right)}\left(\mathbf{t},R\right)\right).\] Such a density has mode \(\mathbf{t}\) and approaches a point mass at \(\mathbf{t}\) as \(b\rightarrow\infty\), and uniformity as \(b\to 1\). A sample value \(\mathbf{x}\) from \(f^{\left(k\right)}\left(\mathbf{x};\mathbf{t},R,b\right)\) is \(\mathbf{x}=\mathbf{t}+R\mathbf{y}\), where \(\mathbf{y}\sim f^{\left(k\right)}\left(\mathbf{\cdot};\mathbf{0},1,b\right)\) is obtained by drawing a value of \(\rho=\left\|\mathbf{y}\right\|\sim\mathit{Beta}(1,b)\) and, independently, drawing angles \(\theta_{i}\), \(-\pi/2<\theta_{i}\leq\pi/2\left(i=1,...,k-2\right)\) with densities \(\psi_{i}\left(\theta\right)=\cos^{k-i-1}\theta/\beta\left(1/2,\left(k-i\right)/2\right)\) - equivalently, \(\cos^{2}\theta_{i}\sim\mathit{Beta}(\frac{1}{2},\frac{k-i}{2})\) - and \(\theta_{k-1}\sim\mathit{Unif}\left(-\pi,\pi\right)\). Then \[y_{1} = \rho\sin\theta_{1},\] \[y_{2} = \rho\cos\theta_{1}\sin\theta_{2},\] \[y_{3} = \rho\cos\theta_{1}\cos\theta_{2}\sin\theta_{3},\] \[\cdots\] \[y_{k-1} = \rho\cos\theta_{1}\cos\theta_{2}\cdots\cos\theta_{k-2}\sin\theta_{k-1},\] \[y_{k} = \rho\cos\theta_{1}\cos\theta_{2}\cdots\cos\theta_{k-2}\cos\theta_{k-1}.\]

Figure 6: Design for fitting a full second order model; \(n=48\). 
To sample \(\theta_{i}\) for \(i<k-1\) we draw \(z\sim Beta(\frac{1}{2},\frac{k-i}{2})\) and set \(\theta_{i}=\pm\arccos\sqrt{z}\), each with probability \(1/2\).

### Two dimensional cluster designs on tessellations

In Figure 7(a), the nine points \(\{\mathbf{t}_{i}\}\) which are displayed consist of four corner points \((-1,\pm 1)\), \((1,\pm 1)\), four axial points \(\left(\pm\sqrt{2},0\right)\), \(\left(0,\pm\sqrt{2}\right)\) and the centre point \((0,0)\). These are the generators of the _Voronoi tessellation_ pictured - a tiling with the property that, within the tile \(T_{i}\) containing \(\mathbf{t}_{i}\), all points are closer to \(\mathbf{t}_{i}\) than to any \(\mathbf{t}_{j}\), \(j\neq i\). Figure 7(b) gives a more detailed depiction of the tessellation, restricted to the design space \(\chi=[-2,2]\times[-2,2]\). Within each tile \(T_{i}\), of area \(\left|T_{i}\right|\), we have also plotted a subtile \(J_{i}\left(c\right)\) which is a contraction of \(T_{i}\) with fixed point \(\mathbf{t}_{i}\) and area \(\left|J_{i}\left(c\right)\right|=c\left|T_{i}\right|\). These are then the analogues of the subintervals \(J_{i}\left(c\right)\subseteq I_{i}\) from §4, and '\(c\)' has the same interpretation - the fraction of the design space to be sampled. Surrounding each \(J_{i}\left(c\right)\) is the smallest enclosing circle \(\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\left(c\right)\right)\). We sample design points from \(\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\left(c\right)\right)\), accepting only those points which lie in \(J_{i}\left(c\right)\). We specify \(b=1/\nu\) and \(c=\nu^{2}\), then \(f^{(2)}\left(\mathbf{x};\mathbf{t}_{i},R_{i}\left(c\right),b\right)\) approaches a point mass at \(\mathbf{t}_{i}\) as \(\nu\to 0\), and uniformity on \(\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\left(c\right)\right)\supseteq T_{i}\) as \(\nu\to 1\). 
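The \(\rho\)-and-angles construction of the preceding paragraphs can be sketched as follows; the centre, radius and \(b\) in the usage line are illustrative only:

```python
import numpy as np

def sample_spherical(t, R, b, size, rng):
    """Draw `size` points from f^{(k)}(x; t, R, b): the radius
    ||x - t||/R is Beta(1, b), and the direction comes from the
    hyperspherical angles theta_1, ..., theta_{k-1} described above."""
    t = np.asarray(t, dtype=float)
    k = len(t)
    out = np.empty((size, k))
    for m in range(size):
        rho = rng.beta(1.0, b)                      # ||y|| ~ Beta(1, b)
        y = np.empty(k)
        prod_cos = 1.0
        for i in range(1, k - 1):                   # theta_1 ... theta_{k-2}
            z = rng.beta(0.5, (k - i) / 2.0)        # cos^2(theta_i) ~ Beta
            theta = rng.choice([-1.0, 1.0]) * np.arccos(np.sqrt(z))
            y[i - 1] = rho * prod_cos * np.sin(theta)
            prod_cos *= np.cos(theta)
        theta_last = rng.uniform(-np.pi, np.pi)     # theta_{k-1}
        y[k - 2] = rho * prod_cos * np.sin(theta_last)
        y[k - 1] = rho * prod_cos * np.cos(theta_last)
        out[m] = t + R * y
    return out

rng = np.random.default_rng(4)
smp = sample_spherical(t=[1.0, 1.0, 1.0], R=0.5, b=2.0, size=200, rng=rng)
```

For the two-dimensional tessellation scheme, draws from this sampler are then accepted or rejected according to membership in the subtile \(J_{i}\left(c\right)\).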
With \[q_{i}(\nu)=\int_{J_{i}\left(c\right)}f^{(2)}\left(\mathbf{x};\mathbf{t}_{i},R_{i}\left(c\right),b\right)\mu\left(d\mathbf{x}\right),\] the density of those points accepted into the design upon being drawn from \(\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\left(c\right)\right)\) is \[\frac{f^{(2)}\left(\mathbf{x};\mathbf{t}_{i},R_{i}\left(c\right),b\right)}{q_{i}(\nu)}I\left(\mathbf{x}\in J_{i}\left(c\right)\right).\] We again do stratified sampling, with weights \(\omega_{i}=|T_{i}|/\sum|T_{i}|\) proportional to the area \(|T_{i}|\), whence the density of the design on \(\chi\) is \[\phi\left(\mathbf{x};\nu\right)=\sum_{i=1}^{9}\frac{\omega_{i}}{q_{i}(\nu)}f^{(2)}\left(\mathbf{x};\mathbf{t}_{i},R_{i}\left(c\right),b\right)I\left(\mathbf{x}\in J_{i}\left(c\right)\right).\] See Figure 8. Although we evaluate \(q_{i}(\nu)\) by numerical integration, an estimate can be computed after the sampling is done; it is the proportion of those points which were drawn from \(\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\left(c\right)\right)\) and then accepted into the sample. This estimate turns out to be quite accurate if an artificially large sample is simulated. Figure 9 illustrates the results of applying the methods of the preceding discussion.

Figure 7: (a) Voronoi tessellation generated by the points \(\{\mathbf{t}_{i}\}\). (b) Tessellation restricted to \(\chi=[-2,2]^{2}\) with subtiles \(\{J_{i}\left(.25\right)\}\) and enclosing circles \(\left\{\mathcal{S}^{(2)}\left(\mathbf{t}_{i},R_{i}\right)\right\}\).

Figure 8: Sampling density \(\phi\left(\mathbf{x};\nu=.5\right)\) constructed for a robust, clustered CCD in two dimensions.

Figure 9: (a) A typical random CCD of size 50; \(\nu=.5\). (b) Details of the subsample of 5 points (‘o’) drawn from \(J_{8}\left(.25\right)\). Rejected points are marked as ‘x’. 
We chose a total sample size of \(n=50\), \(\nu=.5\), and obtained subsample sizes \(n_{i}=n\omega_{i}\), rounded to \(\left\{7,7,7,7,5,5,5,5,2\right\}\) with each corner point being allocated 7, each axial point being allocated 5, and the remaining 2 in the centre. The entire sample is shown in Figure 9(a), with Figure 9(b) illustrating the details for Tile 8. The required 5 points were found after 6 points were drawn from \(\mathcal{S}_{8}\). In all, 7 points were rejected as not belonging to the appropriate subtile.

### Extensions to \(k>2\)

Although the theory of §5.1 extends readily to higher dimensions, the lack of appropriate software for constructing and manipulating Voronoi tessellations becomes a severe drawback. But the general idea of sampling from spherical distributions centred on small neighbourhoods of the \(\left\{\mathbf{t}_{i}\right\}\) can still be applied, albeit in a less structured manner. Let \(\left\{\mathbf{t}_{i}\right\}_{i=1}^{q}\) be the \(q=2^{k}+2k+1\) support points of a spherical CCD in variables \(\mathbf{x}=\left(x_{1},...,x_{k}\right)^{\prime}\), as described at the beginning of this section. The minimum distance between these points is \(\min\left(2,\sqrt{k}\right)\), and so hyperspheres \(\mathcal{S}\left(\mathbf{t}_{i},r_{0}\right)\) centred at the \(\mathbf{t}_{i}\) and with radius \(r_{0}=\min\left(1,\sqrt{k}/2\right)\) are disjoint. Define subspheres \[J_{i}\left(c\right)=\mathcal{S}^{\left(k\right)}\left(\mathbf{t}_{i},r_{0}c^{1/k}\right),0<c\leq 1.\] Then \(\int I\left(\mathbf{x}\in J_{i}\left(c\right)\right)d\mathbf{x}=|J_{i}\left(c\right)|=c\left|\mathcal{S}\left(\mathbf{t}_{i},r_{0}\right)\right|\). The density of \(\mathbf{x}\) on \(J_{i}\left(c\right)\) is \(f^{\left(k\right)}\left(\mathbf{\cdot};\mathbf{t}_{i},r_{0}c^{1/k},b\right)\). We again specify \(b=1/\nu\) and set \(c=\nu^{k}\). 
Then for user chosen weights \(\left\{\omega_{i}\right\}\) the sampling density is \[\phi\left(\mathbf{x};\nu\right)=\sum_{i=1}^{q}\omega_{i}f^{\left(k\right)}\left(\mathbf{x};\mathbf{t}_{i},r_{0}\nu,1/\nu\right)I\left(\mathbf{x}\in\mathcal{S}^{\left(k\right)}\left(\mathbf{t}_{i},r_{0}\nu\right)\right).\] See Figure 10 for an example with \(k=3\). We sampled a design of size \(n=80\) with subsample sizes \(n_{i}=5\) (\(i<15\)) and \(n_{15}=10\).

## Appendix

### Derivations for §2.1

For an \(n\)-point design \(D=\{\mathbf{x}_{i}\}_{i=1}^{n}\) with design measure \(\delta=n^{-1}\sum\delta_{\mathbf{x}_{i}}\) define \[\mathbf{F}=\left(\mathbf{f}\left(\mathbf{x}_{1}\right),\cdots,\mathbf{f}\left(\mathbf{x}_{n}\right)\right)^{\prime},\psi_{\Phi}=\left(\psi_{\Phi}\left(\mathbf{x}_{1}\right),...,\psi_{\Phi}\left(\mathbf{x}_{n}\right)\right)^{\prime},\ \mathbf{D}_{\phi}=diag\left(\phi\left(\mathbf{x}_{1}\right),...,\phi\left(\mathbf{x}_{n}\right)\right).\] Then \(\mathbf{M}_{\phi}=\frac{1}{n}\mathbf{F}^{\prime}\mathbf{D}_{\phi}\mathbf{F}\). 
Define as well \[\mathbf{M}_{\delta} = \int_{\chi}\mathbf{f}\left(\mathbf{x}\right)\mathbf{f}^{\prime}\left(\mathbf{x} \right)\delta\left(d\mathbf{x}\right)=\frac{1}{n}\sum_{\mathbf{x}_{i}\in D}\mathbf{f} \left(\mathbf{x}_{i}\right)\mathbf{f}^{\prime}\left(\mathbf{x}_{i}\right)=\frac{1}{n}\bm {F}^{\prime}\mathbf{F},\] \[\mathbf{b}_{\psi_{\Phi},\delta} = \int_{\chi}\mathbf{f}\left(\mathbf{x}\right)\psi_{\Phi}\left(\mathbf{x} \right)\delta\left(d\mathbf{x}\right)=\frac{1}{n}\sum_{\mathbf{x}_{i}\in D}\mathbf{f} \left(\mathbf{x}_{i}\right)\psi_{\Phi}\left(\mathbf{x}_{i}\right)=\frac{1}{n}\mathbf{F}^{ \prime}\psi_{\Phi}.\] Using (7), \[\psi_{\Phi}\left(\mathbf{x}_{i}\right)=\mathbf{r}_{\Phi}^{\prime}\left(\mathbf{x}_{i} \right)\beta_{\Phi}=\frac{\tau}{\sqrt{n}}\left(\phi\left(\mathbf{x}_{i}\right) \mathbf{f}^{\prime}\left(\mathbf{x}_{i}\right)-\mathbf{f}^{\prime}\left(\mathbf{x}_{i}\right) \mathbf{A}^{-1}\mathbf{M}_{\Phi}\right)\mathbf{G}_{\Phi}^{-1/2}\beta_{\Phi},\] so that \[\psi_{\Phi} = \frac{\tau}{\sqrt{n}}\left(D_{\phi}F{-}FA^{-1}M_{\Phi}\right)G_{\Phi }^{-1/2}\beta_{\Phi},\] (A.1) \[\boldsymbol{b}_{\psi_{\Phi},\delta} = \frac{\tau}{\sqrt{n}}\left(M_{\phi}-M_{\delta}A^{-1}M_{\Phi} \right)G_{\Phi}^{-1/2}\beta_{\Phi}.\] (A.2) From (4), \[\mbox{\scriptsize{\sc iMSE}}\left(\delta|\psi_{\Phi}\right)=\frac{\sigma_{ \varepsilon}^{2}}{n}tr\left(A\,M_{\delta}^{-1}\right)+\boldsymbol{b}_{\psi_{ \Phi},\delta}^{\prime}\boldsymbol{M}_{\delta}^{-1}\,\boldsymbol{AM}_{\delta}^ {-1}\boldsymbol{b}_{\psi_{\Phi},\delta}+\int_{\chi}\psi_{\Phi}^{2}\left(x \right)\mu\left(dx\right);\] substituting (A.1) and (A.2) gives \[\mbox{\scriptsize{\sc iMSE}}\left(\delta|\psi_{\Phi}\right) = \frac{\sigma_{\varepsilon}^{2}}{n}tr\left(A\,M_{\delta}^{-1} \right)+\frac{\tau^{2}}{n}\gamma_{\delta},\mbox{ for}\] \[\gamma_{\delta} = \beta_{\Phi}^{\prime}\boldsymbol{G}_{\Phi}^{-1/2}\left( \boldsymbol{M}_{\phi}M_{\delta}^{-1}-\boldsymbol{M}_{\Phi}A^{-1}\right) 
\boldsymbol{A}\left(\boldsymbol{M}_{\delta}^{-1}M_{\phi}-A^{-1}\,\boldsymbol{M}_{\Phi}\right)\boldsymbol{G}_{\Phi}^{-1/2}\beta_{\Phi}+1.\] Now (8) and (9) are immediate.

### Derivation of (12)

We need only evaluate \(E_{\Phi}\left[\gamma_{\delta}\right]\). Since \(\phi\left(x_{i}\right)\equiv\left(2c\right)^{-1}\) on its support, we have that \(\boldsymbol{M}_{\phi}\boldsymbol{M}_{\delta}^{-1}=\left(2c\right)^{-1}\boldsymbol{I}_{2}\), so that (since \(\beta_{\Phi}^{\prime}\beta_{\Phi}=1\)) \[\gamma_{\delta} = \beta_{\Phi}^{\prime}\boldsymbol{G}_{\Phi}^{-1/2}\left(\frac{1}{2c}\boldsymbol{I}_{2}-\boldsymbol{M}_{\Phi}\boldsymbol{A}^{-1}\right)A\left(\frac{1}{2c}\boldsymbol{I}_{2}-\boldsymbol{A}^{-1}\boldsymbol{M}_{\Phi}\right)\boldsymbol{G}_{\Phi}^{-1/2}\beta_{\Phi}+1\] (A.3) \[= \beta_{\Phi}^{\prime}\boldsymbol{G}_{\Phi}^{-1/2}\left(\frac{1}{4c^{2}}\boldsymbol{A}-\frac{1}{c}\boldsymbol{M}_{\Phi}+\boldsymbol{M}_{\Phi}\boldsymbol{A}^{-1}\boldsymbol{M}_{\Phi}\right)\boldsymbol{G}_{\Phi}^{-1/2}\beta_{\Phi}+1\] \[= \beta_{\Phi}^{\prime}\boldsymbol{G}_{\Phi}^{-1/2}\left(\frac{1}{4c^{2}}\boldsymbol{A}-\frac{1}{c}\boldsymbol{M}_{\Phi}+\boldsymbol{H}_{\Phi}+\boldsymbol{G}_{\Phi}\right)\boldsymbol{G}_{\Phi}^{-1/2}\beta_{\Phi}.\] A calculation gives \[\boldsymbol{G}_{\Phi}^{1/2}\boldsymbol{H}_{\Phi}^{-1}\boldsymbol{G}_{\Phi}^{1/2}+\boldsymbol{I}_{2}=\frac{1}{c}diag\left(1,\frac{1}{3\lambda_{2}\left(c\right)}\right),\] so that the maximum eigenvalue is \(ch_{\max}=\frac{1}{c}\max(1,\frac{1}{3\lambda_{2}\left(c\right)})\) and \(\beta_{\Phi}\) is the corresponding unit eigenvector. We claim that \[\gamma_{\delta}=ch_{\max},\] (A.4) from which (12) follows, since then \(\gamma_{\delta}\) does not depend on the design and so is non-random. 
To establish (A.4), use \(A=\mathbf{M}_{\Phi}\mathbf{H}_{\Phi}^{-1}\mathbf{M}_{\Phi}=4c^{2}\mathbf{K}_{\Phi}\mathbf{H}_{\Phi}^{-1}\mathbf{K}_{\Phi}\) and \(\mathbf{M}_{\Phi}=2c\mathbf{K}_{\Phi}\) in (A.3) to obtain \[\gamma_{\delta}=\beta_{\Phi}^{\prime}\left[\mathbf{G}_{\Phi}^{-1/2}\left(\mathbf{K}_{\Phi}\mathbf{H}_{\Phi}^{-1}\mathbf{K}_{\Phi}-\mathbf{K}_{\Phi}\right)\mathbf{G}_{\Phi}^{-1/2}\right]\beta_{\Phi}.\] Substituting \(\mathbf{K}_{\Phi}=\mathbf{G}_{\Phi}+\mathbf{H}_{\Phi}\), this becomes \[\gamma_{\delta}=\beta_{\Phi}^{\prime}\left[\mathbf{G}_{\Phi}^{1/2}\mathbf{H}_{\Phi}^{-1}\mathbf{G}_{\Phi}^{1/2}+\mathbf{I}_{2}\right]\beta_{\Phi}=ch_{\max},\] as required.

## Acknowledgements

This work was carried out with the support of the Natural Sciences and Engineering Research Council of Canada. It has benefited from conversations with Timothy Waite, University of Manchester and Xiaojian Xu, Brock University.
2309.17418
Calabi-Yau structures on the complexifications of rank two symmetric spaces
In this paper, we prove that there exists a $C^{\infty}$-Calabi-Yau structure on the whole of the complexification $G^{\mathbb C}/K^{\mathbb C}$ of a rank two symmetric space $G/K$ of compact type. The proof is performed by deriving a relation (which differs from the known relation somewhat) between the complex Hessian of a real-valued $C^{\infty}$-function $\psi$ on $G^{\mathbb C}/K^{\mathbb C}$ and the Hessian of the function $\rho:=\psi|_{{\rm Exp}_o(\mathfrak a)}\circ{\rm Exp}_o|_{\mathfrak a}$ on a maximal abelian subspace $\mathfrak a$ of the normal space $T_o^{\perp}(G\cdot o)(=T_o(G^d\cdot o)\subset\mathfrak g^d)$ of the orbit $G\cdot o(=G/K)$ in $G^{\mathbb C}/K^{\mathbb C}$ at the base point $o$, where ${\rm Exp}_o$ is the exponential map of $G^{\mathbb C}/K^{\mathbb C}$ at $o$ and $\mathfrak g^d$ is the Lie algebra of the dual $G^d$ of $G$. This relation is derived in a new calculation method by using the explicit descriptions of the shape operators of the orbits of the isotropy action $K\curvearrowright G^d/K$ and the Hermann type action $G\curvearrowright G^{\mathbb C}/K^{\mathbb C}$.
Naoyuki Koike
2023-09-29T17:27:16Z
http://arxiv.org/abs/2309.17418v4
# Calabi-Yau structures on the complexifications of rank two symmetric spaces ###### Abstract In this paper, we prove that there exists a \(C^{\infty}\)-Calabi-Yau structure on the whole of the complexification \(G^{\mathbb{C}}/K^{\mathbb{C}}\) of a rank two symmetric space \(G/K\) of compact type. The proof is performed by deriving a relation (which differs from the known relation somewhat) between the complex Hessian of a real-valued \(C^{\infty}\)-function \(\psi\) on \(G^{\mathbb{C}}/K^{\mathbb{C}}\) and the Hessian of the function \(\rho:=\psi|_{\operatorname{Exp}_{o}(\mathfrak{a})}\circ\operatorname{Exp}_{o} |_{\mathfrak{a}}\) on a maximal abelian subspace \(\mathfrak{a}\) of the normal space \(T^{\perp}_{o}(G\cdot o)(=T_{o}(G^{d}\cdot o)\subset\mathfrak{g}^{d})\) of the orbit \(G\cdot o(=G/K)\) in \(G^{\mathbb{C}}/K^{\mathbb{C}}\) at the base point \(o\), where \(\operatorname{Exp}_{o}\) is the exponential map of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) at \(o\) and \(\mathfrak{g}^{d}\) is the Lie algebra of the dual \(G^{d}\) of \(G\). This relation is derived in a new calculation method by using the explicit descriptions of the shape operators of the orbits of the isotropy action \(K\curvearrowright G^{d}/K\) and the Hermann type action \(G\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\). 
## 1 Introduction In this paper, a \(C^{\infty}\)_-Calabi-Yau structure_ of a \(2n\)-dimensional manifold \(M\) means a quadruple \((J,g,\omega,\Omega)\) satisfying the following three conditions: * \((J,g)\) is a \(C^{\infty}\)-Kahler structure of \(M\); * \(\omega\) is the \(C^{\infty}\)-Kahler form of \((J,g)\); * \(\Omega\) is a nonvanishing holomorphic \((n,0)\)-form on \(M\) satisfying \[\omega^{n}=(-1)^{n(n-1)/2}n!\left(\frac{\sqrt{-1}}{2}\right)^{n}\Omega\wedge \overline{\Omega}\quad(\text{on }M). \tag{1.1}\] It follows from (1.1) that \((M,J,g)\) is a \(C^{\infty}\)-Ricci-flat Kahler manifold and that, for any real constant \(\theta\), an \(n\)-form \(\operatorname{Re}(e^{\sqrt{-1}\theta}\Omega)\) is a calibration on \((M,g)\). Hence, special Lagrangian submanifolds of phase \(\theta\) in \((M,J,g,\omega,\Omega)\) are defined as submanifolds calibrated by \(\operatorname{Re}(e^{\sqrt{-1}\theta}\Omega)\). The complexification of a \(C^{\omega}\)-Riemannian manifold is defined as follows, where \(C^{\omega}\) means real analyticity. Let \((M,g)\) be a \(C^{\omega}\)-Riemannian manifold, \(TM\) be the tangent bundle of \(M\) and \(J_{A}\) be the adapted complex structure (associated to \(g\)) defined on a tubular neighborhood \(U_{A}\) of the zero section of \(TM\), where we note that \(J_{A}\) is defined on the whole of \(TM\) (i.e., \(U_{A}=TM\)) when \((M,g)\) is of non-negative curvature. See [23] and [11, 12] for the definition of the adapted complex structure. The complex manifold \((U_{A},J_{A})\) is regarded as the complexification of \((M,g)\) under the identification of the zero-section of \(TM\) with \(M\). Let \(G/K\) be a symmetric space of compact type. Since \(G/K\) is a \(C^{\omega}\)-Riemannian manifold of non-negative curvature, the adapted complex structure \(J_{A}\) of \(G/K\) is defined on the whole of the tangent bundle \(T(G/K)\). As above, \((T(G/K),J_{A})\) is regarded as the complexification of \(G/K\). 
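As a sanity check of the normalization in (1.1) (a standard flat example, not taken from the paper), take \(M=\mathbb{C}^{n}\), \(\omega=\frac{\sqrt{-1}}{2}\sum_{j}dz_{j}\wedge d\overline{z}_{j}\) and \(\Omega=dz_{1}\wedge\cdots\wedge dz_{n}\); then

```latex
% Flat model check of (1.1) on M = C^n:
\omega^{n}
  = n!\left(\frac{\sqrt{-1}}{2}\right)^{n}
    dz_{1}\wedge d\overline{z}_{1}\wedge\cdots\wedge dz_{n}\wedge d\overline{z}_{n},
\qquad
\Omega\wedge\overline{\Omega}
  = (-1)^{n(n-1)/2}\,
    dz_{1}\wedge d\overline{z}_{1}\wedge\cdots\wedge dz_{n}\wedge d\overline{z}_{n}.
```

Since \(((-1)^{n(n-1)/2})^{2}=(-1)^{n(n-1)}=1\), substituting the second identity into the right-hand side of (1.1) reproduces \(\omega^{n}\); moreover \(\frac{\sqrt{-1}}{2}dz_{j}\wedge d\overline{z}_{j}=dx_{j}\wedge dy_{j}\), so \(\omega^{n}\) is \(n!\) times the Euclidean volume form.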
We can also define the complexification of \(G/K\) as follows. Let \(G^{\mathbb{C}}\) and \(K^{\mathbb{C}}\) be the complexifications of \(G\) and \(K\), respectively. Denote by \(\mathfrak{g},\mathfrak{k},\mathfrak{g}^{\mathbb{C}}\) and \(\mathfrak{k}^{\mathbb{C}}\) the Lie algebras of \(G,K,G^{\mathbb{C}}\) and \(K^{\mathbb{C}}\), respectively. Let \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\) and \(\mathfrak{g}^{\mathbb{C}}=\mathfrak{k}^{\mathbb{C}}\oplus\mathfrak{p}^{ \mathbb{C}}\) be the canonical decompositions associated to the semi-simple symmetric pairs \((G,K)\) and \((G^{\mathbb{C}},K^{\mathbb{C}})\), respectively. Here we note that \(\mathfrak{p}\) and \(\mathfrak{p}^{\mathbb{C}}\) are identified with the tangent spaces \(T_{eK}(G/K)\) and \(T_{eK^{\mathbb{C}}}(G^{\mathbb{C}}/K^{\mathbb{C}})\), respectively, where \(e\) is the identity element of \(G^{\mathbb{C}}\). For simplicity, set \(o:=eK^{\mathbb{C}}(=eK)\). Define a complex linear transformation \(\mathbf{j}\) of \(\mathfrak{p}^{\mathbb{C}}\) by \(\mathbf{j}(v):=\sqrt{-1}v\ \ (v\in\mathfrak{p}^{\mathbb{C}})\). Since \(\mathbf{j}\) is \(\mathrm{Ad}_{G^{\mathbb{C}}}(K^{\mathbb{C}})\)-invariant, we can uniquely define a \(G^{\mathbb{C}}\)-invariant complex structure \(\mathbf{J}\) of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) satisfying \(\mathbf{J}_{o}=\mathbf{j}\), where \(\mathrm{Ad}_{G^{\mathbb{C}}}\) denotes the adjoint representation of \(G^{\mathbb{C}}\). The complex manifold \((G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\) is regarded as another complexification of \(G/K\) under the identification of the orbit \(G\cdot o\) of the subaction \(G\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\) of the natural action \(G^{\mathbb{C}}\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\) with \(G/K\). We can define a natural holomorphic diffeomorphism between two complexifications \((T(G/K),J_{A})\) and \((G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\) of \(G/K\) as follows. 
Let \(B\) be the Killing form of \(\mathfrak{g}\) and set \(B_{A}:=-2\mathrm{Re}\,B^{\mathbb{C}}\). Since \(B_{A}\) is \(\mathrm{Ad}_{G^{\mathbb{C}}}(K^{\mathbb{C}})\)-invariant, we can uniquely define a \(G^{\mathbb{C}}\)-invariant pseudo-Riemannian metric \(\beta_{A}\) of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) satisfying \((\beta_{A})_{o}=B_{A}\). The pseudo-Riemannian manifold \((G^{\mathbb{C}}/K^{\mathbb{C}},\beta_{A})\) is a semi-simple pseudo-Riemannian symmetric space. Also, the triple \((G^{\mathbb{C}}/K^{\mathbb{C}},J_{A},\beta_{A})\) gives an anti-Kahler manifold. See [6] ([16] also) for the definition of the anti-Kahler manifold. Denote by \(\mathrm{Exp}_{p}\) the exponential map of \((G^{\mathbb{C}}/K^{\mathbb{C}},\beta_{A})\) at \(p\in G^{\mathbb{C}}/K^{\mathbb{C}}\). The natural bijection \(\Psi:(T(G/K),J_{A})\underset{\cong}{\longrightarrow}(G^{\mathbb{C}}/K^{ \mathbb{C}},\mathbf{J})\) is given by \[\Psi(v):=\mathrm{Exp}_{p}(\mathbf{J}_{p}(v))\ \ \ \ (p\in G/K,\ v\in T_{p}(G/K))\] (see Figure 1), where \(v\) in the right-hand side is regarded as a tangent vector of the submanifold \(G\cdot o(\approx G/K)\) in \(G^{\mathbb{C}}/K^{\mathbb{C}}\). It is shown that \(\Psi\) is a holomorphic diffeomorphism between \((T(G/K),J_{A})\) and \((G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\) (see Section 2 about the proof of this fact). Thus these two complexifications of \(G/K\) are identified through \(\Psi\). In 1993, M. B. Stenzel ([22]) showed that there exists a \(G\)-invariant complete Ricci-flat metric on the whole of the cotangent bundle of a rank one symmetric space \(G/K\) of compact type. In 2004, R. Bielawski ([4]) investigated the existence of \(G\)-invariant metrics with the prescribed Ricci tensor on the complexification \(G^{\mathbb{C}}/K^{\mathbb{C}}\) of a general rank symmetric space \(G/K\) of compact type (see [2] also), where \(G,\,K\) and \(G^{\mathbb{C}}\) are assumed to be connected. 
Note that, in [4], it is stated that the \(G\)-invariant metrics with the prescribed Ricci tensor are defined on the whole of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) but it seems that there is a gap in the process of the proof, where we note that there is a gap in the process of the proof of the main theorem in [2] also. In 2019, the author ([19]) investigated the existence of Calabi-Yau structures on the complexification \(G^{\mathbb{C}}/K^{\mathbb{C}}\) of a general rank symmetric space \(G/K\) of compact type but there are some gaps in the proof of some lemmas in the paper. In this paper, we close the gaps and prove the following fact in the case where \(G/K\) is of rank two. **Theorem A.** If \(G/K\) is a rank two symmetric space of compact type, then there exists a \(G\)-invariant \(C^{\infty}\)-Calabi-Yau structure on the whole of the complexification \(G^{\mathbb{C}}/K^{\mathbb{C}}\) of \(G/K\). **Figure 1** (\(\Psi(v)=\mathrm{Exp}_{p}(\mathbf{J}_{p}(v))\)) **Proposition.** Let \(\Psi:(T(G/K),J_{A})\to G^{\mathbb{C}}/K^{\mathbb{C}}\) be the map stated in Introduction. Then the map \(\Psi\) is a holomorphic diffeomorphism of \((T(G/K),J_{A})\) onto \((G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\). Proof.: It is clear that \(\Psi\) is a diffeomorphism. We shall show that \(\Psi\) is holomorphic. Let \(\gamma:\mathbb{R}\to G/K\) be a geodesic in \(G/K\) and set \(\gamma^{\mathbb{C}}:=\Psi\circ\gamma_{*}\). Then we have \[\gamma^{\mathbb{C}}(s+t\sqrt{-1})=\mathrm{Exp}_{\gamma(s)}(t\mathbf{J}_{ \gamma(s)}(\gamma^{\prime}(s)))\] (see Figure 2), which is the complexification of the geodesic \(\gamma\) in \(G/K(\approx G\cdot o\subset G^{\mathbb{C}}/K^{\mathbb{C}})\) in the sense of [17, 18]. By using Proposition 3.1 of [18], we can show that \(\gamma^{\mathbb{C}}:\mathbb{C}\to(G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\) is holomorphic. This fact together with the arbitrariness of \(\gamma\) implies that \(\Psi\) is holomorphic. _Remark 2.1_.: In [19], this fact was stated but the proof was not given. 
Let \(G/K\) be a symmetric space of compact type and \((G^{\mathbb{C}}/K^{\mathbb{C}},\,\mathbf{J})\) be the complexification of \(G/K\). Let \(\psi:G^{\mathbb{C}}/K^{\mathbb{C}}\to\mathbb{R}\) be a strictly plurisubharmonic function over \(G^{\mathbb{C}}/K^{\mathbb{C}}\), where we note that "strictly plurisubharmonicity" means that the Hermitian matrix \(\left(\frac{\partial^{2}\psi}{\partial z_{i}\partial\bar{z}_{j}}\right)\) is positive definite (or equivalently, \((\partial\overline{\partial}\psi)(\boldsymbol{Z},\overline{\boldsymbol{Z}})>0\) holds for any nonzero \((1,0)\)-vector \(\boldsymbol{Z}\)). Then \(\omega_{\psi}:=\sqrt{-1}\partial\overline{\partial}\psi\) is a real non-degenerate closed 2-form on \(G^{\mathbb{C}}/K^{\mathbb{C}}\) and the symmetric \((0,2)\)-tensor field \(\beta_{\psi}\) associated to \(\mathbf{J}\) and \(\omega_{\psi}\) is positive definite. Hence \((\mathbf{J},\beta_{\psi})\) gives a Kahler structure of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). Thus, from each strictly plurisubharmonic function over \(G^{\mathbb{C}}/K^{\mathbb{C}}\), we can construct a Kahler structure of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). Denote by \(\mathrm{Exp}_{p}\) the exponential map of the anti-Kahler manifold \((G^{\mathbb{C}}/K^{\mathbb{C}},\beta_{A})\) at \(p(\in G^{\mathbb{C}}/K^{\mathbb{C}})\) and \(\exp\) the exponential map of the Lie group \(G^{\mathbb{C}}\). Set \(\mathfrak{g}^{d}:=\mathfrak{k}\oplus\sqrt{-1}\mathfrak{p}(\subsetneq\mathfrak{ g}^{\mathbb{C}})\) and \(G^{d}=\exp(\mathfrak{g}^{d})\). Denote by \(\beta_{G/K}\) the \(G\)-invariant (Riemannian) metric on \(G/K\) induced from \(-B|_{\mathfrak{p}\times\mathfrak{p}}\) and \(\beta_{G^{d}/K}\) the \(G^{d}\)-invariant negative definite metric on \(G^{d}/K\) induced from \(-(\operatorname{Re}B^{\mathbb{C}})|_{\sqrt{-1}\mathfrak{p}\times\sqrt{-1} \mathfrak{p}}\). We may assume that the metric of \(G/K\) is equal to \(\beta_{G/K}\) by homothetically transforming the metric of \(G/K\) if necessary. 
On the other hand, the Riemannian manifold \((G^{d}/K,-\beta_{G^{d}/K})\) is a (Riemannian) symmetric space of non-compact type. The orbit \(G\cdot o\) is isometric to \((G/K,\beta_{G/K})\) and the normal umbrella \(\operatorname{Exp}_{o}(T_{o}^{\perp}(G\cdot o))(=G^{d}\cdot o)\) is isometric to \((G^{d}/K,\beta_{G^{d}/K})\). The complexification \(\mathfrak{p}^{\mathbb{C}}\) of \(\mathfrak{p}\) is identified with \(T_{o}(G^{\mathbb{C}}/K^{\mathbb{C}})\) and \(\sqrt{-1}\mathfrak{p}\) is identified with \(T_{o}(\operatorname{Exp}_{o}(T_{o}^{\perp}(G\cdot o)))\). Let \(\mathfrak{a}\) be a maximal abelian subspace of \(\mathfrak{p}\), where we note that \(\dim\mathfrak{a}=r\). Denote by \(W\) the Weyl group of \(G^{d}/K\) with respect to \(\sqrt{-1}\mathfrak{a}\). This group acts on \(\sqrt{-1}\mathfrak{a}\). Let \(C(\subset\sqrt{-1}\mathfrak{a})\) be a Weyl domain (i.e., a fundamental domain of the action \(W\curvearrowright\sqrt{-1}\mathfrak{a}\)). Then we have \(G\cdot\operatorname{Exp}_{o}(\overline{C})=G^{\mathbb{C}}/K^{\mathbb{C}}\), where \(\overline{C}\) is the closure of \(C\). For a \(W\)-invariant connected open neighborhood \(D\) of \(0\) in \(\sqrt{-1}\mathfrak{a}\), we define a neighborhood \(U_{1}(D)\) of \(o\) in \(\Sigma:=\operatorname{Exp}_{o}(\sqrt{-1}\mathfrak{a})\) by \(U_{1}(D):=\operatorname{Exp}_{o}(D)\), a neighborhood \(U_{2}(D)\) of \(o\) in \(G^{d}/K\) by \(U_{2}(D):=K\cdot U_{1}(D)\) and a tubular neighborhood \(U_{3}(D)\) of \(G\cdot o\) in \(G^{\mathbb{C}}/K^{\mathbb{C}}\) by \(U_{3}(D):=G\cdot U_{1}(D)\). Denote by \(\operatorname{Conv}_{W}^{+}(D)\) the space of all \(W\)-invariant strictly convex (\(C^{\infty}\)-)functions over \(D\), \(\operatorname{Conv}_{K}^{+}(U_{2}(D))\) the space of all \(K\)-invariant strictly convex (\(C^{\infty}\)-)functions over \(U_{2}(D)\) and \(PH_{G}^{+}(U_{3}(D))\) the space of all \(G\)-invariant strictly plurisubharmonic (\(C^{\infty}\)-)functions over \(U_{3}(D)\). 
The restriction map (which is denoted by \(\mathcal{R}_{32}^{D}\)) from \(U_{3}(D)\) to \(U_{2}(D)\) gives an isomorphism of \(PH_{G}^{+}(U_{3}(D))\) onto \(\operatorname{Conv}_{K}^{+}(U_{2}(D))\) and the composition of the restriction map (which is denoted by \(\mathcal{R}_{31}^{D}\)) from \(U_{3}(D)\) to \(U_{1}(D)\) with \(\operatorname{Exp}_{o}\) gives an isomorphism of \(PH_{G}^{+}(U_{3}(D))\) onto \(\operatorname{Conv}_{W}^{+}(D)\) (see [1]). Hence it suffices to construct \(W\)-invariant strictly convex functions over \(D\) or \(K\)-invariant strictly convex functions over \(U_{2}(D)\) to construct strictly plurisubharmonic functions over \(U_{3}(D)\). Let \(\psi\) be a \(G\)-invariant strictly plurisubharmonic (\(C^{\infty}\)-)function over \(U_{3}(D)\). Set \(\widehat{\psi}:=\mathcal{R}_{32}^{D}(\psi)\) and \(\bar{\psi}:=\mathcal{R}_{31}^{D}(\psi)\circ\operatorname{Exp}_{o}\). Conversely, for a \(W\)-invariant strictly convex (\(C^{\infty}\)-)function \(\rho\) over \(D\), denote by \(\rho^{h}\) the \(G\)-invariant strictly plurisubharmonic (\(C^{\infty}\)-)function \(\psi\) over \(U_{3}(D)\) with \(\bar{\psi}=\rho\). Similarly, for a \(K\)-invariant strictly convex (\(C^{\infty}\)-)function \(\sigma\) over \(U_{2}(D)\), set \(\overline{\sigma}:=\mathcal{R}_{21}^{D}(\sigma)\circ\operatorname{Exp}_{o}\) and denote by \(\sigma^{h}\) the \(G\)-invariant strictly plurisubharmonic (\(C^{\infty}\)-)function \(\psi\) over \(U_{3}(D)\) with \(\widehat{\psi}=\sigma\). Also, for a \(W\)-invariant strictly convex (\(C^{\infty}\)-)function \(\rho\) over \(D\), denote by \(\rho^{d}\) the \(K\)-invariant strictly convex \(C^{\infty}\)-function \(\sigma\) over \(U_{2}(D)\) with \(\overline{\sigma}=\rho\). Denote by \(Ric_{\psi}\) the Ricci form of \(\beta_{\psi}\). 
It is known that \(Ric_{\psi}\) is described as \[Ric_{\psi}=-\sqrt{-1}\partial\overline{\partial}\log\,\det\left(\frac{\partial^{2 }\psi}{\partial z_{i}\partial\bar{z}_{j}}\right) \tag{2.1}\] (see p. 158 of [15]), where \((z_{1},\cdots,z_{n})\) are any complex coordinates of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). Note that the right-hand side of (2.1) is independent of the choice of the complex coordinates \((z_{1},\cdots,z_{n})\) of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). ## 3 Proof of Theorem A In this section, we shall prove Theorem A stated in Introduction. First we define a natural \(G^{\mathbb{C}}\)-invariant non-vanishing holomorphic \((n,0)\)-form on \(G^{\mathbb{C}}/K^{\mathbb{C}}\). Take an orthonormal basis \((\boldsymbol{e}_{1},\cdots,\boldsymbol{e}_{n})\) of \(\mathfrak{p}\) with respect to \(-B\) and let \((\theta^{1},\cdots,\theta^{n})\) be the dual basis of \((\boldsymbol{e}_{1},\cdots,\boldsymbol{e}_{n})\). Also, let \((\theta^{i})^{\mathbb{C}}\)\((i=1,\cdots,n)\) be the complexification of \(\theta^{i}\). Since \((\theta^{1})^{\mathbb{C}}\wedge\cdots\wedge(\theta^{n})^{\mathbb{C}}\) is \(\mathrm{Ad}_{G^{\mathbb{C}}}(K^{\mathbb{C}})|_{\mathfrak{p}^{\mathbb{C}}}\)-invariant, we obtain the \(G^{\mathbb{C}}\)-invariant non-vanishing holomorphic \((n,0)\)-form \(\Omega\) on \(G^{\mathbb{C}}/K^{\mathbb{C}}\) with \(\Omega_{eK^{\mathbb{C}}}=(\theta^{1})^{\mathbb{C}}\wedge\cdots\wedge(\theta^ {n})^{\mathbb{C}}\). Let \(\mathbf{J}\) be the \(G^{\mathbb{C}}\)-invariant complex structure with \(\mathbf{J}_{eK}=\mathbf{j}(=\sqrt{-1}\,\mathrm{id}_{\mathfrak{p}})\). 
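Formula (2.1) can be checked by hand in one complex dimension. The following sketch (not from the paper) verifies with SymPy, via Wirtinger derivatives, that the Fubini-Study potential \(\psi=\log(1+|z|^{2})\) on a chart of \(\mathbb{C}P^{1}\) satisfies the Kahler-Einstein equation \(Ric_{\psi}=2\beta_{\psi}\):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def dz(f):
    # Wirtinger derivative d/dz = (d/dx - i d/dy)/2
    return sp.simplify((sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2)

def dzbar(f):
    # Wirtinger derivative d/dzbar = (d/dx + i d/dy)/2
    return sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)

# Kaehler potential of the Fubini-Study metric on one chart of CP^1
psi = sp.log(1 + x**2 + y**2)

g = dzbar(dz(psi))            # metric coefficient g_{z zbar}
ric = -dzbar(dz(sp.log(g)))   # Ricci coefficient from (2.1)

# Fubini-Study is Kaehler-Einstein: Ric = 2 g
assert sp.simplify(ric - 2 * g) == 0
```

Here \(g=(1+|z|^{2})^{-2}\), so the check confirms the sign and normalization conventions in (2.1).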
Take any point \(p_{0}:=\mathrm{Exp}_{o}(\boldsymbol{Z}_{0})\)\((\boldsymbol{Z}_{0}\in\sqrt{-1}\mathfrak{p})\) and an orthonormal basis \((\boldsymbol{e}_{1},\cdots,\boldsymbol{e}_{n})\) of \(\sqrt{-1}\mathfrak{p}\) with respect to \(\beta_{G^{d}/K}\) satisfying \(\boldsymbol{e}_{i}\in\sqrt{-1}\mathfrak{a}\)\((i=1,\cdots,r)\), where \(\mathrm{Exp}_{o}\) is the exponential map of \((G^{\mathbb{C}}/K^{\mathbb{C}},\beta_{A})\) at \(o\). Let \(\widetilde{U}_{o}\) be a sufficiently small neighborhood of the origin \(0\) in \(\mathfrak{p}^{\mathbb{C}}\). Define local complex coordinates \(\varphi_{o}=(z_{1}^{o},\cdots,z_{n}^{o})\) on \(U_{o}:=\mathrm{Exp}_{o}(\widetilde{U}_{o})\) by \[(\mathrm{Exp}_{o}|_{\widetilde{U}_{o}})^{-1}(p)=\sum_{i=1}^{n}z_{i}^{o}(p) \boldsymbol{e}_{i}\ \ \ \ (p\in U_{o}).\] By Proposition 3.1 of [18], \(\varphi_{o}\) gives local complex coordinates of the complex manifold \((G^{\mathbb{C}}/K^{\mathbb{C}},\mathbf{J})\). Set \(U_{p_{0}}:=\exp(\boldsymbol{Z}_{0})(U_{o})\), where \(\exp\) is the exponential map of the Lie group \(G^{\mathbb{C}}\). Define local complex coordinates \(\varphi^{p_{0}}=(z_{1}^{p_{0}},\cdots,z_{n}^{p_{0}})\) on \(U_{p_{0}}\) by \(z_{i}^{p_{0}}=z_{i}^{o}\circ\exp(\boldsymbol{Z}_{0})^{-1}\)\((i=1,\cdots,n)\). Let \(z_{i}^{o}=x_{i}^{o}+\sqrt{-1}y_{i}^{o}\) and \(z_{i}^{p_{0}}=x_{i}^{p_{0}}+\sqrt{-1}y_{i}^{p_{0}}\)\((i=1,\cdots,n)\). We call such local complex coordinates \(\varphi^{p_{0}}=(z_{1}^{p_{0}},\cdots,z_{n}^{p_{0}})\)_normal complex coordinates about a point \(p_{0}\) of the real form \(G^{d}\cdot o(=G^{d}/K)\) associated to \((\boldsymbol{e}_{1},\cdots,\boldsymbol{e}_{n})\)_. Set \(\boldsymbol{e}_{i}^{p_{0}}:=\exp_{G^{d}}(\boldsymbol{Z}_{0})_{*o}(\boldsymbol {e}_{i})\). 
Then we note that the following relation holds: \[\mathrm{Exp}_{p_{0}}^{-1}(p)=\sum_{i=1}^{n}\left(x_{i}^{p_{0}}(p)\boldsymbol{e }_{i}^{p_{0}}+y_{i}^{p_{0}}(p)J_{p_{0}}(\boldsymbol{e}_{i}^{p_{0}})\right)\ \ \ (p\in U_{p_{0}}).\] For simplicity, we denote \(\sqrt{-1}\mathfrak{p}\) and \(\sqrt{-1}\mathfrak{a}\) by \(\mathfrak{p}^{d}\) and \(\mathfrak{a}^{d}\), respectively. For \(\lambda\in(\mathfrak{a}^{d})^{*}\), we define \(\mathfrak{p}_{\lambda}^{d}\) and \(\mathfrak{k}_{\lambda}\) by \[\mathfrak{k}_{\lambda}:=\{v\in\mathfrak{k}\,|\,\mathrm{ad}(\boldsymbol{Z})^{2} (v)=\lambda(\boldsymbol{Z})^{2}v\ (\forall\,\boldsymbol{Z}\in\mathfrak{a}^{d})\}\] and \[\mathfrak{p}_{\lambda}^{d}:=\{v\in\mathfrak{p}^{d}\,|\,\mathrm{ad}(\boldsymbol{Z })^{2}(v)=\lambda(\boldsymbol{Z})^{2}v\ (\forall\,\boldsymbol{Z}\in\mathfrak{a}^{d})\}.\] Also, we define \(\triangle(\subset(\mathfrak{a}^{d})^{*})\) by \[\triangle:=\{\lambda\in(\mathfrak{a}^{d})^{*}\setminus\{0\}\,|\,\mathfrak{p}_{ \lambda}^{d}\neq\{0\}\ \},\] which is the root system. Let \(\triangle_{+}\) be the positive root subsystem of \(\triangle\) with respect to some lexicographic ordering of \((\mathfrak{a}^{d})^{*}\). Then we have the following root space decompositions \[\mathfrak{k}=\mathfrak{z}_{\mathfrak{k}}(\mathfrak{a}^{d})\oplus\left( \mathop{\oplus}_{\lambda\in\triangle_{+}}\mathfrak{k}_{\lambda}\right)\ \ \ \text{and}\ \ \ \mathfrak{p}^{d}=\mathfrak{a}^{d}\oplus\left(\mathop{\oplus}_{\lambda\in \triangle_{+}}\mathfrak{p}_{\lambda}^{d}\right),\] where \(\mathfrak{z}_{\mathfrak{k}}(\mathfrak{a}^{d})\) is the centralizer of \(\mathfrak{a}^{d}\) in \(\mathfrak{k}\). Set \(m_{\lambda}:=\dim\mathfrak{p}_{\lambda}^{d}\) (\(\lambda\in\triangle_{+}\)). 
Note that the first root space decomposition is the simultaneous eigenspace decomposition of the commutative family \[\{\operatorname{ad}(\boldsymbol{Z})^{2}:\mathfrak{k}\to\mathfrak{k}\,|\, \boldsymbol{Z}\in\mathfrak{a}^{d}\}\] of the symmetric transformations of \(\mathfrak{k}\) and that the second root space decomposition is the simultaneous eigenspace decomposition of the commutative family \[\{\operatorname{ad}(\boldsymbol{Z})^{2}:\mathfrak{p}^{d}\to\mathfrak{p}^{d}\,| \,\boldsymbol{Z}\in\mathfrak{a}^{d}\}\] of symmetric transformations of \(\mathfrak{p}^{d}\), where \(\operatorname{ad}\) is the adjoint representation of \(\mathfrak{g}\). Let \(\mathcal{C}(\subset\mathfrak{a}^{d})\) be a Weyl domain (i.e., a fundamental domain of the Weyl group action \(W\curvearrowright\mathfrak{a}^{d}\)). The Weyl domain \(\mathcal{C}\) is given by \[\mathcal{C}:=\{\boldsymbol{v}\in\mathfrak{a}^{d}\,|\,\lambda(\boldsymbol{v})> 0\ (\forall\,\lambda\in\triangle_{+})\}.\] Points of \(W\cdot\mathcal{C}\) and \(G\cdot\operatorname{Exp}_{o}(W\cdot\mathcal{C})\) are called _regular points_. We call \(G\cdot\operatorname{Exp}_{o}(W\cdot\mathcal{C})\) the _regular point set_ of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) and denote it by \((G^{\mathbb{C}}/K^{\mathbb{C}})_{reg}\). Also, we call \(K\cdot\operatorname{Exp}_{o}(W\cdot\mathcal{C})\) the _regular point set_ of \(G^{d}/K\) and denote it by \((G^{d}/K)_{reg}\). Note that \((G^{\mathbb{C}}/K^{\mathbb{C}})_{reg}\) (resp. \((G^{d}/K)_{reg}\)) is an open dense subset of \(G^{\mathbb{C}}/K^{\mathbb{C}}\) (resp. \(G^{d}/K\)). Take a regular point \(p:=\operatorname{Exp}_{o}(\boldsymbol{Z})\) (\(\boldsymbol{Z}\in W\cdot\mathcal{C}\)) of the isotropy group action \(K\curvearrowright G^{d}/K\). 
Then we have \[(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}(K\cdot p))=\mathop{\oplus}_{\lambda\in \triangle_{+}}\mathfrak{p}_{\lambda}^{d} \tag{3.1}\] and \[(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}^{\perp}(K\cdot p))=\mathfrak{a}^{d},\] where \(T_{p}^{\perp}(K\cdot p)\) is the normal space of \(K\cdot p\) in \(G^{d}\cdot o(=G^{d}/K)\). Denote by \(A\) and \(h\) the shape tensor and the second fundamental form of the principal orbit \(K\cdot p\) (which is a submanifold in \(G^{d}/K\)), respectively. For any \(\boldsymbol{v}\in\mathfrak{a}^{d}\), we have \[(A_{p})_{(\exp\,\boldsymbol{Z})_{*}(\boldsymbol{v})}|_{(\exp\,\boldsymbol{Z}) _{*}(\mathfrak{p}_{\lambda}^{d})}=-\frac{\lambda(\boldsymbol{v})}{\tanh \lambda(\boldsymbol{Z})}\operatorname{id} \tag{3.2}\] (see [24]). Both Lie groups \(G\) and \(K^{\mathbb{C}}\) are symmetric subgroups of \(G^{\mathbb{C}}\), that is, the actions \(G\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\) and \(K^{\mathbb{C}}\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\) are Hermann type actions, where we note that the terminology of "Hermann type action" was used in [17]. Let \(\theta_{G}\) and \(\theta_{K^{\mathbb{C}}}\) be the involutions of \(G^{\mathbb{C}}\) with \((\operatorname{Fix}\theta_{G})_{0}\subset G\subset\operatorname{Fix}\theta_{G}\) and \((\operatorname{Fix}\theta_{K^{\mathbb{C}}})_{0}\subset K^{\mathbb{C}}\subset \operatorname{Fix}\theta_{K^{\mathbb{C}}}\), respectively. Denote by the same symbols the differentials of \(\theta_{G}\) and \(\theta_{K^{\mathbb{C}}}\) at \(e\), respectively. Note that \(\theta_{G}\) is the Cartan involution of \(G^{\mathbb{C}}\). The eigenspace decompositions of \(\theta_{G}\) and \(\theta_{K^{\mathbb{C}}}\) are given by \(\mathfrak{g}^{\mathbb{C}}=\mathfrak{g}\oplus\sqrt{-1}\mathfrak{g}\) and \(\mathfrak{g}^{\mathbb{C}}=\mathfrak{k}^{\mathbb{C}}\oplus\mathfrak{p}^{\mathbb{C}}\), respectively. 
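A rank-one illustration of (3.2) (an aside, not from the paper): for \(G/K=SU(2)/SO(2)\) the dual \(G^{d}/K=SL(2,\mathbb{R})/SO(2)\) is the hyperbolic plane (with a suitable normalization of the metric), \(\triangle_{+}=\{\lambda\}\) with \(m_{\lambda}=1\), and the orbit \(K\cdot p\) through \(p=\operatorname{Exp}_{o}(\boldsymbol{Z})\) is a geodesic circle of radius \(t:=\lambda(\boldsymbol{Z})\). For a unit normal \(\boldsymbol{v}\) with \(\lambda(\boldsymbol{v})=1\), (3.2) gives

```latex
% Rank-one case of (3.2): geodesic circles in the hyperbolic plane
(A_{p})_{(\exp\boldsymbol{Z})_{*}(\boldsymbol{v})}
  = -\frac{1}{\tanh t}\,\mathrm{id},
\qquad\text{i.e.}\qquad
|\kappa(t)| = \coth t,
```

the classical geodesic curvature of a circle of radius \(t\) in the hyperbolic plane; it tends to \(1\) (the horocycle curvature) as \(t\to\infty\) and blows up like \(1/t\) as \(t\to 0\), where the orbit collapses to the point \(o\).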
The root space decomposition of \(\mathfrak{p}^{\mathbb{C}}(=T_{o}(G^{\mathbb{C}}/K^{\mathbb{C}}))\) for a maximal abelian subspace \(\mathfrak{a}^{d}\) of \(\mathfrak{p}^{\mathbb{C}}\cap\sqrt{-1}\mathfrak{g}=\mathfrak{p}^{d}\) is given by \[\mathfrak{p}^{\mathbb{C}}=\mathfrak{a}^{\mathbb{C}}\oplus\left(\mathop{ \oplus}\limits_{\lambda\in\triangle_{+}}\mathfrak{p}^{\mathbb{C}}_{\lambda} \right).\] Note that this root space decomposition is the simultaneous eigenspace decomposition of the commutative family \[\{\operatorname{ad}(\boldsymbol{v})^{2}:\mathfrak{p}^{\mathbb{C}}\to \mathfrak{p}^{\mathbb{C}}\,|\,\boldsymbol{v}\in\mathfrak{a}^{d}\}\] of symmetric transformations of \(\mathfrak{p}^{\mathbb{C}}\), where \(\operatorname{ad}\) is the adjoint representation of \(\mathfrak{g}^{\mathbb{C}}\). Also, the simultaneous eigenspace decomposition of the extended commutative family \[\{\operatorname{ad}(\boldsymbol{v})^{2}:\mathfrak{p}^{\mathbb{C}}\to \mathfrak{p}^{\mathbb{C}}\,|\,\boldsymbol{v}\in\mathfrak{a}^{d}\}\cup\{\theta_ {G}|_{\mathfrak{p}^{\mathbb{C}}}\}\] of symmetric transformations of \(\mathfrak{p}^{\mathbb{C}}\) is given by \[\mathfrak{p}^{\mathbb{C}}=\mathfrak{a}\oplus\mathfrak{a}^{d}\oplus\left( \mathop{\oplus}\limits_{\lambda\in\triangle_{+}}\mathfrak{p}_{\lambda}\right) \oplus\left(\mathop{\oplus}\limits_{\lambda\in\triangle_{+}}\mathfrak{p}^{d}_{ \lambda}\right). \tag{3.3}\] Fix a regular point \(p:=\operatorname{Exp}_{o}(\boldsymbol{Z})\) (\(\boldsymbol{Z}\in W\cdot\mathcal{C}\)) of the action \(G\curvearrowright G^{\mathbb{C}}/K^{\mathbb{C}}\). 
Then we have \[(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}(G\cdot p))=\mathfrak{a}\oplus\left( \mathop{\oplus}\limits_{\lambda\in\triangle_{+}}\mathfrak{p}_{\lambda}\right) \oplus\left(\mathop{\oplus}\limits_{\lambda\in\triangle_{+}}\mathfrak{p}^{d}_ {\lambda}\right) \tag{3.4}\] and \[(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}^{\perp}(G\cdot p))=\mathfrak{a}^{d}.\] Denote by \(\widehat{A}\) and \(\widehat{h}\) the shape tensor and the second fundamental form of the regular orbit \(G\cdot p\) (which is a submanifold in \(G^{\mathbb{C}}/K^{\mathbb{C}}\)), respectively. Then, according to (6.1) and (6.2) in [17], we have \[(\widehat{A}_{p})_{(\exp\,\boldsymbol{Z})_{*}(\boldsymbol{v})}|_{(\exp\, \boldsymbol{Z})_{*}(\mathfrak{p}^{d}_{\lambda})}=-\frac{\lambda(\boldsymbol{v })}{\tanh\lambda(\boldsymbol{Z})}\operatorname{id}\quad\,(\lambda\in\triangle _{+}) \tag{3.5}\] and \[(\widehat{A}_{p})_{(\exp\,\boldsymbol{Z})_{*}(\boldsymbol{v})}|_{(\exp\, \boldsymbol{Z})_{*}(\mathfrak{p}_{\lambda})}=-\lambda(\boldsymbol{v})\tanh \lambda(\boldsymbol{Z})\operatorname{id}\quad\,(\lambda\in\triangle_{+}) \tag{3.6}\] for \(\boldsymbol{v}\in(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}^{\perp}(G\cdot p))\). Take an orthonormal basis \((\boldsymbol{e}^{0}_{i})_{i=1}^{r}\) of \(\mathfrak{a}^{d}\) and an orthonormal basis \((\boldsymbol{e}^{\lambda}_{i})_{i=1}^{m_{\lambda}}\) of \(\mathfrak{p}^{d}_{\lambda}\) with respect to \(\beta_{A}\). Let \(\triangle_{+}=\{\lambda_{1},\cdots,\lambda_{l}\}\) (\(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{l}\)), where \(<\) is a lexicographic ordering of \((\mathfrak{a}^{d})^{*}\) with respect to a basis of \(\mathfrak{a}^{d}\). 
Define \(({\boldsymbol{e}}_{1},\cdots,{\boldsymbol{e}}_{n})\) by \[{\boldsymbol{e}}_{i}:=\left\{\begin{array}{ll}{\boldsymbol{e}}_{i}^{0}&(1 \leq i\leq m_{0})\\ {\boldsymbol{e}}_{i-m_{0}}^{\lambda_{1}}&(m_{0}+1\leq i\leq m_{0}+m_{\lambda_{1}}) \\ {\boldsymbol{e}}_{i-(m_{0}+m_{\lambda_{1}})}^{\lambda_{2}}&(m_{0}+m_{\lambda_{1}}+1\leq i\leq m_{0}+m_ {\lambda_{1}}+m_{\lambda_{2}})\\ \vdots&\vdots\\ {\boldsymbol{e}}_{i-\sum_{a=0}^{l-1}m_{\lambda_{a}}}^{\lambda_{l}}&(m_{0}+m_{\lambda_{1}}+\cdots+m_{\lambda_{ l-1}}+1\leq i\leq m_{0}+m_{\lambda_{1}}+\cdots+m_{\lambda_{l}})\end{array}\right.\] and \(({\boldsymbol{e}}_{1}^{p},\cdots,{\boldsymbol{e}}_{n}^{p})\) by \({\boldsymbol{e}}_{i}^{p}:=(\exp\,\boldsymbol{Z})_{*}({\boldsymbol{e}}_{i})\)\((i=1,\cdots,n)\). For convenience, set \(m_{0}:=r\), \({\mathfrak{p}}_{0}^{d}:={\mathfrak{a}}^{d}\) and \(\lambda_{0}:=0\). **Lemma 3.1.** Assume that \(p\in(G^{d}/K)_{\rm reg}\) (i.e., \(\boldsymbol{Z}\in{\cal C}\)). Let \((U,\varphi=(z_{1},\cdots,z_{n}))\) be normal complex coordinates about a point \(p\) of the real form \(G^{d}\cdot o(=G^{d}/K)\) associated to \(({\boldsymbol{e}}_{1}^{p},\cdots,{\boldsymbol{e}}_{n}^{p})\). 
Then, for any \(\rho\in{\rm Conv}_{W}^{+}(D)\), we have \[\left.\frac{\partial^{2}\rho^{h}}{\partial z_{i}\partial\overline{z}_{j}} \right|_{p}=\left\{\begin{array}{ll}\frac{1}{4}\,(\nabla^{d}d\rho^{d})_{p}( ({\boldsymbol{e}}_{i}^{0})^{p},({\boldsymbol{e}}_{i}^{0})^{p})&(i=j\in\{1, \cdots,m_{0}\})\\ \frac{1}{4}\,(\,(\nabla^{d}d\rho^{d})_{p}(({\boldsymbol{e}}_{i-m_{0}}^{ \lambda_{1}})^{p},({\boldsymbol{e}}_{i-m_{0}}^{\lambda_{1}})^{p})&\\ \qquad+\tanh\lambda_{1}(\boldsymbol{Z})\cdot\lambda_{1}(({\rm grad}\,\rho)_{ \boldsymbol{Z}})\,)&(i=j\in\{m_{0}+1,\cdots,m_{0}+m_{\lambda_{1}}\})\\ \qquad\vdots&\qquad\vdots\\ \frac{1}{4}\,(\,(\nabla^{d}d\rho^{d})_{p}(({\boldsymbol{e}}_{i-\sum_{a=0}^{l- 1}m_{\lambda_{a}}}^{\lambda_{l}})^{p},({\boldsymbol{e}}_{i-\sum_{a=0}^{l-1}m_ {\lambda_{a}}}^{\lambda_{l}})^{p})&\\ \qquad+\tanh\lambda_{l}(\boldsymbol{Z})\cdot\lambda_{l}(({\rm grad}\,\rho)_{ \boldsymbol{Z}})\,)&(i=j\in\{m_{0}+\cdots+m_{\lambda_{l-1}}+1,\cdots,n\})\\ \frac{1}{4}\,(\nabla^{d}d\rho^{d})_{p}({\boldsymbol{e}}_{i}^{p},{ \boldsymbol{e}}_{j}^{p})&(i\neq j),\end{array}\right.\] where \(\lambda_{a}^{\sharp}\) denotes the element of \((\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}^{\perp}(G\cdot p))\) such that \(\beta_{A}(\lambda_{a}^{\sharp},{\boldsymbol{v}})=\lambda_{a}({\boldsymbol{v}})\) holds for any \({\boldsymbol{v}}\in(\exp\,\boldsymbol{Z})_{*}^{-1}(T_{p}^{\perp}(G\cdot p))\). _Proof._ First we note that \(({\bf J}{\boldsymbol{e}}_{1}^{p},\cdots,{\bf J}{\boldsymbol{e}}_{n}^{p})\) is an orthonormal basis of \(T_{p}^{\perp}(G^{d}/K)\)\((\subset T_{p}(G\cdot p))\). Let \(\gamma_{i}\)\((i=1,\cdots,n)\) be the geodesic in \(G^{d}/K\) with \(\gamma_{i}^{\prime}(0)=\boldsymbol{e}_{i}^{p}\) and \(\widehat{\gamma}_{i}\)\((i=1,\cdots,n)\) be the geodesic in \(G\cdot p\) with \(\widehat{\gamma}_{i}^{\prime}(0)={\bf J}\boldsymbol{e}_{i}^{p}\). 
Then, from the \(G\)-invariance of \(\rho^{h}\), we can show that \[(\nabla d\rho^{h})_{p}({\bf J}{\boldsymbol{e}}_{i}^{p},{\bf J}{ \boldsymbol{e}}_{i}^{p}) =\left.\frac{d^{2}(\rho^{h}\circ\widehat{\gamma}_{i})}{ds^{2}} \right|_{s=0}-(d\rho^{h})_{p}(\nabla_{\widehat{\gamma}_{i}^{\prime}(0)} \widehat{\gamma}_{i}^{\prime})\] \[=-(d\rho^{d})_{p}(\widehat{h}_{p}({\bf J}{\boldsymbol{e}}_{i}^{p},{\bf J}{\boldsymbol{e}}_{i}^{p})).\] Similarly, we can show \[(\nabla d\rho^{h})_{p}({\bf J}\boldsymbol{e}_{i}^{p}+{\bf J}\boldsymbol{e}_{j}^{p},{\bf J}\boldsymbol{e}_{i}^{p}+{\bf J }\boldsymbol{e}_{j}^{p})=-(d\rho^{d})_{p}(\widehat{h}_{p}({\bf J}{\boldsymbol{e}}_{i}^{p}+{ \bf J}{\boldsymbol{e}}_{j}^{p},{\bf J}{\boldsymbol{e}}_{i}^{p}+{\bf J}{ \boldsymbol{e}}_{j}^{p})).\] Hence, from the symmetry of \((\nabla d\rho^{h})_{p}\) and \(\widehat{h}_{p}\), we have \[(\nabla d\rho^{h})_{p}({\bf J}\boldsymbol{e}_{i}^{p},{\bf J}\boldsymbol{e}_{j}^{p})=-(d\rho^{d})_{p}( \widehat{h}_{p}({\bf J}\boldsymbol{e}_{i}^{p},{\bf J}\boldsymbol{e}_{j}^{p}))\quad\,(1\leq i,j\leq n). \tag{3.8}\] Hence, from (3.6), we obtain \[(\nabla d\rho^{h})_{p}({\bf J}(\boldsymbol{e}_{i}^{\lambda_{a}})^{p},\,{\bf J}(\boldsymbol{e}_{j}^{ \lambda_{b}})^{p})=\delta_{ab}\delta_{ij}\,\tanh\lambda_{a}(\boldsymbol{Z})\,( d\rho^{d})_{p}((\exp\,\boldsymbol{Z})_{*}(\lambda_{a}^{\sharp})). \tag{3.9}\] Let \(\gamma_{j}\) be the geodesic in \(G^{d}/K\) with \(\gamma_{j}^{\prime}(0)=\boldsymbol{e}_{j}^{p}\), \(\widetilde{{\bf J}\boldsymbol{e}_{i}^{p}}\) be the parallel normal vector field of \(G^{d}\cdot o\) along \(\gamma_{j}\) with \((\widetilde{{\bf J}\boldsymbol{e}_{i}^{p}})_{0}={\bf J}\boldsymbol{ e}_{i}^{p}\). Also, let \(\widehat{\gamma}^{t}\) be the geodesic in \(G\cdot\gamma_{j}(t)\) with \((\widehat{\gamma}^{t})^{\prime}(0)=(\widetilde{{\bf J}\boldsymbol{e}_{i }^{p}})_{t}\). Define a geodesic variation \(\delta\) by \(\delta(s,t):=\widehat{\gamma}^{t}(s)\) (see Figure 3). 
Then we have \[(\nabla d\rho^{h})_{p}({\bf J}\boldsymbol{e}_{i}^{p},\,\boldsymbol{ e}_{j}^{p})=\left.\frac{\partial^{2}(\rho^{h}\circ\delta)}{\partial s \partial t}\right|_{s=t=0}-(d\rho^{h})_{p}(\nabla_{\boldsymbol{e}_{j}^{p} }\widetilde{{\bf J}\boldsymbol{e}_{i}^{p}})=0 \tag{3.10}\] because \(\rho^{h}\) is \(G\)-invariant and \(G^{d}\cdot o\) is totally geodesic in \(G^{\mathbb{C}}/K^{\mathbb{C}}\). **Figure 3 : Geodesic variation \(\delta\)** From (3.8) and (3.10), we obtain \[\begin{aligned}\left.\frac{\partial^{2}\rho^{h}}{\partial z_{i}\partial\overline{z}_{j}}\right|_{p}&=(\nabla d\rho^{h})_{p}^{\mathbb{C}}\left(\frac{1}{2}(\boldsymbol{e}_{i}^{p}-\sqrt{-1}\mathbf{J}\boldsymbol{e}_{i}^{p}),\,\frac{1}{2}(\boldsymbol{e}_{j}^{p}+\sqrt{-1}\mathbf{J}\boldsymbol{e}_{j}^{p})\right)\\ &=\frac{1}{4}\{\,(\nabla d\rho^{h})_{p}(\boldsymbol{e}_{i}^{p},\boldsymbol{e}_{j}^{p})+(\nabla d\rho^{h})_{p}(\mathbf{J}\boldsymbol{e}_{i}^{p},\mathbf{J}\boldsymbol{e}_{j}^{p})\\ &\qquad-\sqrt{-1}(\nabla d\rho^{h})_{p}(\mathbf{J}\boldsymbol{e}_{i}^{p},\boldsymbol{e}_{j}^{p})+\sqrt{-1}(\nabla d\rho^{h})_{p}(\boldsymbol{e}_{i}^{p},\mathbf{J}\boldsymbol{e}_{j}^{p})\,\}\\ &=\frac{1}{4}\{\,(\nabla d\rho^{h})_{p}(\boldsymbol{e}_{i}^{p},\boldsymbol{e}_{j}^{p})-(d\rho^{d})_{p}(\widehat{h}_{p}(\mathbf{J}\boldsymbol{e}_{i}^{p},\mathbf{J}\boldsymbol{e}_{j}^{p}))\,\},\end{aligned}\] where the cross terms vanish by (3.10) and the second term is rewritten by (3.8), and where \(\widehat{h}_{p}(\mathbf{J}\boldsymbol{e}_{i}^{p},\mathbf{J}\boldsymbol{e}_{j }^{p})\) is regarded as an element of \(T_{\boldsymbol{Z}}\mathfrak{a}^{d}\) under the identification of \(T_{p}^{\perp}(G\cdot p)=T_{p}\mathrm{Exp}_{o}(\mathfrak{a}^{d})\) and \(T_{\boldsymbol{Z}}\mathfrak{a}^{d}\). From this relation and (3.6), we can derive the desired relation. Also, we can show the following fact. **Lemma 3.2.** Assume that \(p\in(G^{d}/K)_{\mathrm{reg}}\) (i.e., \(\boldsymbol{Z}\in\mathcal{C}\)). 
Then, for any \(\rho\in\mathrm{Conv}_{W}^{+}(D)\), we have \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{0})^{p},(\boldsymbol{e}_{j}^{0})^{p})=(\nabla^{0}d\rho)_{\boldsymbol{Z}}(\boldsymbol{e}_{i}^{0},\boldsymbol{e}_{j}^{0})=\left.\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right|_{\boldsymbol{Z}}, \tag{3.11}\] where \((x_{1},\cdots,x_{r})\) is the Euclidean coordinate of \(\mathfrak{a}^{d}\) with respect to \((\boldsymbol{e}_{1},\cdots,\boldsymbol{e}_{r})\), and \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{\lambda_{a}})^{p},(\boldsymbol{e}_{j}^{\lambda_{b}})^{p})=\frac{1}{\tanh\lambda_{a}(\boldsymbol{Z})}\cdot\lambda_{a}((\mathrm{grad}\,\rho)_{\boldsymbol{Z}})\delta_{ab}\delta_{ij}. \tag{3.12}\] Also, we have \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{\lambda_{a}})^{p},(\boldsymbol{e}_{j}^{0})^{p})=0. \tag{3.13}\] _Proof._ Since \(p\in(G^{d}/K)_{\mathrm{reg}}\), the normal space \(T_{p}^{\perp}\Sigma\) of the submanifold \(\Sigma\) in \(G^{d}/K\) at \(p\) is equal to \(T_{p}(K\cdot p)\). Let \(\gamma_{i}\)\((i=1,\cdots,r)\) be the geodesic in \(\Sigma\) with \(\gamma_{i}^{\prime}(0)=(\boldsymbol{e}_{i}^{0})^{p}\).
Then, since \(\gamma_{i}\)\((i=1,\cdots,r)\) is a geodesic in \(G^{d}/K\), we have \[\begin{aligned}(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{0})^{p},(\boldsymbol{e}_{i}^{0})^{p})&=\left.\frac{d^{2}(\rho^{d}\circ\gamma_{i})}{ds^{2}}\right|_{s=0}-d\rho_{p}^{d}(\nabla^{d}_{\gamma_{i}^{\prime}(0)}\gamma_{i}^{\prime})\\ &=\left.\frac{d^{2}(\rho^{d}\circ\gamma_{i})}{ds^{2}}\right|_{s=0}-d\rho_{\boldsymbol{Z}}(\nabla^{0}_{\gamma_{i}^{\prime}(0)}\gamma_{i}^{\prime})\\ &=(\nabla^{0}d\rho)_{\boldsymbol{Z}}(\boldsymbol{e}_{i}^{0},\boldsymbol{e}_{i}^{0}).\end{aligned}\] Similarly, we can show that \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{0})^{p}+(\boldsymbol{e}_{j}^{0})^{p},(\boldsymbol{e}_{i}^{0})^{p}+(\boldsymbol{e}_{j}^{0})^{p})=(\nabla^{0}d\rho)_{\boldsymbol{Z}}(\boldsymbol{e}_{i}^{0}+\boldsymbol{e}_{j}^{0},\boldsymbol{e}_{i}^{0}+\boldsymbol{e}_{j}^{0}).\] Hence, from the symmetry of \((\nabla^{d}d\rho^{d})_{p}\) and \((\nabla^{0}d\rho)_{\boldsymbol{Z}}\), we have \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{0})^{p},(\boldsymbol{e}_{j}^{0})^{p})=(\nabla^{0}d\rho)_{\boldsymbol{Z}}(\boldsymbol{e}_{i}^{0},\boldsymbol{e}_{j}^{0}).\] Thus we obtain the desired relation (3.11). Take \(\boldsymbol{w}\in T_{p}(K\cdot p)\) and let \(\widehat{\gamma}\) be the geodesic in \(K\cdot p\) with \(\widehat{\gamma}^{\prime}(0)=\boldsymbol{w}\). Then, since \(\rho^{d}\) is \(K\)-invariant, we have \[(\nabla^{d}d\rho^{d})_{p}(\boldsymbol{w},\boldsymbol{w})=\left.\frac{d^{2}(\rho^{d}\circ\widehat{\gamma})}{ds^{2}}\right|_{s=0}-d\rho_{p}^{d}(h_{p}(\boldsymbol{w},\boldsymbol{w}))=-d\rho_{\boldsymbol{Z}}(h_{p}(\boldsymbol{w},\boldsymbol{w})).\] Hence, from the symmetry of \((\nabla^{d}d\rho^{d})_{p}\) and \(d\rho_{\boldsymbol{Z}}(h_{p}(\cdot,\cdot))\), we have \[(\nabla^{d}d\rho^{d})_{p}(\boldsymbol{w}_{1},\boldsymbol{w}_{2})=-d\rho_{\boldsymbol{Z}}(h_{p}(\boldsymbol{w}_{1},\boldsymbol{w}_{2}))\] for any \(\boldsymbol{w}_{1},\boldsymbol{w}_{2}\in T_{p}(K\cdot p)\). Furthermore, from (3.2), we obtain the desired relation (3.12).
Let \(\widehat{\gamma}_{i}\) be the geodesic in \(K\cdot p\) with \(\widehat{\gamma}_{i}^{\prime}(0)=(\boldsymbol{e}_{i}^{\lambda_{a}})^{p}\). This geodesic \(\widehat{\gamma}_{i}\) is given as \(\widehat{\gamma}_{i}(t):=\exp(tw)(p)\) for some \(w\in\mathfrak{k}\). Define a vector field \(\widetilde{\boldsymbol{e}_{j}^{0}}\) along \(\widehat{\gamma}_{i}\) by \(\widetilde{\boldsymbol{e}_{j}^{0}}(t):=\exp(tw)_{*}(\boldsymbol{e}_{j}^{0})\). It is shown that \(\widetilde{\boldsymbol{e}_{j}^{0}}\) is a \(K\)-equivariant \(\nabla^{\perp}\)-parallel normal vector field of \(K\cdot p\) along \(\widehat{\gamma}_{i}\) because the \(K\)-action on \(G^{d}/K\) is hyperpolar, where \(\nabla^{\perp}\) denotes the normal connection of the submanifold \(K\cdot p\) in \(G^{d}/K\). Since \(\rho^{d}\) is \(K\)-invariant and \(\widetilde{\boldsymbol{e}_{j}^{0}}\) is \(\nabla^{\perp}\)-parallel, we have \[(\nabla^{d}d\rho^{d})_{p}((\boldsymbol{e}_{i}^{\lambda_{a}})^{p},(\boldsymbol{e}_{j}^{0})^{p})=\left.\frac{d\widetilde{\boldsymbol{e}_{j}^{0}}(t)(\rho^{d})}{dt}\right|_{t=0}-(\nabla_{\widehat{\gamma}_{i}^{\prime}(0)}^{d}\widetilde{\boldsymbol{e}_{j}^{0}})(\rho^{d})=\left((A_{p})_{(\boldsymbol{e}_{j}^{0})^{p}}((\boldsymbol{e}_{i}^{\lambda_{a}})^{p})\right)(\rho^{d})=0.\] Define a non-linear differential operator \(\mathcal{D}:\mathrm{Conv}_{W}^{+}(\mathfrak{a}^{d})\to C^{\infty}(W\cdot\mathcal{C})\) of order one by \[\mathcal{D}(\rho):=\underset{\lambda\in\triangle_{+}}{\Pi}\left(\frac{2(\lambda\circ\mathrm{grad}\,\rho)|_{W\cdot\mathcal{C}}}{\tanh(2\lambda|_{W\cdot\mathcal{C}})}\right)^{m_{\lambda}}\quad(\rho\in\mathrm{Conv}_{W}^{+}(\mathfrak{a}^{d})), \tag{3.14}\] where \(\mathrm{grad}\,\rho\) is the gradient vector field of \(\rho\) with respect to \(\beta_{0}\). It is easy to show that \(\mathcal{D}(\mathrm{Conv}_{W}^{+}(\mathfrak{a}^{d}))\subset C_{W}^{\infty}(W\cdot\mathcal{C})\) holds. From Lemmas 3.1 and 3.2, we obtain the following fact.
**Proposition 3.3**.: For \(p:=\mathrm{Exp}_{o}(\boldsymbol{Z})\)\((\boldsymbol{Z}\in\mathcal{C})\), the following relation holds: \[\det\left(\left.\frac{\partial^{2}\rho^{h}}{\partial z_{i}^{p}\partial\overline{z}_{j}^{p}}\right|_{p}\right)=\frac{1}{4^{n}}\cdot\det\left(\left.\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right|_{\boldsymbol{Z}}\right)\cdot\mathcal{D}(\rho)(\boldsymbol{Z}), \tag{3.15}\] where \((x_{1},\cdots,x_{r})\) is the Euclidean coordinate of \(\mathfrak{a}^{d}\). _Proof._ From the relations in Lemmas 3.1 and 3.2, we can derive \[\left.\frac{\partial^{2}\rho^{h}}{\partial z_{i}\partial\overline{z}_{j}}\right|_{p}=\left\{\begin{array}{ll}\frac{1}{4}\cdot\left.\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right|_{\boldsymbol{Z}}&\quad(i,j\in\{1,\cdots,m_{0}\})\\ \frac{1}{2}\cdot\frac{\lambda_{1}((\operatorname{grad}\rho)_{\boldsymbol{Z}})}{\tanh 2\lambda_{1}(\boldsymbol{Z})}\cdot\delta_{ij}&\quad(i,j\in\{m_{0}+1,\cdots,m_{0}+m_{\lambda_{1}}\})\\ \qquad\vdots&\qquad\qquad\vdots\\ \frac{1}{2}\cdot\frac{\lambda_{l}((\operatorname{grad}\rho)_{\boldsymbol{Z}})}{\tanh 2\lambda_{l}(\boldsymbol{Z})}\cdot\delta_{ij}&\quad(i,j\in\{m_{0}+\cdots+m_{\lambda_{l-1}}+1,\cdots,m_{0}+\cdots+m_{\lambda_{l}}\}).\end{array}\right. \tag{3.16}\] From this relation, we can derive the desired relation. _Remark 3.1_. The relations (3.15) and (3.16) correspond to the relations in Corollary 1.3 and Theorem 1.2 of [10] (see Proposition 2.3 of [5] also), where we note that T. Delcroix ([10]) derived the relations in the case where \(G^{\mathbb{C}}/K^{\mathbb{C}}\) is the complexification of a compact semi-simple Lie group (i.e., \(G^{\mathbb{C}}/K^{\mathbb{C}}=(G^{\mathbb{C}}\times G^{\mathbb{C}})/\triangle G^{\mathbb{C}}\)), but it is stated in [5] that the relations hold for a general complexified symmetric space \(G^{\mathbb{C}}/K^{\mathbb{C}}\).
However, our relations (3.15) and (3.16) differ somewhat from the relations in [10] (and [5]). Also, the above calculation method for (3.15) and (3.16) differs completely from the calculation method in [10]. Let \((x_{1},\cdots,x_{r})\) be the Euclidean coordinate of \(\mathfrak{a}^{d}\). From (2.1) and (3.15), we see that \(\beta_{\rho^{h}}\) is Ricci-flat if and only if the following relation \[\mathop{\Pi}\limits_{\lambda\in\triangle_{+}}2^{m_{\lambda}}(\lambda\circ\operatorname{grad}\rho)^{m_{\lambda}}\cdot\det\left(\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right)=c\cdot\mathop{\Pi}\limits_{\lambda\in\triangle_{+}}\tanh^{m_{\lambda}}(2\lambda) \tag{3.17}\] holds for some nonzero constant \(c\). In 1990, L. A. Caffarelli proved the following fact (see Theorem 1-(b) in [8]). **Proposition 3.4([8]).** Let \(D\) be a convex bounded domain of \(\mathbb{R}^{r}\) with \(B^{r}(\varepsilon)\subset D\subset B^{r}(r\varepsilon)\) (\(\varepsilon>0\)) and let \(f\) be a strictly positive \(C^{0,\alpha}\)-continuous function on \(D\) (\(\alpha>0\)), where \(B^{r}(\cdot)\) denotes the Euclidean ball of radius \((\cdot)\) centered at \((0,\cdots,0)\). If \(\rho:D\to\mathbb{R}\) is a convex viscosity solution of \[\det\left(\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right)=f\ \ \ \ \ (\text{on }D)\] \(((x_{1},\cdots,x_{r})\,:\,\) the Euclidean coordinate of \(\mathbb{R}^{r}\)) satisfying \(\rho|_{\partial D}=0\), then \(\rho\) is of class \(C^{2,\alpha}\) on \(B^{r}(\frac{\varepsilon}{2})\). In 2003, R. Bielawski proved the following fact (see Theorem A.1 and Corollary A.3 in [2]). **Proposition 3.5([2]).** Let \(W\curvearrowright\mathbb{R}^{r}\) be a finite irreducible reflection group and \(F_{1},F_{2}\) be non-negative locally bounded \(W\)-invariant measurable functions on \(\mathbb{R}^{r}\). Assume that \(\int_{\mathbb{R}^{r}}F_{1}(x_{1},\cdots,x_{r})\,dx_{1}\cdots dx_{r}=\infty\) holds. Then the following statements (i) and (ii) hold.
(i) There exists a \(W\)-invariant strictly convex global weak solution of the following Monge-Ampère equation: \[(F_{1}\circ\operatorname{grad}\rho)\cdot\det\left(\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right)=F_{2}, \tag{3.18}\] where the gradient vector field \(\operatorname{grad}\rho\) is regarded as a (multi-valued) map of \(\mathbb{R}^{r}\) to itself. This weak solution \(\rho\) is Lipschitz continuous. (ii) If \(F_{1},F_{2}\) are positive functions of class \(C^{k,\alpha}\) (\(k\geq 0,\ \alpha>0\)), then there exists a \(W\)-invariant strictly convex global \(C^{k+2,\alpha}\)-solution of (3.18). According to the standard bootstrap argument, this \(W\)-invariant strictly convex global \(C^{k+2,\alpha}\)-solution is of class \(C^{\infty}\). _Remark 3.2_.: (i) A \(W\)_-invariant strictly convex global weak solution_ of (3.18) means a convex function \(\rho\) on \(\mathbb{R}^{r}\) such that \[\int_{B}F_{2}(x_{1},\cdots,x_{r})\,dx_{1}\cdots dx_{r}=\int_{(\operatorname{grad}\rho)(B)}F_{1}(x_{1},\cdots,x_{r})\,dx_{1}\cdots dx_{r} \tag{3.19}\] holds for any Borel set \(B\) of \(\mathbb{R}^{r}\). (ii) The gradient vector field \(\operatorname{grad}\rho\) is defined by \[(\operatorname{grad}\rho)_{x}:=\left\{\left.\left((d\nu_{P})_{x}\left(\left(\frac{\partial}{\partial x_{1}}\right)_{x}\right),\cdots,(d\nu_{P})_{x}\left(\left(\frac{\partial}{\partial x_{r}}\right)_{x}\right)\right)\ \right|\ P\ \text{a support hyperplane of the graph of }\rho\text{ at }x\right\},\] where \(\nu_{P}\) is the affine function whose graph is equal to \(P\).
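To make the weak-solution condition (3.19) concrete, consider the one-dimensional case \(r=1\) with \(\rho\) of class \(C^{2}\) and \(\rho''>0\) (an illustrative special case, not part of the original statement). Taking \(B=[a,b]\) and substituting \(y=\rho'(x)\) on the right-hand side of (3.19) gives

```latex
\int_{a}^{b} F_{2}(x)\,dx
  \;=\; \int_{\rho'(a)}^{\rho'(b)} F_{1}(y)\,dy
  \;=\; \int_{a}^{b} F_{1}(\rho'(x))\,\rho''(x)\,dx ,
```

and, since \([a,b]\) is arbitrary, this recovers the pointwise equation \(F_{1}(\rho'(x))\,\rho''(x)=F_{2}(x)\), i.e. (3.18) with \(r=1\).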
Define functions \(\widehat{F}_{1}\) and \(\widehat{F}_{2}\) on \(\mathfrak{a}^{d}\) by \[\widehat{F}_{1}(\boldsymbol{Z}):=\mathop{\Pi}\limits_{\lambda\in\triangle_{+ }}2^{m_{\lambda}}|\lambda(\boldsymbol{Z})|^{m_{\lambda}},\quad\widehat{F}_{2}( \boldsymbol{Z}):=c\cdot\mathop{\Pi}\limits_{\lambda\in\triangle_{+}}|\tanh(2 \lambda(\boldsymbol{Z}))|^{m_{\lambda}}\quad(\boldsymbol{Z}\in\mathfrak{a}^{d}).\] Then the relation (3.17) is rewritten as \[(\widehat{F}_{1}\circ\operatorname{grad}\rho)\cdot\det\left(\frac{\partial^{2 }\rho}{\partial x_{i}\partial x_{j}}\right)=\widehat{F}_{2}. \tag{3.20}\] It is clear that \(\widehat{F}_{i}\) (\(i=1,2\)) are locally bounded and non-negative \(W\)-invariant continuous (hence measurable) functions on \(\mathfrak{a}^{d}\) and that \(\widehat{F}_{1}\) satisfies \(\int_{\mathfrak{a}^{d}}\widehat{F}_{1}(x_{1},\cdots,x_{r})\,dx_{1}\cdots dx_{ r}=\infty\). Hence we can apply the statement (i) of Proposition 3.5 to (3.20) and show the existence of a \(W\)-invariant strictly convex global weak solution of (3.20). Regrettably, \(\widehat{F}_{i}\) (\(i=1,2\)) are not positive. In fact, they are equal to zero along \(\mathfrak{a}^{d}\setminus W\cdot\mathcal{C}\). Also, they are not necessarily of class \(C^{1}\) along \(\mathfrak{a}^{d}\setminus W\cdot\mathcal{C}\). Hence we cannot apply the statement (ii) of Proposition 3.5 to (3.20). However, we can derive the statement of Theorem A (stated in Introduction) in the case where \(G/K\) is of rank two. We prove Theorem A by using Propositions 3.3, 3.4 and 3.5. 
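For concreteness, in a hypothetical rank-two case with positive roots \(\triangle_{+}=\{\lambda_{1},\lambda_{2},\lambda_{1}+\lambda_{2}\}\), all of multiplicity one (an illustrative assumption; the actual roots and multiplicities depend on \(G^{d}/K\)), the equation (3.20) reads

```latex
% (3.20) written out for r = 2, positive roots lambda_1, lambda_2, lambda_1 + lambda_2,
% all multiplicities m_lambda = 1:
\prod_{\lambda\in\{\lambda_{1},\lambda_{2},\lambda_{1}+\lambda_{2}\}}
  2\,\bigl|\lambda\bigl((\operatorname{grad}\rho)(\boldsymbol{Z})\bigr)\bigr|
\;\cdot\;
\det\begin{pmatrix}
  \dfrac{\partial^{2}\rho}{\partial x_{1}^{2}} & \dfrac{\partial^{2}\rho}{\partial x_{1}\partial x_{2}}\\[6pt]
  \dfrac{\partial^{2}\rho}{\partial x_{1}\partial x_{2}} & \dfrac{\partial^{2}\rho}{\partial x_{2}^{2}}
\end{pmatrix}
= c\prod_{\lambda\in\{\lambda_{1},\lambda_{2},\lambda_{1}+\lambda_{2}\}}
  \bigl|\tanh\bigl(2\lambda(\boldsymbol{Z})\bigr)\bigr| .
```

The right-hand side vanishes on each wall \(\lambda^{-1}(0)\), which is exactly why \(\widehat{F}_{1}\) and \(\widehat{F}_{2}\) fail to be strictly positive on \(\mathfrak{a}^{d}\setminus W\cdot\mathcal{C}\).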
_Proof of Theorem A._ Since \(\widehat{F}_{i}\) (\(i=1,2\)) are locally bounded and non-negative \(W\)-invariant continuous (hence measurable) functions on \(\mathfrak{a}^{d}\) and \(\widehat{F}_{1}\) satisfies \(\int_{\mathfrak{a}^{d}}\widehat{F}_{1}(x_{1},x_{2})\,dx_{1}dx_{2}=\infty\), it follows from (i) of Proposition 3.5 that there exists a \(W\)-invariant strictly convex weak solution of (3.20) defined on the whole of \(\mathfrak{a}^{d}\). Let \(\rho\) be such a weak solution of (3.20). Also, let \(\Pi=\{\lambda_{1},\lambda_{2}\}\,(\subset\triangle_{+})\) be the fundamental root system. Then the Weyl domain \(\mathcal{C}\) is given by \[\mathcal{C}=\{\boldsymbol{Z}\,|\,\lambda_{b}(\boldsymbol{Z})>0\ \ (b=1,2)\}.\] Since \(\rho\) satisfies (3.20), we have \[\det\left(\left.\frac{\partial^{2}\rho}{\partial x_{i}\partial x_{j}}\right|_{\boldsymbol{Z}}\right)=\frac{\widehat{F}_{2}(\boldsymbol{Z})}{\widehat{F}_{1}((\operatorname{grad}\rho)(\boldsymbol{Z}))}=c\cdot\underset{\lambda\in\triangle_{+}}{\Pi}\left(\frac{\tanh(2\lambda(\boldsymbol{Z}))}{2\lambda((\operatorname{grad}\rho)(\boldsymbol{Z}))}\right)^{m_{\lambda}}=\frac{c}{\mathcal{D}(\rho)(\boldsymbol{Z})} \tag{3.21}\] for any \(\boldsymbol{Z}\in W\cdot\mathcal{C}\). Here we note that, for any \(\boldsymbol{Z}\in W\cdot\mathcal{C}\), none of the values \(\lambda((\operatorname{grad}\rho)(\boldsymbol{Z}))\) (\(\lambda\in\triangle_{+}\)) vanishes.
In fact, \((\operatorname{grad}\rho)(\boldsymbol{Z})\) belongs to \(W\cdot\mathcal{C}\) because \(\rho\) is \(W\)-invariant and convex (see Figure 5), where we note that \[W\cdot\mathcal{C}=\mathfrak{a}^{d}\setminus\left(\underset{\lambda\in\triangle_{+}}{\cup}\lambda^{-1}(0)\right).\] On the other hand, if \(\boldsymbol{Z}\in(\lambda^{o})^{-1}(0)\setminus\{\mathbf{0}\}\,(\subset\mathfrak{a}^{d}\setminus W\cdot\mathcal{C})\) for some \(\lambda^{o}\in\triangle_{+}\), then \((\operatorname{grad}\rho)(\boldsymbol{Z})\) is tangent to \((\lambda^{o})^{-1}(0)\) and hence it is expressed as \((\operatorname{grad}\rho)(\boldsymbol{Z})=a_{\boldsymbol{Z}}\boldsymbol{Z}\) for some real constant \(a_{\boldsymbol{Z}}\) (see Figure 4). Also, since \(\rho\) is a strictly convex function with minimum point \(\mathbf{0}(\in\mathfrak{a}^{d})\), \(a_{\boldsymbol{Z}}\) is positive for any \(\boldsymbol{Z}\in(\lambda^{o})^{-1}(0)\setminus\{\mathbf{0}\}\). Define a function \(\widehat{F}\) on \(W\cdot\mathcal{C}\) by \[\widehat{F}(\boldsymbol{Z}):=\frac{\widehat{F}_{2}(\boldsymbol{Z})}{\widehat{F}_{1}((\operatorname{grad}\rho)(\boldsymbol{Z}))}\ \ \ \ (\boldsymbol{Z}\in W\cdot\mathcal{C}).\] Since \(\rho\) is a \(W\)-invariant strictly convex function on \(\mathfrak{a}^{d}\) and \(\int_{\mathfrak{a}^{d}}\widehat{F}_{2}(x_{1},x_{2})\,dx_{1}dx_{2}>0\), it is shown that \(\rho\) is a proper function (see Proposition 1.2 of [3]). Hence \(\operatorname{grad}\rho:\mathfrak{a}^{d}\to\mathfrak{a}^{d}\) is Hölder continuous. Similarly, by using \(\int_{\mathfrak{a}^{d}}\widehat{F}_{1}(x_{1},x_{2})\,dx_{1}dx_{2}>0\) instead of \(\int_{\mathfrak{a}^{d}}\widehat{F}_{2}(x_{1},x_{2})\,dx_{1}dx_{2}>0\), it is shown that \((\operatorname{grad}\rho)^{-1}:\mathfrak{a}^{d}\to\mathfrak{a}^{d}\) is Hölder continuous.
Define a function \(\eta\) on \(\mathfrak{a}^{d}\setminus\{\mathbf{0}\}\) by \[\eta(\boldsymbol{Z}):=\frac{\|(\operatorname{grad}\rho)(\boldsymbol{Z})\|}{\|\boldsymbol{Z}\|}\ \ \ \ (\boldsymbol{Z}\in\mathfrak{a}^{d}\setminus\{\mathbf{0}\}).\] Note that the above \(a_{\boldsymbol{Z}}\) is equal to \(\eta(\boldsymbol{Z})\). Since \(\operatorname{grad}\rho\) and \((\operatorname{grad}\rho)^{-1}\) are Hölder continuous, \(\eta\) and \(\frac{1}{\eta}\) are continuous and bounded on \(\mathfrak{a}^{d}\setminus\{\mathbf{0}\}\), where we also used \((\operatorname{grad}\rho)(\mathbf{0})=\mathbf{0}\). Hence there exist positive constants \(C_{i}\) (\(i=1,2\)) satisfying \[C_{1}\leq\eta(\boldsymbol{Z})\leq C_{2}\ \ \ \ (\boldsymbol{Z}\in\mathfrak{a}^{d}\setminus\{\mathbf{0}\}). \tag{3.22}\] Take \(\boldsymbol{Z}\in(\lambda^{o})^{-1}(0)\setminus\{\mathbf{0}\}\) and let \(\{\boldsymbol{Z}_{k}\}_{k=1}^{\infty}\) be a sequence in \(W\cdot\mathcal{C}\) converging to \(\boldsymbol{Z}\). Then we have \[\lim_{k\to\infty}\frac{\tanh(2\lambda^{o}(\boldsymbol{Z}_{k}))}{2\lambda^{o}((\operatorname{grad}\rho)(\boldsymbol{Z}_{k}))}=\frac{1}{\eta(\boldsymbol{Z})}. \tag{3.23}\] Hence we obtain \[\lim_{k\to\infty}\widehat{F}(\boldsymbol{Z}_{k})=c\cdot\left(\frac{1}{\eta(\boldsymbol{Z})}\right)^{m_{\lambda^{o}}}\cdot\underset{\lambda\in\triangle_{+}\setminus\{\lambda^{o}\}}{\Pi}\left(\frac{\tanh(2\lambda(\boldsymbol{Z}))}{2\lambda((\operatorname{grad}\rho)(\boldsymbol{Z}))}\right)^{m_{\lambda}}. \tag{3.24}\] This fact together with the arbitrariness of \(\lambda^{o}\) implies that \(\widehat{F}\) extends to a \(C^{0,\alpha}\)-function (\(\alpha>0\)) on \(\mathfrak{a}^{d}\setminus\{\mathbf{0}\}\). Denote by \(\widehat{F}^{e}\) this extended function.
As above, since \[(\operatorname{grad}\rho)(\boldsymbol{Z})\left\{\begin{array}{ll}=\eta(\boldsymbol{Z})\boldsymbol{Z}&(\boldsymbol{Z}\in\mathfrak{a}^{d}\setminus(W\cdot\mathcal{C}\cup\{\mathbf{0}\}))\\ \in W\cdot\mathcal{C}&(\boldsymbol{Z}\in W\cdot\mathcal{C})\end{array}\right.\] and (3.22) hold, we can show \[\inf_{\boldsymbol{Z}\in W\cdot\mathcal{C}}\left|\frac{\lambda((\operatorname{grad}\rho)(\boldsymbol{Z}))}{\lambda(\boldsymbol{Z})}\right|>0\quad(\lambda\in\triangle_{+}) \tag{3.25}\] and \[\sup_{\boldsymbol{Z}\in W\cdot\mathcal{C}}\left|\frac{\lambda((\operatorname{grad}\rho)(\boldsymbol{Z}))}{\lambda(\boldsymbol{Z})}\right|<\infty\quad(\lambda\in\triangle_{+}). \tag{3.26}\] From (3.21), (3.24), (3.25) and (3.26), we can show \[\sup_{\mathfrak{a}^{d}\setminus\{\mathbf{0}\}}\widehat{F}^{e}<\infty\quad\text{ and }\quad\inf_{\mathfrak{a}^{d}\setminus\{\mathbf{0}\}}\widehat{F}^{e}>0. \tag{3.27}\] By using these facts, we shall show that \(\widehat{F}^{e}\) extends to a \(C^{0,\alpha}\)-function (\(\alpha>0\)) on the whole of \(\mathfrak{a}^{d}\). Suppose that \(\widehat{F}^{e}\) does not extend to a \(C^{0,\alpha}\)-function on the whole of \(\mathfrak{a}^{d}\), that is, \(\lim_{\boldsymbol{Z}\to\mathbf{0}}\widehat{F}^{e}(\boldsymbol{Z})\) does not exist. Then, by using (3.27), we can show that \[\lim_{\varepsilon\to 0}\left(\sup_{\boldsymbol{Z}\in S^{2}(\varepsilon)}\widehat{F}^{e}(\boldsymbol{Z})-\inf_{\boldsymbol{Z}\in S^{2}(\varepsilon)}\widehat{F}^{e}(\boldsymbol{Z})\right)\neq 0,\] where \(S^{2}(\varepsilon)\) denotes the sphere of radius \(\varepsilon\) centered at \(\mathbf{0}\) in \(\mathfrak{a}^{d}\). This contradicts the fact that \(\widehat{F}^{e}\) is of class \(C^{0,\alpha}\). Hence \(\lim_{\boldsymbol{Z}\to\mathbf{0}}\widehat{F}^{e}(\boldsymbol{Z})\) exists. This fact implies that \(\widehat{F}^{e}\) extends to a \(C^{0,\alpha}\)-function on the whole of \(\mathfrak{a}^{d}\). Denote by \(\widehat{F}^{ee}\) this extended function.
Take any positive constant \(b\). Set \(D_{b}:=\rho^{-1}([0,b])\) and \(\rho_{b}:=\rho-b\). By operating a suitable affine transformation of \(\mathfrak{a}^{d}\), \(B^{r}(\varepsilon_{b})\subset D_{b}\subset B^{r}(r\varepsilon_{b})\) holds for some \(\varepsilon_{b}>0\). Note that \(\varepsilon_{b}\to\infty\) as \(b\to\infty\). Since \(\rho_{b}\) satisfies \[\det\left(\frac{\partial^{2}\rho_{b}}{\partial x_{i}\partial x_{j}}\right)=\frac{\widehat{F}_{2}}{\widehat{F}_{1}(\operatorname{grad}\rho_{b})} \tag{3.28}\] on \(D_{b}\) and \(\rho_{b}|_{\partial D_{b}}=0\), it follows from (3.22) and Proposition 3.4 that \(\rho_{b}\) (hence \(\rho\)) is of class \(C^{2,\alpha}\) (\(\alpha>0\)) on \(B^{r}(\frac{\varepsilon_{b}}{2})\). Since \(\varepsilon_{b}\to\infty\) as \(b\to\infty\), we see that \(\rho\) is of class \(C^{2,\alpha}\) on the whole of \(\mathfrak{a}^{d}\). Furthermore, by the standard bootstrap argument, the higher regularity (\(C^{\infty}\)-property) of \(\rho\) is shown. Then \(\omega_{\rho^{h}}\) is of class \(C^{\infty}\) on the whole of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). From Proposition 3.3 and (3.21), for the normal complex coordinate \((z_{1}^{p},\cdots,z_{n}^{p})\) about \(p=\operatorname{Exp}_{o}(\boldsymbol{Z})\) (\(\boldsymbol{Z}\in W\cdot\mathcal{C}\)), we obtain the following expression of \((\omega_{\rho^{h}})_{p}^{n}\): \[\begin{aligned}(\omega_{\rho^{h}})_{p}^{n}&=(-1)^{\frac{n(n-1)}{2}}\sqrt{-1}^{n}\cdot n!\cdot\det\left(\left.\frac{\partial^{2}\rho^{h}}{\partial z_{i}^{p}\partial\bar{z}_{j}^{p}}\right|_{p}\right)\cdot(dz_{1}^{p}\wedge\cdots\wedge dz_{n}^{p}\wedge d\bar{z}_{1}^{p}\wedge\cdots\wedge d\bar{z}_{n}^{p})_{p}\\ &=(-1)^{\frac{n(n-1)}{2}}\frac{\sqrt{-1}^{n}n!c}{4^{n}}\cdot(dz_{1}^{p}\wedge\cdots\wedge dz_{n}^{p}\wedge d\bar{z}_{1}^{p}\wedge\cdots\wedge d\bar{z}_{n}^{p})_{p}.\end{aligned}\] Also, \(\Omega_{p}\) is expressed as \[\Omega_{p}=(dz_{1}^{p}\wedge\cdots\wedge dz_{n}^{p})_{p}.\] Hence we obtain \[(\omega_{\rho^{h}})_{p}^{n}=(-1)^{\frac{n(n-1)}{2}}\frac{\sqrt{-1}^{n}n!c}{4^{n}}\cdot\Omega_{p}\wedge\overline{\Omega}_{p}. \tag{3.29}\] Since this relation (3.29) holds at any point \(p\) of \(\mathrm{Exp}_{o}(W\cdot\mathcal{C})\), it follows from the \(G\)-invariance of \(\omega_{\rho^{h}}\) and \(\Omega\) that (3.29) holds for any point \(p\) of \(G\cdot\mathrm{Exp}_{o}(W\cdot\mathcal{C})(=(G^{\mathbb{C}}/K^{\mathbb{C}})_{\mathrm{reg}})\). Furthermore, since \(\omega_{\rho^{h}}^{n}\) and \(\Omega\wedge\overline{\Omega}\) are continuous on \(G^{\mathbb{C}}/K^{\mathbb{C}}\), (3.29) holds on the whole of \(G^{\mathbb{C}}/K^{\mathbb{C}}\). If \(c=2^{n}\), then we have \[(\omega_{\rho^{h}})_{p}^{n}=(-1)^{\frac{n(n-1)}{2}}\left(\frac{\sqrt{-1}}{2}\right)^{n}\cdot n!\cdot\Omega_{p}\wedge\overline{\Omega}_{p}\ \ \ \ (p\in G^{\mathbb{C}}/K^{\mathbb{C}}). \tag{3.30}\] Thus the quadruple \((\mathbf{J},\beta_{\rho^{h}},\omega_{\rho^{h}},\Omega)\) gives a \(C^{\infty}\)-Calabi-Yau structure on \(G^{\mathbb{C}}/K^{\mathbb{C}}\).
2309.06515
CaloShowerGAN, a Generative Adversarial Networks model for fast calorimeter shower simulation
In particle physics, the demand for rapid and precise simulations is rising. The shift from traditional methods to machine learning-based approaches has led to significant advancements in simulating complex detector responses. CaloShowerGAN is a new approach for fast calorimeter simulation based on a Generative Adversarial Network (GAN). We use Dataset 1 of the Fast Calorimeter Simulation Challenge 2022 to demonstrate the efficacy of the model in simulating calorimeter showers produced by photons and pions. The dataset originates from the ATLAS experiment, and we anticipate that this approach can be seamlessly integrated into the ATLAS system. This development brings a significant improvement compared to the GANs deployed by ATLAS and could offer great enhancement to the current ATLAS fast simulations.
Michele Faucci Giannelli, Rui Zhang
2023-09-12T18:44:08Z
http://arxiv.org/abs/2309.06515v2
# CaloShowerGAN, a Generative Adversarial Networks model for fast calorimeter shower simulation ###### Abstract In particle physics, the demand for rapid and precise simulations is rising. The shift from traditional methods to machine learning-based approaches has led to significant advancements in simulating complex detector responses. CaloShowerGAN is a new approach for fast calorimeter simulation based on a Generative Adversarial Network (GAN). We use Dataset 1 of the Fast Calorimeter Simulation Challenge 2022 to demonstrate the efficacy of the model in simulating calorimeter showers produced by photons and pions. The dataset originates from the ATLAS experiment, and we anticipate that this approach can be seamlessly integrated into the ATLAS system. This development marks a significant improvement compared to the GANs deployed by ATLAS and could offer substantial enhancement to the current ATLAS fast simulations. ###### Contents * 1 Introduction * 2 Input datasets * 3 Model and hyperparameters * 3.1 CaloShowerGAN * 3.2 Data preprocessing * 3.3 Training * 3.4 Intermediate results after hyperparameter optimisation * 4 Further optimisation of CaloShowerGAN * 4.1 Momentum split for photon GAN * 4.2 Layer-energy normalisation * 5 Performance of CaloShowerGAN * 5.1 Total energy * 5.2 Stability of the training * 5.3 Energy per layer * 5.4 Shower shapes * 5.5 Energy in all voxels * 5.6 Generation time * 5.7 Memory requirement * 6 Future directions * 7 Conclusions ## 1 Introduction Modern particle and nuclear physics programs require extensive, high-precision Monte Carlo (MC) simulations for modelling the response to particles that travel through the detector materials. This task is traditionally accomplished via a comprehensive detector simulation, utilising the Geant4 toolkit [1]. The most time-consuming part of the simulation process arises within the calorimeters, a sub-detector that measures energy deposits of particles.
When the initial particle interacts with the dense material in the calorimeters, it generates secondary particles. This is a cascading process that can produce thousands of particles and form what is commonly known as a calorimeter shower. The large number of particles to be simulated is the origin of the time- and resource-intensive nature of the Geant4 simulation. Overall, this aspect dominates the simulation time in collider experiments. As an example, in a typical event of a top and anti-top quark pair production simulated in the ATLAS experiment [2] at the Large Hadron Collider (LHC), the calorimeter shower simulation takes about 80 % of the total simulation time [3]. In the upcoming High Luminosity LHC program, the increased data volumes are expected to surpass the available computing capabilities for producing the necessary amount of MC events used in physics analyses [4; 5]. To uphold a consistent MC-to-data ratio, it becomes essential to substitute the calorimeter simulation with a quicker alternative. This requirement has encouraged the creation of fast and high-fidelity calorimeter simulation techniques. Numerous endeavours have been undertaken to expedite the simulation of calorimeter response while upholding satisfactory physics accuracy. The FastCaloSim method [6; 7], developed within the ATLAS Collaboration, is an example of such attempts. It involves the formulation of parameterised responses for the calorimeter, tailored to specific types of incoming particles. By employing this parametrisation, it accelerates the speed of simulating an event by approximately a factor of ten, effectively bypassing the intricate shower development process carried out by Geant4. In a novel line of research, including studies in Refs. [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], machine learning approaches using cutting-edge generative techniques are proposed for generating the calorimeter response.
In the recently developed AtlFast3 [27], the new generation of high-accuracy fast simulation in ATLAS, a combination of parametric and machine learning approaches (FastCaloSimV2 [27] and FastCaloGAN [20], respectively) is adopted to achieve optimal performance in terms of both speed and simulation accuracy across the detector's full phase space. In this context, the research community organised the Fast Calorimeter Simulation Challenge 2022 [28], hereinafter referred to as CaloChallenge. CaloChallenge is a newly introduced community challenge that aims at motivating the development of generative algorithms to address the calorimeter simulation challenge. It also provides standardised datasets and tools to facilitate the training and validation processes. In this paper, we present a fast calorimeter simulation model using the Generative Adversarial Networks (GANs) technique, named CaloShowerGAN, with a focus on utilising the first dataset provided by the CaloChallenge. Other models that participated in the CaloChallenge using this dataset, at the time of writing this paper, are documented in Refs. [18; 26]. The paper is organised as follows. Section 2 briefly describes the datasets used in this study provided by the challenge. The CaloShowerGAN model description, hyperparameter optimisation, and training procedure are detailed in Section 3. Further optimisations are described in Section 4. The results and performance of CaloShowerGAN are presented in Section 5. Future research directions are presented in Section 6, followed by conclusions in Section 7. ## 2 Input datasets The first dataset provided by the challenge is part of the ATLAS open dataset [29] used in AtlFast3 [27]. It comprises two distinct subsets representing different particle types, the photon and the charged pion sets. These data are generated by ATLAS using Geant4 with the official ATLAS detector geometry, ensuring that they accurately represent genuine electromagnetic and hadronic showers.
Each subset consists of 15 samples with different incident momenta generated at the calorimeter surface, followed by noise-free simulation to facilitate the training of accurate showers. The incident momentum of the samples ranges from 256 MeV to 4 TeV, increasing in powers of two. In the range from 256 MeV to 256 GeV, 10000 events are simulated at each energy value, while for higher momenta the available statistics decrease. An overview of these statistics is depicted in Figure 1. All events are generated within the \(|\eta|\) range of \([0.20,0.25]\), aligning with the strategy chosen by ATLAS for parameterising the complete detector response. Further insights into the reasoning behind this strategy and comprehensive sample details can be found in Ref. [27]. In each event, the spatially distributed energy deposits simulated by Geant4 are referred to as "hits". These hits are initially defined in Cartesian coordinates and subsequently transformed into cylindrical coordinates (\(r\), \(\alpha\), layer) along the particle's flying direction. Here, \(r\) is the distance of the hit from the intersection point between the extrapolation of the generated particle and the layer, while \(\alpha\) denotes the polar angle in cylindrical coordinates. The coordinate "layer" corresponds to the physical instrumented layer within the ATLAS calorimeter, indicating the extent of particle propagation from the origin of the detector. The first four layers, numbered 0-3 according to the ATLAS naming scheme, are electromagnetic calorimeters dedicated to the measurement of electromagnetic showers, while the following layers, denoted as 12, 13, and 14, are part of the hadronic calorimeters used to measure hadronic showers. Subsequently, the hits within each layer are aggregated into volumes referred to as "voxels", with the energy within a voxel being the cumulative sum of the energies contributed by all the associated hits.
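This hit-to-voxel aggregation can be sketched in plain Python (a hedged illustration: the function name and the bin edges are hypothetical and do not reproduce the actual ATLAS binning, whose dimensions are listed in Table 1):

```python
import math
from bisect import bisect_right

def voxelise(hits, r_edges, n_alpha):
    """Aggregate hits (x, y, energy) in a layer's local frame into
    (r, alpha) voxels; a voxel's energy is the sum of its hits' energies."""
    n_r = len(r_edges) - 1
    voxels = [[0.0] * n_alpha for _ in range(n_r)]
    for x, y, e in hits:
        r = math.hypot(x, y)                     # radial distance from the shower axis
        alpha = math.atan2(y, x) % (2 * math.pi)  # polar angle in [0, 2*pi)
        ir = bisect_right(r_edges, r) - 1        # radial bin index
        if 0 <= ir < n_r:                        # drop hits outside the radial range
            ia = min(int(alpha / (2 * math.pi) * n_alpha), n_alpha - 1)
            voxels[ir][ia] += e
    return voxels
```

Because each in-range hit contributes its full energy to exactly one voxel, the total voxelised energy equals the summed hit energy within the binning range.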
The number of voxels in each layer for the two subsets is summarised in Table 1. The energy deposits in voxels are used as the input for the \(\texttt{CaloShowerGAN}\) and are the information that the generative model aims to reproduce. ## 3 Model and hyperparameters \(\texttt{CaloShowerGAN}\) is designed to have a similar structure to FastCaloGAN available in Ref. [30] so that it can be easily integrated by the ATLAS collaboration. At the same time, the tool's foundation diverges notably from FastCaloGAN, leading to superior performance. This advancement is realised through revisions in training data pre-processing and adjustments in model architecture and hyperparameters. Several of these enhancements are underpinned by significant expertise in understanding the intricate details of how showers interact within the calorimeter. ### CaloShowerGAN \(\texttt{CaloShowerGAN}\) is constructed on the foundation of the Conditional Wasserstein GAN algorithm [31; 32], which has been established for delivering good performance and training stability. To effectively simulate calorimeter showers across a broad range of incident momenta spanning multiple orders of magnitude, \(\texttt{CaloShowerGAN}\) is conditioned on the true kinetic energy1 Figure 1: Statistics of Dataset 1 in the CaloChallenge under each incident momentum. \begin{table} \begin{tabular}{c r r r r} \hline \hline Particle & Layer & \(N_{r}\) & \(N_{\alpha}\) & \(N_{\text{voxel}}\) \\ \hline \(\gamma\) & 0 & 8 & 1 & 8 \\ & 1 & 16 & 10 & 160 \\ & 2 & 19 & 10 & 190 \\ & 3 & 5 & 1 & 5 \\ & 12 & 5 & 1 & 5 \\ Total & & & & 368 \\ \hline \(\pi\) & 0 & 8 & 1 & 8 \\ & 1 & 10 & 10 & 100 \\ & 2 & 10 & 10 & 100 \\ & 3 & 5 & 1 & 5 \\ & 12 & 15 & 10 & 150 \\ & 13 & 16 & 10 & 160 \\ & 14 & 10 & 1 & 10 \\ Total & & & & 533 \\ \hline \hline \end{tabular} \end{table} Table 1: Configuration of voxels in the training dataset in each layer. 
Note that photons do not penetrate significantly in the hadronic calorimeter; therefore, they comprise fewer layers. \(N_{r}\) and \(N_{\alpha}\) are the number of bins in the \(r\) and \(\alpha\) directions, respectively. \(N_{\text{voxel}}\) is the number of voxels. of the incoming particle. This variable is preferred because the characteristics of the shower are directly proportional to the logarithm of the kinetic energy of the particle rather than the momentum. The architecture of CaloShowerGAN employed in this study is depicted in Figure 2. The generator comprises three hidden layers and one output layer. Each layer incorporates a dense layer, followed by batch normalisation [33] and an activation operation. The generator receives a noise vector randomly sampled from a high-dimensional normal distribution, where each dimension has a mean and standard deviation of 0.5. The condition label of the generated event is simultaneously fed as input. The output of the generator aligns with the number of voxels and is subsequently input into the discriminator as "fake" events, along with the concatenated condition label. The "real" events are taken from the dataset outlined in Section 2, joined with their actual condition labels. The discriminator encompasses three dense layers with a ReLU activation function. No batch normalisation operation is employed, as it is found not to enhance performance. Instead, spectral normalisation [34] is adopted to stabilise the training. ### Data preprocessing The input data comprises energy deposits in voxels, measured in megaelectronvolts (MeV) and structured in an \(n\times m\) matrix. Here \(n\) represents the number of voxels, and \(m\) denotes the number of events. Furthermore, the energy of each voxel is normalised based on the kinetic energy of the particle. This is similar to what is done in FastCaloGAN. 
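As a concrete illustration of this preprocessing, the logarithmic mapping of the condition label onto \([0,1]\) given in Eq. (1) below can be sketched in a few lines; the function name and the exact \(E_{\text{min}}\)/\(E_{\text{max}}\) values are our assumptions for illustration, not taken from the released tool:

```python
import math

# Sketch of the condition-label normalisation of Eq. (1).  E_MIN and E_MAX
# (in GeV) are illustrative: 256 MeV and 256 MeV * 2**14, spanning the 15
# powers-of-two momentum points of the training grid.
E_MIN = 0.256
E_MAX = 0.256 * 2**14

def normalise_condition(e_kin):
    """Map a kinetic energy (GeV) logarithmically onto [0, 1]."""
    return math.log(e_kin / E_MIN) / math.log(E_MAX / E_MIN)

# The 15 training momenta then map onto evenly spaced labels k/14:
labels = [normalise_condition(E_MIN * 2**k) for k in range(15)]
```

With this choice the powers-of-two training grid becomes uniform in the label, which is the behaviour the logarithmic mapping is meant to provide.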
This normalisation procedure makes it possible to standardise all values within the input vector to a similar order of magnitude for all input momenta, effectively eliminating the significant difference between the momenta of the samples. In this way, the GAN can focus on reproducing the shape of the showers rather than their absolute value. The condition label is transformed to a normalised range of \([0,1]\) using the following equation: \[\hat{E}=\frac{\log\frac{E}{E_{\text{min}}}}{\log\frac{E_{\text{max}}}{E_{\text{min}}}}. \tag{1}\] Here \(E\) is the kinetic energy of the incoming particle, and \(E_{\text{min}}\) (\(E_{\text{max}}\)) is the minimum (maximum) kinetic energy of the incoming particle in the training data. It has been observed that this scheme yields a notable improvement over an alternative method employed in FastCaloGAN. ### Training The training process for CaloShowerGAN involves independent training on distinct particle samples to maximise performance. Each training employs a batch size of 1024 and runs for a total of \(10^{6}\) iterations. Model checkpoints are created at intervals of \(10^{3}\) iterations. Due to the adversarial nature of GAN training, the final iteration does not necessarily yield the best outcome. To address this, the approach of saving multiple iterations is adopted. This approach has two-fold benefits: it enables swift training without evaluation during the process, and it offers flexibility in assessing the optimal iteration using diverse strategies, obviating the need for GAN re-training. Inspired by the methodology used in FastCaloGAN, the assessment metric is the total energy associated with each of the 15 incident momentum points. The \(\chi^{2}\) value for each GAN model is computed between the binned distributions of the Geant4 Figure 2: Schematic view of the CaloShowerGAN architecture. The generator comprises three hidden layers with node counts of \(N_{1}\), \(N_{2}\), and \(N_{3}\), respectively, along with an output layer sized to match the number of voxels. 
Each layer consists of a dense layer, followed by a batch normalisation operation and an activation operation. A 1-dimensional condition representing kinetic energy is used, alongside a 50-dimensional latent vector randomly sampled from a normal distribution with both mean and standard deviation of 0.5. The generator’s output, concatenated with the condition label, is input to the discriminator. The discriminator comprises three dense layers with a ReLU activation function. Spectral normalisation operation is used within the discriminator to stabilise the training. The generator’s activation function and both networks’ sizes vary between photon and pion GANs. sample and generated sample by the model and then normalised by the number of degrees of freedom used in each distribution (\(\chi^{2}\)/NDF). The model that gives the lowest \(\chi^{2}\)/NDF among the saved iterations is deemed the best. This selection process has proven to be a reliable metric for gauging the overall quality of a shower. It is found that the shape of the generated showers consistently improves in models with lower \(\chi^{2}\)/NDF values. The GAN architecture comprises various hyperparameters that are amenable to optimisation. Beginning with the values employed in FastCaloGAN, the optimisation process encompasses the refinement of several parameters, including the learning rate and momentum for both the generator and discriminator optimisers, the batch size, the discriminator-to-generator (D/G) ratio, the \(\lambda\) that controls the penalty contribution in the Wasserstein GAN, and the choice of activation functions. The D/G ratio, quantifying the number of times the discriminator is trained relative to a single training pass of the generator in each iteration, is found to play a pivotal role. Generator and discriminator sizes, ranging from one-quarter of the determined size to as much as four times the size, are explored. 
Optimisation algorithms are also investigated beyond the commonly used Adam [35] optimiser, including RAdam [36] coupled with LookAhead [37], as well as AdamW [38]. These alternative optimisers are observed to yield sub-optimal performance when compared to Adam. Eventually, the Adam optimiser is selected for both the generator and discriminator, utilising a learning rate of \(10^{-4}\) and a momentum value of \(\beta_{1}=0.5\). These values are used for both pion and photon GANs. _Hyper-parameters for Photon CaloShowerGAN._ The utilisation of the Swish [39] activation function in the photon CaloShowerGAN yields superior performance compared to more common choices like ReLU [40]. While Swish activation can potentially introduce training instability, this drawback appears to be mitigated when coupled with the Glorot Normal [41] initialisation method for the generator neuron weights. Conversely, the ReLU activation, employed in conjunction with the He Uniform [42] initialisation, is used in the discriminator. Notably, a higher D/G ratio proves advantageous in strengthening the discriminator's potency against the generator, particularly when using the Swish activation function. The size of the networks is chosen as follows. The latent dimension is set to 100, allowing for intricate data representation. The width of the generator layers increases across the three hidden layers, from 100 to 200 to 400. This scale is twice that of the generators used in FastCaloGAN, offering a substantial increase in capacity. The discriminator size of all three hidden layers is fixed to the number of voxels, i.e. 368. The value of \(\lambda\) is chosen to be 3. _Hyper-parameters for Pion CaloShowerGAN._ The pion CaloShowerGAN employs the ReLU activation due to its superior performance compared to Swish. Both the latent dimension and the size of the generator layers have been carefully optimised. The latent dimension is chosen to be 200. 
The width of the generator layers is tailored to be 200, 400, and 800, respectively. Note the dimensions are larger than those of the photon CaloShowerGAN to account for the larger voxel count employed for pions, along with the intrinsically higher complexity and variety of hadronic showers. Larger networks are tested but fail to yield substantial gain and significantly extend the training time, hence they are not considered. The optimal discriminator size mirrors that of the generator, although it assumes a distinct configuration from the one employed for photons. The discriminator sizes sequentially progress from 533 voxels in the input layer to 800, 400, and 200 in subsequent layers. Maintaining a low D/G ratio and a relatively high value of \(\lambda\) is found to be optimal. The introduction of batch normalisation does not emerge as a crucial factor in improving the performance, but it is applied to be consistent with the photon CaloShowerGAN. An overview of the hyperparameters used in the photon and pion CaloShowerGAN is shown in Table 2. ### Intermediate results after hyperparameter optimisation The performance of the selected CaloShowerGAN is presented in Figures 3-4 where the distribution of the total energy for generated events and the input Geant4 sample are compared for all momentum points for photons and pions, respectively. Each pad contains the \(\chi_{i}^{2}\)/NDF\({}_{i}\) for the specific momentum \(i\) while the total \(\chi^{2}\)/NDF is displayed at bottom right, calculated as \(\Sigma_{i}\chi_{i}^{2}/\Sigma_{i}\text{NDF}_{i}\). On the whole, CaloShowerGAN yields notably improved results in comparison to the similar distributions presented in FastCaloGAN [27]. The results attained for pions exhibit remarkable agreement across all momentum points, yielding a total \(\chi^{2}\)/NDF value of \(1.9\)2. 
Only a few distributions show small deviations from the Geant4 distributions, and notably, there is no pronounced distinction in modelling either high or low momenta. The level of agreement achieved for the photon GAN falls short, manifesting as a total \(\chi^{2}\)/NDF value of 3.7, nearly twice that achieved for pions. There is a clear trend in the values of the individual \(\chi^{2}\)/NDF, which are worse for the higher momenta. The agreement worsens above 262 GeV with a visible shift in the generated distributions. These deviations are likely caused by the difficulty in reproducing the asymmetric distribution of the total energy. These pitfalls motivate the further development below. Footnote 2: Our studies have revealed that the \(\chi^{2}\)/NDF value can exhibit an error of approximately 0.1 as a result of variations stemming from random seed choices. CaloShowerGAN represents a significant improvement not only in terms of physics performance but also in training speed when compared to FastCaloGAN. The optimised CaloShowerGAN requires approximately 6 hours of training, while FastCaloGAN required over 16 hours on a GPU card with a similar computational setup. 
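The \(\chi^{2}\)/NDF-based checkpoint selection described in Section 3.3 can be sketched as follows; the Pearson-style \(\chi^{2}\) definition, the histogram pairing and the function names are our assumptions for illustration:

```python
import numpy as np

def chi2_ndf(ref_counts, gen_counts):
    """Pearson-style chi2 between two binned distributions; NDF counts only
    non-empty bins, matching the convention described in the text."""
    ref = np.asarray(ref_counts, dtype=float)
    gen = np.asarray(gen_counts, dtype=float)
    mask = (ref + gen) > 0
    chi2 = np.sum((ref[mask] - gen[mask]) ** 2 / (ref[mask] + gen[mask]))
    return float(chi2), int(mask.sum())

def select_best_checkpoint(checkpoints):
    """checkpoints: {iteration: [(ref_hist, gen_hist), ...]}, one histogram
    pair per incident momentum point.  Returns the iteration whose total
    chi2 / total NDF over all momentum points is lowest."""
    def total(pairs):
        stats = [chi2_ndf(r, g) for r, g in pairs]
        return sum(c for c, _ in stats) / sum(n for _, n in stats)
    return min(checkpoints, key=lambda it: total(checkpoints[it]))
```

Summing \(\chi^{2}\) and NDF separately before dividing reproduces the \(\Sigma_{i}\chi_{i}^{2}/\Sigma_{i}\text{NDF}_{i}\) aggregation used for the total figure of merit.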
\begin{table} \begin{tabular}{l c c} \hline \hline Hyperparameter & Photon & Pion \\ \hline Latent space size & 100 & 200 \\ Generator size (\(N_{1},N_{2},N_{3}\)) & 100, 200, 400 & 200, 400, 800 \\ Discriminator size (\(N_{4}\), \(N_{5}\), \(N_{6}\)) & 368, 368, 368 & 800, 400, 200 \\ Generator optimiser & Adam & Adam \\ Learning rate & \(1\times 10^{-4}\) & \(1\times 10^{-4}\) \\ \(\beta_{1}\) & 0.5 & 0.5 \\ \(\beta_{2}\) & 0.999 & 0.999 \\ Discriminator optimiser & Adam & Adam \\ Learning rate & \(1\times 10^{-4}\) & \(1\times 10^{-4}\) \\ \(\beta_{1}\) & 0.5 & 0.5 \\ \(\beta_{2}\) & 0.999 & 0.999 \\ Batch size & 1024 & 1024 \\ D/G ratio & 8 & 5 \\ \(\lambda\) & 3 & 20 \\ Activation (generator) & Swish & ReLU \\ Activation (discriminator) & ReLU & ReLU \\ Neuron weight initialisation (generator) & Glorot Normal & He Uniform \\ Neuron weight initialisation (discriminator) & He Uniform & He Uniform \\ Trainable parameters (generator, discriminator) & 261k, 408k & 871k, 829k \\ \hline \hline \end{tabular} \end{table} Table 2: Determined hyperparameter values for the photon and pion GANs. Figure 3: **Photon** CaloShowerGAN. The calorimeter response for the Geant4 simulation (solid black line) compared to CaloShowerGAN (dashed red line) for photons in the full momentum range. The \(\chi^{2}\) values in each sub-panel are calculated from the distributions in that incident energy and the final \(\chi^{2}\) is calculated from the concatenation of the histograms in all energies. Figure 4: **Pion** CaloShowerGAN. The calorimeter response for the Geant4 simulation (solid black line) compared to CaloShowerGAN (dashed red line) for pions in the full momentum range. The \(\chi^{2}\) values in each sub-panel are calculated from the distributions in that incident energy and the final \(\chi^{2}\) is calculated from the concatenation of the histograms in all energies. 
## 4 Further optimisation of CaloShowerGAN ### Momentum split for photon GAN As depicted in Figure 3, the performance of the photon CaloShowerGAN reveals a dependence on the photon momentum, characterising three distinct momentum regions. These regions are distinguished by specific features of the electromagnetic showers within the ATLAS calorimeter, which can pose challenges to the training process of CaloShowerGAN: 1. In the low-momentum region, i.e. for momenta up to 4 GeV, particles deposit almost all their energy in the initial two layers of the calorimeter. In the remaining layers, the voxels have minimal or negligible energy deposits, accompanied by significant event-to-event fluctuations. These fluctuations, absent in higher-momentum regions where all voxels are populated, can potentially confuse CaloShowerGAN during the learning process. 2. In the medium-momentum range between 8 GeV and 262 GeV, the energy is deposited predominantly in layer 2, as this is the layer with the largest amount of material. This presents a contrasting scenario compared to the lower momentum range. Here, the first two layers, while containing some energy, contribute insignificantly to the overall energy deposit. 3. Lastly, the samples with a momentum above 262 GeV are characterised by an asymmetric response in the total energy. This asymmetry is attributable to the shower extending beyond the confines of the voxel volume and is compounded by non-linearities in the calorimeter's response, which are more significant at higher energy levels. The mixture of events from these three distinct momentum groups during training introduces complexity to the learning process. Thus, the generation of photon showers in CaloShowerGAN is split across three GANs, initialised with the previously outlined parameters, within the energy intervals of [256 MeV, 4 GeV], [4 GeV, 262 GeV], and [262 GeV, 4 TeV]. These GANs share a common momentum point to allow seamless interpolation across all momentum points. 
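At inference time, an incoming photon then has to be dispatched to the GAN covering its energy; a hypothetical routing sketch (the GAN names are ours, and the shared edge points may be served by either neighbouring GAN):

```python
# Boundary values follow the intervals quoted in the text:
# [256 MeV, 4 GeV], [4 GeV, 262 GeV], [262 GeV, 4 TeV] (here in GeV).
PHOTON_RANGES = [
    (0.256, 4.0, "gan_low"),
    (4.0, 262.0, "gan_mid"),
    (262.0, 4096.0, "gan_high"),
]

def route_photon(e_kin_gev):
    """Return the (hypothetical) name of the GAN covering this kinetic energy."""
    for lo, hi, name in PHOTON_RANGES:
        if lo <= e_kin_gev <= hi:
            return name
    raise ValueError(f"{e_kin_gev} GeV lies outside the trained range")
```

Because the ranges share their edge points, a boundary energy simply resolves to the first matching GAN.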
An additional limited hyperparameter scan is conducted to fine-tune the three GANs. The outcomes affirm that the majority of the employed hyperparameters are indeed optimal for all three GANs, requiring only minor adjustments to achieve improved performance. In the low-momentum range, the ReLU activation function replaces Swish, while the He Uniform initialiser replaces Glorot Normal. For the two GANs trained in the higher momentum ranges, superior results are achieved using a downscaled generator network along with a latent space size reduced by half compared to the single GAN configuration. The adoption of this approach yields a clear enhancement, evidenced by the lowered \(\chi^{2}\)/NDF values for each individual GAN as compared to the single GAN's value of 3.7. In the low-momentum GAN, \(\chi^{2}\)/NDF values across all momentum points are improved, with the exception of the 4 GeV sample where a visible distribution shift contributes to a raised \(\chi^{2}\)/NDF. In the medium-momentum range, all momentum points exhibit a reduced \(\chi^{2}\)/NDF compared to the corresponding values obtained using the single GAN. However, the most substantial improvement is observed within the high-momentum range, where the previously observed disparity between generated events and input samples is effectively mitigated. The \(\chi^{2}\)/NDF results are summarised in Table 3 where a further result is derived using two GANs. While this alternative offers decreased precision compared to the three-GAN approach, it might still be valuable for an experiment to consider, as it demands shorter training times and less memory during detector simulation when conducting inference. This approach is not applicable to pions, primarily due to the inherent characteristics of hadronic showers. Even at lower energies, hadronic showers tend to distribute some energy across all layers. 
Consequently, splitting pion training into multiple GANs results in a compromised overall performance and is therefore not recommended. \begin{table} \begin{tabular}{c c c c c} \hline \hline Name & momentum range & \(\chi^{2}\) & NDF & \(\chi^{2}\)/NDF \\ \hline \hline Single GAN & [256 MeV, 4 TeV] & 1553 & 420 & 3.7 \\ \hline \multirow{6}{*}{Two GANs} & [256 MeV, 4 GeV] & 360 & 140 & 2.6 \\ & [4 GeV, 4 TeV] & 1042 & 308 & 3.4 \\ \cline{1-1} & Sum & 1402 & 448 & 3.1 \\ \hline \multirow{6}{*}{Three GANs} & [256 MeV, 4 GeV] & 360 & 140 & 2.6 \\ & [4 GeV, 262 GeV] & 528 & 196 & 2.7 \\ \cline{1-1} & [262 GeV, 4 TeV] & 299 & 140 & 2.1 \\ \cline{1-1} \cline{2-5} & Sum & 1187 & 476 & 2.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of the photon CaloShowerGAN when splitting in different momentum ranges. The summed \(\chi^{2}\) and NDF are the sums of these values for the GANs considered. ### Layer-energy normalisation Despite the enhanced results stemming from the 3-GAN approach in the photon GAN, the overall performance still falls short of that demonstrated by CaloFlow [18], which currently stands as the leading model. At the time of writing, CaloFlow attains a remarkable \(\chi^{2}\)/NDF value of 1.17 for photons and 1.32 for pions. In pursuit of further enhancements, various strategies were explored. The most significant improvement is achieved by adopting a different normalisation strategy used in Ref. [43] for the input data used during CaloShowerGAN training. Additional studies were performed and some of them are detailed in Section 6. The normalisation detailed in Section 3.2 lacks constraints on pivotal physics quantities, such as layer-specific energy and total energy. While this information is inherently present in the data, not explicitly providing it to the GANs makes the learning process more challenging than when clear constraints are provided as part of the inputs. 
This information can be encoded using the following normalisation procedure: Firstly, the voxel energy is normalised with respect to the total energy in the corresponding layer. Subsequently, the energy in each layer is normalised to the total energy in the shower, resulting in 5 (7) new input dimensions for the photon (pion) GANs. Another input is incorporated, representing the ratio between the total shower energy and the kinetic energy of the incident particle. Adopting this normalisation strategy requires modifications to the final generator layer. The output nodes are grouped based on the voxel count in each layer, and a SoftMax activation function is applied to each group to enforce layer-wise normalisation. Similarly, 5 (7) nodes, corresponding to the total deposits of photons (pions) in each layer, are grouped, employing another SoftMax activation. The last node, linked to the normalised total deposits in the entire calorimeter, employs a ReLU activation function. In the discriminator, only the input layer size is altered to accommodate the additional values. In the case of pions, it is worth noting that longer training times can be particularly beneficial; actually, pions with lower momenta continue to show significant learning improvements even after 1 million iterations. Therefore, for pions, the number of iterations used for the training is extended to 2 million, therefore doubling the training time to 12 hours. As a result, the cumulative training time for CaloShowerGAN is approximately 30 hours, which remains less than the 32 hours used for FastCaloGAN. ## 5 Performance of CaloShowerGAN ### Total energy The results of CaloShowerGAN with the layer-energy normalisation are depicted in Figures 5-6. By employing layer-energy normalisation, a substantial improvement in \(\chi^{2}\)/NDF is observed. 
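The layer-energy normalisation described in Section 4.2 can be sketched as below; the function name, the flat-vector layout and the uniform fallback for empty layers are our assumptions for illustration:

```python
import numpy as np

def layer_energy_normalise(voxels, layer_sizes, e_kin):
    """Re-encode a shower as (per-layer voxel fractions, per-layer energy
    fractions, total-energy / kinetic-energy ratio)."""
    voxels = np.asarray(voxels, dtype=float)
    voxel_fracs, layer_energies = [], []
    start = 0
    for n in layer_sizes:
        layer = voxels[start:start + n]
        e_layer = layer.sum()
        # voxel energies normalised to the energy of their own layer
        # (uniform fallback if the layer is empty)
        voxel_fracs.append(layer / e_layer if e_layer > 0 else np.full(n, 1.0 / n))
        layer_energies.append(e_layer)
        start += n
    layer_energies = np.asarray(layer_energies)
    e_tot = layer_energies.sum()
    return np.concatenate(voxel_fracs), layer_energies / e_tot, e_tot / e_kin
```

Each per-layer voxel group and the layer-fraction vector sum to one by construction, which is exactly the constraint the grouped SoftMax activations in the generator output enforce.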
In particular, the performance achieved by CaloShowerGAN on the pion dataset is significant, reaching performance comparable to that of CaloFlow [18] in terms of the \(\chi^{2}\)/NDF metric. Notably, it is essential to recognise that there exists a slight disparity in the definition of \(\chi^{2}\)/NDF between the two studies. While in this study the actual number of degrees of freedom (NDF) is adopted by excluding empty bins, CaloFlow incorporates the total number of bins in the normalisation. Considering the proximity of the sum of all \(\chi^{2}\) values, we conclude that the two models exhibit closely comparable performance. Detailed comparisons using more complex metrics, such as a classifier, are the objective of the CaloChallenge, where both models are actively participating. Therefore, we defer further quantitative comparisons beyond the scope of the \(\chi^{2}\)/NDF metric to the CaloChallenge, where these considerations are appropriately addressed in a consistent and fair way for all models. The results obtained from the 3 GANs used to simulate the full momentum range of the photon dataset are summarised in Table 4. While CaloShowerGAN does not quite match the performance level of CaloFlow, it still attains commendable accuracy. This achievement holds substantial promise for enhancing the efficiency of GANs employed by ATLAS. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Name & momentum range & \(\chi^{2}\) & NDF & \(\chi^{2}\)/NDF \\ \hline Photon single GAN & [256 MeV, 4 TeV] & 1131 & 420 & 2.7 \\ \hline & [256 MeV, 4 GeV] & 160 & 140 & 1.1 \\ & [4 GeV, 4 TeV] & 703 & 308 & 2.3 \\ Photon two GANs & Sum & 863 & 448 & 1.9 \\ \hline & [256 MeV, 4 GeV] & 160 & 140 & 1.1 \\ & [4 GeV, 262 GeV] & 450 & 196 & 2.3 \\ & [262 GeV, 4 TeV] & 235 & 140 & 1.7 \\ Photon three GANs & Sum & 845 & 476 & 1.8 \\ \hline \hline Pion GAN 1M & [256 MeV, 4 TeV] & 628 & 420 & 1.5 \\ \hline Pion GAN 2M & [256 MeV, 4 TeV] & 542 & 420 & 1.3 \\ \hline \hline \end{tabular} \end{table} Table 4: CaloShowerGAN with layer-energy normalisation. The summed \(\chi^{2}\) and NDF are the sums of these values for the GANs considered. Figure 5: **Photon CaloShowerGAN with layer-energy normalisation.** The calorimeter response for the Geant4 simulation (solid black line) compared to CaloShowerGAN (dashed red line) for photon samples in the three momentum ranges using the layer-energy normalisation. The \(\chi^{2}\) values in each sub-panel are calculated from the distributions in that incident energy and the final \(\chi^{2}\) is calculated from the concatenation of the histograms in all energies. Figure 6: **Pion CaloShowerGAN with layer-energy normalisation.** The calorimeter response for the Geant4 simulation (solid black line) compared to CaloShowerGAN (dashed red line) for pions in the full momentum range using the layer-energy normalisation. The \(\chi^{2}\) values in each sub-panel are calculated from the distributions in that incident energy and the final \(\chi^{2}\) is calculated from the concatenation of the histograms in all energies. ### Stability of the training The training of a GAN, being an adversarial game between the two networks, is not a well-defined minimisation problem. 
In general, a longer training should produce a better result; however, as stated previously, the optimal GAN may not always come from the final iteration of the training due to the adversarial nature of the process. Therefore, the behaviour of \(\chi^{2}\)/NDF over the saved iterations is examined. The progression of \(\chi^{2}\)/NDF throughout iterations is visually represented in Figure 7 and Figure 8 for photons and pions, respectively. Evidently, the \(\chi^{2}\)/NDF values for both individual energies and the aggregate exhibit a decreasing trend as a function of the iteration and reach a relatively stable value toward the end of the training, indicating the model is progressively improving during training. It is interesting to observe that the best overall model, i.e. the one that produces the best total \(\chi^{2}\)/NDF, may not coincide with the optimal iteration for a specific momentum. The present selection procedure therefore balances performance across all momentum points, representing a compromise solution. In general, the final selection is driven by momentum points that are relatively less accurately modelled, as these contribute the most to the overall total \(\chi^{2}\)/NDF. In the case of pions, a notable distinction emerges in the progression of \(\chi^{2}\)/NDF across various momentum points. Higher momentum points tend to converge more rapidly towards a stable solution, while lower momentum points exhibit a relatively slower convergence in \(\chi^{2}\)/NDF. The longer training used for the pions ensures that good convergence is achieved for all energies. ### Energy per layer While a satisfactory concordance in the total energy of showers might suggest success, it does not necessarily ensure the GANs' ability to replicate the complex structure of these showers. To validate this point, the energy distributions across each calorimeter layer are shown in Figure 9 and Figure 10. 
Good agreement with the Geant4 samples is observed when samples of all incident momentum points are merged together. In the case of photons depicted in Figure 9, layer 0 poses a slight challenge for modelling, as the GANs slightly overshoot the intended energy deposits. It is important to highlight that this effect is relatively minor, considering the logarithmic scale of the plots. In fact, it affects less than 1% of the events. Moreover, the effect on physics performance is limited, as ATLAS electron and photon reconstructions primarily rely on layers 1 and 2. For pions in Figure 10, a bimodal structure emerges in several layers for momenta below \(1\,\mathrm{GeV}\), which is a feature that GANs encounter difficulty in reproducing. This phenomenon arises from situations where pions initially deposit a small, consistent amount of energy in the layers before the hadronic shower starts. This energy is compatible with deposits of a minimum ionising particle (MIP). While the GANs struggle to accurately model this aspect, this discrepancy is unlikely to impact physics outcomes either. The good agreement in the deeper layers and the high-momentum tails of the distribution serves as a strong indicator that CaloShowerGAN is capable of delivering exceptional performance, effectively contributing to achieving favourable outcomes in physics applications. Figure 8: **Pion CaloShowerGAN, with layer-energy normalisation.** Evolution of the individual and total \(\chi^{2}\) as a function of iteration in pions. The first \(N-1\) panels show the individual \(\chi^{2}\)/NDF for a momentum point and the last panel shows the total \(\chi^{2}\)/NDF. The selected iteration is indicated by a red dot while the orange dots in the individual cases indicate the lowest \(\chi^{2}\)/NDF when only considering that momentum. Figure 9: Energy deposit in each calorimeter layer for photons, summed over all incident energies. 
Figure 10: Energy deposit in each calorimeter layer for pions, summed over all incident energies. ### Shower shapes An additional validation to assess the performance of \(\mathtt{CaloShowerGAN}\) involves examining the shape of the showers across various layers. This requires evaluating the position of the shower centre and its width within each layer, in both the \(\phi\) and \(\eta\) directions. The shower centre is defined as follows: \[\langle\eta_{l}\rangle=\frac{\sum_{i}\left(E_{i}\odot H\right)}{E_{l}}\quad \text{and}\quad\langle\phi_{l}\rangle=\frac{\sum_{i}\left(E_{i}\odot F\right)}{ E_{l}}, \tag{2}\] where \(E_{i}\) represents the energy in the \(i\)-th voxel in layer \(l\), and \(H\), \(F\) are the position along \(\eta\) and \(\phi\) direction, respectively. Additionally, \(E_{l}=\sum_{i}E_{i}\) represents the energy in layer \(l\). The symbol \(\odot\) corresponds to the Hadamard product, while \(\sum\) denotes a summation across all elements in layer \(l\). The widths are defined as: \[\sigma_{l}^{\eta}=\sqrt{\frac{\sum_{i}\left(E_{i}\odot H^{2}\right)}{E_{l}}- \left(\langle\eta_{l}\rangle\right)^{2}}\quad\text{and}\quad\sigma_{l}^{\phi} =\sqrt{\frac{\sum_{i}\left(E_{i}\odot F^{2}\right)}{E_{l}}-\left(\langle\phi_{ l}\rangle\right)^{2}} \tag{3}\] It is important to note that these shapes are meaningful to compute in layers featuring multiple bins in the angular (\(\alpha\)) direction and become not-well-defined in layers that possess only one bin along the angular direction. The distributions of shape characteristics from \(\mathtt{CaloShowerGAN}\) are depicted in Figures 11-12 and Figures 13-14 for photons and pions, respectively. These distributions include all incident momenta. In general, the showers generated by \(\mathtt{CaloShowerGAN}\) effectively replicate the attributes found in the Geant4 distribution of shower centres. 
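The shape observables of Eqs. (2)-(3) reduce to energy-weighted first and second moments of the voxel positions; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def shower_centre_and_width(E, eta, phi):
    """Energy-weighted centre and width of the shower in one layer,
    following Eqs. (2)-(3); E, eta, phi are per-voxel energies and
    positions along the two directions."""
    E, eta, phi = (np.asarray(a, dtype=float) for a in (E, eta, phi))
    e_layer = E.sum()
    mean_eta = np.sum(E * eta) / e_layer
    mean_phi = np.sum(E * phi) / e_layer
    sigma_eta = np.sqrt(np.sum(E * eta ** 2) / e_layer - mean_eta ** 2)
    sigma_phi = np.sqrt(np.sum(E * phi ** 2) / e_layer - mean_phi ** 2)
    return (mean_eta, mean_phi), (sigma_eta, sigma_phi)
```

The elementwise products play the role of the Hadamard products in Eqs. (2)-(3), summed over all voxels of the layer.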
Regarding the shower width, in the case of photons, the agreement demonstrates considerable fidelity, with only minor deviations observed. The width of pion showers, however, displays a more notable discrepancy between \(\mathtt{CaloShowerGAN}\) and Geant4. Specifically, \(\mathtt{CaloShowerGAN}\) struggles to accurately reproduce extremely narrow showers. These come from the events with a MIP deposit observed in Figure 10. Of particular interest, a spurious peak within the distribution of width for layers 12 and 13 is observed in the pion GAN. Further investigation indicates that this phenomenon arises from events within low-momentum samples, as shown in Figure 15, which depicts the width along \(\eta\) in layers 12 and 13 for various momentum points. This peak is visible exclusively in the 512 MeV sample and vanishes entirely at 8 GeV. While this represents a limitation in \(\mathtt{CaloShowerGAN}\), the low energy deposits in the affected layers suggest that any potential influence Figure 11: **Photon \(\mathtt{CaloShowerGAN}\) with layer-energy normalisation**: shower centre position compared to Geant4 simulation (solid area). All energies accumulated in layers 1 (top) and 2 (bottom) along \(\eta\) (left) and \(\phi\) (right). on physics performance would be nearly negligible. On the other hand, CaloShowerGAN excels at modelling showers with high momenta. This capability holds promise for good jet substructure reconstruction by effectively mitigating challenges related to clustering and the undesired merging of adjacent clusters that should remain separate. However, to fully assess these potential advantages, comprehensive simulations and reconstructions within an authentic detector framework are required, which is beyond the scope of this paper. Figure 12: **Photon CaloShowerGAN with layer-energy normalisation**: shower width compared to Geant4 simulation (solid area). 
All energies accumulated in layers 1 (top) and 2 (bottom) along \(\eta\) (left) and \(\phi\) (right). Figure 13: **Pion CaloShowerGAN with layer-energy normalisation**: shower centre position compared to Geant4 simulation (solid area). All energies accumulated in layers 1 (first row), 2 (second row), 12 (third row) and 13 (bottom) along \(\eta\) (left) and \(\phi\) (right). Figure 14: **Pion CaloShowerGAN with layer-energy normalisation**: shower width compared to Geant4 simulation (solid area). All energies accumulated in layers 1 (first row), 2 (second row), 12 (third row) and 13 (bottom) along \(\eta\) (left) and \(\phi\) (right). Figure 15: **Pion CaloShowerGAN with layer-energy normalisation**: shower width compared to Geant4 simulation (solid area) in different incident energies of 512 MeV, 4 GeV, 8 GeV, 65 GeV and 1 TeV in layers 12 (top) and 13 (bottom). ### Energy in all voxels The efficacy of a model can also be evaluated by considering low-level variables such as the energy in each voxel, in contrast to the high-level observables examined in previous sections. The distribution of energy across all voxels from all samples, depicted in Figure 16 for photons and pions, reveals an impressive accord between CaloShowerGAN and Geant4. Furthermore, this favourable alignment is observed in individual incident momentum samples, as illustrated in Figure 17. ### Generation time The generation time for the GANs is assessed on CPUs and is summarised in Table 5 for different particles and momenta. A single event is generated, mirroring the common scenario at the LHC where events typically encompass only a few particles within the small \(|\eta|\) range. As expected, the generation time does not depend on the energy of the particle to be simulated but depends on the particle type due to the different complexity of the networks. Therefore, the generation of pions demands relatively more time due to the larger network. 
The per-particle generation time can be reduced by increasing the batch size, which refers to the number of particles generated simultaneously, as indicated in Table 6. Although this strategy is presently not applicable within the ATLAS fast simulation framework, it could bring advantages if the experiment changes its strategy to parametrising the detector with larger \(\eta\) slices. For instance, wider regions could be defined to encompass the entire Barrel region (\(|\eta|<1.2\)) or the entire EndCap region (\(1.5<|\eta|<3\)). Such an approach would enable the simultaneous simulation of many particles within these regions, particularly when simulating a jet containing a spray of hadrons, which are currently all simulated using the pion GAN in AtlFast3. Figure 16: **Photon (left) and pion (right) \(\mathtt{CaloShowerGAN}\) with layer-energy normalisation**: distribution of the voxel energy. These distributions include contributions from all energies and all layers. Figure 17: **Pion \(\mathtt{CaloShowerGAN}\) with layer-energy normalisation**: distribution of the voxel energy of pions of 512 MeV (top left), 32 GeV (top right) and 2 TeV (bottom). These distributions include contributions from all layers. ### Memory requirement It is crucial to ensure that the \(\mathtt{CaloShowerGAN}\) model is small enough to be usable. Assessing memory consumption is complex and depends on the chosen inference tool within the production system. Here the memory usage is estimated through the LWTNN library [44], which is the inference tool used in AtlFast3. LWTNN operates by storing weights in a JSON file, subsequently loaded into memory. 
Although the size of this file does not directly translate to the exact memory demand, as optimisations can be made and there are associated overheads, the ratio between these file sizes offers a sense of the additional memory required by \(\mathtt{CaloShowerGAN}\) due to the enlarged networks and the additional GANs employed for photons (and for electrons, which share the identical network structure, as in FastCaloGAN). The sizes of the JSON files for FastCaloGAN are approximately 3 MB for photons and 3.9 MB for pions, whereas in the case of \(\mathtt{CaloShowerGAN}\) the networks necessitate 6.2 MB for photons and 5.6 MB for pions. The value for photons is multiplied by 3 for optimal performance, though it could be scaled down by a factor of 2, with a minor reduction in quality, when employing only two GANs. In total, approximately 43 MB would be needed to parametrise a detector slice with \(\mathtt{CaloShowerGAN}\), including a pion GAN alongside 3 GANs each for electrons and photons. This constitutes roughly four times the memory demand of FastCaloGAN. Scaling the estimation to the entire detection range, a preliminary approximation of the actual memory needed by \(\mathtt{CaloShowerGAN}\) can be derived based on the numbers provided in Ref. [20]. Here the assumption is made that \(\mathtt{CaloShowerGAN}\) would be integrated into the ATLAS Athena framework as a replacement for FastCaloGAN. An additional assumption is that the previously mentioned scaling in the size of the networks is the same for all detector regions. The publication states that a pure parameterisation with FastCaloGAN requires 2.5 GB of memory. Given that the ATLAS Athena framework [45] consumes approximately 2 GB on its own, it can be inferred that the memory footprint of the pure FastCaloGAN parameterisation for the entire detector range is approximately 0.5 GB. 
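The slice-level bookkeeping above can be reproduced directly from the quoted JSON sizes (a sketch of the arithmetic; the assumption that FastCaloGAN uses one GAN per particle type is ours):

```python
# JSON weight-file sizes quoted in the text, in MB.
PHOTON_GAN = 6.2    # one photon GAN; three are used for optimal performance
PION_GAN = 5.6
ELECTRON_GAN = PHOTON_GAN  # electrons share the photon network structure

def slice_memory_mb(n_photon_gans=3, n_electron_gans=3):
    """Memory for one detector slice: pion GAN plus photon/electron GANs."""
    return PION_GAN + n_photon_gans * PHOTON_GAN + n_electron_gans * ELECTRON_GAN

caloshowergan_slice = slice_memory_mb()          # ~43 MB, as quoted in the text
fastcalogan_slice = 3.0 + 3.0 + 3.9              # photon + electron + pion
ratio = caloshowergan_slice / fastcalogan_slice  # roughly four times larger
```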
Consequently, adopting \(\mathtt{CaloShowerGAN}\) in place of FastCaloGAN would necessitate roughly 2 GB for the parameterisation, resulting in a total memory requirement of 4 GB for a fast simulation task, comfortably fitting within the memory capacity of the computing system used for ATLAS simulation jobs. Therefore, \(\mathtt{CaloShowerGAN}\) could be seamlessly deployed without major modifications within the ATLAS production system. Additionally, the memory footprint could be further reduced by utilising the ONNX library [46] or other optimised inference libraries. ## 6 Future directions The performance of \(\mathtt{CaloShowerGAN}\) holds potential for improvement through addressing the visible discrepancies presented in this paper. For example, the response of low-momentum pions could be improved to address the shower width, which is not well described in some calorimeter layers. \(\mathtt{CaloShowerGAN}\) also struggles to accurately replicate pion interactions as minimum ionising particles. This could be addressed by categorising events depending on the starting position of the hadronic shower. However, implementing this solution is not straightforward, as it requires large training statistics for each category and multiple GANs to be used in the inference stage. \begin{table} \begin{tabular}{c c} \hline \hline Batch size & Time per batch [ms] \\ \hline 1 & 6.3(3) \\ 10 & 6.8(15) \\ 100 & 8.2(15) \\ 1000 & 14.4(21) \\ 10000 & 70.8(48) \\ \hline \hline \end{tabular} \end{table} Table 6: Generation time for pions of 65 GeV with various batch sizes. Results are averaged over 100 trials and the standard deviations are taken as errors, measured on i9-Intel(R) Core(TM) 9900K CPU @ 3.60GHz. \begin{table} \begin{tabular}{c c c} \hline \hline Particle & Energy & Time [ms] \\ \hline Photons & 1 GeV & 4.5(2) \\ & 65 GeV & 4.9(2) \\ & 1 TeV & 4.1(2) \\ \hline Pions & 1 GeV & 6.2(2) \\ & 65 GeV & 6.3(3) \\ & 1 TeV & 6.3(3) \\ \hline \hline \end{tabular} \end{table} Table 5: Generation time for a single particle. 
Results are averaged over 100 trials and the standard deviations are taken as errors, measured on i9-Intel(R) Core(TM) 9900K CPU @ 3.60GHz. Likewise, although significant progress has been achieved, the precision of the photon GANs still falls short of the levels achieved by other models. In particular, the medium-momentum range stands out as the region where the modelling is comparatively less accurate. Therefore, any future research efforts that prioritise this specific region may lead to a refinement of the electromagnetic shower simulation and further enhance the capabilities of the photon GANs. Considering that some of the GANs in \(\mathtt{CaloShowerGAN}\) have achieved a low \(\chi^{2}\)/NDF close to 1, future research could involve the development of a more sophisticated figure of merit to select the optimal iteration. This necessity arises because multiple iterations yield similar \(\chi^{2}\)/NDF values for the total energy distributions but differ in their ability to describe the shapes of the simulated events. One potential solution would be expanding the histograms considered in the \(\chi^{2}\)/NDF calculation, including not only the total energy but also the shapes and/or the energy distributions in each layer. However, it is essential to acknowledge that this solution introduces computational complexity; in particular, assessing the shapes requires expensive computations. Moreover, this assessment must be repeated for all generated events at every energy point for each iteration. Hence it was not employed in the present work. Adopting a different figure of merit may also require a re-evaluation of the optimal training iteration count for the GANs. While an extended training is already adopted for pions, this adjustment could also serve as a straightforward approach to further enhance the performance of all GANs. A natural expansion of \(\mathtt{CaloShowerGAN}\), as highlighted in Section 5.6, involves the potential to parametrise a wider detector range using a single GAN. 
This advancement could be realised by incorporating an additional conditional parameter into the GAN input layer, specifically the \(\eta\) value of the incident particle. Although it is not feasible to test this on the CaloChallenge dataset due to the absence of this parameter, we maintain confidence that this strategy would prove successful, provided the responses across different \(\eta\) values are comparable. Although a single GAN might not adequately parametrise the entire detector range for a complex system like ATLAS, we anticipate that a modest estimate of around 10 GANs could potentially replace the current deployment of 100 GANs in AtlFast3. Despite the challenges introduced by the increased data volume and problem complexity, this would simplify the overall system and reduce the resources demanded during simulation. The current approach, employing fixed momentum points with substantial statistics per point, has proven highly effective for GANs. FastCaloGAN in ATLAS already demonstrated the efficacy of interpolating between momentum points, thereby satisfying the experiment's physics requirements. An alternative strategy involves training on a continuous momentum distribution, similar to other datasets in the CaloChallenge. While this approach promises improved momentum interpolation, it introduces a challenge in selecting the optimal training iteration. A new metric would be required to replace the conventional \(\chi^{2}\)/NDF. Additional explorations and studies will be necessary when training with this type of dataset. While refining \(\mathtt{CaloShowerGAN}\), various data manipulations and training strategies employed by other models in the CaloChallenge were explored. However, it was found that these approaches did not yield enhancements in performance: both diffusion models [26] and normalising flow models [16] introduce noise into the dataset, which is then subtracted from the generated events. 
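The noise trick used by the diffusion and flow models can be sketched as follows; this is a minimal stdlib-only illustration, where the uniform pedestal and the zero-clamping on subtraction are our assumptions about the implementation detail:

```python
import random

NOISE_LEVEL_GEV = 1e-6  # a 1 keV pedestal, one of the levels tested

def add_noise(voxels, rng):
    """Add a uniform pedestal in [0, NOISE_LEVEL_GEV] to every voxel."""
    return [v + rng.uniform(0.0, NOISE_LEVEL_GEV) for v in voxels]

def subtract_noise(voxels):
    """Remove the pedestal from generated events, clamping at zero."""
    return [max(v - NOISE_LEVEL_GEV, 0.0) for v in voxels]

rng = random.Random(0)
noisy = add_noise([0.0, 0.5], rng)  # empty voxels receive a tiny deposit
cleaned = subtract_noise(noisy)     # empty voxels return to exactly zero
```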
Diverse levels of noise, ranging from 1 keV to 1 MeV, are tested, but no observable improvement in performance is detected; in fact, degradation is noted in most cases. Similarly, ideas involving masking voxels through a range of thresholds are tested, yielding no discernible benefits. While the best GANs do not incorporate either of these options, considering the advantageous outcomes witnessed with other approaches, further investigations in this direction remain a possibility. Such studies could potentially enhance the performance of GANs or offer evidence that GANs exhibit robustness in handling low-momentum voxels compared to other models. Another intriguing avenue for future exploration involves the implementation of variable learning rates. At present, a static learning rate is employed; owing to the incremental nature of training, adapting the learning rate value as the training progresses could offer improved training stability and potentially yield enhanced results. However, this concept is not included in the present results due to its unsuccessful initial trials and the already commendable performance achieved by \(\mathtt{CaloShowerGAN}\) in comparison to state-of-the-art benchmarks. ## 7 Conclusions In particle physics research, the need for fast and precise simulations is ever-growing, and attention has shifted from traditional methods to machine learning-based methods. The development of \(\mathtt{CaloShowerGAN}\) exploits Generative Adversarial Networks (GANs) and achieves a significant improvement with respect to the GAN-based method of FastCaloGAN used in the ATLAS experiment. While many new types of generative models have been proposed in the last few years, \(\mathtt{CaloShowerGAN}\) underscores the enduring competitiveness of GANs and achieves a performance similar to the state-of-the-art generative models. This accomplishment is achieved through the optimisation of the GAN architecture, the hyper-parameter optimisation, and, crucially, the pre-processing of the data. 
While certain improvements are rooted in machine learning techniques, the majority stem from a profound knowledge of the calorimeter showering processes in the ATLAS detector. These insights demonstrate how domain knowledge is still a crucial factor in maximising the efficacy of machine learning tools. While the work presented can easily be applied to any calorimeter that uses a similar voxelisation strategy to the one implemented by ATLAS, it is crucial to emphasise that given the similarities to FastCaloGAN, this work can seamlessly be integrated into the ATLAS software framework. This could potentially yield a substantial performance enhancement for the forthcoming generation of ATLAS simulations. ## Acknowledgements We would like to thank the organisers of the CaloChallenge for creating the competition and for the useful discussions carried out while preparing this paper. In particular, we would like to thank Dailia Salimani for the fruitful exchange of ideas concerning the layer-energy normalisation approach. We wish to express our gratitude for the substantial efforts of the ATLAS collaboration in releasing the codebase and dataset to the public. Our appreciation extends to the ATLAS fast calorimeter community for their valuable insights and information shared regarding the dataset. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 754496. RZ is supported by US High-Luminosity Upgrade of the Large Hadron Collider (HL-LHC) under Work Authorization No. KA2102021. In this work, we used the NumPy 1.19.5 [47], Matplotlib 3.5.1 [48], sklearn 1.0.2 [49], h5py 3.1.0 [50], TensorFlow 2.6.0 [51], Pandas 1.4.1 [52] software packages. We are grateful to the developers of these packages.
2303.00117
Study of the Inflationary Spectrum in the Presence of Quantum Gravity Corrections
After a brief review of the different approaches to predict the possible quantum gravity corrections to quantum field theory, we discuss in some detail the formulation based on a Gaussian reference frame fixing. Then, we implement this scenario to the determination of the inflationary spectrum of primordial perturbations. We consider the quantization of an inhomogeneous free massless scalar field on a quasi-classical isotropic Universe, developing a WKB expansion of the dynamics at the next order in the Planckian parameter, with respect to the one at which standard QFT emerges. The quantum gravity corrections to the scale invariant spectrum are discussed in a specific primordial cosmological setting and then in a general minisuperspace formalism, showing that there is no mode-dependent effect and thus the scale invariant inflationary spectrum is preserved. Such result is discussed in connection to the absence of a matter backreaction on the gravitational background in the considered paradigm.
Giulia Maniccia, Giovanni Montani, Leonardo Torcellini
2023-02-28T22:37:12Z
http://arxiv.org/abs/2303.00117v2
# Study of the Inflationary Spectrum in the Presence of Quantum Gravity Corrections ###### Abstract After a brief review of the different approaches to predicting the possible quantum gravity corrections to quantum field theory, we discuss in some detail the formulation based on a Gaussian reference frame fixing. Then, we utilize this scenario in the determination of the inflationary spectrum of primordial perturbations. We consider the quantization of an inhomogeneous, free, massless scalar field in a quasi-classical isotropic Universe by developing a WKB expansion of the dynamics at the next order in the Planckian parameter, with respect to the one at which standard QFT emerges. The quantum gravity corrections to the scale-invariant spectrum are discussed in a specific primordial cosmological setting and then in a general minisuperspace formalism, showing that there is no mode-dependent effect, and thus the scale-invariant inflationary spectrum is preserved. This result is discussed in connection to the absence of a matter backreaction on the gravitational background in the considered paradigm. **Keywords:** inflationary dynamics; quantum gravity corrections to QFT; primordial perturbations spectrum 
gravitational collapse, in which the background metric cannot be regarded as purely classical, but rather as an essentially classical dynamics affected by quantum fluctuations of the geometry. This line of research has so far been developed by a relatively limited number of studies [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], in which the first problem to be addressed is the emergence of a time variable for QFT from the gravity-matter Wheeler-DeWitt equation. In a pioneering approach [37], the so-called "Tomonaga time" was introduced to reconstruct QFT on curved spacetime from canonical quantum gravity. However, the most interesting proposal comes from the well-known analysis [20], treating the separation of the total Hamiltonian into a quasi-classical component and a "small" quantum subsystem (see [38] for a physical characterization of the word "small" in this context). A number of interesting implementations of this idea in the cosmological arena can be found in [23; 26; 27; 28; 34; 36; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50], whose common trait is the description of a "small" quantum subsystem, often identified with the Universe's anisotropic degrees of freedom in contrast to a quasi-classical isotropic background. Other approaches aimed at obtaining QFT on curved spacetime as an effective limit of quantum gravity can be found in Refs. [51; 52; 53; 54; 55]. 
In [21], the original proposal of [20] was specialized to the problem of quantum gravity corrections to QFT, and extended to the next order of approximation, where such a feature affects the QFT functional Schrodinger equation. This study had the merit of outlining the emergence of a non-unitary theory, as the QFT Hamiltonian is amended by quantum gravity contributions at the first order of expansion in the inverse of a Planckian parameter. This question of non-unitarity was discussed in later works [29; 56; 57], and the Born-Oppenheimer character of the adopted scheme was emphasized in [25; 28; 30]. First in [33] and subsequently in [35], it was argued that, to properly deal with the non-unitarity puzzle, it is necessary to define the physical clock from a different dynamical setting. In fact, in [20; 21] (see [33] for a detailed comparison of the two works), the time dependence of the QFT wave functional is essentially recovered from the corresponding dependence on the label time of the quasi-classical metric variables. It is precisely in this feature that the non-unitarity naturally manifests itself in the perturbation scheme (for a review of the entire line of research, including some minisuperspace applications, see [36]). In [33], the presence of the so-called kinematical action was postulated in the quantum gravity-matter dynamics [6; 58], and in [35] the whole problem was restated in the framework of [59], i.e., by fixing a Gaussian reference frame which is "materialized" in the dynamics (see also [60]). The idea is that, in the considered Born-Oppenheimer scenario, the non-physical nature of the emerging fluid (i.e., its violation of the so-called "energy conditions") does not arise, as a result of the perturbative expansion. In the present study, we re-analyze this idea and then apply it to the natural arena of predicting possible quantum gravity corrections to the inflationary spectrum. 
The origin of the primordial perturbation spectrum is identified in the quantum fluctuations of the inflaton field during the slow-rolling phase [61; 62; 63; 64], here approximated by an exact de Sitter regime [26]. More specifically, we consider a Robertson-Walker quasi-classical background, described via the conformal time, and we study the resulting spectrum of a free massless scalar field living on a de Sitter phase of the Universe, dominated by the vacuum energy of the transition phase, here represented by a cosmological constant term. By a Fourier decomposition of the scalar mode, we are able to deal with a set of minisuperspace models, one for each value of the wavenumber. The Schrodinger equation for QFT considered here is the one obtained in [35] (see also [34]), and the aim of the present study is to evaluate how such a correction can affect the spectrum of the inflaton field. The main result of the considered cosmological scenario is showing how the quantum gravity corrections manifest as a simple phase factor in front of the standard QFT solution on a Robertson-Walker metric, de facto corresponding to the solution of a time-dependent harmonic oscillator. As a natural consequence, the effect of the quantum gravity corrections on the spectrum vanishes at the considered order of expansion. The explanation for such a surprising issue is then discussed in a more general minisuperspace scheme, without a specific reference to the dynamical setting. We remark that the obtained result depends on the possibility of always factorizing the quantum gravity correction to the Universe wave function with respect to the standard QFT state on the considered cosmological background. The physical motivation for such a decoupling of the wave function is finally identified in the absence, up to first order of approximation, of a backreaction of the quantum matter on the quasi-classical background. 
Thus, our study has the main merit of clarifying how, in the framework proposed in [35], the phenomenology (here identified in the primordial spectrum) is not affected by quantum gravity modifications, at least up to the considered expansion order, and how this perspective has to be sensitive to the existence of appreciable feedback of the quantum matter dynamics on its background variables. The paper is structured as follows. In Section 2, we discuss the implementation of the Gaussian frame procedure to define a time parameter, reviewing the original formulation in Section 2.1 and the WKB implementation in Section 2.2. In Section 3, we apply the considered formalism to calculate the possible corrections to the inflationary spectrum by introducing a Fourier decomposition of the inflaton and determining the vacuum expectation values. In Section 4, we discuss the possible physical motivations for dealing with the standard (not modified by quantum gravity) inflationary spectrum discussed in Section 3. The concluding remarks are presented in Section 5. ## 2 Reference Frame Fixing and Reparametrization We discuss here the reparametrization procedure illustrated in [59] that allowed us to define a physical clock for the quantum gravity system. The original paradigm was then applied to the case of gravity and matter via a Wentzel-Kramer-Brillouin (WKB) expansion in a Planckian parameter, as shown in [35]. ### Kuchar-Torre Gaussian Frame Proposal A proposal to recover a physical clock for the quantum gravity system was discussed in [59], based on a Gaussian reference frame implementation. The formalism there introduced allows one to fix the Gaussian frame in a quantum field theory, by a reparametrization procedure that preserves the system's invariance under the coordinate choice. 
For this purpose, the following term is adjoined to the action of the system: \[S^{f}=\int d^{4}x\bigg{[}\frac{\sqrt{-g}}{2}\left(g^{\alpha\beta}\partial_{\alpha}T(x)\,\partial_{\beta}T(x)-1\right)\mathcal{F}+\sqrt{-g}\left(g^{\alpha\beta}\partial_{\alpha}T(x)\,\partial_{\beta}X^{i}(x)\right)\mathcal{F}_{i}\bigg{]} \tag{1}\] where \(\mathcal{F},\mathcal{F}_{i}\) are Lagrange multipliers. Here, \(T,X^{i}\) are the Gaussian coordinates associated with the metric \(\gamma_{\mu\nu}\) satisfying \(\gamma^{00}=1,\gamma^{0i}=0\) (the implemented signature is \((+,-,-,-)\) for coherence with the original paper); the writing \(T(x),X^{i}(x)\) in Equation (1) clarifies their dependence as functions of generic coordinates \(x^{\alpha}=(t,x^{i})\) that instead correspond to the metric \(g_{\alpha\beta}\) (\(\partial_{\alpha}\) stands for the derivative with respect to \(x^{\alpha}\)). Such reparametrization is a necessary tool for recovering a field theory that is diffeomorphism-invariant, as opposed to field equations valid only in the Gaussian frame, simply by providing the map between the Gaussian and the arbitrary desired coordinates. Indeed, the non-reparametrized form of (1) would be \[S^{f}_{\rm G}=\int d^{4}X\bigg{[}-\frac{\sqrt{-\gamma}}{2}\Big{(}\gamma^{00}-1\Big{)}\mathcal{F}+\sqrt{-\gamma}\ \gamma^{0i}\mathcal{F}_{i}\bigg{]}\, \tag{2}\] corresponding to a gauge fixing, where \(\mathcal{F},\mathcal{F}_{i}\) act as Lagrange multipliers (since their variations give the Gaussian conditions). The reparametrized form (1) is then uniquely obtained by requiring it to be invariant under transformations of the \(x^{\alpha}\), and that, for \(x^{\alpha}\equiv(T,X^{i})\), the expression is equivalent to (2). The Hamiltonian formulation of (1) shows how such a contribution can play the role of a physical clock for quantum gravity, when \(S^{f}\) is adjoined to the Einstein-Hilbert action and the canonical quantization is implemented. 
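To make explicit why \(\mathcal{F}\) and \(\mathcal{F}_{i}\) act as Lagrange multipliers, one may vary (1) with respect to them (a short check we add here, following the definitions above):

```latex
\frac{\delta S^{f}}{\delta\mathcal{F}}=0
\;\;\Rightarrow\;\;
g^{\alpha\beta}\,\partial_{\alpha}T\,\partial_{\beta}T=1\,,
\qquad
\frac{\delta S^{f}}{\delta\mathcal{F}_{i}}=0
\;\;\Rightarrow\;\;
g^{\alpha\beta}\,\partial_{\alpha}T\,\partial_{\beta}X^{i}=0\,,
```

which, evaluated in the coordinates \((T,X^{i})\) themselves (where \(\partial_{\alpha}T=\delta^{0}_{\alpha}\)), are precisely the Gaussian conditions \(\gamma^{00}=1\) and \(\gamma^{0i}=0\).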
Such formulation can be evaluated via the Arnowitt-Deser-Misner (ADM) foliation [65; 66], which allows one to write the line element as \[ds^{2}=N^{2}dt^{2}-h_{ij}\,\big{(}dx^{i}+N^{i}dt\big{)}\big{(}dx^{j}+N^{j}dt\big{)}\,, \tag{3}\] where we label by \(h_{ij}\) (\(i,j\) are spatial indices) the induced metric on the identified 3d hypersurfaces \(\Sigma\), and by \(N\) and \(N^{i}\) the lapse function and shift vector describing the separation in the time-like and space-like directions, respectively. The super Hamiltonian and supermomentum contributions are \[H^{f}=W^{-1}P+WW^{k}P_{k}\,, \tag{4}\] \[H^{f}_{i}=P\,\partial_{i}T+P_{k}\,\partial_{i}X^{k}\,, \tag{5}\] where \(P\), \(P_{k}\) are the momenta conjugate to \(T,X^{k}\) and \[W\equiv(1-h^{jl}\partial_{j}T\,\partial_{l}T)^{-1/2}\,, \tag{6}\] \[W^{k}\equiv h^{jl}\partial_{j}T\,\partial_{l}X^{k}\,. \tag{7}\] The functions (4) and (5), which are linear in the momenta, are added to the analogous functions \(H^{g},H^{g}_{i}\) of the gravitational sector; consequently, the total constraints \(H^{g}+H^{f}\) and \(H^{g}_{i}+H^{f}_{i}\) must vanish because of diffeomorphism invariance [1; 2]. One can thus obtain a functional Schrodinger evolution with the time definition \[\hat{\mathcal{H}}\Psi=i\hbar\,\partial_{t}\Psi=i\hbar\int_{\Sigma}d^{3}x\,\frac{\delta\Psi(T,X^{k},h_{ij})}{\delta T(x)}\Big{|}_{T=t}\,, \tag{8}\] where \(\mathcal{H}=\int_{\Sigma}d^{3}x\,\hat{H}^{g}\), i.e., restricting the states to the hypersurfaces where \(t\equiv T\), so that \(\Psi\) is still a functional of the \(X^{i}\). The choices \(x^{i}\equiv X^{i}\) and \((t,x^{i})\equiv(T,X^{i})\) are also examined in the original paper [59]. However, an important characteristic of the Gaussian-frame method emerges at the classical level of the theory. 
Varying the total action with respect to the metric, it is observed that the corresponding Einsteinian equations are modified by the appearance of a source term: \[T^{\alpha\beta}=\mathcal{F}\,\mathrm{U}^{\alpha}\mathrm{U}^{\beta}+\frac{1}{2}\Big{(}\mathcal{F}^{\alpha}\,\mathrm{U}^{\beta}+\mathcal{F}^{\beta}\,\mathrm{U}^{\alpha}\Big{)}\,, \tag{9}\] where \(\mathrm{U}^{\alpha}=g^{\alpha\beta}\partial_{\beta}T\) and \(\mathcal{F}_{\alpha}=\mathcal{F}_{i}\partial_{\alpha}X^{i}\). Thus, the Gaussian-frame terms arise as a fluid component, having four-velocity \(\mathrm{U}^{\alpha}\), energy density \(\mathcal{F}\), and heat flow \(\mathcal{F}_{\alpha}\). The associated energy conditions give the relation \[\mathcal{F}\geq 2\sqrt{\gamma^{\alpha\beta}\mathcal{F}_{\alpha}\mathcal{F}_{\beta}}\,, \tag{10}\] which, however, is not in general satisfied, due to the arbitrariness of the Lagrange multipliers, so the fluid has a non-physical character. Actually, by implementing only the Gaussian time condition with \(\mathcal{F}\) (i.e., setting \(\mathcal{F}_{i}=0\) in (1)), the fluid reduces to incoherent dust (no heat flow is present). In this case, the energy conditions are ensured by \(\mathcal{F}\geq 0\), which can be cast as an initial condition, since \(\sqrt{-g}\mathcal{F}\) is a constant of motion. We stress that this point will be addressed differently in the next subsection. ### WKB Matter Dynamics with the Gaussian Frame Implementation Here we briefly illustrate the procedure, discussed in [35], by which the kinematical variables associated with the Gaussian reference frame can provide a suitable clock for the matter sector in a quantum gravity-matter system. Indeed, a unitary dynamics emerges at the next order of expansion in a Planckian parameter, where quantum gravity corrections arise. This scheme will then be applied to the computation of the modified primordial power spectrum in the next section. 
Let us consider a gravity-matter system, where the gravitational Hamiltonian is characterized by a kinetic term and a potential \(V\), and the matter component is a self-interacting scalar field \(\phi\) with potential \(U_{m}(\phi)\). This choice will turn out to be suitable for the cosmological implementation discussed in Section 3. We insert the Gaussian-frame term (1) such that the total action reads: \[S=\int dt\int_{\Sigma}d^{3}x\Big{(}\Pi^{ij}\dot{h}_{ij}+p_{\phi}\,\dot{\phi}-N(H^{S}+H^{m})-N^{i}(H_{i}^{S}+H_{i}^{m})\Big{)}+S^{f}\,, \tag{11}\] where \[H^{S}=-\frac{\hbar^{2}}{2M}\Bigg{(}G_{ijkl}\frac{\partial}{\partial h_{ij}}\frac{\partial}{\partial h_{kl}}+g_{ij}\frac{\partial}{\partial h_{ij}}\Bigg{)}+M\,V\,, \tag{12}\] \[H_{i}^{S}=2\hbar\,h_{ij}\,D_{k}\frac{\partial}{\partial h_{kj}}\,,\] (13) \[H^{m}=-\hbar^{2}\frac{\partial^{2}}{\partial\phi^{2}}+U_{m}\,,\] (14) \[H_{i}^{m}=-(\partial_{i}\phi)\frac{\partial}{\partial\phi}\,. \tag{15}\] In this notation, we will treat functional derivatives as ordinary partial ones. The term \(g_{ij}\,\partial/\partial h_{ij}\) in (12) is inserted to account for a generic factor ordering (see discussion in [21]), and \(D_{k}\) in (13) is the 3d covariant derivative on the hypersurface \(\Sigma\). Instead of the Einstein constant \(\kappa=8\pi G/c^{3}\), we have written in (12) and (13) the following Planckian parameter: \[M:=\frac{c^{2}}{32\pi G}=\frac{cm_{Pl}^{2}}{4\hbar}\,, \tag{16}\] with dimension of mass over length, which will be taken as the order parameter for the expansion. Indeed, the Planckian energy scale, representative of the gravitational sector, is typically larger than the corresponding scale of the matter fields. In principle, it would be possible to construct the WKB expansion via a dimensionless parameter, defined as the ratio between the present one and the corresponding quantity calculated for a typical energy scale of the quantum matter. 
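As a parenthetical check of (16) (our remark, not stated explicitly in the text): the two expressions agree provided \(m_{Pl}\) denotes the reduced Planck mass,

```latex
m_{Pl}^{2}=\frac{\hbar c}{8\pi G}
\quad\Rightarrow\quad
\frac{c\,m_{Pl}^{2}}{4\hbar}
=\frac{c}{4\hbar}\,\frac{\hbar c}{8\pi G}
=\frac{c^{2}}{32\pi G}
=M\,,
```

which indeed carries the dimensions of mass over length quoted above.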
In the case we will consider, such an energy scale corresponds to that of the inflationary process, say, \(T\simeq 10^{15}\) GeV (see Section 3). However, we retain here a dimensional parameter in order to keep contact and comparison with the previous literature, e.g., Refs. [21; 26; 27; 29; 33; 35]. The previous consideration motivates a Born-Oppenheimer (B-O) separation of the wave function: \[\Psi\big{(}h_{ij},\phi,X^{\mu}\big{)}=\psi\big{(}h_{ij}\big{)}\chi\big{(}\phi,X^{\mu};h_{ij}\big{)} \tag{17}\] between the gravity and matter components. After performing a WKB expansion [67] in powers of \(1/M\), we have \[\Psi\big{(}h_{ij},\phi,X^{\mu}\big{)}=e^{\frac{i}{\hbar}\big{(}MS_{0}+S_{1}+\frac{1}{M}S_{2}\big{)}}e^{\frac{i}{\hbar}\big{(}Q_{1}+\frac{1}{M}Q_{2}\big{)}} \tag{18}\] up to the order \(M^{-1}\). Here, the \(S_{m}\) functions (at order \(m\)) account for the gravitational background, and the \(Q_{n}\) (at order \(n\)) describe the reference fluid and matter components. We stress that the first matter contribution is of the order \(M^{0}\). Similarly to the B-O scheme, we enforce the conditions \[\frac{\langle\hat{H}^{m}\rangle}{\langle\hat{H}^{g}\rangle} =\mathcal{O}\Big{(}M^{-1}\Big{)}\,, \tag{19}\] \[\frac{\partial Q_{n}}{\partial h_{ij}} =\mathcal{O}\Big{(}M^{-1}\Big{)}\,, \tag{20}\] where the expectation values are computed over the corresponding wave functions, due to the "fast" nature of the matter sector with respect to gravity. 
If the average backreaction of the matter degrees of freedom is negligible, both the gravitational and total constraints are satisfied: \[\hat{H}^{g}\,\psi\big{(}h_{ij}\big{)} =0\,, \tag{21}\] \[\hat{H}^{g}_{i}\,\psi\big{(}h_{ij}\big{)} =0\,, \tag{22}\] \[(\hat{H}^{g}+\hat{H}^{m}+\hat{H}^{f})\Psi\big{(}h_{ij},\phi,X^{\mu}\big{)} =0\,, \tag{23}\] \[(\hat{H}^{g}_{i}+\hat{H}^{m}_{i}+\hat{H}^{f}_{i})\Psi\big{(}h_{ij},\phi,X^{\mu}\big{)} =0\,, \tag{24}\] where we consider also the supermomentum constraints for generality. By substituting the ansatz (18) in the constraints, with the explicit forms (4), (5), and (12)-(15), the dynamics can be analyzed order by order (we refer to the original paper [35] for the explicit computation). At the Planckian order \(M\), one obtains the classical Hamilton-Jacobi (H-J) equation for the gravitational function \(S_{0}\): \[\frac{1}{2}G_{ijkl}\frac{\partial S_{0}}{\partial h_{ij}}\frac{\partial S_{0}}{\partial h_{kl}}+V=0\,, \tag{25}\] together with its diffeomorphism invariance condition. A crucial point must instead be discussed at the order \(M^{0}\): the gravitational constraints (21) and (22) allow one to solve for \(S_{1}\), and after substituting the solutions \(S_{0}\) and \(S_{1}\) into (23) and (24), the remaining equations for the matter sector are \[\bigg{(}-2i\hbar\frac{\partial^{2}Q_{1}}{\partial\phi^{2}}+U_{m}-W^{-1}\frac{\partial Q_{1}}{\partial T}-WW^{k}\frac{\partial Q_{1}}{\partial X^{k}}\bigg{)}e^{\frac{i}{\hbar}Q_{1}}=0\,, \tag{26}\] \[\bigg{(}-2h_{ij}\,D_{k}\frac{\partial S_{1}}{\partial h_{kj}}-i\hbar^{-1}(\partial_{i}\phi)\frac{\partial Q_{1}}{\partial\phi}-(\partial_{i}T)\frac{\partial Q_{1}}{\partial T}-(\partial_{i}X^{k})\frac{\partial Q_{1}}{\partial X^{k}}\bigg{)}e^{\frac{i}{\hbar}Q_{1}}=0\,. \tag{27}\] Such expressions require further attention for their physical interpretation. 
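The order-by-order structure just described can be made concrete on a one-dimensional toy model (a sketch of the bookkeeping only, not the full superspace functional problem): substituting a WKB phase into a Hamiltonian with the Planckian kinetic scaling and collecting powers of \(M\) reproduces the H-J equation (25) at order \(M\) and the transport equation for \(S_{1}\) at order \(M^{0}\).

```python
import sympy as sp

# One-dimensional toy model of the WKB expansion in the Planckian parameter M:
#   H = -(hbar^2 / 2M) d^2/dx^2 + M V(x),   psi = exp[ i (M S0 + S1) / hbar ].
# Collecting powers of M in (H psi)/psi reproduces the order-by-order scheme.
x, M, hbar = sp.symbols('x M hbar', positive=True)
S0, S1, V = sp.Function('S0'), sp.Function('S1'), sp.Function('V')

psi = sp.exp(sp.I * (M * S0(x) + S1(x)) / hbar)
H_psi = -hbar**2 / (2 * M) * sp.diff(psi, x, 2) + M * V(x) * psi
ratio = sp.expand(H_psi / psi)

order_M1 = ratio.coeff(M, 1)   # O(M):   (1/2) S0'^2 + V, the analogue of the H-J equation (25)
order_M0 = ratio.coeff(M, 0)   # O(M^0): S0' S1' - (i hbar / 2) S0''
print(sp.simplify(order_M1))
print(sp.simplify(order_M0))
```

Setting `order_M1` to zero is the analogue of (25); setting `order_M0` to zero fixes \(S_{1}\) in terms of \(S_{0}\), mirroring the way the gravitational constraints are solved order by order in the text.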
Following from the linearity of \(H^{f}\) and \(H^{f}_{i}\) in the momenta \(P\) and \(P_{k}\), a suitable time parameter can be naturally introduced as \[i\hbar\frac{\partial}{\partial\tau}=i\hbar\int d^{3}x\bigg{[}N\bigg{(}W^{-1}\frac{\partial}{\partial T}+WW^{k}\frac{\partial}{\partial X^{k}}\bigg{)}+N^{i}\bigg{(}(\partial_{i}T)\frac{\partial}{\partial T}+(\partial_{i}X^{k})\frac{\partial}{\partial X^{k}}\bigg{)}\bigg{]}\,. \tag{28}\] Then, the linear combination of Equations (26) and (27) with coefficients \(N\) and \(N^{i}\), respectively, takes the form \[i\hbar\frac{\partial\chi_{0}}{\partial\tau}=\hat{\mathcal{H}}^{m}\chi_{0}=\int d^{3}x\Big{(}N\hat{H}^{m}+N^{i}\hat{H}^{m}_{i}\Big{)}\chi_{0}\,, \tag{29}\] where we label \(\chi_{0}=e^{\frac{i}{\hbar}Q_{1}}\). In other words, reading the reference fluid clock induced by the definition (28), we recover at this WKB order the functional Schrödinger dynamics of the quantum matter field \(\phi\) on the gravitational background. In this limit, the resulting dynamics corresponds to QFT on curved spacetime. The influence of quantum gravity on the matter sector emerges at the next order, \(M^{-1}\). Proceeding in a similar way, one obtains the following equation for \(\chi_{1}=e^{\frac{i}{\hbar}\big{(}Q_{1}+\frac{1}{M}Q_{2}\big{)}}\): \[i\hbar\frac{\partial\chi_{1}}{\partial\tau}=\hat{\mathcal{H}}^{m}\chi_{1}+\int d^{3}x\Bigg{[}NG_{ijkl}\frac{\partial S_{0}}{\partial h_{ij}}\bigg{(}-i\hbar\frac{\partial}{\partial h_{kl}}\bigg{)}-2N^{i}h_{ij}\,D_{k}\bigg{(}-i\hbar\frac{\partial}{\partial h_{kj}}\bigg{)}\Bigg{]}\chi_{1}\,, \tag{30}\] where the additional contributions with respect to the matter Hamiltonian are quantum gravity corrections. These modifications are unitary due to the real nature of the function \(S_{0}\) and the presence of the conjugate momenta with respect to the induced metric, and their smallness is assured by the hypothesis (20) (see [35]). 
Thus, the clock defined by (28) is a physical clock for the matter sector in the WKB scheme truncated at the order \(M^{-1}\). ## 3 Calculation of the Inflationary Spectrum Following the model introduced in the previous section, we now turn to the question of how the power spectrum associated with inflationary perturbations is affected by the quantum gravity corrections. ### Perturbations of the Model Before facing the analysis of the generation of primordial perturbations during the inflationary dynamics of the Universe, and of how the quantum gravity corrections can affect the associated power spectrum, it is worth stressing some key differences between the present analysis and other similar approaches, as in [27; 68]. In our formulation, apart from the WKB expansion in the Planckian parameter \(M\), we are addressing a B-O separation between the "slow" gravitational component and the "fast" matter contribution, with the latter including also the fluid's presence. This separation is justified by virtue of a corresponding scale separation between the energy of the quantum matter dynamics, say of the order of the matter Hamiltonian spectrum, and that of the Planck order, at which the gravity quantization is expected to manifest itself. In view of the B-O approximation we are implementing, the backreaction of the quantum matter on the gravitational background is implicitly negligible. In other words, quantum corrections of the gravitational dynamics are clearly present (as implied by the function \(S_{1}\) in Equation (18), associated with a quantum amplitude for the background metric), but their existence has to be regarded as independent of the matter's dynamics. This point of view has been clearly elucidated in [69], where a critical re-analysis of the original formulation [20], and hence of Ref. [21], has been developed. 
There, limiting the attention up to the zero order in the parameter \(M\), the quantum gravity component has been expressed in terms of gravitons on the vacuum Bianchi I background. This way, the WKB formulation of the gravitational field takes the form of a purely classical background on which a slow quantum graviton field lives, described by independent degrees of freedom. This graviton contribution is independent, due to the B-O separation, of the quantum matter dynamics, thereby reinforcing the previous statement. If applied to the isotropic Universe we will consider below, this formulation would also imply the presence of scalar perturbations of the metric, represented by independent degrees of freedom and clearly not affected by the scalar field fluctuations. In the following analysis, although developed in the presence of quantum gravity corrections, we will refer to the scalar field only; such a case is equivalent to the study of a free massless scalar field fluctuating on a de Sitter background. The classical energy contribution of the scalar field will be identified with the cosmological constant term (i.e., the gap between the false and true vacuum energy density [61; 63]). The inhomogeneous fluctuation of this field will be treated as an independent degree of freedom living on the expanding de Sitter space, whose fluctuations are responsible for the emergence of a scalar perturbation spectrum. ### The Inflaton Field The theory of inflation postulates an early period of exponentially accelerated expansion of the Universe, motivating its primordial inhomogeneities as emerging from the vacuum fluctuations of a scalar field, the so-called inflaton field. One of the most remarkable results of such a mechanism is its ability to explain the flatness problem of the Universe [61; 62; 70; 71]. 
A schematic formulation of this framework is obtained by considering as a background a spatially flat universe (any curvature is damped by the exponential expansion), with a scalar field living on top. More specifically, one considers a Friedmann-Lemaitre-Robertson-Walker (FLRW) model with line element: \[ds^{2}=-N^{2}(t)\ dt^{2}+a^{2}(t)\Big{(}dx^{2}+dy^{2}+dz^{2}\Big{)}\,, \tag{31}\] \(a\) being the cosmic scale factor (here we use the opposite signature with respect to (1) in Section 2 for easier comparison with the existing literature), and the inflaton contribution is inserted as a minimally coupled scalar field \(\phi\) with potential \(U(\phi)\). Then, small perturbations are introduced, in general, both for the metric and for the inflaton, which give rise to scalar and tensor fluctuations (the detailed Hamiltonian formulation of such an approach can be found, for example, in [26; 27]). Following the discussion presented in Section 3.1, we will now focus on the fluctuations of the scalar field only over the FLRW background, i.e., by variation of the action with respect to those variables. The fluctuations \(\delta\phi\) of the inflaton field can be described in a gauge-invariant way via the Mukhanov-Sasaki (M-S) variable \(v\)[72; 73; 74] (see also the discussion in [68]) defined as \[v:=a\varphi=a\,\delta\phi\,. \tag{32}\] We stress here that addressing the B-O separation discussed above does not alter the gauge invariance of the perturbation theory. In fact, in the limit in which the backreaction on the metric scalar perturbation is neglected, the M-S variable [72] simply reduces to the inhomogeneous scalar field \(\phi\) of our study times the cosmic scale factor, as in (32), and its gauge invariance is immediately recovered. The evolution of the inflaton fluctuations is responsible for the formation of primordial structures in the Universe. 
To analyze their behavior, let us consider modes with physical wavelength \(\lambda_{\textit{phys}}\equiv a(t)\lambda_{0}\), \(\lambda_{0}\) being the comoving wavelength. It is useful to compare this quantity with the so-called Hubble radius (or micro-physics horizon) \(\mathrm{H}^{-1}=a/\dot{a}\), that for any given time is the inverse of the Hubble parameter (using \(c=1\)). This horizon represents the scale separating the gravity-dominated regime from the quantum one: the first happens for modes with physical wavelength such that \(\lambda_{\textit{phys}}\gg\mathrm{H}^{-1}\), and the second is the case for \(\lambda_{\textit{phys}}\ll\mathrm{H}^{-1}\). It can be shown that, during the period of accelerated expansion predicted by the theory, the Hubble radius is constant in the physical coordinates, and \(\lambda_{\textit{phys}}\) exponentially increases [61; 71]. Thus, the quantum fluctuations emerge at early times within the microphysical scales (i.e., for \(\lambda_{\textit{phys}}\ll\mathrm{H}^{-1}\)), rapidly expand going outside the horizon, and propagate until they re-enter the Hubble radius at later times (when inflation is over, the behavior is opposite, since \(\mathrm{H}^{-1}\) grows faster than \(\lambda_{\textit{phys}}\)) [62; 70]. Using the gauge-invariant formalism via the M-S variable (32), it is possible to compute the power spectrum \(\mathcal{P}_{v}(k)\), where \(k\) specifies the wavenumber of each Fourier mode associated with the inflaton perturbations (see also [75; 76; 77; 78; 79; 80] for investigations of such a spectrum in different cosmological settings). 
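The horizon-crossing kinematics described above can be illustrated with a minimal numerical sketch (all values are arbitrary illustrative units, not quantities taken from the text):

```python
import numpy as np

# Horizon crossing in a de Sitter phase: the Hubble radius H^{-1} stays
# constant while lambda_phys = a(t) * lambda_0 grows exponentially.
H = 1.0                    # Hubble parameter, constant during inflation
lambda_0 = 1e-3            # hypothetical comoving wavelength
t = np.linspace(0.0, 12.0, 2401)
a = np.exp(H * t)          # scale factor, normalized so that a(0) = 1
lambda_phys = a * lambda_0

# Exit from the horizon: lambda_phys = H^{-1}  =>  t_exit = ln(1/(H lambda_0)) / H.
t_exit = np.log(1.0 / (H * lambda_0)) / H
i_exit = np.searchsorted(lambda_phys, 1.0 / H)
print(f"analytic exit time {t_exit:.3f}, grid estimate {t[i_exit]:.3f}")
```

A mode starts deep inside the horizon (\(\lambda_{\textit{phys}}\ll\mathrm{H}^{-1}\)) and leaves it at \(t_{exit}\), after which its amplitude freezes until re-entry.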
However, to investigate the evolution of the primordial Universe, it is more convenient to work with the spectrum associated with the comoving curvature perturbation \(\zeta\) (which is the one leaving its fingerprint on the cosmic microwave background radiation) [81]: indeed, \(\zeta\) is constant (i.e., it freezes) for all the time in which the perturbations are outside the horizon; therefore, one only needs to compute its spectrum at the end of inflation [61; 71]. In the primordial era of our interest, the two quantities \(\zeta\) and \(v\) are directly related by \[\zeta=\sqrt{\frac{4\pi G}{\epsilon}}\frac{v}{a}, \tag{33}\] with \(\epsilon=-\dot{\mathrm{H}}/\mathrm{H}^{2}\) being the first slow-roll parameter. Therefore, in the following, we will focus on the dynamics of the M-S variable \(v\) and only at the end use (33) to compute the invariant power spectrum. Upon decomposition in Fourier modes \(v_{\mathbf{k}}\) and assuming Gaussian probability distributions for the quantum amplitudes associated with each \(v_{\mathbf{k}}\)[81], all the relevant properties of the inflationary perturbations are contained in the two-point correlation function: \[\Xi(\mathbf{r}):=\langle 0|v(\eta,\mathbf{x})v(\eta,\mathbf{x}+\mathbf{r})|0\rangle\,, \tag{34}\] where \(|0\rangle\) is the vacuum state of the inflaton field. In (34), the expectation value implies integration over \(\mathbf{k}\)-modes, which can be carried out given the expression [81] \[\Xi(\mathbf{r})=\frac{1}{(2\pi)^{3}}\int d\mathbf{p}\ e^{-i\mathbf{p}\cdot\mathbf{r}}|f_{\mathbf{p}}|^{2}=\frac{1}{2\pi^{2}}\int_{0}^{+\infty}\frac{dp}{p}\frac{\sin(pr)}{pr}p^{3}|f_{p}|^{2}\,. \tag{35}\] Here, \(f_{\mathbf{p}}\) is the mode function associated with the scalar perturbations, and from (35), the power spectrum is defined as \[\mathcal{P}_{v}(k)=\frac{k^{3}}{2\pi^{2}}|f_{k}|^{2}\,, \tag{36}\] i.e., the Fourier amplitude of \(\Xi(0)\) per unit logarithmic interval. 
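As a sketch of the definition (36), one can evaluate the spectrum on the standard de Sitter Bunch-Davies mode function \(|f_{k}|^{2}=(1/2k)\big(1+1/(k\eta)^{2}\big)\) (an assumption introduced here for illustration, with \(\hbar=1\)); in the super-Hubble limit the result freezes to a \(k\)-independent, scale-invariant value:

```python
import numpy as np

# Sketch of definition (36), P_v(k) = k^3 |f_k|^2 / (2 pi^2), evaluated on
# the standard de Sitter Bunch-Davies mode function (hbar = 1 assumed):
#   |f_k|^2 = (1 / 2k) * (1 + 1/(k eta)^2).
def P_v(k, eta):
    f2 = (1.0 / (2.0 * k)) * (1.0 + 1.0 / (k * eta) ** 2)
    return k**3 * f2 / (2.0 * np.pi**2)

eta = -1e-4                       # conformal time close to the end of inflation
ks = np.array([1.0, 2.0, 5.0])    # super-Hubble modes, |k eta| << 1

# The spectrum freezes to the k-independent value 1/(4 pi^2 eta^2):
flat = 1.0 / (4.0 * np.pi**2 * eta**2)
print(P_v(ks, eta) / flat)        # each entry close to 1 (scale invariance)
```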
As mentioned above, this quantity is then evaluated in the super-Hubble limit \(k/(a\mathrm{H})\ll 1\), when the perturbations essentially freeze. We stress that the vacuum state in (34) must be selected as the one corresponding to the ground level of the scalar field Hamiltonian in the limit \(k/(a\mathrm{H})\to\infty\) (or equivalently \(\lambda_{\textit{phys}}\ll\mathrm{H}^{-1}\)), also known as the Bunch-Davies vacuum [26; 62; 81]. We will impose this requirement on the modified wave functional dictated by the model in Section 3.3. In the following, we will compute (36) in the specific case where the inflaton field follows the modified quantum dynamics described by Equation (30). ### Perturbation Spectrum in the de Sitter Phase During the accelerated expansion of inflation, of particular interest is the slow-rolling phase, where the inflaton can be approximately described as a free massless scalar field (the almost constant potential acts as a cosmological term) [61]. In the following, we will consider an exact de Sitter phase; thus, the slow-roll parameter \(\epsilon\) is neglected. The analysis of quantum gravity's effects on the inflationary spectrum is achieved by considering the fluctuations of the scalar field over a quasi-classical background, expressed by a FLRW model with line element (31). Instead of the general action (11), the considered case can be studied in the minisuperspace formalism (the supermomentum contributions are identically vanishing due to the homogeneity of the background model). The relevant (non-trivial) constraint is thus the super-Hamiltonian, which takes the form \[H_{tot}=\frac{\hbar^{2}}{48Ma^{2}}\partial_{a}(a\partial_{a})+4M\Lambda a^{3}-i\hbar\partial_{T}+\frac{1}{2a}\sum_{\mathbf{k}}\left(-\hbar^{2}\partial_{v_{\mathbf{k}}}^{2}+\omega_{k}^{2}v_{\mathbf{k}}^{2}\right), \tag{37}\] where we implemented the Laplace-Beltrami factor ordering. Here, the positive cosmological constant \(\Lambda\) replaces \(U_{m}\) in (14). 
The term \(-i\hbar\partial_{T}\), that is, the momentum associated with the Gaussian time \(T\), is the only surviving contribution from the insertion of \(S^{f}\) (1) due to homogeneity. The last two terms in (37) are associated with the inflaton field fluctuations, where the \(v_{\mathbf{k}}\) correspond to the modes in the Fourier space of the gauge-invariant M-S variable (32); in the considered case of scalar perturbations over a FLRW background, the \(v_{\mathbf{k}}\)-modes behave as time-dependent harmonic oscillators [26; 27; 82; 83; 84], where the frequency depends on the wavenumber modulus only: \[\omega_{k}^{2}=k^{2}-\frac{a^{2}}{N^{2}}\left(\dot{\mathrm{H}}-\mathrm{H}\frac{\dot{N}}{N}+2\mathrm{H}^{2}\right). \tag{38}\] The WDW constraint corresponds to the vanishing of the operator (37) applied to the total system wave function \(\Psi(a,T,v_{\mathbf{k}})\). For convenience, we implement the logarithmic scale factor, \[\alpha:=\ln\left(\frac{a}{a_{0}}\right), \tag{39}\] such that the global WDW equation reads \[i\hbar\partial_{T}\Psi=a_{0}^{-1}e^{-\alpha}\left[\frac{\hbar^{2}}{48M}\frac{1}{a_{0}^{2}e^{2\alpha}}\partial_{\alpha}^{2}+4a_{0}^{4}e^{4\alpha}\Lambda M+\frac{1}{2}\sum_{\mathbf{k}}\left(-\hbar^{2}\partial_{v_{\mathbf{k}}}^{2}+\omega_{k}^{2}v_{\mathbf{k}}^{2}\right)\right]\Psi\,. \tag{40}\] Let us now consider a single Fourier mode identified by a wave number \(\mathbf{k}\). Following the scheme discussed above, for each independent mode, the ansatz is taken as \[\Psi_{\mathbf{k}}(\alpha,T,v_{\mathbf{k}})=\psi_{\mathbf{k}}(\alpha)\ \chi_{\mathbf{k}}(\alpha,T,v_{\mathbf{k}})\,, \tag{41}\] and then WKB expanded as in (18), obtaining \[\psi_{\mathbf{k}}(\alpha)=e^{\frac{i}{\hbar}\left[MS_{0}(\alpha)+S_{1}(\alpha)+M^{-1}S_{2}(\alpha)\right]}\,, \tag{42}\] \[\chi_{\mathbf{k}}(\alpha,T,v_{\mathbf{k}})=e^{\frac{i}{\hbar}\left[Q_{1}(\alpha,T,v_{\mathbf{k}})+M^{-1}Q_{2}(\alpha,T,v_{\mathbf{k}})\right]}\,. 
\tag{43}\] Upon substitution into (40), the solutions for the gravitational sector are readily obtained at the three orders: \[S_{0}(\alpha)=-8\sqrt{\frac{\Lambda}{3}}a_{0}^{3}\left(e^{3\alpha}-e^{3\alpha_{0}}\right)\,, \tag{44}\] \[S_{1}(\alpha)=i\hbar\frac{3}{2}(\alpha-\alpha_{0})\,, \tag{45}\] \[S_{2}(\alpha)=\frac{\hbar^{2}}{64}\sqrt{\frac{3}{\Lambda}}a_{0}^{-3}\left(e^{-3\alpha}-e^{-3\alpha_{0}}\right)\,. \tag{46}\] Here, \(S_{0}\) solves the H-J equation and so corresponds to the classical limit of the gravitational component, and the next order functions, \(S_{1}\) and \(S_{2}\), account for quantum gravity effects. The equation for the quantum matter wave function at the first order \(M^{0}\) can be expressed in a clearer form in conformal time \(\eta\) (choosing \(N=a_{0}\,e^{\alpha}\)), which is related to the Gaussian time via \(T^{\prime}(\eta)=a_{0}\exp(\alpha(\eta))\), obtaining \[i\hbar\partial_{\eta}\chi^{(0)}_{\mathbf{k}}=\left(-\frac{\hbar^{2}}{2}\partial_{v_{\mathbf{k}}}^{2}+\frac{1}{2}\omega_{k}^{2}(\eta)v_{\mathbf{k}}^{2}\right)\chi^{(0)}_{\mathbf{k}}\,. \tag{47}\] The time-dependent harmonic oscillator system can be exactly solved by implementing the so-called Lewis-Riesenfeld method introduced in [85; 86; 87; 88], which is described in Appendix A. The wave function admits a general representation of the form (A9), where the functions \(\delta_{n,k}\) and \(\rho_{k}\) are defined in (A10) and (A3), respectively. The arbitrary coefficients in those expressions are set by imposing suitable initial conditions. In this specific cosmological setting, we make use of the Bunch-Davies vacuum state requirement [62; 81]: the state must correspond to the Minkowskian vacuum in the limit \(\eta\to-\infty\) (that is, when the inflaton wavelength is small compared to the curvature of the universe). 
This condition is satisfied if \[\rho_{k}(\eta)\xrightarrow{\eta\to-\infty}k^{-1/2}\,, \tag{48}\] \[c_{n,k}=\delta_{n,0}\,, \tag{49}\] where (49) stems from the observation that the \(n=0\) eigenvalue of the invariant (A1) corresponds, for a fixed time, to the lowest-energy state of the oscillator. For the specific \(\rho_{k}\) function (A3), its coefficients must be \(A=B=\gamma_{1}=1\), so that \[\rho_{k}(\eta)=\sqrt{\frac{1}{k}+\frac{1}{\eta^{2}k^{3}}} \tag{50}\] satisfies the required limit. Then, by substituting it into (A10), the \(\delta_{n,k}\) functions are found to be \[\delta_{n,k}=-\left(n+\frac{1}{2}\right)\int d\eta\frac{1}{\rho_{k}^{2}(\eta)}=-\left(n+\frac{1}{2}\right)\left(\eta k-\arctan(\eta k)+c\right). \tag{51}\] Finally, the solution to Equation (47) satisfying the Bunch-Davies condition is: \[{}^{BD}\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})=\exp\left[-\frac{i}{2}(\eta k-\arctan(\eta k))\right]\left(\frac{k^{3}}{\pi\hbar\left(\frac{1}{\eta^{2}}+k^{2}\right)}\right)^{\frac{1}{4}}\exp\left[\frac{i}{2\hbar}\left(-\frac{1}{\eta^{3}\left(\frac{1}{\eta^{2}}+k^{2}\right)}+i\frac{k^{3}}{\frac{1}{\eta^{2}}+k^{2}}\right)v_{\mathbf{k}}^{2}\right] \tag{52}\] We can now focus on the next order \(M^{-1}\), where, due to the quantum gravity corrections, the dynamics is no longer that of a time-dependent oscillator: \[i\hbar\partial_{\eta}\chi^{(1)}_{\mathbf{k}}=\left[\frac{i\hbar}{24}\frac{1}{a_{0}^{2}e^{2\alpha}}(\partial_{\alpha}S_{0})\partial_{\alpha}-\frac{\hbar^{2}}{2}\partial_{v_{\mathbf{k}}}^{2}+\frac{1}{2}\omega_{k}^{2}v_{\mathbf{k}}^{2}\right]\chi^{(1)}_{\mathbf{k}}\,. 
\tag{53}\] By substituting (44) and the classical background solution \(a_{0}e^{\alpha}(\eta)=-\sqrt{\frac{3}{\Lambda}}\frac{1}{\eta}\), Equation (53) becomes \[i\hbar\partial_{\eta}\chi^{(1)}_{\mathbf{k}}(\alpha,\eta,v_{\mathbf{k}})=\left[\frac{i\hbar}{\eta}\partial_{\alpha}-\frac{\hbar^{2}}{2}\partial_{v_{\mathbf{k}}}^{2}+\frac{1}{2}\omega_{k}^{2}(\eta)v_{\mathbf{k}}^{2}\right]\chi^{(1)}_{\mathbf{k}}(\alpha,\eta,v_{\mathbf{k}})\,. \tag{54}\] We investigate the class of separable solutions of the form \[\chi^{(1)}_{\mathbf{k}}(\alpha,\eta,v_{\mathbf{k}})=\theta(\alpha)\,\Gamma_{\mathbf{k}}(\eta,v_{\mathbf{k}})\,, \tag{55}\] where we remark that the (quantum) degree of freedom \(\alpha\) is in principle independent from the chosen conformal time \(\eta\), and the classical relation only stands in the appropriate low-energy limit. Then, Equation (54) is solved for \[-i\hbar\partial_{\alpha}\theta(\alpha)=\lambda\theta(\alpha)\,, \tag{56}\] \[i\hbar\partial_{\eta}\Gamma_{\mathbf{k}}(\eta,v_{\mathbf{k}})=\left(-\frac{\hbar^{2}}{2}\partial_{v_{\mathbf{k}}}^{2}+\frac{1}{2}\omega_{k}^{2}(\eta)v_{\mathbf{k}}^{2}-\frac{\lambda}{\eta}\right)\Gamma_{\mathbf{k}}(\eta,v_{\mathbf{k}})\,, \tag{57}\] where the constant \(\lambda\) identifies the family of solutions of (56), giving the eigenvalues of the momentum associated with \(\alpha\) and hence with the scale factor \(a\). Equation (57) can be solved via another suitable rescaling, \(\Gamma_{\mathbf{k}}(\eta,v_{\mathbf{k}})=\exp\left[\frac{i}{\hbar}\lambda\log(-\eta)\right]\hat{\Gamma}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\), which absorbs the \(\lambda\)-term and maps (57) into an equation of the form (47) for \(\hat{\Gamma}_{\mathbf{k}}\), i.e., the usual time-dependent harmonic oscillator. Therefore, the function \(\hat{\Gamma}_{\mathbf{k}}\) coincides with the \(\chi^{(0)}_{\mathbf{k}}\) of the previous order, and \(\Gamma_{\mathbf{k}}\) is readily obtained from the rescaling above. 
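As a consistency check, the function \(\rho_{k}\) of (50) can be verified against the Ermakov-Pinney auxiliary equation of the Lewis-Riesenfeld method, \(\rho''+\omega_{k}^{2}\rho=\rho^{-3}\), assuming the de Sitter conformal-time frequency \(\omega_{k}^{2}=k^{2}-2/\eta^{2}\) (a sketch under this assumed special case of (38); the symbol below parametrizes \(|\eta|\), since only even powers of \(\eta\) enter):

```python
import sympy as sp

# Check that rho_k of Eq. (50) solves the Ermakov-Pinney auxiliary equation
# of the Lewis-Riesenfeld method,
#   rho'' + omega_k^2 rho = 1 / rho^3,
# with the assumed de Sitter frequency omega_k^2 = k^2 - 2/eta^2.
eta, k = sp.symbols('eta k', positive=True)

rho = sp.sqrt(1 / k + 1 / (eta**2 * k**3))
omega2 = k**2 - 2 / eta**2
residual = sp.diff(rho, eta, 2) + omega2 * rho - 1 / rho**3

print(abs(complex(residual.subs({eta: 0.37, k: 2.0}))))   # numerically zero
print(sp.limit(rho, eta, sp.oo))   # -> 1/sqrt(k): the Bunch-Davies limit (48)
```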
By putting together the solutions of (56) and (57), we can write the complete matter wave function (55) as \[\chi^{(1)}_{\mathbf{k}}(\alpha,\eta,v_{\mathbf{k}})=\theta_{p_{\alpha}}(\alpha)\,e^{\frac{i}{\hbar}p_{\alpha}\log(-\eta)}\,\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\,, \tag{58}\] which can then be implemented to analyze the quantum-gravity corrected power spectrum. However, before that computation, one important remark about this approach is in order. The requirement (20) imposed in Section 2.2 due to the B-O approximation scheme translates, in this specific minisuperspace setting, to \(|p_{\alpha}|<1/M\). Therefore, one must consider for (58) a convolution over the suitable values of the momentum \(p_{\alpha}\): \[\chi^{(1)}_{\mathbf{k}}(\alpha,\eta,v_{\mathbf{k}})=\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\int dp_{\alpha}g(p_{\alpha})\theta_{p_{\alpha}}(\alpha)e^{\frac{i}{\hbar}\log(-\eta)p_{\alpha}}\,, \tag{59}\] with \(g(p_{\alpha})\) being a generic distribution. More specifically, choosing a Gaussian weight with deviation \(\sigma\) and zero mean value \[g(p_{\alpha})=\frac{1}{(\sqrt{2\pi}\sigma)^{1/2}}e^{-\frac{p_{\alpha}^{2}}{4\sigma^{2}}}\,, \tag{60}\] the matter wave function modified by quantum gravity corrections takes the form \[\chi^{(1)}_{\mathbf{k},Gauss}(\alpha,\eta,v_{\mathbf{k}})=\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\bigg{\{}(8\pi\sigma^{2})^{1/4}\exp\biggl{[}-\frac{\sigma^{2}}{\hbar^{2}}(\alpha+\log(-\eta))^{2}\biggr{]}\bigg{\}}\,. \tag{61}\] We observe that the effect of the quantum gravity corrections has clearly factorized, an aspect which will deeply impact the result of the power spectrum analysis. Indeed, the obtained wave function shall be considered as the "new" vacuum state in order to derive the primordial power spectrum at the order \(M^{-1}\) of the prescribed theory, i.e., modified by quantum gravity effects. 
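The Gaussian integral taking (59) into (61) can be checked numerically; a minimal sketch, assuming \(\hbar=1\), \(\theta_{p_{\alpha}}(\alpha)=e^{ip_{\alpha}\alpha}\), and the weight exponent \(p_{\alpha}^{2}/4\sigma^{2}\):

```python
import numpy as np

# Fourier transform of the Gaussian weight g(p):
#   int dp g(p) exp[i p u],   u = alpha + log(-eta),
# with g(p) = (sqrt(2 pi) sigma)^(-1/2) exp[-p^2/(4 sigma^2)], should equal
# the factor (8 pi sigma^2)^(1/4) exp[-sigma^2 u^2] appearing in Eq. (61).
sigma, u = 0.8, 0.6        # illustrative values; u stands for alpha + log(-eta)

p = np.linspace(-40.0, 40.0, 400001)
dp = p[1] - p[0]
g = (np.sqrt(2 * np.pi) * sigma) ** -0.5 * np.exp(-p**2 / (4 * sigma**2))
re = np.sum(g * np.cos(p * u)) * dp    # real part of the integral
im = np.sum(g * np.sin(p * u)) * dp    # imaginary part (vanishes by parity)

closed_form = (8 * np.pi * sigma**2) ** 0.25 * np.exp(-sigma**2 * u**2)
print(re, im, closed_form)
```

The real part reproduces the closed-form factor of (61), while the imaginary part vanishes, confirming that the quantum gravity correction enters only through the real Gaussian profile in \(\alpha+\log(-\eta)\).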
However, since the modification affecting the wave function (61) takes the form of a time factor only, such a spectrum will coincide with the previous order result, which is computed with the wave function (52) in the absence of quantum gravitational corrections. At this stage, the wave function \(\chi^{(1)}\) still retains a dependence on the quantum variable \(\alpha\) in the proposed paradigm, a property which has to be carefully addressed when studying phenomenological implications. Following the considerations in [35], we consider an "averaged" wave function of the form \[\bar{\chi}(\eta,v_{\mathbf{k}})=\int d\alpha|A|^{2}(\alpha)\,\chi(\alpha,\eta,v_{\mathbf{k}})\,, \tag{62}\] where \(A=e^{iS_{1}/\hbar}\) is the (quantum) amplitude coming from the lowest-order quantum gravitational component. This choice corresponds to averaging over the quasi-classical gravitational probability density, which in the selected minisuperspace is associated with the logarithmic scale factor \(\alpha\) only. It is worth stressing that weighting the matter wave function on the WKB amplitude of the gravitational field is, at the present level, a purely phenomenological procedure. In fact, it is clear that such a wave function can in principle no longer satisfy the Schrödinger equation (53). Nonetheless, the analysis in [69] supports this procedure: there it has been shown that such an averaged wave function is actually a solution of the Schrödinger equation when a suitable gauge invariance of the B-O procedure is taken into account. Upon substitution of (61) and (45) into Equation (62), the averaged wave function for each mode becomes \[\bar{\chi}^{(1)}_{\mathbf{k},Gauss}(\eta,v_{\mathbf{k}})=\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\Bigg{[}\hbar\biggl{(}\frac{8\pi^{3}}{\sigma^{2}}\biggr{)}^{\frac{1}{4}}(-\eta)^{3}\exp\left(\frac{9\hbar^{2}}{4\sigma^{2}}\right)\Bigg{]}\,. 
\tag{63}\] Requiring normalization over the possible \(v_{\mathbf{k}}\) values, i.e., dividing by the wave function integrated over such variables, the term in square brackets (which depends only on time and on the specific form of the weight (60)) clearly factors out of the integration. Therefore, we have for the averaged and normalized wave function \[\bar{\chi}^{(1)}_{\mathbf{k},Gauss}\xrightarrow{\text{integration over $\alpha$}}\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})\,, \tag{64}\] namely, we recover the previous order state. Therefore, we now proceed to the computation of the inflationary power spectrum in the described setting, by computing the two-point correlation function of the M-S variable on the Bunch-Davies state (52). For convenience, we rewrite \({}^{BD}\chi^{(0)}_{\mathbf{k}}\) in the following way: \[{}^{BD}\chi^{(0)}_{\mathbf{k}}(\eta,v_{\mathbf{k}})=N_{k}(\eta)\exp\Bigl{(}i\delta_{0,k}(\eta)-\Omega_{k}(\eta)v_{\mathbf{k}}^{2}\Bigr{)}\,, \tag{65}\] where \[\Omega_{k}(\eta):=\frac{1}{2\hbar}\left(\frac{i}{\eta^{3}\Big{(}\frac{1}{\eta^{2}}+k^{2}\Big{)}}+\frac{k^{3}}{\frac{1}{\eta^{2}}+k^{2}}\right), \tag{66}\] \[N_{k}(\eta):=\left(\frac{2}{\pi}\Re(\Omega_{k})\right)^{1/4}=\left(\frac{k^{3}}{\pi\hbar\Big{(}\frac{1}{\eta^{2}}+k^{2}\Big{)}}\right)^{\frac{1}{4}}, \tag{67}\] and \(\Re(\cdot)\) isolates the real part. In the following, we also isolate the real and imaginary parts of the (complex) variable \(v_{\mathbf{k}}\) as \[v_{\mathbf{k}}=\frac{1}{\sqrt{2}}(v_{\mathbf{k}}^{R}+iv_{\mathbf{k}}^{I}) \tag{68}\] for the computation of the correlation function. 
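Before moving to the correlation function, the \(\alpha\)-average (62), with \(|A|^{2}=e^{-3(\alpha-\alpha_{0})}\) following from (45), can be checked numerically against the prefactor of (63); a sketch assuming \(\hbar=1\) and \(\alpha_{0}=0\):

```python
import numpy as np

# Numerical check of the averaging (62) -> (63): weighting the Gaussian
# factor of (61) with |A|^2 = exp(-3 alpha) and integrating over alpha
# should reproduce the prefactor of Eq. (63) (hbar = 1, alpha_0 = 0 assumed).
sigma, eta = 1.2, -0.5
L = np.log(-eta)

a = np.linspace(-30.0, 30.0, 400001)   # integration grid for alpha
da = a[1] - a[0]
integrand = np.exp(-3 * a) * (8 * np.pi * sigma**2) ** 0.25 \
    * np.exp(-sigma**2 * (a + L) ** 2)
avg = integrand.sum() * da

closed_form = (8 * np.pi**3 / sigma**2) ** 0.25 * (-eta) ** 3 \
    * np.exp(9 / (4 * sigma**2))
print(avg, closed_form)   # the two values agree
```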
Then, the two-point correlation function of the complex M-S variable computed on the Bunch-Davies vacuum state corresponds to (see [81]; we here drop the prefix in \({}^{BD}\chi_{\mathbf{k}}^{(0)}\) for readability): \[\Xi(\mathbf{r})=\langle 0|v(\eta,\mathbf{x})v(\eta,\mathbf{x}+\mathbf{r})|0\rangle=\int\prod_{\mathbf{k}}dv_{\mathbf{k}}^{R}dv_{\mathbf{k}}^{I}\left(\prod_{\mathbf{k}^{\prime}}\chi_{\mathbf{k}^{\prime}}^{(0)*}(\eta,v_{\mathbf{k}^{\prime}})\right)v(\eta,\mathbf{x})v(\eta,\mathbf{x}+\mathbf{r})\left(\prod_{\mathbf{k}^{\prime\prime}}\chi_{\mathbf{k}^{\prime\prime}}^{(0)}(\eta,v_{\mathbf{k}^{\prime\prime}})\right)\] \[=\left(\prod_{\mathbf{l}}|N_{l}(\eta)|^{4}\right)\int\prod_{\mathbf{k}}dv_{\mathbf{k}}^{R}dv_{\mathbf{k}}^{I}\left(\prod_{\mathbf{k}^{\prime}}e^{-2\Re(\Omega_{k^{\prime}})\big{[}(v_{\mathbf{k}^{\prime}}^{R})^{2}+(v_{\mathbf{k}^{\prime}}^{I})^{2}\big{]}}\right)v(\eta,\mathbf{x})v(\eta,\mathbf{x}+\mathbf{r}) \tag{69}\] \[=\left(\prod_{\mathbf{l}}\frac{2\Re(\Omega_{l})}{\pi}\right)\int\frac{d\mathbf{p}}{(2\pi)^{3/2}}\int\frac{d\mathbf{q}}{(2\pi)^{3/2}}e^{i\mathbf{p}\cdot\mathbf{x}}e^{i\mathbf{q}\cdot(\mathbf{x}+\mathbf{r})}\int\prod_{\mathbf{k}}dv_{\mathbf{k}}^{R}dv_{\mathbf{k}}^{I}\Big{[}v_{\mathbf{p}}v_{\mathbf{q}}e^{-2\sum_{\mathbf{k}^{\prime}}\Re(\Omega_{k^{\prime}})\big{(}(v_{\mathbf{k}^{\prime}}^{R})^{2}+(v_{\mathbf{k}^{\prime}}^{I})^{2}\big{)}}\Big{]}\] where we are considering each Fourier mode of the vacuum state, substituting the expression (65) in the second equality, and expanding both variables in Fourier modes in the third. We observe that the last integral, due to its form, vanishes for \(\mathbf{p}\neq\pm\mathbf{q}\); the same happens for \(\mathbf{p}=\mathbf{q}\), since we obtain terms of the form \(\left[(v_{\mathbf{p}}^{R})^{2}-(v_{\mathbf{p}}^{I})^{2}\right]/2\), and the Gaussian averages of the real and imaginary parts contribute equal amounts that cancel. 
Therefore, the surviving contribution is in the case \(\mathbf{p}=-\mathbf{q}\), that is, \[\Xi(\mathbf{r})=\left(\prod_{\mathbf{l}}\frac{2\Re(\Omega_{l})}{\pi}\right)\int\frac{d\mathbf{p}}{(2\pi)^{3}}e^{-i\mathbf{p}\cdot\mathbf{r}}\;2\int\prod_{\mathbf{k}}dv_{\mathbf{k}}^{R}dv_{\mathbf{k}}^{I}\left[(v_{\mathbf{p}}^{R})^{2}\;e^{-2\sum_{\mathbf{k}^{\prime}}\Re(\Omega_{k^{\prime}})\big{(}(v_{\mathbf{k}^{\prime}}^{R})^{2}+(v_{\mathbf{k}^{\prime}}^{I})^{2}\big{)}}\right]=\int\frac{d\mathbf{p}}{(2\pi)^{3}}\,e^{-i\mathbf{p}\cdot\mathbf{r}}\frac{1}{2\Re(\Omega_{p})}\,, \tag{70}\] where we recall that \(\Omega_{p}=\Omega_{p}(\eta)\) as from the definition (66). This corresponds, from (35) and the definition (36), to a power spectrum of the form \[\mathcal{P}_{v}(k)=\frac{k^{3}}{4\pi^{2}}\frac{1}{\Re(\Omega_{k})}\,. \tag{71}\] Therefore, the invariant power spectrum associated with the curvature perturbation \(\zeta\) (33) is given by \[\mathcal{P}_{\zeta}(k)=\frac{4\pi G}{\epsilon\,a_{0}^{2}\,e^{2\alpha}}\,\mathcal{P}_{v}(k)=\frac{G}{\pi\epsilon}\frac{k^{3}}{a_{0}^{2}\,e^{2\alpha}}\,\frac{1}{\Re(\Omega_{k})}\,. \tag{72}\] We now evaluate this quantity in the super-Hubble limit, which in conformal time corresponds to modes for which \(k\eta\to 0^{-}\). In this case, we note from the definition (66) that the function \(\Re(\Omega_{k})\) becomes \[\Re(\Omega_{k}(\eta))\approx k^{3}\eta^{2} \tag{73}\] (we are using \(\hbar=1\) for easier comparison with the literature). When implementing this limit and substituting the classical solution \(\alpha(\eta)\), we arrive at the following result for the primordial power spectrum in the de Sitter phase: \[\mathcal{P}_{\zeta}(k)=\left.\frac{G\,\mathrm{H}_{\Lambda}^{2}}{\pi\epsilon}\right|_{k=a\mathrm{H}_{\Lambda}}, \tag{74}\] where \(\mathrm{H}_{\Lambda}=\sqrt{8\pi G\Lambda/3}\) and the slow-roll parameter \(\epsilon\) is evaluated at the horizon crossing. 
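The Gaussian moment underlying (70) can be reproduced symbolically: for a state with \(|\chi|^{2}\propto e^{-2\Re(\Omega_{k})[(v^{R})^{2}+(v^{I})^{2}]}\), the surviving two-point term is \(2\langle(v^{R})^{2}\rangle=1/(2\Re\Omega_{k})\), i.e. the \(|f_{k}|^{2}\) entering the spectrum (71). A sketch:

```python
import sympy as sp

# Gaussian moment behind Eq. (70): with the weight exp[-2 Re(Omega_k) x^2]
# per real component, the surviving correlation term is
#   2 <(v_R)^2> = 1 / (2 Re Omega_k).
# Below, W stands for Re(Omega_k).
W = sp.Symbol('W', positive=True)
x = sp.Symbol('x', real=True)

norm = sp.integrate(sp.exp(-2 * W * x**2), (x, -sp.oo, sp.oo))
second = sp.integrate(x**2 * sp.exp(-2 * W * x**2), (x, -sp.oo, sp.oo))
f2 = 2 * second / norm            # 2 <(v_R)^2>

print(sp.simplify(f2))            # -> 1/(2*W)
```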
Recent satellite missions, such as WMAP [89] and PLANCK [90; 91], provided an accurate detection of the fluctuation spectrum in the cosmic microwave background temperature. These observations, and in particular the Gaussian profile of the fluctuations, properly fulfill the prediction of the inflation paradigm, and in this respect, a significant constraint for the spectral index \(n_{s}\) \[n_{s}-1:=\frac{d\ln\mathcal{P}_{\zeta}}{d\ln k} \tag{75}\] is now available [90]. Nonetheless, some recent data analyses suggest the possibility of some anomaly in the Gaussianity of the fluctuations [92] and call attention to the possibility of interpreting it via a multifield inflationary scenario [93]. Clearly, the quantum gravity corrections we are searching for are extremely small with respect to the accuracy of the current fluctuation measurements, since they are of the order of the square ratio of the inflationary energy scale to the corresponding Planckian one, namely, about \(10^{-8}\). Although the possibility of detecting such quantum gravity modifications of the spectrum in current or near-future experiments appears unlikely, their prediction remains a fundamental conceptual challenge. We recall that we have here recovered the standard QFT spectrum for the primordial fluctuations via a functional approach, implementing the Gaussian fluid as a time parameter (28). It is evident that the quantum gravity corrections in (54) do not modify, but preserve the inflationary power spectrum up to this expansion order; an analogous result, derived in a different context, is present in [94]. This result is clearly to be attributed to the form of the modified Schrodinger equation (53), which presents no coupling between the quantum gravitational degree of freedom \(\alpha\) and the perturbation variables \(v_{\mathbf{k}}\). 
It then follows that the correction to the "fast" wave function \(\chi\) (59) factorizes and, due to its time-dependent form, does not influence the evolution of the perturbation modes in the considered setting.

## 4 Towards the General Case

The result presented in Section 3 suggests that the quantum gravity-induced corrections on the matter evolution, obtained in the WKB expansion and via the time parameter introduced in (28), give as a net effect a time-dependent factor. Such a term can be interpreted a posteriori as a phase rescaling acting on the matter wave function, as we show here in the general case. Let us start from the modified dynamics (30) analyzed for a generic minisuperspace model (the supermomentum is identically vanishing); for this purpose we work with the (homogeneous) generalized variables \(h_{a}\) (i.e., the degrees of freedom associated with the 3-geometries) and the corresponding minisupermetric \(G_{ab}\), instead of the spatial metric \(h_{ij}\)[20]. We adopt for convenience the synchronous time \(N=1\), such that the definition (28) coincides with the derivative with respect to \(T\), up to a fiducial volume set to unity, but the result discussed here holds for a generic lapse function \(N\). Explicitly, the dynamics up to the order \(M^{-1}\) are described by \[i\hbar\frac{\partial\chi}{\partial T}=\hat{\mathcal{H}}^{m}\chi-i\hbar\,G_{ab} \frac{\partial S_{0}}{\partial h_{a}}\frac{\partial}{\partial h_{b}}\chi\,, \tag{76}\] and we write the matter wave functional as \[\chi(h_{a},T,\phi)=\xi_{g}(h_{a})\ \Theta_{m}(T,\phi)\,. \tag{77}\] We remark that this is a stronger requirement and is inherently different from the Born-Oppenheimer separation (18), since \(\Theta_{m}\) is now assumed to be independent of the generalized coordinates \(h_{a}\). Such a separation is backed by the observation that, since there is no quantum matter back-reaction in the present model, we can consider the two sets of degrees of freedom as independent. 
By substituting (77) into (76), and dividing by the non-trivial functional \(\xi_{g}\), we obtain \[i\hbar\frac{\partial\Theta_{m}}{\partial T}=\hat{\mathcal{H}}^{m}\Theta_{m}- \frac{i\hbar}{\xi_{g}}G_{ab}\frac{\partial S_{0}}{\partial h_{a}}\frac{ \partial\xi_{g}}{\partial h_{b}}\,\Theta_{m}\,. \tag{78}\] Here, \(S_{0}\) is the classical solution (see Equation (25)); thus, the corresponding factor is a function of time only: \(\partial_{h_{a}}S_{0}=f(T)\), where the form of \(f\) depends on the specific cosmological model. Additionally, the modified dynamics cannot induce a dependence of \(\Theta\) on the \(h_{a}\), since that was separated in (77). Then, we can express the factor containing \(\xi_{g}\) as a constant, whose value can depend on the quantum number associated with \(h_{a}\); i.e., its value is fixed during the dynamics once a specific foliation is selected: \[\frac{1}{\xi_{g}}\frac{\partial\xi_{g}}{\partial h_{a}}=ik_{(h_{a})} \tag{79}\] where for convenience, we have swapped the pair of indices \(a\) and \(b\) in (76), making use of the symmetry of the minisupermetric \(G_{ab}\). The notation \(k_{(h_{a})}\) is to be understood as a function of the gravitational variable \(h_{a}\). The solution to (79) has a plane wave structure \[\xi_{g}(h_{a})=e^{ik_{(h_{a})}\cdot h_{a}}\,. \tag{80}\] The functions (80) constitute a complete basis that can be adopted to construct wave packets, which will describe the quantum gravitational contribution to \(\chi\). 
In what follows, we limit our attention to the plane wave (80) associated with a specific value \(k_{(h_{a})}\); in this case, the modified dynamics take the form \[i\hbar\frac{\partial\Theta_{m}}{\partial T}=\hat{\mathcal{H}}^{m}\Theta_{m}+ \hbar f(T)\,k_{(h_{a})}\Theta_{m}\,. \tag{81}\] We now rewrite the function \(\Theta_{m}\) in a form that is useful for the computation of the corrective effects: \[\Theta_{m}(T,\phi)=e^{i\Lambda(T)}\varrho(T,\phi)\,, \tag{82}\] where \(\varrho\) has the same degrees of freedom as \(\Theta_{m}\), and a (complex) time-dependent phase \(\Lambda\) has been separated. In the general case, such a phase can acquire different forms depending on the wave number \(k_{(h_{a})}\) present in (79) and (80) (or, as we will discuss later, depending on the considered wave packet). It is exactly the phase factor \(\Lambda(T)\) that will account for the quantum gravity corrections, since we will see that \(\varrho\) exactly solves the unperturbed matter dynamics at such order. Indeed, by substituting (82) into (81) and requiring that \[\frac{\partial\Lambda}{\partial T}=f(T)\,k_{(h_{a})}\,, \tag{83}\] the additional contribution on the right-hand side of (81) cancels out via the phase rescaling, and the function \(\varrho\) satisfies the unperturbed Schrodinger evolution: \[i\hbar\frac{\partial\varrho(T,\phi)}{\partial T}=\hat{\mathcal{H}}_{m}\,\varrho( T,\phi)\,. \tag{84}\] Here, the matter Hamiltonian \(\mathcal{H}_{m}\) is left as a generic expression; for the purpose of the cosmological implementation above, it took the form of a time-dependent harmonic oscillator in Section 3.3. It is then possible to discuss any effects of such quantum gravity contributions to the scalar field's power spectrum. As previously stated, the net effect is encased in the time-dependent phase \(\Lambda(T)\), the solution of (83), which is actually real-valued, since \(f(T)\) follows from the classical solution \(S_{0}\). 
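The cancellation mechanism can be made explicit by differentiating the ansatz (82):

\[i\hbar\,\frac{\partial\Theta_{m}}{\partial T}=e^{i\Lambda}\Bigl(i\hbar\,\frac{\partial\varrho}{\partial T}-\hbar\,\frac{\partial\Lambda}{\partial T}\,\varrho\Bigr)\,,\]

so the phase generates a term proportional to \(\hbar\,\partial_{T}\Lambda\,\varrho\) which, once \(\partial_{T}\Lambda\) is fixed according to (83) (with the sign convention adopted for \(\Lambda\)), compensates the correction \(\hbar f(T)\,k_{(h_{a})}\Theta_{m}\) in (81), leaving (84) for \(\varrho\).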
The complete matter wave function at \(\mathcal{O}\big{(}M^{-1}\big{)}\) thus reads \[\chi(h_{a},T,\phi)=e^{ik_{(h_{a})}\cdot h_{a}}\,e^{ik_{(h_{a})}\int dT^{\prime }f(T^{\prime})}\,\varrho(T,\phi) \tag{85}\] where the integral in the second term \(\int dT^{\prime}f(T^{\prime})\) is taken between the values \(T_{0}\) and \(T\) for which the WKB approximation holds. We observe that the solution (85) has the same shape as the result discussed in Section 3.3. Due to the peculiar morphology of the quantum gravity factors, arising from (79) and (83) (which originally stem from the requirement (82)), the effect on the matter spectrum is canceled once the matter wave function is properly normalized. This is the reason for which, as shown in Section 3.3, the quantum gravity corrections preserve the primordial inflationary spectrum. Clearly, the fact that, at the order \(M^{-1}\), no corrections emerge for the inflationary spectrum from quantum gravity effects does not mean that a possible deformation of the scale invariance property cannot come out at the next orders of approximation. However, one peculiar point deserves specific attention: the absence of a spectral modification is a consequence of the phase form that the quantum gravity corrections take in the matter wave function, and in turn, this feature is induced by the possibility of factorizing such a wave function into a gravitational and a matter component. The physical meaning of this assumption is to be found in the absence of a quantum matter backreaction on the classical gravitational background.

### On the Role of the Matter Backreaction

We observe that the \(S_{0}\) solution for the gravitational field, and in particular, the classical momentum term appearing in the quantum gravity corrections in Equation (76), do not depend, by the considered WKB perturbation scheme, on the quantum matter degrees of freedom. 
It is exactly this point which underlies the possibility of factorizing the matter wave function into two independent components (77). On the contrary, if the H-J equation, Equation (25), contained the expectation value of the quantum matter Hamiltonian, then the classical momentum would also be, on average, affected by the quantum degrees of freedom. Then, the choice of a factorized form for the matter wave function, even if still possible, would no longer appear as a natural solution to the perturbed dynamics. To elucidate this point of view, we here discuss in more detail the role played by the matter backreaction. In fact, when implementing a standard B-O scheme (see the original formulation [95]) in the WKB approximation order by order in \(1/M\)[25], it is immediately recognizable that the quantum matter expectation value enters both the right-hand side of the H-J equation (25), and the Schrodinger equation (29) (see also [96] and for a review [97]). As stated in [98], this contribution can be easily removed from the Schrodinger dynamics by a phase rescaling, where the phase contains the matter backreaction term, though as a function of the gravitational degrees of freedom only. This operation is allowed by a natural gauge invariance of the total B-O wave function. However, as shown in [33], this redefinition of the matter wave function induces an opposite change of phase in the gravitational one, with the net effect that the backreaction term is also removed from the H-J equation. These considerations suggest that such a contribution could always be neglected in view of the gauge invariance analyzed above, and hence, that the emergence of a quantum gravity correction originating from the matter backreaction cannot be inferred. Nonetheless, we question here the correctness of performing such a phase redefinition via the matter expectation value. 
Actually, in the B-O procedure, the gauge invariance is used to eliminate the Berry phase [99; 100; 101], but not to cancel the (fast) electronic eigenvalue contribution in the (slow) nuclear dynamics [102]. From this point of view, it is more natural to maintain the expectation value contribution both in the H-J equation and in the Schrodinger one. This would lead to a non-trivial coupled integro-partial differential system which could be treated with a self-consistent method (for a related treatment of the backreaction in a different context, see [79]). The discussion above was meant to refer to orders \(M^{1}\) and \(M^{0}\) of the WKB approximation, but it naturally extends to the \(M^{-1}\) order. Thus, if we include the matter Hamiltonian term in Equation (76), we arrive at a coupled system that can be reduced to the form discussed in Section 2.2 only at the lowest order of approximation in a Hartree self-consistent approach. The complete problem naturally introduces a dependence of the H-J function \(S_{0}\) on the matter one (via an integral over the matter degrees of freedom); this point clarifies the technical content of the discussion above on the role of the matter backreaction in the separability of Equation (76) at some order of approximation. Therefore, we are led to conclude that the proposed WKB expansion in the quantity \(1/M\) must carefully take into account the evaluation of the matter (average) backreaction on the gravitational quasi-classical background.

## 5 Concluding Remarks

Here, we reviewed the analysis presented in [35], aiming to calculate the quantum gravity corrections to QFT, in the theoretical framework of fixing a Gaussian reference frame, as discussed in [59]. 
The motivations for addressing such a revised scheme came from the search for a formulation which is not affected by the non-unitarity issues faced in [21; 29] when reconstructing a time variable for the matter wave function from the classical limit of the background gravitational field. The physical clock in [35] is provided by the materialization of the Gaussian frame as a dust fluid. It is important to recall here that such an emerging dust-like contribution no longer has, in the WKB expansion, the shortcoming of a non-positive-definite energy density. The present study implemented the procedure mentioned above to calculate the possible quantum gravity corrections to the primordial inflationary spectrum. We considered the quasi-classical background corresponding to a Robertson-Walker geometry in the presence of a cosmological constant term, mimicking the vacuum energy of an inflationary phase transition, as viewed in the resulting de Sitter evolution. The matter field we quantized in the proposed scheme was clearly the inflaton scalar degree of freedom, for which we applied a Fourier decomposition and introduced the gauge-invariant M-S formulation [72; 73; 74]. The field would in principle be associated with a wave functional describing its dynamics, but the Fourier decomposition of its Hamiltonian allowed us to deal with a minisuperspace formulation for each independent wavenumber modulus (we recall that the inflaton can be regarded, to a very good approximation, as a free massless scalar field during the slow-rolling phase of the inflation process). Clearly, we solved the wave equation amended with the quantum gravity corrections, and we ended up showing that the solution of the time-dependent harmonic oscillator associated with each \(k\)-mode is rescaled by a phase factor, due to the additional contribution to the Schrodinger equation. 
As an immediate consequence, we could conclude that, at the considered approximation in the WKB scheme, no modification of the inflationary spectrum of the Universe can be determined. It would be worth analyzing the present formulation in the case in which the background gravitational field is described by a modified theory of gravity. For a discussion of how modified gravity affects the inflationary spectrum, see [103; 104; 105; 106; 107; 108]; however, how these results would appear in the present framework, i.e., including the quantum gravity corrections of the extended formulation, calls for further investigation (for approaches which quantize the modified metric \(f(R)\) gravity, see [109; 110; 111]). We then discussed this result in the scheme of a generic minisuperspace model, in order to outline the real physical explanation for such a surprising preservation of the scale-invariant spectrum in the theory proposed in [35]. From a mathematical point of view, we recognized that the emergence of a phase term in the matter wave function, depending on the scale factor, is a consequence of the possibility of factorizing such a wave function ab initio. Clearly, by considering higher-order contributions to the inflaton Schrodinger equation in the expansion with respect to \(1/M\), a modification of the scale invariant spectrum could arise. However, in the considered theoretical framework, the factorization of the matter wave function came from the absence of an average matter backreaction in the classical Robertson-Walker dynamics, i.e., in the H-J equation, as discussed in detail in Section 4.1. A study of the modifications to the power spectrum when the backreaction is taken into account is beyond the analysis here presented and could be investigated in future works. The analysis above suggests that the scheme in [35] could require further restatement in order to better separate the classical and quantum degrees of freedom, which is beyond the scope of this paper. 
Particular attention has to be focused on the procedure by which the gravitational degrees of freedom are treated--i.e., their classical and quantum components would have to be described via independent variables; see, for example, the proposal [69]. Only after such a reformulation of the gravitational background could the question concerning the matter backreaction be properly addressed in the B-O WKB picture proposed here. This perspective calls for future developments in calculating the quantum gravity corrections to the inflationary spectrum. All the authors provided equivalent contributions to the scientific content and editing of the manuscript. All authors have read and agreed to the published version of the manuscript. This research received no external funding. G. Maniccia thanks the TAsP INFN initiative for support. The authors declare no conflict of interest.

## Appendix A The Lewis-Riesenfeld Invariant Method

The so-called Lewis-Riesenfeld invariant method [85; 86; 87] represents an algorithm for computing the solution for a time-dependent quantum system, in the cases in which a specific invariant can be identified. Generally speaking, given a system with a generic time-dependent Hamiltonian \(\mathcal{H}(t)\), the determination of a Hermitian invariant \(I\) (also called the Lewis-Riesenfeld invariant) associated with \(\hat{\mathcal{H}}(t)\) gives an eigenstate basis that can be used to obtain the solution's wave function. Here, we show the application of this method to the time-dependent quantum harmonic oscillator, for which the method was first developed. 
Starting from the time-dependent harmonic Hamiltonian (47), one can check that the invariant corresponds to the following expression: \[I=\frac{1}{2}\left[\frac{v_{\mathbf{k}}^{2}}{\rho_{k}^{2}}+(\rho_{k} \pi_{v_{\mathbf{k}}}-\dot{\rho}_{k}v_{\mathbf{k}})^{2}\right] \tag{12}\] where \(\rho_{k}\) satisfies the so-called Ermakov equation: \[\ddot{\rho}_{k}+\omega_{k}^{2}\rho_{k}=\frac{1}{\rho_{k}^{3}} \tag{10}\] and we recall that the time-dependence is inside \(\omega_{k}(\eta)\), as is the case in Section 3.3 (see the definition (38)). The solution for \(\rho_{k}\) is explicitly \[\begin{split}\rho_{k}=\gamma_{1}\bigg{[}& A^{2} \frac{(\eta k\sin(\eta k)+\cos(\eta k))^{2}}{\eta^{2}k^{3}}+B^{2}\frac{(\eta k \cos(\eta k)-\sin(\eta k))^{2}}{\eta^{2}k^{3}}\\ &+\gamma_{2}\sqrt{A^{2}B^{2}-1}\ \frac{(\eta k\sin(\eta k)+\cos(\eta k ))(\eta k\cos(\eta k)-\sin(\eta k))}{\eta^{2}k^{3}}\bigg{]}^{\frac{1}{2}}\end{split} \tag{11}\] where \(A\), \(B\), and \(\gamma_{1},\gamma_{2}=\pm 1\) are constants to be appropriately chosen in the cosmological scenario. The expression (11) allows one to find the eigenstates of the invariant, which will be described, for each mode, by a quantum index \(n\): \[\hat{I}\,\phi_{n,\mathbf{k}}(\eta,v_{\mathbf{k}})=\lambda_{n}\,\phi_{n, \mathbf{k}}(\eta,v_{\mathbf{k}})\,. \tag{12}\] The eigenstates can be determined by applying the following unitary transformation: \[\exp\biggl{(}-\frac{i}{2\hbar}\frac{\dot{\rho}_{k}}{\rho_{k}}v_{\mathbf{k}}^{2} \biggr{)}\phi_{n,\mathbf{k}}=\frac{1}{\rho_{k}^{1/2}}\tilde{\phi}_{n,\mathbf{ k}}\,, \tag{13}\] that transforms the eigenvalue problem (12) into \[\biggl{(}-\frac{\hbar^{2}}{2}\partial_{\tilde{v}_{\mathbf{k}}}^{2}+\frac{\tilde{v}_{ \mathbf{k}}^{2}}{2}\biggr{)}\tilde{\phi}_{n,\mathbf{k}}=\lambda_{n}\tilde{\phi}_{ n,\mathbf{k}} \tag{14}\] where \(\tilde{v}_{\mathbf{k}}=v_{\mathbf{k}}/\rho_{k}\). 
Such an equation is easily solved: the eigenvalues are of the form \[\lambda_{n}=\hbar\biggl{(}n+\frac{1}{2}\biggr{)}\,, \tag{15}\] coinciding with the eigenvalues of the invariant \(I\) (see (12)), without an explicit dependence on \(\mathbf{k}\), and the corresponding eigenstates \(\tilde{\phi}_{n,\mathbf{k}}\) are rescaled back from (13) to give the invariant eigenstates \[\phi_{n,\mathbf{k}}(\eta,v_{\mathbf{k}})=\biggl{[}\frac{1}{(\pi\hbar)^{1/2}2^ {n}n!\,\rho_{k}(\eta)}\biggr{]}^{1/2}\exp\Biggl{[}\frac{i}{2\hbar}\left(\frac{ \dot{\rho}_{k}(\eta)}{\rho_{k}(\eta)}+\frac{i}{\rho_{k}^{2}(\eta)}\right)v_{\mathbf{ k}}^{2}\Biggr{]}H_{n}\biggl{(}\frac{1}{\hbar^{1/2}}\frac{v_{\mathbf{k}}}{\rho_{k}( \eta)}\biggr{)}\,. \tag{16}\] Here, \(H_{n}\) are the Hermite polynomials. The state basis (16) allows one to write the solution for the starting time-dependent harmonic oscillator (47) as \[\chi_{\mathbf{k}}^{(0)}(\eta,v_{\mathbf{k}})=\sum_{n}c_{n,k }e^{i\beta_{n,k}(\eta)}\phi_{n,\mathbf{k}}(\eta,v_{\mathbf{k}}), \tag{17}\] \[\beta_{n,k}(\eta)=-\biggl{(}n+\frac{1}{2}\biggr{)}\int d\eta\ \frac{1}{\rho_{k}^{2}(\eta)}\,, \tag{18}\] where \(c_{n,k}\) are some suitable coefficients fixed by the system's boundary conditions. Equation (17) is thus the wave function describing the evolution of the time-dependent harmonic oscillator system.
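The conservation of the invariant (12) can be checked numerically: for any solution of the mode equation \(\ddot{v}+\omega^{2}(t)v=0\), with \(\rho\) solving the Ermakov equation (10), \(I\) stays constant. The sketch below is a minimal check (pure Python, RK4 integration; the frequency \(\omega^{2}(t)\), the initial data, and the step size are illustrative choices, with \(\hbar=1\) and \(\pi_{v}=\dot{v}\)):

```python
# Numerical check that the Lewis-Riesenfeld invariant
#   I = [ (v/rho)^2 + (rho*v' - rho'*v)^2 ] / 2
# is conserved when rho solves the Ermakov equation
#   rho'' + w(t)^2 rho = 1/rho^3
# and v solves the oscillator equation v'' + w(t)^2 v = 0.
import math

def w2(t):                      # illustrative time-dependent frequency squared
    return 1.0 + 0.5 * math.sin(t)

def deriv(t, s):
    v, vd, r, rd = s
    return (vd, -w2(t) * v, rd, -w2(t) * r + 1.0 / r**3)

def rk4_step(t, s, h):
    k1 = deriv(t, s)
    k2 = deriv(t + h/2, tuple(x + h/2 * k for x, k in zip(s, k1)))
    k3 = deriv(t + h/2, tuple(x + h/2 * k for x, k in zip(s, k2)))
    k4 = deriv(t + h,   tuple(x + h   * k for x, k in zip(s, k3)))
    return tuple(x + h/6 * (a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

def invariant(s):
    v, vd, r, rd = s
    return 0.5 * ((v / r)**2 + (r * vd - rd * v)**2)

def evolve(steps=5000, h=1e-3):
    t, s = 0.0, (1.0, 0.0, 1.0, 0.0)   # v, v', rho, rho'
    values = [invariant(s)]
    for _ in range(steps):
        s = rk4_step(t, s, h)
        t += h
        values.append(invariant(s))
    return values

if __name__ == "__main__":
    vals = evolve()
    print(f"I(0) = {vals[0]:.6f}, max drift = {max(vals) - min(vals):.2e}")
```

With these initial data \(I(0)=1/2\), and the drift observed along the evolution is only the integrator's truncation error.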
2309.14139
Exploring the Impact of Serverless Computing on Peer To Peer Training Machine Learning
The increasing demand for computational power in big data and machine learning has driven the development of distributed training methodologies. Among these, peer-to-peer (P2P) networks provide advantages such as enhanced scalability and fault tolerance. However, they also encounter challenges related to resource consumption, costs, and communication overhead as the number of participating peers grows. In this paper, we introduce a novel architecture that combines serverless computing with P2P networks for distributed training and present a method for efficient parallel gradient computation under resource constraints. Our findings show a significant enhancement in gradient computation time, with up to a 97.34\% improvement compared to conventional P2P distributed training methods. As for costs, our examination confirmed that the serverless architecture could incur higher expenses, reaching up to 5.4 times more than instance-based architectures. It is essential to consider that these higher costs are associated with marked improvements in computation time, particularly under resource-constrained scenarios. Despite the cost-time trade-off, the serverless approach still holds promise due to its pay-as-you-go model. Utilizing dynamic resource allocation, it enables faster training times and optimized resource utilization, making it a promising candidate for a wide range of machine learning applications.
Amine Barrak, Ranim Trabelsi, Fehmi Jaafar, Fabio Petrillo
2023-09-25T13:51:07Z
http://arxiv.org/abs/2309.14139v1
# Exploring the Impact of Serverless Computing on Peer To Peer Training Machine Learning ###### Abstract The increasing demand for computational power in big data and machine learning has driven the development of distributed training methodologies. Among these, peer-to-peer (P2P) networks provide advantages such as enhanced scalability and fault tolerance. However, they also encounter challenges related to resource consumption, costs, and communication overhead as the number of participating peers grows. In this paper, we introduce a novel architecture that combines serverless computing with P2P networks for distributed training and present a method for efficient parallel gradient computation under resource constraints. Our findings show a significant enhancement in gradient computation time, with up to a 97.34% improvement compared to conventional P2P distributed training methods. As for costs, our examination confirmed that the serverless architecture could incur higher expenses, reaching up to 5.4 times more than instance-based architectures. It is essential to consider that these higher costs are associated with marked improvements in computation time, particularly under resource-constrained scenarios. Despite the cost-time trade-off, the serverless approach still holds promise due to its pay-as-you-go model. Utilizing dynamic resource allocation, it enables faster training times and optimized resource utilization, making it a promising candidate for a wide range of machine learning applications. Serverless, FaaS, Function as a Service, P2P, peer-to-peer architecture, Distributed Training, Machine Learning. ## I Introduction The exponential growth of data in the modern digital age [1] has transformed the landscape of artificial intelligence (AI) and machine learning (ML), propelling these fields into a new era of innovation and discovery. 
This vast deluge of data has given rise to increasingly sophisticated and complex models that can extract valuable insights and make accurate predictions [2]. However, these sophisticated models pose a formidable challenge, due to the need for vast computational resources. This escalating demand for computational power has led to the emergence of distributed training [3]. By harnessing the combined power of multiple devices, the training methodology encompasses the division of the dataset among a cohort of workers, each training their local model replicas in parallel and iteratively. To ensure convergence, the workers periodically synchronize their updated local models [4]. Various topologies have been proposed in the literature [2] to facilitate distributed training, including parameter server [5, 6, 7] and peer-to-peer architectures [8, 9, 10, 11, 12]. In the parameter server architecture, the worker nodes perform computations on their respective data partitions and communicate with the parameter server to update the global model. In contrast, peer-to-peer (P2P) architectures distribute the model parameters and computation across all nodes in the network, eliminating the need for a central coordinator [2]. Regardless of the topology employed for distributed training, developers often struggle with managing resources and navigating the complexities of ML training. This can result in over-provisioning and diminished productivity, posing challenges for ML users striving to achieve optimal outcomes [13]. To address these challenges, building machine learning (ML) on top of serverless computing platforms has emerged as an attractive solution that offers efficient resource management and scaling [14, 15, 16, 17]. By automatically scheduling stateless functions, serverless computing eliminates the need for developers to focus on infrastructure management [18, 19]. 
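The data-parallel scheme just described can be sketched in a few lines: each worker computes a gradient on its own data shard, and the replicas are kept consistent by periodic averaging. This is a toy illustration, not the paper's implementation; the one-parameter least-squares model, shard sizes, and learning rate are illustrative assumptions:

```python
# Toy synchronous data-parallel SGD: each worker holds a data shard and a
# local replica of the single parameter w; after every step the replicas
# are synchronized by averaging (an all-reduce in a real system).
def grad_mse(w, shard):
    # gradient of the mean squared error for the model y_hat = w * x
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

def train(shards, w0=0.0, lr=0.01, steps=200):
    replicas = [w0] * len(shards)
    for _ in range(steps):
        replicas = [w - lr * grad_mse(w, s) for w, s in zip(replicas, shards)]
        mean_w = sum(replicas) / len(replicas)   # synchronization step
        replicas = [mean_w] * len(replicas)
    return replicas[0]

if __name__ == "__main__":
    data = [(float(x), 3.0 * x) for x in range(1, 9)]   # true w = 3
    shards = [data[0:4], data[4:8]]                     # two workers
    print(train(shards))
```

Even though each worker only ever sees its own shard, the periodic averaging drives every replica to the parameter fitting the full dataset.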
However, ML systems are not inherently compatible with the Function-as-a-Service (FaaS) model due to limitations such as statelessness, lack of function-to-function communication, and restricted execution duration [13, 20]. Numerous efforts have been made to optimize the utilization of FaaS platforms for managing ML pipelines [13, 21, 22, 23, 24, 25]. The implementation of the parameter server architecture in a serverless environment demonstrated significant benefits, including reduced costs [23], scalability [13, 22], and improved performance efficiency [25]. Notwithstanding these encouraging findings, there remains a dearth of research elucidating the ramifications of serverless computing on peer-to-peer architecture. **To the best of our knowledge**, no research has been conducted to study the impact of serverless computing in a peer-to-peer environment. Distributed training in peer-to-peer (P2P) networks offers benefits such as improved scalability and fault tolerance [26], but also presents challenges. As the network grows, communication, synchronization, and model update overheads increase, leading to latency and reduced training efficiency [4]. The diverse nature of devices in P2P networks can also cause imbalanced workloads and resource constraints, complicating the training process [27]. Another challenge faced during distributed training in P2P is the implementation of parallel batch processing inside each of the workers using popular machine learning frameworks like PyTorch. These frameworks often rely on the available and limited resources of individual workers to perform parallel computing on batches, which can lead to inefficiencies when resources are scarce [28, 29]. Consequently, these frameworks may resort to processing batches sequentially, which can result in longer training times and diminished performance. 
In this paper, we present a novel approach to address all these challenges associated with distributed training in P2P networks by integrating serverless computing for parallel gradient computation. Our approach consists of the following components: (a) Incorporating serverless computing into the P2P training process, which eliminates the need to expand the number of workers in the network, effectively reducing communication and synchronization overhead and consequently enhancing training efficiency. (b) Introducing an advanced technique that leverages serverless functions and workflows for parallel gradient computation within each worker, ensuring efficient and accelerated gradient computation for each peer in the network, even in the presence of resource constraints. Through a series of experiments and analyses 1, we demonstrate the effectiveness of our proposed approach in improving training and optimizing resource utilization. Footnote 1: [https://github.com/AmineBarrak/PeerToPeerServerless](https://github.com/AmineBarrak/PeerToPeerServerless) Our main contributions in this paper include:

* _Proposing a novel architecture that integrates serverless computing into P2P networks for distributed training._
* _Introducing an advanced technique for efficient, parallel gradient computation within each peer, even under resource constraints._
* _Demonstrating the effectiveness of the proposed approach in improving training and optimizing resource utilization._

## II Background

In this background section, we delve into the intricate world of peer-to-peer machine learning. Additionally, we will explore the realm of serverless computing and workflow service state machines, such as AWS Step Functions. ### _Peer To Peer architecture for distributed training_ Peer-to-Peer (P2P) architecture is a decentralized communication topology that is widely used in distributed systems. 
In a P2P system, nodes communicate directly with each other and there is no central point of control or coordination. In P2P training, the computational workload is distributed across multiple devices, creating a decentralized network where each device contributes its resources to collectively train the model. This approach can improve scalability, reduce training time, and minimize reliance on centralized infrastructure, making it a viable option for various applications, especially those with limited resources or rapidly changing workloads. P2P training in machine learning presents an attractive alternative to traditional centralized training methods. By leveraging the distributed computing capabilities of multiple devices, P2P training can offer improved scalability, fault tolerance, and privacy preservation. However, challenges such as heterogeneity and resource constraints must be addressed to fully realize the potential of P2P training in machine learning applications. ### _Serverless Computing_ Serverless computing is an emerging paradigm in cloud computing that enables developers to build and deploy applications without the need to manage server infrastructure. This model eliminates the need for developers to worry about infrastructure scaling, server maintenance, and other low-level tasks, allowing them to focus on creating business logic. Serverless computing is built on the concept of Function-as-a-Service (FaaS), which provides developers with a platform to deploy and run small pieces of code, called functions, in response to events. When an event triggers a function, the cloud provider provisions the necessary infrastructure to run the function, and then releases it once the function completes its execution. The benefits of serverless computing, such as cost-effectiveness, scalability, flexibility, and ease of use, make it a promising approach for machine learning applications, enabling efficient resource management and rapid model development. 
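The coordinator-free consistency underlying the P2P training described in Section II-A can be achieved by iterative neighbor averaging (gossip). The sketch below is an illustrative toy (ring topology, scalar "parameters", and round count are assumptions, not the paper's protocol); every peer converges to the global average using only local exchanges:

```python
# Gossip averaging on a ring: each peer repeatedly replaces its value with
# the average of itself and its two neighbors; all values converge to the
# global mean without any central coordinator.
def gossip_round(values):
    n = len(values)
    return [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3
            for i in range(n)]

def gossip(values, rounds=200):
    for _ in range(rounds):
        values = gossip_round(values)
    return values

if __name__ == "__main__":
    peers = [1.0, 5.0, 9.0, 2.0, 8.0]    # initial local parameters
    print(gossip(peers))                  # all entries close to the mean, 5.0
```

The averaging matrix is doubly stochastic, so the global mean is preserved at every round while the disagreement between peers shrinks geometrically.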
### _Serverless AWS Step Function Workflow_ The AWS Step Function Workflow enables developers to design, execute, and monitor multi-step workflows, addressing the complexity of manually managing multiple serverless functions (e.g., AWS Lambda Function [30]). By defining a state machine using the Amazon States Language, developers can create long-running workflows that are easy to understand and maintain, improving the overall coordination of serverless applications. ## III Methodology and System Design In this section, we present a novel P2P training ML system based on serverless computing, focusing on the design architecture, algorithm, and techniques to reduce peer overload. Our approach aims to improve efficiency and scalability, and to alleviate resource constraints in ML training. ### _Design Architecture of Peer to Peer training Machine Learning based on Serverless Computing_ We implemented our approach using AWS Lambda due to its 15-minute timeout and 10GB RAM availability [31]. Comparable services exist on platforms like Google Cloud Functions, Azure Functions, and IBM Cloud Functions. Figure 1 describes the overall proposed architecture. During the training of deep learning models, PyTorch strives to maximize resource utilization efficiently. However, ML frameworks such as PyTorch do not inherently possess a mechanism to seamlessly transition between parallel and sequential processing under resource constraints. In real-world scenarios, ML frameworks leverage a GPU for computations when available and default to the CPU when GPU resources are not accessible. By harnessing the power of serverless computing, our system architecture enables parallel gradient computations across multiple Lambda functions, leading to a substantial reduction in overall computation time. We thoroughly examine the intricacies of our peer-to-peer architecture, which consists of four integral system components.
An overview of the peer to peer ML system based on serverless computing architecture is depicted in Figure 1. **AWS S3 Buckets** : In a peer-to-peer network, data is systematically partitioned into discrete segments, with each peer's assigned portion subsequently uploaded to a dedicated S3 bucket. This approach guarantees seamless access to their own data for each peer, while simultaneously leveraging the high-performance, cloud-based architecture of S3. **AWS Lambda Function** : We strategically chose to implement parallel batch processing, a complex task made feasible by employing AWS Lambda serverless functions. By harnessing AWS Lambda's capabilities, we link each data batch to a specific Lambda function responsible for executing the necessary gradient computations. This approach significantly reduces total computation time through the judicious distribution of workloads across multiple Lambda function instances, accelerating data processing and cutting down the time needed to complete processing the training set. Additionally, we integrate AWS Step Functions to manage, orchestrate, and invoke the Lambda serverless parallel computing process, adapting to the availability of data batches and ensuring efficient handling of the workload. **EC2 Instance**: Each EC2 instance in our system architecture, assigned to individual peers, carries multiple responsibilities. First, it acts as a trigger for invoking Lambda functions responsible for essential gradient computation. Additionally, it includes a crucial set of features that enable gradient exchange between peers. Ultimately, the EC2 instance is equipped with a specialized feature to detect model convergence, further boosting the overall efficiency of the system. **RabbitMQ** : The proposed architecture relies on the utilization of RabbitMQ, which enables seamless communication between peers. After computing gradient averages over batches, a peer publishes the resultant data to its dedicated queue.
Other peers in the network can access the gradients published in the queue, enabling efficient and seamless information sharing. This is a critical aspect of our methodology, as it allows each peer to access the required information quickly and accurately to perform computations. RabbitMQ's reliability and security ensure smooth and secure data transmission and communication, promoting efficient processing of complex data sets. ### _Peer to Peer training Machine Learning_ We specify a peer-to-peer architecture that leverages distributed computation for the purpose of training machine learning models. Algorithm 1 presents the logic we followed. Initially, a workload is provided that includes the Deep Neural Network (DNN) model and the training dataset, along with parameters specifying the number of peers (P), batch size (B), and training epochs (E). Additionally, each peer has an array of key-value pairs, where the key is the peer's rank (ID) and the value is the computed gradient. Fig. 1: Overview of the proposed Peer To Peer training based on Serverless computing We explain in the following the different sections of the algorithm. #### Iii-B1 Dataset Preprocessing Within our system architecture, we have integrated a preprocessing stage to transform the training dataset using methods like min-max scaling, standardization, and normalization. After preprocessing, the dataset is divided into partitions for each peer in the training process. A dataloader is implemented to further split the partitions into batches, which are then stored in designated Amazon S3 cloud storage buckets. #### Iii-B2 Compute Batch Gradients The peer-to-peer training paradigm entails a multi-stage process wherein each worker subdivides its designated data subset into smaller batches, which are intended to expedite the training and convergence process by allowing each worker to compute gradients for smaller subsets of the data.
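The partitioning and batching step described above can be sketched in plain Python (a minimal illustration with function names of our own choosing; the actual system uses a PyTorch dataloader and uploads the resulting batches to S3):

```python
def partition_dataset(samples, num_peers):
    """Split the dataset into one contiguous partition per peer."""
    size = len(samples) // num_peers
    return [samples[i * size:(i + 1) * size] for i in range(num_peers)]

def make_batches(partition, batch_size):
    """Split one peer's partition into batches, as a dataloader would."""
    return [partition[i:i + batch_size] for i in range(0, len(partition), batch_size)]

# Example: 60,000 samples (MNIST-sized), 4 peers, batch size 1024.
data = list(range(60_000))
partitions = partition_dataset(data, num_peers=4)
batches = make_batches(partitions[0], batch_size=1024)
```

With these numbers each peer holds 15,000 samples and obtains 15 batches (the last one partial), which is consistent with the 15 batches per peer reported in Table II for batch size 1024.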
During the training phase, each worker calculates the gradients for the batches of data it has processed and subsequently averages these gradients across all batches. This crucial step enables each worker to obtain an accurate representation of the gradients for its designated subset of data. #### Iii-B3 Communication Protocol To communicate between peers, we used Amazon MQ's RabbitMQ for exchanging gradients between multiple peers during the model synchronization process. Each peer is assigned a dedicated queue that contains a single, persistent gradient message. When a new gradient is generated, it replaces the previous one in the queue, ensuring that the latest gradient is always available for consumption by other peers. Peers can access and consume gradient messages from all other queues without deleting them, which promotes efficient gradient exchange and prevents data loss in case of temporary disruptions. The persistence of gradient messages guarantees the availability of the necessary information for model synchronization, even under challenging network conditions. When peers are ready to synchronize their models, they read the gradient messages from all other queues, excluding their own. This process allows them to effectively update their models based on the gradients received from other peers, streamlining the distributed training process across the entire system. To store received gradients from peers, a dictionary is created, where the peer's rank serves as the key to map to its corresponding received gradient. Each peer retains the received gradients in the local dictionary, and if the dictionary's size exceeds a threshold predefined in advance, the peer retrieves the gradients and calculates their average. The worker then updates its model parameters in accordance with the result. This iterative process continues for a predetermined number of epochs, as established by the input hyperparameters. 
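The threshold-gated averaging and update step described above can be sketched as follows (gradients are flat lists of floats here and the names are illustrative; the real implementation operates on PyTorch tensors):

```python
def average_gradients(received):
    """Element-wise average of the gradients in the rank -> gradient dict."""
    grads = list(received.values())
    n = len(grads)
    return [sum(vals) / n for vals in zip(*grads)]

def maybe_update(received, params, lr, threshold):
    """Once enough peers have reported, average their gradients and take an SGD step."""
    if len(received) < threshold:
        return params  # keep waiting for more peers
    avg = average_gradients(received)
    return [p - lr * g for p, g in zip(params, avg)]

# Three peers (ranks 0-2) have reported their averaged gradients.
received = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}
params = [0.5, 0.5]
new_params = maybe_update(received, params, lr=0.1, threshold=3)
```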
To overcome Amazon MQ's message size limitations (100MB per message), large files are stored in Amazon S3 and referenced using UUIDs. Sending UUIDs through Amazon MQ enables efficient, scalable data transfer without compromising performance or reliability, providing a flexible solution for seamless data exchange. #### Iii-B4 Compression / Decompression We address the challenge of high communication overhead by incorporating the QSGD algorithm [32]. This algorithm uses a compression technique to quantize gradients before transmission, reducing the size of transmitted gradients and leading to improved training efficiency. #### Iii-B5 Average Gradients After receiving gradients from other peers, each peer aggregates the gradients by computing their average and uses this averaged gradient to update their local model parameters. The advantage of this approach is that it allows each peer to learn from the gradients computed by other peers, resulting in a more accurate representation of the global gradients. #### Iii-B6 Synchronous & Asynchronous Gradient Computation In the following stage, the worker simultaneously distributes the averaged gradients to all other workers in the network and receives from them their averaged gradients as well. This process can be executed using either synchronous or asynchronous approaches. Figure 2 shows an example of synchronous and asynchronous communication using four workers. **In the asynchronous communication**, Amazon MQ's RabbitMQ service provides a separate dedicated queue for each peer. These queues store the latest gradients generated by each peer, and they can be accessed and consumed by other peers without having to wait for every peer to finish their gradient computation. This means that a peer can start updating its model with the latest available gradients from other peers, without waiting for gradients from slower peers or those experiencing temporary disruptions.
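The QSGD-style quantization used in the compression step can be sketched as a simplified, unbiased stochastic quantizer with s levels (the full algorithm in [32] additionally encodes the resulting integers compactly; this sketch only illustrates the quantization itself):

```python
import math
import random

def quantize(v, s, rng=None):
    """QSGD-style quantization: encode each coordinate as a small integer
    level of its magnitude relative to the vector norm."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return 0.0, [0] * len(v)
    levels = []
    for x in v:
        r = abs(x) / norm * s          # position in [0, s]
        low = math.floor(r)
        # Round up with probability equal to the fractional part, which keeps
        # the quantizer unbiased in expectation.
        level = low + (1 if rng.random() < r - low else 0)
        levels.append(int(math.copysign(level, x)))
    return norm, levels                 # only the norm and small integers are sent

def dequantize(norm, levels, s):
    return [norm * lvl / s for lvl in levels]

norm, levels = quantize([0.3, -0.4], s=4)
restored = dequantize(norm, levels, s=4)
```

Each reconstructed coordinate deviates from the original by at most norm/s, so increasing s trades bandwidth for precision.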
**In the synchronous communication,** a synchronization barrier is added to ensure that all peers progress through the distributed training process together. Synchronizing autonomous peers in a distributed system is challenging, especially when using RabbitMQ queues for gradient communication. Factors like varying resource availability can cause some peers to progress through epochs at different speeds. To address this issue, we have implemented a RabbitMQ-based synchronization mechanism. Each peer sends a message to a designated synchronization queue, signifying the completion of gradient computation, sending, and receiving for all connected peers. Once the size of this synchronization queue matches the total number of peers, it indicates that all peers have completed the current epoch, and they can then proceed to the next one in a coordinated manner. #### Iii-B7 Convergence Detection To detect model convergence, two key techniques are used: _ReduceLROnPlateau_ and _Early Stopping_. ReduceLROnPlateau adjusts the learning rate during training, improving generalization by preventing overshooting the loss function's minimum. It monitors model performance on a validation dataset, reducing the learning rate if improvement stalls. Early stopping detects convergence by tracking performance during training and stopping when performance degrades, preventing overfitting. If convergence isn't reached through these techniques, the epoch limit determines the maximum training iterations. Achieving convergence ensures the model's accuracy and effectiveness in making predictions. #### Iii-B8 Memory, CPU and Time metrics collection: To assess and diagnose the efficiency of the system architecture, several Python libraries are used for recording performance metrics. tracemalloc is utilized for measuring RAM utilization, psutil for monitoring CPU usage in real-time, and the perf_counter function for evaluating time-based performance.
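The convergence-detection logic of Section III-B7 can be sketched by combining plateau-based learning-rate reduction with early stopping in one monitor (a minimal reimplementation for illustration; in practice PyTorch's own `ReduceLROnPlateau` scheduler would be used, and the patience values below are assumptions):

```python
class ConvergenceMonitor:
    """Reduce the LR when validation loss plateaus; stop after too many
    epochs without improvement (early stopping)."""
    def __init__(self, lr, factor=0.1, patience=2, stop_patience=5):
        self.lr, self.factor = lr, factor
        self.patience, self.stop_patience = patience, stop_patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; returns True while training should continue."""
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs % self.patience == 0:
                self.lr *= self.factor          # ReduceLROnPlateau behaviour
        return self.bad_epochs < self.stop_patience  # early-stopping condition

monitor = ConvergenceMonitor(lr=0.001)
losses = [0.9, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75]
history = [monitor.step(l) for l in losses]  # last entry signals "stop"
```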
These tools enable a deep understanding of system performance and identification of areas requiring optimization or improvement. ### _Serverless computing to reduce peer overload:_ We leverage AWS Lambda for serverless parallel batch processing, enabling efficient workload distribution and reducing computation time. By assigning specific Lambda functions to data batches, we effectively manage gradient computations. AWS Step Functions orchestrate the Lambda functions, adapting to data batch availability for optimal workload handling. This serverless approach minimizes peer overload and accelerates training set processing, enhancing overall performance. ## IV Experimental Setup This section details our experimental setup to evaluate the performance of various CNN models across different datasets on the proposed architectures. ### _Datasets_ **MNIST:** The MNIST Handwritten Digit Collection [33] consists of 60,000 samples of handwritten numerals, each categorized into one of ten classes. **CIFAR:** The CIFAR Image Dataset [34] encompasses 60,000 color images spanning ten distinct classes, such as automobiles, animals, and objects. Each category contains 6,000 images that are evenly distributed. ### _Model Architectures and Hyperparameters_ **SqueezeNet 1.1:** SqueezeNet 1.1 is an efficient CNN architecture [35], with fewer parameters (approximately 1.2 million) and a small model size (\(<\)5MB). **MobileNet V3 Small:** MobileNet V3 Small [36] is a lightweight CNN tailored for mobile and edge devices, featuring inverted residual blocks, linear bottlenecks, and squeeze-and-excitation modules, with approximately 2.5 million trainable parameters and a compact model size. **VGG-11:** VGG-11 is a deep convolutional neural network (CNN) architecture developed for image classification tasks [37]. It is a variation of the VGG family, with 11 weight layers, including convolutional and fully connected layers.
It has an input resolution of 224x224 and approximately 132.9 million trainable parameters. ### _EC2 Instances configuration for peers_ We aim to determine the ideal machine instance for three different neural network models: Vgg11, MobileNet V3 Small, and SqueezeNet 1.1. We started with the smallest available machine instance and trained the models on it. If the machine crashed due to resource limitations during training, we moved up to the next larger machine instance until we found one that was able to train the model without issue. Additionally, we incrementally increased the number of peers during the experimentation, starting with 4 and adding 4 peers at a time until we reached 12 peers, to determine the computation and communication resource usage. Fig. 2: Synchronous (left) and asynchronous (right) Communication Ultimately, we determined that the Vgg11 model requires a t2.large instance, while the MobileNet V3 Small and SqueezeNet 1.1 models could be trained on a t2.medium instance. This approach allowed us to optimize the use of resources and achieve optimal performance for each model, taking into account both computation and cost. ### _Serverless client functions configuration_ In the following, we discuss our approach to implementing a serverless training workflow by leveraging AWS Step Functions and Lambda functions for parallel gradient computation and batch processing. #### Iv-D1 Serverless AWS Lambda Configuration for Gradient Computation We prepared an AWS Lambda serverless function for machine learning batch training. The function is designed to be invoked with essential parameters such as the specific model, batch identifier, optimizer, learning rate, and loss function. To obtain the necessary data batch for training, the function accesses an S3 bucket, where we have pre-processed and stored batches.
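A minimal sketch of such a Lambda entry point is given below, with the S3 download stubbed out and a toy one-parameter model standing in for the real PyTorch forward/backward pass; the event keys, bucket name, and function names are our own assumptions, not the system's actual interface:

```python
def load_batch_from_s3(bucket, batch_id):
    """Stand-in for downloading a pre-processed batch from S3 (boto3 in the
    real function); returns (x, y) pairs for a toy linear model."""
    return [(1.0, 2.0), (2.0, 4.0)]

def lambda_handler(event, context=None):
    """Entry point invoked with the model state, batch identifier, and
    training hyperparameters, as described above."""
    batch = load_batch_from_s3(event["bucket"], event["batch_id"])
    w = event["weight"]  # toy one-parameter model y = w * x
    # Mean-squared-error gradient over the batch (the real function runs a
    # full forward/backward pass in PyTorch instead).
    grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
    return {"batch_id": event["batch_id"], "gradient": grad}

result = lambda_handler({"bucket": "peer-0-batches", "batch_id": 7, "weight": 1.0})
```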
To facilitate seamless deployment on our custom ARM architecture, we packaged the machine learning dependencies, including the PyTorch library, in a zip file with a size of less than 50MB. If additional dependencies are needed, they can be incorporated as separate layers within the AWS Lambda service. This approach allows for a modular structure while complying with the service's constraints. The total size of the unzipped files must not exceed 250MB, ensuring that the serverless function remains within the allowable resource limits, ultimately fostering efficient and scalable training processes in our custom ARM-based environment. #### Iv-D2 Serverless AWS Lambda Pricing One of the key factors in pricing for AWS Lambda is the amount of memory allocated to the function. The price of AWS Lambda is calculated based on the amount of memory allocated to the function and the duration of the execution. The objective is to compare the costs of running the same workload using EC2 peer-to-peer instances without serverless against using small EC2 instances that invoke serverless Lambda functions for parallel gradient computation. This comparison will give us insights into the cost-effectiveness of serverless computing in contrast to traditional computing methods. #### Iv-D3 Dynamic AWS Step Function State Machine for Parallel Batch Processing We have developed a Dynamic State Machine using AWS Step Functions, designed to compute parallel batch gradients on serverless Lambda functions. This state machine is generated dynamically according to the given batch number, allowing it to accommodate varying batch sizes. By leveraging the parallel computing capabilities of AWS Step Functions, each Lambda invocation processes an assigned batch saved in an S3 bucket. Once the state machine is deployed, it is invoked with the necessary input, which includes the total number of batches and the data required for the Lambda function to compute gradients corresponding to each data batch.
This data encompasses the model, batch, optimizer, learning rate, and loss function. Our approach effectively enables parallel processing of gradient computations within a serverless environment using AWS Step Functions and Lambda functions. ## V Experimental Results In this section, we present the results of our experiments, focusing on distributed deep learning aspects including resource requirements, serverless efficiency, communication overhead, and synchronization barriers in peer-to-peer training. ### _Identify tasks needing expensive computational level_ To determine the resource usage and identify computationally expensive tasks in a distributed peer-to-peer training setup. In this setup, four worker nodes collaborate to train a machine learning model. The experiment focuses on measuring the resource usage at different stages of the distributed training process, including computing gradients, sending gradients, receiving gradients, updating the model, and convergence detection, is monitored and captured. Metrics such as CPU usage, memory consumption, and processing time are recorded for each stage. The experiment continues to four epochs and the average per epoch is computed. Afterward, we compare resource consumption across different stages and identify the most computationally demanding tasks. According to the results of our experimental investigation on three different models, namely VGG11, MobileNetV3 Small, and SqueezeNet, using two distinct datasets, MNIST and CIFAR, we have identified the most resource-intensive step during the training process. As demonstrated in the tabulated data of Table I, the computation of gradients consumes a substantial amount of computational resources and memory, particularly for VGG11, which requires approximately 4 GB of memory per batch, and given that we executed 30 batches during our experiment. 
In comparison to other stages, such as sending and receiving data, updating models, and detecting convergence, the computation of gradients resulted in the highest CPU usage. As a result, it is reasonable to recommend the migration of gradient computation to a serverless infrastructure, which can reduce the overheads associated with managing and provisioning resources.

TABLE I: Resource usage (CPU, memory, and processing time) per training stage for VGG11, MobileNetV3 Small, and SqueezeNet on the MNIST and CIFAR datasets.

### _Evaluation of Serverless Infrastructure for Gradient Computing_ Throughout this section, we conducted a series of experiments to evaluate the impact of serverless infrastructure on the performance and cost of gradient computing. We evaluate two distinct architectures to assess the impact of serverless integration on resource utilization and cost. In the first architecture, we train a VGG11 model on the MNIST dataset with _t2.large_ instances. In the second architecture, we train the same model with _t2.small_ instances, while offloading high-computational tasks to a distributed Lambda serverless infrastructure. #### Iii-B1 Computation Time Comparison: Serverless vs. Instance-based Architectures for Gradient Computing We examined different architectures, batch sizes, and numbers of workers to gain a comprehensive understanding of the potential benefits and challenges associated with serverless integration in terms of execution time of the gradient computation. The findings from our experiments are illustrated in a bar plot (Figure 3), where we have two bars for each batch size - one representing the time taken with serverless infrastructure (blue bar) and the other without serverless infrastructure (orange bar).
This visual representation clearly highlights the significant improvements in the time taken to compute batches when employing serverless infrastructure across various batch sizes (64, 128, 512, and 1024) and numbers of workers (4, 8, and 12). For instance, in a configuration with 4 workers and a batch size of 64, the blue bar (serverless) is considerably shorter than the orange bar (non-serverless), demonstrating a remarkable 97.34% reduction in the time taken to compute batches. Similarly, with 8 workers and a batch size of 128, the improvement reaches 92.04%. However, it is worth noting that the improvement tends to decrease as the number of workers increases, especially for larger batch sizes. #### Iii-B2 Cost Comparison: Serverless vs. Instance-based Architectures for Gradient Computing In the previous experiment, we evaluated the impact of serverless infrastructure on computation time for gradient computing in peer-to-peer training. The results demonstrated significant improvements across varying batch sizes and numbers of workers, especially with a four-worker setup. This finding prompted us to delve deeper into the cost analysis for this scenario. In this section, we present a cost comparison between serverless and instance-based architectures for gradient computing, focusing on a case study involving four workers, the VGG11 model, and the MNIST dataset. Tables II and III detail the time and cost evaluation for computing gradients with different batch sizes in both architectural scenarios. Lambda memory size was set to match the minimal functional requirements for gradient computation. 
In our cost comparison analysis, the estimated cost per peer was calculated as follows:

\[\text{Cost per Peer}_{\text{serverless}}=(\text{Lambda Cost}\times\text{Num of batches}+\text{EC2 Cost})\times\text{Computation Time}\tag{1}\]

\[\text{Cost per Peer}_{\text{instance-based}}=\text{EC2 Cost}\times\text{Computation Time}\tag{2}\]

From Table II, we observe that for the serverless architecture, as the batch size decreases, so does the computation time, leading to variable costs per batch size. However, the number of batches also increases, affecting the Lambda costs since each batch is a separate invocation of the Lambda function. Hence, while larger batch sizes increase efficiency in computation time, they also necessitate more resources, thus increasing the costs. In comparison, Table III presents the costs associated with the instance-based architecture, showing a clear increase in the costs as the batch size decreases. The cost differences between the two architectures can be attributed to the use of different instance types (t2.small, t2.large) and the varying memory size requirements for the Lambda functions in the serverless architecture. For a detailed understanding of the cost dynamics, we scrutinized the estimated cost of computing gradients (in USD) for both architectures across all batch sizes. For a batch size of 1024, we found that the serverless architecture costs approximately 5.34 times more than the instance-based architecture. However, this discrepancy in cost decreases with smaller batch sizes. The results highlight a greater cost when utilizing a serverless architecture with low-resource instances; it is important to consider the time-efficiency gains.

Fig. 3: Comparison of Processing Training Time on Gradient Computing for different numbers of Peers and batch sizes in Peer-to-Peer Training with and without Serverless
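Plugging the Table II values for batch size 1024 into Eq. (1) gives a concrete instance of the cost model. Note that the per-second Lambda price and the instance-based computation time below are hypothetical stand-ins, since those table cells are not reproduced in the text:

```python
# Values from Table II for batch size 1024 (4 peers, VGG11, MNIST):
ec2_small_cost_per_s = 0.00000639   # t2.small, USD per second
num_batches = 15
compute_time_serverless_s = 41.2

# Hypothetical values (not taken from the tables):
lambda_cost_per_s = 0.00007         # assumed price for a 4400 MB function
compute_time_instance_s = 220.0     # assumed time without parallel Lambdas
ec2_large_cost_per_s = 0.00002578   # t2.large, USD per second (Table III)

# Eq. (1): serverless cost per peer
cost_serverless = (lambda_cost_per_s * num_batches
                   + ec2_small_cost_per_s) * compute_time_serverless_s

# Eq. (2): instance-based cost per peer
cost_instance = ec2_large_cost_per_s * compute_time_instance_s
```

Under these assumed inputs the serverless run costs more per peer, illustrating the trade-off discussed above; the exact ratio depends entirely on the Lambda memory size and the two computation times.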
There is a trade-off between the significant improvements in computation time and the cost associated with using serverless infrastructure. It is essential for researchers and practitioners to consider their specific requirements, such as training time constraints and budget limitations, when selecting an architecture for gradient computing. ### _Compression and Communication Overhead_ In this section, we will explore compression and communication overhead in distributed deep learning systems. We will first analyze the impact of varying the number of workers on computation and communication overhead, followed by an investigation into gradient compression techniques for enhancing communication efficiency during the training process. #### Iv-C1 Computation and communication over workers To elucidate the impact of communication overhead on system performance in a peer-to-peer architecture, we conducted rigorous experiments involving both VGG11 and MobileNet V3 Small models, varying the number of workers. In each experiment, we meticulously recorded both the compute time and the communication time. The results presented in Figure 4 show the relationship between the number of workers (peers), communication time, and computation time for VGG11 and MobileNet V3 Small models when using a batch size of 1024. In both cases, the figure reveals that as the number of workers increases, computation time decreases while communication time increases. This can be attributed to the fact that with more workers, the dataset is divided among more devices, allowing for faster computation. We notice that the magnitude of the increase is much higher in the VGG11 model compared to the MobileNet V3 Small model. This could be due to the VGG11 model having a larger number of parameters, which results in more gradient information being communicated between workers.
#### Iv-C2 Gradient Compression for communication improvement As mentioned in the previous section, communication overhead increases as the number of workers increases. Gradient compression can be a solution to mitigate this. In order to assess the impact of gradient compression on communication overhead, an experimental investigation was executed using the VGG11 model, the MNIST dataset, and a network composed of four peers. Our paramount focus was on precisely measuring the send and receive times from a single peer, in order to comprehensively elucidate communication efficiency. As illustrated in Figure 5, we demonstrate that the utilization of gradient compression techniques yields a significant reduction in communication time when compared to transmitting non-compressed gradients. This reduction in communication time is observed across a broad range of batch sizes. ### _Peer to Peer Training and Communication Barrier Synchronisation_ In our experiments, we aimed to compare the performance of two different peer-to-peer (P2P) approaches: synchronous P2P and asynchronous P2P. We conducted experiments on MobileNet V3 Small with a batch size of 64, a learning rate of 0.001, and the SGD optimizer.
Our findings revealed that the synchronous P2P approach outperformed the asynchronous P2P approach in terms of convergence rate and achieved a higher accuracy level.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Batch Size & 1024 & 512 & 128 & 64 \\ \hline Number of batches & 15 & 30 & 118 & 235 \\ \hline Instance Type & t2-small & t2-small & t2-small & t2-small \\ \hline Lambda Memory size & 4400 MB & 2800 MB & 1800 MB & 1700 MB \\ \hline Time to Compute Gradients (seconds) & 41.2 & 28.1 & 12.9 & 10.5 \\ \hline Estimated EC2 instance Cost (USD / second) & \$0.00000639 & \$0.00000639 & \$0.00000639 & \$0.00000639 \\ \hline Estimated Lambda Cost (USD / second) & & & & \\ \hline Estimated Compute Gradients Cost per Peer (USD) & & & & \\ \hline \end{tabular} \end{table} TABLE II: Time and Cost Evaluation of Compute Gradients in Peer to Peer Training with Serverless; Model trained on VGG11, MNIST dataset, and Four Peers

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Batch Size & 1024 & 512 & 128 & 64 \\ \hline Instance Type & t2-large & t2-large & t2-large & t2-large \\ \hline Time to Compute Gradients (seconds) & & & & \\ \hline Estimated EC2 instance Cost (USD / second) & \$0.00002578 & \$0.00002578 & \$0.00002578 & \$0.00002578 \\ \hline Estimated Compute Gradients Cost per Peer (USD) & \$0.000665 & \$0.00717 & \$0.00851 & \$0.01017 \\ \hline \end{tabular} \end{table} TABLE III: Time and Cost Evaluation of Compute Gradients in Peer to Peer Training without Serverless; Model trained on VGG11, MNIST dataset, and Four Peers

Fig. 4: Gradients Computation and communication time per # Peers on VGG11 and MobileNet V3 Small (1024 batch size)
Specifically, the synchronous P2P approach achieved an accuracy of 84.3% after approximately 128 epochs, while the asynchronous P2P approach required more epochs to converge and exhibited instability during convergence. This is due to the asynchronous approach's tendency to apply outdated gradients, which increases the number of epochs needed to converge [16]. These results indicate that in P2P communication, synchronicity plays a crucial role in achieving a faster and more accurate convergence rate.

## VI Discussion

In this section, we reflect on the key findings and implications of our research on distributed deep learning in peer-to-peer training setups. Our focus is on the benefits and challenges of serverless infrastructure, the significance of managing communication overhead, addressing synchronization barriers, and the impact of choosing the model architecture and dataset on overall training performance.

### _Benefits and Challenges of Serverless Infrastructure_

Our experimental results have underscored that the computation of gradients is the most resource-intensive task in distributed peer-to-peer training setups. By transitioning this task to serverless infrastructure, we have witnessed substantial improvements in computation time across various batch sizes and numbers of workers. This emphasizes the potential of serverless infrastructure as an efficient and scalable solution for managing computationally demanding tasks in distributed training, especially when working with complex models that require expensive computational instances. Serverless infrastructure offers notable benefits but also presents cost implications. Our analysis revealed its potential in situations of constrained computational resources. Although serverless architectures may be more costly for smaller batch sizes, they significantly improve computation time when resources are scarce.
This underlines their advantage in scenarios that demand rapid processing under resource limitations. Therefore, researchers and practitioners need to weigh their specific needs, such as time constraints, budget, resource limitations, and model complexity, when deciding on adopting a serverless architecture. This balanced approach allows for an optimal decision that accommodates cost, efficiency, and performance requirements. ### _Reducing Communication Overhead_ Our analysis of communication overhead in distributed peer-to-peer training revealed that the amount of data transferred during the training process is influenced by the choice of model architecture and dataset. Larger model architectures and more complex datasets resulted in higher communication overhead, which can negatively impact the overall efficiency of the training process. To reduce communication overhead, techniques such as gradient compression [32], model sparsification [38], Delta compression [39], communication optimisation [40] and efficient data encoding can be employed. These methods can help minimize the data transfer during the training process, resulting in a more efficient and cost-effective training setup. ### _Impact of Model Architecture and Dataset Choices_ Our experiments have shown that the choice of model architecture and dataset can have a significant impact on various aspects of distributed peer-to-peer training, including computational resource requirements, communication overhead, and synchronization barriers. Larger model architectures and more complex datasets generally require more computational resources, result in higher communication overhead, and demand longer synchronization times. This highlights the importance of selecting appropriate model architectures and datasets for distributed training processes, considering the available resources and the desired trade-offs between training time, cost, and performance. Fig. 
5: Compression Algorithm impact on Time Communication (Send and Receive Gradients) Fig. 6: Synchronous Vs Asynchronous Peer to Peer Training of MobileNet V3 Small ## VII Related work In this related work section, we will explore two distinct but interrelated areas: Peer-to-Peer Machine Learning, which focuses on decentralized training approaches, and Serverless Computing for Machine Learning, which examines the efficient use of serverless to reduce computational overhead. ### _Peer to Peer in Machine Learning_ In recent years, numerous initiatives have been proposed to address the challenges of distributed, decentralized, and peer-to-peer (P2P) systems for machine learning. These works can be broadly classified into the following categories: decentralized training methodologies, privacy-preserving approaches, and communication-efficient solutions. Decentralized training methodologies, such as BrainTorrent [9] and the consensus-based distributed stochastic gradient descent algorithm proposed by Zhanhong et al. [41], utilize fully decentralized systems for training a shared model. Both approaches showcase the potential of decentralized training in large-scale machine learning systems while maintaining scalability and convergence guarantees. BrainTorrent requires peers to share their local model weights and update them by calculating a weighted average between the weights of the receiving and sending peers. In contrast, Zhanhong et al.'s algorithm leverages gossip-based communication protocols to propagate the model between nodes in fixed topology networks. Addressing communication overhead and efficiency in P2P topologies is another critical aspect of distributed machine learning systems. Garfield [4] presents a decentralized architecture for training machine learning models in the presence of adversarial nodes, leveraging a Byzantine fault-tolerant consensus protocol for secure and scalable P2P training. Xing et al. 
[42] highlight the communication overhead in P2P parameter synchronization and the need for efficient communication strategies. SELMCAST [43], an algorithm for multicast receiver selection, optimizes the bottleneck sending rate to reduce the time cost of parameter synchronization. Lastly, the SAPS-PSGD algorithm [12] maximizes bandwidth efficiency through adaptive worker pair selection in a distributed training approach involving a coordinator and multiple peers. These communication-efficient solutions showcase the potential to overcome the challenges of communication overhead in peer-to-peer topologies while achieving substantial savings compared to centralized topologies. In this work, our proposed method involves peers in a distributed training system sending large gradient computations to serverless computing resources instead of calculating them locally. By doing so, the peers can focus on other tasks, such as updating model weights and communicating with other nodes in the network, while serverless platforms efficiently handle the computationally expensive gradient calculations.

### _Serverless Computing for Machine Learning_

Given that the use of serverless runtimes for machine learning pipelines is a relatively new research area [15, 17, 19, 44, 45], several initiatives have been undertaken to encourage wider adoption and promote efficient utilization of Function-as-a-Service (FaaS) platforms. To this end, various serverless development frameworks have been proposed, such as Cirrus [15], which has been meticulously crafted to proficiently manage the entire ML workflow. Particularly, recent research efforts have been directed towards ML model training [16, 20, 21, 22, 24, 46]. Existing approaches for distributed machine learning training are based on parameter-server communication topologies, where all communication between workers goes through a central server for synchronisation purposes. Ali et al.
[13] proposed SMLT, a serverless framework for distributed training based on a parameter-server architecture and a Hybrid Storage Enabled Hierarchical Model Synchronization method. This approach achieves faster training speeds and reduced monetary costs compared to other serverless ML training frameworks and VM-based systems. Experimental evaluations show SMLT outperforms state-of-the-art VM-based systems and serverless ML training frameworks in training speed (up to 8x) and cost (up to 3x). MLLess [20], proposed by Sarroca and Sanchez-Artigas, is a FaaS-based ML training prototype designed for cost-effective ML training in serverless computing. It incorporates a decentralized design, a significance filter, and a scale-in auto-tuner optimized for serverless computing. MLLess outperforms serverful ML systems by up to 15x for sparse ML models with fast convergence and demonstrates ease of scaling out to large clusters of serverless workers. In this work, we use a fully decentralized machine learning training approach, leveraging serverless benefits to compute expensive gradient calculations, combining the advantages of both distributed learning and serverless computing for more efficient and scalable ML training processes.

## VIII Conclusion

In this paper, we present a novel serverless peer-to-peer (P2P) architecture for distributed training, introducing an efficient parallel gradient computation technique to address resource constraints. We evaluated the performance of our approach with a focus on computational resource requirements, serverless infrastructure efficiency, communication overhead, and synchronization barriers. Our experimental results showed that the computation of gradients is the most computationally expensive task, benefiting from serverless infrastructure integration and leading to up to a 97.34% improvement in computation time.
We investigated the trade-off between computation time improvements and associated costs, revealing that the serverless architecture tended to be more expensive, with costs up to 5.3 times higher than traditional, instance-based architectures. Additionally, we briefly analyzed the communication overhead and synchronization problems in the distributed training process, highlighting the need for efficient strategies in P2P distributed training systems. The insights gleaned from this research can be utilized by other researchers and practitioners to build upon our work, further optimize distributed training processes, and potentially revolutionize the way machine learning models are trained across various applications.
2309.13656
Plebański-Demiański à la Ehlers-Harrison: Exact Rotating and Accelerating Type I Black Holes
Recently, it was shown that type D black holes, encompassed in the large Pleban\'ski--Demia\'nski (PD) family, exhibit a wide class of algebraically general generalizations via the application of Ehlers and Harrison transformations. In this work, we first discuss some mathematical details behind the composition of such transformations, and next, we introduce a qualitative picture of the most general type I generalization of the PD family, dubbed ``Enhanced Pleban\'ski--Demia\'nski'' spacetime. We provide the exact form of the solution in the original PD coordinates, obtained via the simultaneous action of an Ehlers and a Harrison transformation on the vacuum PD geometry. In order to make the physics more transparent, we explicitly construct a rotating and accelerating black hole which further has NUT parameter and electric charges, both of them entering, not only the event horizon, but the Rindler horizon as well. This solution is directly obtained in the ``physical'' coordinates recently proposed by Podolsk\'y and Vr\'atny. Finally, a pedagogical appendix is thoughtfully included, providing readers with a user-friendly step-by-step guide to the Ernst formalism, in an attempt to address and resolve various minor inconsistencies frequently appearing in the relevant literature.
José Barrientos, Adolfo Cisterna, Konstantinos Pallikaris
2023-09-24T14:50:07Z
http://arxiv.org/abs/2309.13656v3
# Plebanski-Demianski a la Ehlers-Harrison: Exact Rotating and Accelerating Type I Black Holes ###### Abstract Standard black holes of algebraic type D, namely, those encompassed in the large Plebanski-Demianski family, have been recently shown to exhibit a wide class of algebraically general generalizations. These type I spacetimes emerge by applying Ehlers or Harrison transformations to any type D accelerating seed. This work commences with a discussion about the properties of the enhanced symmetry transformations in the Ernst formulation of Einstein-Maxwell theory, specifically, with a detailed investigation of their compositions. While Ehlers transformations are known to form a one-parameter subgroup, also commuting with Harrison transformations, here we show that the composition of two Harrison operations does not in general lead to a Harrison one, rather to an Ehlers-Harrison composition with fixed Ehlers parameter. Next, we introduce a qualitative picture of the most general type I generalization of the Plebanski-Demianski family, dubbed "Enhanced Plebanski-Demianski" spacetime, followed by the exact form of the solution in the original coordinates given by Plebanski and Demianski, which we obtain via the simultaneous action of an Ehlers and a Harrison transformation on the neutral Plebanski-Demianski geometry. In order to make the physics more transparent, we explicitly construct a rotating and accelerating black hole which further has NUT parameter and electric charges, both of them entering, not only the event horizon, but the Rindler horizons as well. This solution is directly obtained in the physical coordinates recently proposed by Podolsky and Vratny. Finally, an appendix is thoughtfully included, providing readers with a user-friendly step-by-step guide to the Ernst formalism, in an attempt to address and resolve various minor inconsistencies frequently appearing in the relevant literature. 
## I Introduction The Kerr black hole is a particularly noteworthy exact solution of Einstein's field equations, especially from an astrophysical point of view. Realistic celestial bodies generally exhibit rotational motion. Even if their rotation is minimal, the conservation of angular momentum turns out to play an important role during gravitational collapse. Consequently, an analytical expression for the exterior spacetime around rotating sources is mandatory for studying such scenarios, a fact (among others) showcasing the overall significance of studying exact solutions within the framework of General Relativity (GR). Concerning solutions to the Einstein-Maxwell field equations, the Plebanski-Demianski (PD) family [1; 2; 3] is of uttermost importance. It represents the most general type D spacetime, and is thus classified as algebraically special [4]. The PD family encompasses various solutions, including the charged generalization of the Kerr black hole and its enhanced version with acceleration and/or Newman-Unti-Tamburino (NUT) charge. The causal structure of the PD spacetime is understood as that of two rotating charged black holes accelerating away from each other while carrying NUT charge [5; 6; 7; 8; 9]. A distinctive observation has been made concerning the PD hierarchy of solutions, revealing the absence of a nonrotating limit when both acceleration and NUT charge are present [10]. This led to the conjecture that accelerating NUT black holes may not exist, or, if they do exist, that they do not belong to the PD class, suggesting that they might not be found among algebraically special spacetimes. Despite the inherent challenges in understanding accelerating NUT spacetimes, Chang, Mann, and Stelea managed to construct a sort of accelerating NUT black hole in their seminal work [11], employing intricate solution-generating techniques. 
Based on the SL(2,\(\mathbb{R}\)) symmetry of a reduced Lagrangian obtained via dimensional reduction of the four-dimensional GR Lagrangian along the time direction, they successfully demonstrated that, with an accelerating version of the Zipoy-Voorhees line element [10] as a seed, one can obtain a new solution which correctly reduces to the Taub-NUT black hole in the zero-acceleration limit, while it also assumes the standard form of the C-metric in a certain parameter limit. Thorough analysis of this solution was later conducted by Podolsky and Vratny [12], who showed that it represents a genuine accelerating NUT black hole, and, moreover, that it falls under algebraic type I, thereby being algebraically general. Consequently, the aforementioned solution is not included in the PD family. Furthermore, one of its notable features is the imbuing of both the black hole and accelerating horizons with a NUT charge. As a result, the background spacetime is no longer represented solely by the Rindler line element, but rather by its NUT generalization, which turns out to be of type I. Remarkably, the NUT parameter not only enters the black hole horizons but the accelerating horizons as well. Recently, a highly efficient mechanism for introducing NUT charge to accelerating spacetimes has been proposed [13; 14]. This innovative approach is based on the utilization of Ehlers transformations [15; 16], part of the Lie point symmetries inherent in the Einstein-Maxwell system, which become apparent when expressing the action in terms of the Ernst potentials [17; 18]. Through the application of the so-called electric Ehlers transformation, the proposed method adds a NUT charge to a given seed spacetime. In particular, it allows the introduction of a single NUT charge [13], or even two such charges [14], to any stationary axially symmetric spacetime in electrovacuum.
In [13], it has been demonstrated how the above machinery accurately provides the Chang, Mann, and Stelea solution [11] with remarkable simplicity, following the approach proposed by Podolsky and Vratny [12]. Additionally, a Reissner-Nordstrom (RN) C-metric NUT black hole that faithfully reduces to the RN-C-metric and RN-NUT configurations in certain limits has also been presented.1 While electric Ehlers transformations were readily known to add NUT charge to a given seed [19], the primary focus was directed towards static spherically symmetric seeds. As a result, the intricate interplay between NUT charge and the accelerating nature of a given seed has not been given much attention. The principal novelty of considering accelerating seeds lies in the emergence of Rindler horizons, representing the causal obstructions experienced by any accelerating observer along her/his trajectory. An Ehlers transformation affects not only the black hole horizons but also the Rindler horizons, yielding novel backreactions involving a NUT-enhanced background. Footnote 1: Extension of these solutions by including a conformally coupled scalar in the matter sector has also been studied in [13]. The investigation of incorporating a second NUT parameter into a solution that already carries NUT charge has been explored in [14]. As expected, in the case of nonaccelerating seeds, the introduction of a second NUT parameter proves to be redundant, as the latter can be absorbed by the NUT charge already present in the spacetime. However, for accelerating seeds with angular momentum, a distinct scenario unfolds; both NUT charges, the one confined to the horizon, and the other permeating throughout the whole spacetime, can in general coexist. Tuning both NUT charges, doable only in the presence of angular momentum, proves to be useful for removing the Misner string. Such a scenario has been studied considering the full PD class.
These findings have sparked a renewed interest in probing the black hole spectrum of GR beyond the well-explored type D class, leading to novel ways for constructing algebraically general black hole solutions. Among the evident extensions to be considered lies the application of electric Harrison transformations [16], which are known to introduce electric and magnetic monopole charges to a given seed. Indeed, when applying a Harrison transformation to add electric charge to a C-metric seed, it becomes apparent that the resulting solution is not algebraically special, but of algebraically general nature instead. Once again, a key element in this construction revolves around the occurrence of Rindler horizons. Both event and Rindler horizons are imbued with electric charge, a fact strongly affecting the spacetime geometry. The newly obtained charged accelerating solution is different from the well-known RN-C-metric present in the PD family. This indicates that the addition of electric charge to an accelerating seed leads to a unique class of type I black holes with distinctive properties. The construction of such type I charged accelerating black holes has been initially addressed in [20], with particular emphasis on the RN-C-metric and RN-C-metric-NUT cases. In this study, we present a qualitative picture of the most general type I extension of the PD family, achievable by the sequential application of Ehlers and Harrison transformations, a direct application of their composition that is. The resulting spacetime does in general feature two distinct NUT parameters and two sets of electromagnetic charges. We term this final configuration the "Enhanced Plebanski-Demianski" spacetime (EPD).
In the physical spherical-like coordinates introduced by Podolsky and Vratny [2; 3], its most general form would be described by nine parameters, the six parameters contained in the original PD spacetime, i.e., the mass \(m\), angular momentum \(a\), acceleration parameter \(A\), NUT charge \(l\) and electromagnetic charges \(e\) and \(g\), together with a second NUT parameter \(\bar{l}\) and a second pair of electromagnetic charges \(\bar{e}\) and \(\bar{g}\), induced by the Ehlers and Harrison maps, respectively. The latter three are henceforth dubbed Ehlers-Harrison charges. Notice that \(\bar{l}\), \(\bar{e}\) and \(\bar{g}\) will correspond to a reparametrization of the original parameters introduced via the Ehlers-Harrison map. Due to the high computational complexity of the task, we are able to provide the explicit form of the EPD spacetime in the original PD coordinates, only for a neutral PD seed. Despite this, the solution we present is sufficiently general and novel. Although the use of the original PD coordinates (and parameters) proves to be mandatory for the integration of the solution, regarded as a computational problem per se, the physical meaning is more or less obscure. For this reason, we also explicitly provide the full spacetime of an accelerating and rotating black hole carrying Ehlers-Harrison charges, using the physics-wise transparent form of the PD metric [2; 3] as the seed. Our paper is structured as follows: in Sec. II, we provide a concise introduction to Ehlers and Harrison transformations and discuss their crucial role in generating novel stationary axially symmetric solutions within the Einstein-Maxwell framework. Furthermore, we thoroughly investigate compositions of these transformations, disclosing an interesting equivalence (under certain assumptions) between the composition of two Harrison transformations and that of an Ehlers transformation with a Harrison one. In Sec. 
III, we present the Enhanced Plebanski-Demianski type I hierarchy of solutions and explicitly construct the EPD spacetime in PD coordinates, starting from a neutral PD seed. Next, we provide (in spherical-like coordinates) an exact expression for the metric representing an accelerating and rotating black hole with both Ehlers and Harrison charges, together with an expression for the gauge field supporting it. Various limits are discussed. We conclude our study in Sec. IV, where we highlight the significance of the new findings and discuss promising ways for further exploration using these innovative techniques. Lastly, in Appendix B, we offer a user-friendly rederivation of the Ernst equations, in an attempt to address sign inconsistencies often appearing in the relevant literature. ## II Ernst equations and the SU(2,1) symmetry The mathematical framework developed by Ernst in the 1960s [17; 18] has been a particularly valuable tool for studying stationary axisymmetric electrovacuum fields. Its remarkable novelty is the disclosure of additional symmetries in the Einstein-Maxwell system which remain elusive in the standard formulation. By casting the Einstein-Maxwell field equations into a set of two complex equations for the complex Ernst potentials, one ends up finding a collection of symmetry transformations which form a Lie group with eight real parameters [21; 22], isomorphic to SU(2,1). In a nutshell, the formulation works as follows.2 The most general stationary and axially symmetric spacetime within the Einstein-Maxwell framework is represented by the well-known Lewis-Weyl-Papapetrou (LWP) line element and the gauge field accompanying it, Footnote 2: See Appendix B for a detailed derivation of the Ernst equations. Here, we are assuming the so-called “electric” version of the LWP spacetime, and we have set \(p=0\) and \(s=1\). 
\[\mathrm{d}s^{2} = -f\left(\mathrm{d}t-\omega\,\mathrm{d}\varphi\right)^{2}+\frac{1} {f}\left[\rho^{2}\,\mathrm{d}\varphi^{2}+\mathrm{e}^{2\gamma}\left(\mathrm{d} \rho^{2}+\mathrm{d}z^{2}\right)\right], \tag{1a}\] \[A = A_{t}\,\mathrm{d}t+A_{\varphi}\,\mathrm{d}\varphi, \tag{1b}\] respectively, where \(f,\ \omega\), and \(\gamma\) are functions of Weyl's coordinates \(\rho\) and \(z\). It can be shown (see Appendix B for details) that, defining the pair of (complex) Ernst potentials \[\mathcal{E}=f-|\Phi|^{2}+i\chi,\quad\Phi=A_{t}+i\tilde{A}_{\varphi}, \tag{2}\] the Einstein-Maxwell field equations are cast into two complex three-dimensional equations, namely \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\mathcal{ E} = \mathbf{\nabla}\mathcal{E}\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*} \mathbf{\nabla}\Phi\right), \tag{3a}\] \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\Phi = \mathbf{\nabla}\Phi\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*}\mathbf{ \nabla}\Phi\right). \tag{3b}\] Here, all vector quantities are understood as vectors in flat space with cylindrical coordinates \(\{\rho,z,\varphi\}\). The so-called twisted potentials \(\tilde{A}_{\varphi}\) and \(\chi\) are then given by \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_{\varphi} = \frac{f}{\rho}\left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t} \right), \tag{4a}\] \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\chi = -\left(\frac{f^{2}}{\rho}\mathbf{\nabla}\omega+2\hat{\mathbf{\varphi}} \times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right), \tag{4b}\] respectively. Equations (3) enjoy certain symmetries, which must then be inherent in the Einstein-Maxwell system. 
These symmetry transformations, which are henceforth referred to as Ernst symmetries, are \[\mathcal{E} =\mathcal{E}_{0}+ib\,, \Phi =\Phi_{0}\,, \tag{5a}\] \[\mathcal{E} =\mathcal{E}_{0}-2\alpha^{*}\Phi_{0}-|\alpha|^{2}\,, \Phi =\Phi_{0}+\alpha\,,\] (5b) \[\mathcal{E} =\frac{\mathcal{E}_{0}}{1+ic\mathcal{E}_{0}}\,, \Phi =\frac{\Phi_{0}}{1+ic\mathcal{E}_{0}}\,,\] (5c) \[\mathcal{E} =|\lambda|^{2}\mathcal{E}_{0}\,, \Phi =\lambda\Phi_{0}\,,\] (5d) \[\mathcal{E} =\frac{\mathcal{E}_{0}}{1-2\beta^{*}\Phi_{0}-|\beta|^{2} \mathcal{E}_{0}}\,, \Phi =\frac{\beta\mathcal{E}_{0}+\Phi_{0}}{1-2\beta^{*}\Phi_{0}-|\beta| ^{2}\mathcal{E}_{0}}\,, \tag{5e}\] where \(\alpha,\beta\) and \(\lambda\) are complex parameters, while \(b\) and \(c\) are real.3 Not all of these transformations can be used to generate novel spacetimes. In fact, (5a) and (5b) are nothing else than gravitational and electromagnetic gauge transformations, while (5d) corresponds to a coordinate rescaling combined with an electromagnetic duality rotation. However, the remaining symmetries, (5c) and (5e), the so-called Ehlers [15] and Harrison [16] transformations, act in a nontrivial way, thereby producing new nonequivalent spacetimes which, of course, are again solutions of the Einstein-Maxwell field equations. Footnote 3: In this section, latin letters \(a,b,\ldots\) are reserved for real parameters, whereas Greek letters \(\alpha,\beta,\ldots\) stand for complex ones. In addition, it is worth observing that Ehlers and Harrison maps can be obtained as compositions of the gauge transformations with the inversion operation. To see this, let us first denote with a \(0\) subscript the seed quantities, e.g. \(\mathcal{E}_{0},\ \Phi_{0}\). Next, let \(\mathsf{T}^{1}_{c}\) and \(\mathsf{T}^{2}_{\beta}\) denote the gravitational and electromagnetic gauge transformations, (5a) and (5b), respectively, and let \(\mathsf{T}^{3}_{\lambda}\) stand for the combination of a rescaling with a duality transformation, viz., Eq. (5d). 
We display them once again below, this time in terms of the new nomenclature, \[\mathsf{T}^{1}_{c} :(\mathcal{E}_{0},\Phi_{0})\mapsto(\mathcal{E}_{0}+ic,\Phi_{0})= :(\mathcal{E},\Phi)\,, \tag{6a}\] \[\mathsf{T}^{2}_{\beta} :(\mathcal{E}_{0},\Phi_{0})\mapsto\left(\mathcal{E}_{0}-2\beta^{ *}\Phi_{0}-|\beta|^{2},\Phi_{0}+\beta\right),\] (6b) \[\mathsf{T}^{3}_{\lambda} :(\mathcal{E}_{0},\Phi_{0})\mapsto\left(|\lambda|^{2}\mathcal{E} _{0},\lambda\Phi_{0}\right), \tag{6c}\] and further complement them with the discrete "inversion" transformation that Ernst equations possess, namely \[\mathsf{l}:(\mathcal{E}_{0},\Phi_{0})\mapsto\left(\frac{1}{\mathcal{E}_{0}}, \frac{\Phi_{0}}{\mathcal{E}_{0}}\right). \tag{6d}\] With these at hand, it is quite straightforward to observe that a certain composition of the above transformations leads to the Ehlers transformation, the one generalized to account for the presence of the gauge field [15; 23], \[\mathsf{E}_{c}:=\mathsf{l}\circ\mathsf{T}^{1}_{c}\circ\mathsf{l}:(\mathcal{E} _{0},\Phi_{0})\mapsto\left(\frac{\mathcal{E}_{0}}{1+ic\mathcal{E}_{0}},\frac{ \Phi_{0}}{1+ic\mathcal{E}_{0}}\right). \tag{6e}\] Similarly, replacing \(\mathsf{T}^{1}_{c}\) with \(\mathsf{T}^{2}_{\beta}\) in the above, one obtains the Harrison map (5e) \[\mathsf{H}_{\beta}:=\mathsf{l}\circ\mathsf{T}^{2}_{\beta}\circ\mathsf{l}:( \mathcal{E}_{0},\Phi_{0})\mapsto\left(\frac{\mathcal{E}_{0}}{1-2\beta^{*}\Phi _{0}-|\beta|^{2}\mathcal{E}_{0}},\frac{\Phi_{0}+\beta\mathcal{E}_{0}}{1-2\beta ^{*}\Phi_{0}-|\beta|^{2}\mathcal{E}_{0}}\right), \tag{6f}\] which mixes electromagnetism with gravity [16]. In the next subsection we focus on compositions of Ehlers and Harrison transformations, which will later be used in Sec. III to construct new stationary and axially symmetric solutions of the Einstein-Maxwell system, which are algebraically general. 
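Since the potentials at a spacetime point are just complex numbers, the compositions above are easy to sanity-check numerically. The following sketch (illustrative, not part of the paper) implements the gauge maps and the inversion, and compares \(\mathsf{l}\circ\mathsf{T}^{1}_{c}\circ\mathsf{l}\) and \(\mathsf{l}\circ\mathsf{T}^{2}_{\beta}\circ\mathsf{l}\) with the closed-form Ehlers and Harrison maps at a sample point:

```python
# Illustrative check (not from the paper): the Ehlers and Harrison maps
# as inversion-conjugated gauge transformations, Eqs. (6e)-(6f).  Ernst
# potentials at a point are modelled as plain complex numbers.

def inv(E, P):                       # inversion, Eq. (6d)
    return 1 / E, P / E

def T1(c):                           # gravitational gauge shift, Eq. (6a)
    return lambda E, P: (E + 1j * c, P)

def T2(beta):                        # electromagnetic gauge shift, Eq. (6b)
    return lambda E, P: (E - 2 * beta.conjugate() * P - abs(beta) ** 2,
                         P + beta)

def compose(f, g):                   # (f o g): g acts first
    return lambda E, P: f(*g(E, P))

def ehlers(c):                       # closed form, Eq. (5c)
    return lambda E, P: (E / (1 + 1j * c * E), P / (1 + 1j * c * E))

def harrison(beta):                  # closed form, Eq. (5e)
    def h(E, P):
        d = 1 - 2 * beta.conjugate() * P - abs(beta) ** 2 * E
        return E / d, (beta * E + P) / d
    return h

E0, P0 = 0.7 - 0.4j, 0.2 + 0.3j      # arbitrary sample potentials
c, beta = 0.31, 0.15 - 0.27j

lhs_E = compose(inv, compose(T1(c), inv))(E0, P0)     # I o T1_c o I
lhs_H = compose(inv, compose(T2(beta), inv))(E0, P0)  # I o T2_beta o I
```

Both composites agree with the closed-form maps to machine precision at the sample point, mirroring the algebraic identities above.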
### Compositions

Due to the fact that the composition of two inverse transformations equals the identity transformation, i.e., \(\mathsf{l}\circ\mathsf{l}=\mathbb{I}\), and since gravitational gauge transformations commute, namely \(\mathsf{T}^{1}_{b}\circ\mathsf{T}^{1}_{c}=\mathsf{T}^{1}_{b+c}\), it easily follows that Ehlers transformations form a one-parameter subgroup; they satisfy the group property \(\mathsf{E}_{b}\circ\mathsf{E}_{c}=\mathsf{E}_{b+c}\). Moreover, since, in general, \(\mathsf{T}^{1}_{c}\circ\mathsf{T}^{2}_{\beta}=\mathsf{T}^{2}_{\beta}\circ\mathsf{T}^{1}_{c}\), it also follows that Ehlers transformations commute with Harrison ones, viz., \(\mathsf{E}_{c}\circ\mathsf{H}_{\beta}=\mathsf{H}_{\beta}\circ\mathsf{E}_{c}\). Actually, since in the next section we are going to use this particular composition to build the new solutions, let us be proactive and display this map here, \[\mathsf{E}_{c}\circ\mathsf{H}_{\beta}:(\mathcal{E}_{0},\Phi_{0})\mapsto\left(\frac{\mathcal{E}_{0}}{1-2\beta^{*}\Phi_{0}+(ic-|\beta|^{2})\,\mathcal{E}_{0}},\frac{\beta\mathcal{E}_{0}+\Phi_{0}}{1-2\beta^{*}\Phi_{0}+(ic-|\beta|^{2})\,\mathcal{E}_{0}}\right). \tag{7}\] On the other hand, we observe that Harrison transformations fail to form a subgroup, due to the fact that the electromagnetic gauge transformations do not commute, \[\left(\mathsf{T}^{2}_{\beta}\circ\mathsf{T}^{2}_{\alpha}-\mathsf{T}^{2}_{\alpha+\beta}\right)(\mathcal{E}_{0},\Phi_{0})=\beta\alpha^{*}-\alpha\beta^{*}. \tag{8}\] Indeed, \(\mathsf{T}^{2}_{\beta}\circ\mathsf{T}^{2}_{\alpha}=\mathsf{T}^{1}_{i(\alpha\beta^{*}-\beta\alpha^{*})}\circ\mathsf{T}^{2}_{\alpha+\beta}\), which implies that two general Harrison transformations amount to a particular Ehlers-Harrison one, namely, \[\mathsf{H}_{\beta}\circ\mathsf{H}_{\alpha}=\mathsf{E}_{i(\alpha\beta^{*}-\beta\alpha^{*})}\circ\mathsf{H}_{\alpha+\beta}. \tag{9}\] This is quite an interesting observation.
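The composition rule for two Harrison maps can be checked numerically at a sample point (an illustrative sketch; we adopt the convention that the rightmost map acts first, so \(\mathsf{H}_{\alpha}\) is applied first, and reversing the order flips the sign of the induced Ehlers parameter):

```python
# Illustrative numerical check: two Harrison maps compose into an Ehlers
# map with a fixed real parameter followed by a single Harrison map,
#   H_beta o H_alpha = E_{i(alpha beta* - beta alpha*)} o H_{alpha+beta},
# with H_alpha acting first.  Potentials are modelled as complex numbers.

def ehlers(c):
    return lambda E, P: (E / (1 + 1j * c * E), P / (1 + 1j * c * E))

def harrison(beta):
    def h(E, P):
        d = 1 - 2 * beta.conjugate() * P - abs(beta) ** 2 * E
        return E / d, (beta * E + P) / d
    return h

alpha, beta = 0.2 + 0.5j, -0.3 + 0.1j
E0, P0 = 0.8 - 0.6j, 0.1 + 0.4j

lhs = harrison(beta)(*harrison(alpha)(E0, P0))

# The induced Ehlers parameter i(alpha beta* - beta alpha*) is real.
c = (1j * (alpha * beta.conjugate() - beta * alpha.conjugate())).real
rhs = ehlers(c)(*harrison(alpha + beta)(E0, P0))
```

For these sample values the induced Ehlers parameter is nonzero, so the composition is genuinely not a single Harrison map, and the two sides agree to machine precision.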
Recall that the Harrison map is thought of as a charging transformation rendering vacuum into electrovacuum solutions. But what really happens when the seed is already an electrovacuum one?4 The above composition property seems to tell us that the application of a Harrison transformation on a static electrovacuum seed will lead to a stationary electrovacuum spacetime--albeit suffering from a NUT-like singularity. Of course, this NUT parameter will not be free; it is rather determined by the other parameters and charges at play. Footnote 4: Remember that an electrovacuum solution can always be obtained via a Harrison transformation of a vacuum seed. All of this is well understood in the light of Eq. (9). Two general Harrison transformations applied to a vacuum seed amount to charging this seed and simultaneously adding a fixed NUT parameter to it. Therefore, without resorting to further compositions of the Harrison map with the other symmetries, the only way to avoid the cross term in the target metric is if \(\alpha,\beta\) satisfy the relation \[\mathsf{Re}\,\beta=\frac{\mathsf{Re}\,\alpha}{\mathsf{Im}\,\alpha}\,\mathsf{ Im}\,\beta, \tag{10}\] in which case \(\mathsf{H}_{\beta}\circ\mathsf{H}_{\alpha}=\mathsf{H}_{\alpha+\beta}\). For example, assuming \(\mathsf{Im}\,\alpha\neq 0\), we have that \[\mathsf{H}_{c(\mathsf{Re}\,\alpha/\mathsf{Im}\,\alpha+i)}\circ\mathsf{H}_{ \alpha}=\mathsf{H}_{\alpha(1+c/\mathsf{Im}\,\alpha)}, \tag{11}\] where we remind the reader that \(\alpha\) is complex whereas \(c\) is real. The generalization is straightforward; since a composition of Ehlers transformations is again an Ehlers transformation, and since Ehlers transformations commute with Harrison ones, from Eq. 
(9) we can conclude that \[\mathsf{H}_{\alpha_{p}}\circ\ldots\circ\mathsf{H}_{\alpha_{2}}\circ\mathsf{H} _{\alpha_{1}}=\mathsf{E}_{c}\circ\mathsf{H}_{\alpha_{1}+\alpha_{2}+\ldots+ \alpha_{p}}, \tag{12}\] where the real parameter \(c\) is fixed in terms of the real and imaginary parts of \(\alpha_{1},\alpha_{2},\ldots,\alpha_{p}\). As we are going through the various composition properties, and since the enhanced transformations presented in [20] and [24] prove to be convenient, it is worth studying the latter in the above spirit. The so-called enhanced Ehlers transformation \[\mathsf{EE}_{c}:(\mathcal{E}_{0},\Phi_{0})\mapsto\left(\frac{\mathcal{E}_{0} +ic}{1+ic\mathcal{E}_{0}},\Phi_{0}\frac{1+ic}{1+ic\mathcal{E}_{0}}\right), \tag{13}\] which directly provides purely the NUT extension of a given seed, is nothing else than the composition \[\mathsf{EE}_{c}=\mathsf{T}_{c}^{1}\circ\mathsf{T}_{1+ic}^{3}\circ\mathsf{E}_{c}. \tag{14}\] Enhanced Ehlers transformations retain the properties of the original Ehlers transformations in the sense that they also form a one-parameter subgroup, \[\mathsf{EE}_{b}\circ\mathsf{EE}_{c}=\mathsf{EE}_{(b+c)/(1-bc)}. \tag{15}\] It is also fortunate that the enhancing itself is an operation which can be applied after transforming the solution a la Ehlers. However, this is not the case for the enhanced version of the Harrison transformation presented in [20], which reads \[\mathsf{E}\mathsf{H}_{\{\alpha,b\}}=\mathsf{T}_{-c\alpha}^{2}\circ\mathsf{H}_{ \alpha}\circ\mathsf{T}_{c\,e^{ib}}^{3},\quad c=\frac{\sqrt{1+4|\alpha|^{2 }}-1}{2|\alpha|^{2}}, \tag{16}\] for the Harrison transformation does not commute with \(\mathsf{T}^{3}\).5 The enhanced version introduces one additional real parameter \(b\), besides the Harrison transformation parameter \(\alpha\). The former can be appropriately fixed to nullify the cross-term contribution from the Harrison operation in the target metric, if seed charges are present. 
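As a sanity check of the enhanced Ehlers map, the following sympy sketch (an illustration under our conventions, not the paper's code) confirms both the decomposition (14) and the group law (15):

```python
# Symbolic check of Eqs. (13)-(15) for the enhanced Ehlers transformation.
import sympy as sp

E, F = sp.symbols('E Phi')
b, c = sp.symbols('b c', real=True)

def EE(p):  # enhanced Ehlers transformation, Eq. (13)
    return lambda E, F: ((E + sp.I*p)/(1 + sp.I*p*E), F*(1 + sp.I*p)/(1 + sp.I*p*E))

# Eq. (14): EE_c = T^1_c o T^3_{1+ic} o E_c, the rightmost map acting first
def via_composition(E, F):
    E1, F1 = E/(1 + sp.I*c*E), F/(1 + sp.I*c*E)   # E_c, Eq. (6e)
    E2, F2 = (1 + c**2)*E1, (1 + sp.I*c)*F1       # T^3_{1+ic}: |1+ic|^2 = 1+c^2, Eq. (6c)
    return (E2 + sp.I*c, F2)                      # T^1_c, Eq. (6a)

direct = EE(c)(E, F)
comp = via_composition(E, F)
assert all(sp.cancel(sp.together(d - k)) == 0 for d, k in zip(direct, comp))

# Eq. (15): EE_b o EE_c = EE_{(b+c)/(1-bc)}
lhs = EE(b)(*EE(c)(E, F))
rhs = EE((b + c)/(1 - b*c))(E, F)
assert all(sp.cancel(sp.together(l - r)) == 0 for l, r in zip(lhs, rhs))
print("Eqs. (14) and (15) verified")
```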
Footnote 5: Note here that Ehlers and Harrison transformations do not in general commute with the gauge transformations.

## III Enhanced Plebanski-Demianski metric: the type I hierarchy

Having delineated the transformations in Sec. II and armed with a clear understanding of the effect that the Ehlers-Harrison transformation has on the accelerating horizons, we are well-positioned to seek the explicit integration of the all-inclusive family of type I geometries, herein referred to as the Enhanced Plebanski-Demianski spacetime, or EPD for short. As the task of integrating Eqs. (4) for the twisted potentials in this most general case proves to be a rather daunting computational challenge in spherical-like coordinates, we shall use a "lighter" form of the PD metric, the one in the original PD coordinates. We banish the details to Appendix A, with the immensity of the expressions justifying us in doing so. Based on the findings therein, we are well-equipped to provide a description of the entire hierarchy tree of solutions within this new EPD family, which features in Fig. 1 as the "parent" spacetime. Although we lack an analytic form for the full EPD spacetime, the one with all seed parameters and charges switched on, we nevertheless corroborate the tree structure in Fig. 1 with our results in Appendix A, at least up to a case directly next to the most general one, in particular, the EPD solution without seed electromagnetic charges. It is worth noting that, in the hierarchy diagram, the root node (the EPD spacetime) is characterized by a set of physical parameters \(\{m,a,A,l,e,g\}\) of the PD seed and by an additional parameter triplet introduced via the Ehlers-Harrison operation and denoted as \(\{c,b_{e},b_{m}\}\) hereafter. The parameters \(b_{e}\) and \(b_{m}\) are the real and imaginary parts, respectively, of a Harrison parameter \(\beta\), whereas \(c\) stands for a real Ehlers transformation parameter. 
Recall that \(b_{e}\) is then associated with the inclusion of electric charge, while \(b_{m}\) with the inclusion of a magnetic monopole charge. Since the results in Appendix A have been derived using the original PD coordinates, it is worth remarking that establishing the relationships between the parameters in the a la PD form of the target metric and the physical parameters we actually use in the hierarchy structure can be tricky at times, yet a definitely feasible task. Therefore, the use of the physical parameters in Fig. 1 is _ipso facto_ justified. Moreover, the explicit parametrizations of the Ehlers-Harrison parameters \(\{c,b_{e},b_{m}\}\) in terms of the extra NUT and electromagnetic charges \(\{\bar{l},\bar{e},\bar{g}\}\), and vice versa, which necessarily contain (some of) the seed parameters, may vary between the various nodes in the hierarchy tree. Thus, we prefer to adhere to the use of \(\{c,b_{e},b_{m}\}\) in most cases. In general, a zoo of reparametrizations is usually necessary to present the various metrics in the standard form, or in a desired form in the absence of a standard one. Having said that, the unfortunate occurrence of the same symbols in various "children" of the EPD spacetime in Fig. 1 should not mislead the reader into believing that the parameters are actually the same (although such cases are not excluded). They should rather be understood in terms of the "physical" properties they characterize, these being the same in all cases included. Then, finding the specific reparametrizations is a work better undertaken on a case-by-case basis. We shall also remark that the term "enhanced" is used to convey the action of an Ehlers-Harrison map, thereby effecting a nontrivial transformation of the background, in which the original PD spacetime resides. Here, all the extra parameters, that is, \(c\), \(b_{e}\), and \(b_{m}\), appear. 
When we only operate with one of the two maps, we refer to the resulting family as either "Ehlers", or "Harrison", with \(c\) entering the solution in the former case, and \(b_{e}\) and \(b_{m}\) in the latter. For example, Ehlers-RN-C-metric corresponds to an RN-C-metric black hole in a background with NUT parameter \(c\), brought in via the Ehlers transformation. Similarly, Harrison PD indicates a PD spacetime in a background featuring electromagnetic charges \(b_{e}\) and \(b_{m}\), added via the Harrison operation. At this stage, it is also important to mention two key cases. First, in the vanishing-acceleration limit, keeping two sets of NUT and electromagnetic charges (seed and Ehlers-Harrison) is redundant. After an appropriate reparametrization, only one set of charges should remain. Second, exclusively in the presence of rotation, that is, \(a\neq 0\), the seed NUT parameter \(l\) can coexist with the Ehlers NUT. When \(a=0\), the presence of two NUT parameters is again superfluous, and only a single effective NUT parameter should remain. On the other hand, the seed electromagnetic charges \(e\) and \(g\) can exist as independent charges, alongside the Harrison charges, even for a nonrotating seed. Consequently, in an attempt to have a consistent notation for all cases depicted in Fig. 1, we lastly adhere to the following rule. For effective NUT and electromagnetic charges, i.e., combinations of seed NUT with Ehlers NUT and combinations of seed charges with Harrison charges, respectively, we use a bar accent. Thus, since we previously agreed to the use of \(c,b_{e},b_{m}\) in the hierarchy tree, we shall also use \(\bar{c}\), \(\bar{b}_{e}\), and \(\bar{b}_{m}\) to denote the effective quantities. 
Finally, when classifying the solutions, a convenient criterion to distinguish whether a given spacetime is algebraically general or special is to examine the relation \[I^{3}=27J^{2}, \tag{3.1}\] where \[I=\Psi_{0}\Psi_{4}-4\Psi_{1}\Psi_{3}+3\Psi_{2}^{2},\qquad J=\begin{vmatrix} \Psi_{0}&\Psi_{1}&\Psi_{2}\\ \Psi_{1}&\Psi_{2}&\Psi_{3}\\ \Psi_{2}&\Psi_{3}&\Psi_{4}\end{vmatrix}. \tag{3.2}\] A spacetime is said to be algebraically general, ergo of Petrov type I, whenever the identity (3.1) is not satisfied. Otherwise, the spacetime is said to be algebraically special. Then, a convenient strategy to follow here is to choose a tetrad for which the invariants \(\Psi_{1}\) and \(\Psi_{3}\) vanish, with \(\Psi_{0},\Psi_{4}\neq 0\). This allows one to write Eq. (3.1) in the simpler form \[\Psi_{0}\Psi_{4}\left(\Psi_{0}\Psi_{4}-9\Psi_{2}^{2}\right)^{2}=0, \tag{3.3}\] which implies that, if \(\Psi_{0}\Psi_{4}\neq 9\Psi_{2}^{2}\), the spacetime is algebraically general.

Fig. 1: Hierarchy of solutions for the Enhanced-Plebanski-Demianski spacetime.

In the subsequent subsection, we construct an accelerating and rotating black hole endowed with NUT and electromagnetic charges entering both horizons, Rindler and black hole ones. We attain this solution by acting with the Ehlers-Harrison map on a neutral NUTless PD seed. This time, we derive the solution in the physical spherical-like coordinates _ab initio_, a fact compensating for the sacrifice of yet another seed parameter. 
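The criterion is easy to automate. A short sympy sketch (ours) builds the invariants (3.2), confirms that a purely-\(\Psi_{2}\) (type D) configuration satisfies the identity, and reproduces the reduced form of the criterion for a tetrad with \(\Psi_{1}=\Psi_{3}=0\):

```python
# Symbolic check of the algebraic-speciality criterion built from the Weyl scalars.
import sympy as sp

P0, P1, P2, P3, P4 = sp.symbols('Psi0:5')

I = P0*P4 - 4*P1*P3 + 3*P2**2
J = sp.Matrix([[P0, P1, P2],
               [P1, P2, P3],
               [P2, P3, P4]]).det()

S = I**3 - 27*J**2      # vanishes iff the spacetime is algebraically special

# A type D configuration (only Psi_2 nonzero) satisfies the identity:
assert sp.expand(S.subs({P0: 0, P1: 0, P3: 0, P4: 0})) == 0

# With Psi_1 = Psi_3 = 0 the criterion collapses to Psi0*Psi4*(Psi0*Psi4 - 9*Psi2^2)^2:
reduced = sp.expand(S.subs({P1: 0, P3: 0}))
assert sp.expand(reduced - P0*P4*(P0*P4 - 9*P2**2)**2) == 0
print("criterion checks pass")
```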
### Enhanced Kerr: accelerating and rotating black hole with NUT parameter and electromagnetic charges

Considering the coordinates presented in [2; 3] and the subsequent correction of the gauge field introduced in [14], we start by writing down the PD metric as \[\Omega^{2}\mathrm{d}s_{0}^{2} = -\frac{Q}{R^{2}}\left[\mathrm{d}t-\left(1-x\right)\left(a+2l+ax \right)\mathrm{d}\varphi\right]^{2}+\frac{R^{2}}{1-x^{2}}\left(\frac{\mathrm{ d}x^{2}}{P}+\frac{\left(1-x^{2}\right)\mathrm{d}r^{2}}{Q}\right) \tag{18}\] \[+\frac{\left(1-x^{2}\right)P}{R^{2}}\left\{a\,\mathrm{d}t-\left[ r^{2}+\left(a+l\right)^{2}\right]\mathrm{d}\varphi\right\}^{2},\] where \[\Omega(r,x) =1-\frac{aA}{a^{2}+l^{2}}\,r\left(l+ax\right) \tag{19a}\] \[R^{2}(r,x) =r^{2}+\left(l+ax\right)^{2},\] (19b) \[P(x) =\Omega(r_{+},x)\,\Omega(r_{-},x),\] (19c) \[Q(r) =\left(r-r_{+}\right)\left(r-r_{-}\right)\left(1+aA\,\frac{a-l}{ a^{2}+l^{2}}\,r\right)\left(1-aA\,\frac{a+l}{a^{2}+l^{2}}\,r\right), \tag{19d}\] with \[r_{\pm}=m\pm\sqrt{m^{2}+l^{2}-a^{2}-e^{2}-g^{2}}, \tag{20}\] denoting the locations of the black hole horizons. The parameters appearing above are the "physical" ones: the mass \(m\), the angular momentum \(a\), the acceleration parameter \(A\), the NUT parameter \(l\), and the electromagnetic charges \(e\) and \(g\). We can cast (18) into the LWP form (1a) with Weyl's canonical coordinates \(\rho,z\) expressed in terms of \(r,x\) via6 Footnote 6: For functions with a single argument, a prime accent denotes differentiation with respect to that argument. 
\[\rho(r,x) = \frac{\sqrt{\left(1-x^{2}\right)PQ}}{\Omega^{2}}, \tag{21a}\] \[2Aa^{2}r^{2}z(r,x) = \left(a^{2}+l^{2}\right)\left[r\left(r_{+}+r_{-}\right)-2r_{+}r_{ -}\right]+2aLlr_{+}r_{-}r\] (21b) \[+\frac{2\left[a^{2}+l^{2}-2aA\left(l+ax\right)r\right]Q-\Omega\, Q^{\prime}\left(a^{2}+l^{2}\right)r}{\Omega^{2}},\] and the seed functions being \[f_{0}(r,x) = \frac{Q-a^{2}\left(1-x^{2}\right)P}{\Omega^{2}R^{2}}, \tag{21c}\] \[\omega_{0}(r,x) = (1-x)\left[2l+a\left(1+x\right)\left(1-\frac{P}{\Omega^{2}f_{0}} \right)\right],\] (21d) \[\gamma(r,x) = \frac{1}{2}\ln\frac{R^{2}f_{0}}{\Omega^{2}\left[\left(\partial_{r }z\right)^{2}+\left(\partial_{r}\rho\right)^{2}\right]Q}. \tag{21e}\] Finally, the seed gauge field \(A_{0}\) has nonvanishing temporal and azimuthal components, \[A_{t,0}(r,x) = -\frac{er+g\left(l+ax\right)}{R^{2}}, \tag{3.8a}\] \[A_{\varphi,0}(r,x) = gx-\left(1-x\right)\left(a+2l+ax\right)A_{t,0}, \tag{3.8b}\] respectively. This information almost suffices to identify the seed Ernst potentials. We also need to solve Eqs. (2.4) for the seed twisted potentials \(\tilde{A}_{\varphi,0}\) and \(\chi_{0}\). To do so, we need to write the gradient of a function \(F(r,x)\) in the coordinate system \(\{t,r,x,\varphi\}\); it reads \[\mathbf{\nabla}F=\frac{1}{h_{r}}\partial_{r}F\hat{\mathbf{r}}+\frac{1}{h_{x}}\partial _{x}F\hat{\mathbf{x}}, \tag{3.9}\] where the scale factors \(h_{r}(r,x)\) and \(h_{x}(r,x)\) are given by \[h_{r}=\frac{R}{\Omega\sqrt{Q}}=h_{x}\sqrt{\frac{\left(1-x^{2} \right)P}{Q}}. \tag{3.10}\] Then, the differential equations via which we are to determine \(\tilde{A}_{\varphi,0}\) become \[\partial_{x}\tilde{A}_{\varphi,0} = a\,\partial_{r}A_{t,0}, \tag{3.11a}\] \[\partial_{r}\tilde{A}_{\varphi,0} = -\frac{\partial_{x}A_{t,0}}{a}, \tag{3.11b}\] admitting the solution \[\tilde{A}_{\varphi,0}(r,x)=-\frac{gr-e\left(l+ax\right)}{R^{2}}, \tag{3.12}\] up, of course, to the addition of an integration constant which we set to zero. 
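The integration just performed is straightforward to double-check: a few lines of sympy (ours) confirm that (3.12) solves the system (3.11) for the seed component (3.8a):

```python
# Verify that the twisted potential (3.12) satisfies the twist equations (3.11).
import sympy as sp

r, x, a, l, e, g = sp.symbols('r x a l e g', real=True)

R2 = r**2 + (l + a*x)**2                      # Eq. (19b)
At0 = -(e*r + g*(l + a*x))/R2                 # Eq. (3.8a)
Atil0 = -(g*r - e*(l + a*x))/R2               # candidate solution, Eq. (3.12)

assert sp.simplify(sp.diff(Atil0, x) - a*sp.diff(At0, r)) == 0      # Eq. (3.11a)
assert sp.simplify(sp.diff(Atil0, r) + sp.diff(At0, x)/a) == 0      # Eq. (3.11b)
print("twisted potential verified")
```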
The differential equations determining \(\chi_{0}\), i.e., the second equation in the set (2.4), admit the solution \[\chi_{0}(r,x) = \frac{2\left(a^{2}+l^{2}\right)\left[amx-l\left(r-m\right) \right]-2aA\left[\left(a^{2}-l^{2}\right)\left(r-m\right)r+a\left(2m-r_{+} \right)r_{+}\left(l+ax\right)x\right]}{R^{2}\left(a^{2}+l^{2}\right)\Omega}, \tag{3.13}\] again, up to the addition of an integration constant which we also neglect for simplicity without loss of generality. The seed Ernst potentials are found by simply substituting the above functions into (2.2), and there is no reason to display them explicitly here. Before proceeding with the Ehlers-Harrison transformation, let us communicate a somewhat interesting observation which will prove pertinent also in the target case. Notice that \(\tilde{A}_{\varphi,0}\) is obtained from \(A_{t,0}\) via a duality transformation of the charges \((e,g)\mapsto(g,-e)\). Indeed, the previous exchange of the charges generates a discrete phase transformation \(\Phi_{0}\mapsto-i\Phi_{0}\) which maps \(A_{t,0}\mapsto\tilde{A}_{\varphi,0}\) and \(\tilde{A}_{\varphi,0}\mapsto-A_{t,0}\). Since \(|\Phi_{0}|^{2}\) and \(e^{2}+g^{2}\) are then invariant under such transformations, it follows that \(\mathcal{E}_{0}\) is preserved. Therefore, looking at the target potentials (2.7), it becomes apparent that, if we simultaneously perform a duality transformation of the Harrison parameter \(\beta=b_{e}+ib_{m}\), i.e., \((b_{e},b_{m})\mapsto(b_{m},-b_{e})\), or equivalently \(\beta\mapsto-i\beta\), the target gravitational potential \(\mathcal{E}\) is preserved whereas \(\Phi\mapsto-i\Phi\), exactly as in the seed case. In other words, the particular exchanges of charges and parameters end up inducing a \(\mathsf{T}_{-i}^{3}\) transformation of the potentials which, of course, leaves the Ernst equations invariant. 
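The duality remark can also be verified symbolically. Assuming the standard identification \(\Phi_{0}=A_{t,0}+i\tilde{A}_{\varphi,0}\) for the definitions (2.2) (which are not displayed in this excerpt), the following sympy sketch confirms that the charge exchange \((e,g)\mapsto(g,-e)\) acts as the phase rotation \(\Phi_{0}\mapsto-i\Phi_{0}\) and leaves \(|\Phi_{0}|^{2}\) untouched:

```python
# Check the duality observation: (e,g) -> (g,-e) is a T^3_{-i} phase rotation of Phi_0.
import sympy as sp

r, x, a, l, e, g = sp.symbols('r x a l e g', real=True)

R2 = r**2 + (l + a*x)**2
At0 = -(e*r + g*(l + a*x))/R2                 # Eq. (3.8a)
Atil0 = -(g*r - e*(l + a*x))/R2               # Eq. (3.12)
Phi0 = At0 + sp.I*Atil0                       # assumed form of the complex potential

Phi0_dual = Phi0.subs({e: g, g: -e}, simultaneous=True)
assert sp.simplify(Phi0_dual + sp.I*Phi0) == 0                      # Phi_0 -> -i Phi_0
assert sp.simplify(sp.expand(Phi0_dual*sp.conjugate(Phi0_dual)
                             - Phi0*sp.conjugate(Phi0))) == 0       # |Phi_0|^2 invariant
print("duality acts as Phi_0 -> -i Phi_0")
```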
Further looking at the twist equations (2.4), the differential equation determining \(\omega\) does not transform, in contrast to the one determining the azimuthal component of the target gauge field. One can basically see where this is going; the target metric will be invariant under the simultaneous charge and Harrison-parameter duality exchanges, but the target Maxwell field will not, an after all quite expected result. Let us now operate on the seed potentials with a combined Ehlers-Harrison transformation via the composition \(\mathsf{E}_{c}\circ\mathsf{H}_{\beta}\). The new potentials read \[\mathcal{E} = \frac{\mathcal{E}_{0}}{\Lambda},\quad\Phi=\frac{\Phi_{0}+\left(b _{e}+ib_{m}\right)\mathcal{E}_{0}}{\Lambda}, \tag{3.14}\] \[\Lambda = 1+\left(ic-b_{e}^{2}-b_{m}^{2}\right)\mathcal{E}_{0}-2\left(b_{e }-ib_{m}\right)\Phi_{0},\quad\beta=b_{e}+ib_{m}.\] Using the definitions (2.2), we can readily identify some of the new functions. In particular, \[f(r,x) = \frac{f_{0}}{|\Lambda|^{2}}, \tag{3.15a}\] \[|\Lambda|^{2}\chi(r,x) = \chi_{0}-c|\mathcal{E}_{0}|^{2}-2b_{e}\left(\chi_{0}A_{t,0}-\tilde {A}_{\varphi,0}\operatorname{\mathsf{Re}}\mathcal{E}_{0}\right)-2b_{m}\left( \chi_{0}\tilde{A}_{\varphi,0}+A_{t,0}\operatorname{\mathsf{Re}}\mathcal{E}_{0} \right),\] (3.15b) \[|\Lambda|^{2}A_{t}(r,x) = A_{t,0}+\left(b_{m}c-b_{e}b_{m}^{2}-b_{e}^{3}\right)|\mathcal{E} _{0}|^{2}+\left(4b_{e}b_{m}-c\right)\left(\chi_{0}A_{t,0}-\tilde{A}_{\varphi,0} \operatorname{\mathsf{Re}}\mathcal{E}_{0}\right)\] (3.15c) \[\tilde{A}_{\varphi}(r,x) = A_{t}\mid\left\{\Phi_{0}\mapsto-i\Phi_{0},\beta\mapsto-i\beta \right\}, \tag{3.15d}\] while \(\gamma\) remains the same. We once again remind the reader that \(b_{e}\) and \(b_{m}\) are the real and imaginary parts of the Harrison parameter \(\beta\), and \(c\) is the real Ehlers parameter. Now, we still have to solve for \(\omega\) and \(A_{\varphi}\). 
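Among the functions in (3.15), the transformation law (3.15a) for \(f\) is the easiest to test. Under the standard Ernst identifications \(\mathcal{E}_{0}=f_{0}-|\Phi_{0}|^{2}+i\chi_{0}\) and \(\Phi_{0}=A_{t,0}+i\tilde{A}_{\varphi,0}\) (an assumption on the definitions (2.2), not displayed here), a sympy computation confirms \(f=f_{0}/|\Lambda|^{2}\) identically:

```python
# Verify Eq. (3.15a): the metric function transforms as f = f_0/|Lambda|^2.
import sympy as sp

f0, chi0, At0, Aw0, be, bm, c = sp.symbols('f0 chi0 At0 Aw0 b_e b_m c', real=True)

Phi0 = At0 + sp.I*Aw0                       # Phi_0 = A_{t,0} + i Atilde_{phi,0} (assumed)
E0 = f0 - (At0**2 + Aw0**2) + sp.I*chi0     # E_0 = f_0 - |Phi_0|^2 + i chi_0 (assumed)
beta = be + sp.I*bm
Lam = 1 + (sp.I*c - be**2 - bm**2)*E0 - 2*(be - sp.I*bm)*Phi0   # Eq. (3.14)

E = E0/Lam                                  # target potentials, Eq. (3.14)
Phi = (Phi0 + beta*E0)/Lam

# target metric function: f = Re(E) + |Phi|^2
f_new = (E + sp.conjugate(E))/2 + Phi*sp.conjugate(Phi)
assert sp.cancel(sp.together(f_new - f0/(Lam*sp.conjugate(Lam)))) == 0
print("Eq. (3.15a) verified: f = f0/|Lambda|^2")
```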
Ideally, we would like to obtain the result with all the seed charges and parameters switched on, but this turns out to be an extremely demanding task, computation-wise, in these coordinates. Therefore, we restrict ourselves to discussing a particular case whose novelty is sufficient in the sense that it corresponds to an accelerating and rotating black hole of Petrov type I with NUT parameter and electric charge, both coming from the Ehlers-Harrison map. To this end, we switch off \(b_{m}\), the seed NUT parameter \(l\) and the initial charges \(e,g\) in order to greatly simplify things. With these assumptions, one can indeed first get \(\omega\), \[\omega = \omega_{0}+C_{1}-\frac{a\left(b_{e}^{4}+c^{2}\right)}{x^{2}} \left(1-\frac{\left(1-x^{2}\right)PF}{\Omega^{5}R^{2}f_{0}}\right)-\frac{2c \left[a^{2}+r\left(r-2m\right)\right]H}{\Omega^{4}R^{2}f_{0}}, \tag{3.16}\] where \[F(r,x) = R^{2}\left\{1-3Arx-A^{2}\left[r^{2}+a^{2}\left(1+3x^{2}\right) \right]+A^{3}\left[3r^{2}+a^{2}\left(3+x^{2}\right)\right]xr\right\} \tag{3.17a}\] \[-2m\left(1-A^{2}r^{2}\right)\left[r\left(1-x^{2}\right)+2x^{2}m- 2A\left(2r-m\right)xr-a^{2}A\left(1+3x^{2}\right)x\right],\] \[H(r,x) = A\left(1-x^{2}\right)R^{2}\left\{1+A^{2}\left[a^{2}\left(1+x^{2} -2Axr\right)-r^{2}\right]\right\} \tag{3.17b}\] are functions of \(r,x\) which depend only on the seed parameters, and where \(\omega_{0}\) is given in Eq. (3.7d). Note that the integration constant has been already shifted so as to have a nonsingular \(A\to 0\) limit. Having obtained \(\omega\), we can now integrate the first equation in (2.4) for the azimuthal part of the target gauge field, finding \[A_{\varphi}=G-\omega A_{t}+C_{2}, \tag{3.18}\] where \[G = \frac{ab_{e}^{3}}{x^{2}}\left(1+\frac{2m\left(1-x^{2}\right)P \left(1-A^{2}r^{2}\right)\left[r\left(1-x^{2}\right)-4Ar^{2}x-a^{2}A\left(1+3x ^{2}\right)x+2m\left(x+rA\right)x\right]}{\Omega^{5}R^{2}f_{0}}\right. 
\tag{3.19}\] \[-\left.\frac{\left(1-x^{2}\right)P\left\{1-3Arx-A^{2}\left[r^{2} +a^{2}\left(1+3x^{2}\right)\right]+A^{3}x\left[3r^{2}+a^{2}\left(3+x^{2} \right)\right]r\right\}}{\Omega^{5}f_{0}}\right),\] and \(C_{2}\) is another integration constant. Therefore, the target solution is given by \[\mathrm{d}s^{2} = -\frac{f_{0}}{|1+\left(ic-b_{e}^{2}\right)\mathcal{E}_{0}|^{2}} \left(\mathrm{d}t-\omega\,\mathrm{d}\varphi\right)^{2}+\frac{|1+\left(ic-b_{e}^ {2}\right)\mathcal{E}_{0}|^{2}\left(1-x^{2}\right)PQ}{\Omega^{4}f_{0}}\, \mathrm{d}\varphi^{2} \tag{3.20a}\] \[+\frac{|1+\left(ic-b_{e}^{2}\right)\mathcal{E}_{0}|^{2}R^{2}}{ \Omega^{2}Q}\left[\mathrm{d}r^{2}+\frac{Q}{\left(1-x^{2}\right)P}\,\mathrm{d}x ^{2}\right],\] and the gauge field \[A=-b_{e}\,\frac{b_{e}^{2}|\mathcal{E}_{0}|^{2}-f_{0}}{|1+\left(ic-b_{e}^{2} \right)\mathcal{E}_{0}|^{2}}\,\mathrm{d}t+\left(G-\omega A_{t}+C_{2}\right) \mathrm{d}\varphi. \tag{3.20b}\] With the restrictive assumptions \(e=0=g=l\) we have made so far, we have \[P(x) = 1+A\left(a^{2}Ax-2m\right)x, \tag{3.21a}\] \[Q(r) = \left(1-A^{2}r^{2}\right)\left[a^{2}+\left(r-2m\right)r\right],\] (3.21b) \[\Omega(r,x) = 1-Axr,\] (3.21c) \[R(r,x) = \sqrt{r^{2}+a^{2}x^{2}}. \tag{3.21d}\] The seed function \(f_{0}\) is given in (3.7c), whereas \(\chi_{0}\) assumes the neat form \[\chi_{0}(r,x)=\frac{2a\left[m\left(x+Ar\right)-AR^{2}\right]}{ \Omega R^{2}}. \tag{3.22}\] Thus, \(\mathcal{E}_{0}=f_{0}+i\chi_{0}\), since the complex electromagnetic seed potential \(\Phi_{0}\) is zero (\(A_{t,0}=0=A_{\varphi,0}=\tilde{A}_{\varphi,0}\)). Although the objective of providing explicit expressions is completed, the form (3.20a) of the metric may not be the most convenient when discussing certain limits. 
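As a consistency check of the restricted seed functions just listed, one can verify with sympy that the compact \(\chi_{0}\) of Eq. (3.22) is precisely the \(l=e=g=0\) reduction of the general expression (3.13):

```python
# Check that (3.13) reduces to (3.22) when l = e = g = 0 (so r_+ r_- = a^2).
import sympy as sp

r, x, a, A, m = sp.symbols('r x a A m', positive=True)

l = 0
rp = m + sp.sqrt(m**2 - a**2)      # r_+ with l = e = g = 0, Eq. (20)
R2 = r**2 + (l + a*x)**2
Om = 1 - a*A*r*(l + a*x)/(a**2 + l**2)

chi0_general = (2*(a**2 + l**2)*(a*m*x - l*(r - m))
                - 2*a*A*((a**2 - l**2)*(r - m)*r
                         + a*(2*m - rp)*rp*(l + a*x)*x))/(R2*(a**2 + l**2)*Om)

chi0_compact = 2*a*(m*(x + A*r) - A*R2)/(Om*R2)          # Eq. (3.22)
assert sp.simplify(chi0_general - chi0_compact) == 0
print("chi_0 limit verified")
```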
Therefore, we also propose the alternative form \[\Omega^{2}\mathrm{d}s^{2} = -\frac{Q-a^{2}\tilde{P}}{\mathcal{R}^{2}}\left[\mathrm{d}t- \left(\tilde{\omega}+A\,\frac{\left(1-x^{2}\right)\left(r^{2}-2r\sqrt{\tilde{m }^{2}+\tilde{l}^{2}}+a^{2}\right)W}{4\left(\tilde{m}^{2}+\tilde{l}^{2}\right) \left(r^{2}-2r\sqrt{\tilde{m}^{2}+\tilde{l}^{2}}+a^{2}x^{2}\right)\Omega^{5}R ^{2}f_{0}}\right)\mathrm{d}\varphi\right]^{2}\] \[+\mathcal{R}^{2}\left(\frac{\tilde{P}Q}{Q-a^{2}\tilde{P}}\, \mathrm{d}\varphi^{2}+\frac{\mathrm{d}r^{2}}{Q}+\frac{\mathrm{d}x^{2}}{\tilde {P}}\right),\] where \[\mathcal{R}^{2}(r,x) = \left|1-\left(b_{e}^{2}+i\,\frac{\tilde{l}}{2\sqrt{\tilde{m}^{2} +\tilde{l}^{2}}}\right)\mathcal{E}_{0}\right|^{2}R^{2},\quad\tilde{P}(x)= \left(1-x^{2}\right)P, \tag{3.24}\] \[\tilde{\omega}(r,x) = 2\tilde{l}\left(1-x\right)-a\frac{8m^{2}\left(mr+a\tilde{l}x \right)\left(1-x^{2}\right)+\left(\tilde{l}^{2}+4m^{2}b_{e}^{4}\right)\left\{ \left(r-2m\right)^{2}+\left[2m\left(r-2m\right)+a^{2}\right]x^{2}\right\}}{4m^ {2}\left[r\left(r-2m\right)+a^{2}x^{2}\right]}. \tag{3.25}\] with new parameters \[\tilde{l}=-2cm,\quad\tilde{m}=m\sqrt{1-4c^{2}}. \tag{3.26}\] Here, we have set \(C_{1}=2\tilde{l}\) in order to avoid transforming the time coordinate which is the alternative course of action to obtain the above form if one wants to keep \(C_{1}\) arbitrary. Moreover, the function \(W(r,x)\) is quite involved. 
In particular, \[W(r,x) = -4m^{2}aR^{2}\Omega^{3}\left(R^{2}A-2mx\right)+a\left(\tilde{l}^{ 2}+4m^{2}b_{e}^{4}\right)\tilde{F}+4m\tilde{l}\Omega\tilde{H}, \tag{3.27}\] with \[\tilde{F}(r,x) = 8m^{3}\left(x+rA\right)\left(1-r^{2}A^{2}\right)+2R^{2}m\left\{ 3x+3r\left(2-x^{2}\right)A+x\left[2a^{2}\left(1+3x^{2}\right)-r^{2}\left(4-x ^{2}\right)\right]A^{2}\right\} \tag{3.28a}\] \[-R^{4}A\left(3-A\left\{rx+\left[3r^{2}-a^{2}\left(1+3x^{2}\right) \right]A-rx\left[r^{2}-a^{2}\left(3+x^{2}\right)\right]A^{2}\right\}\right)\] \[+4m^{2}\left[r^{4}\left(4-3x^{2}\right)A^{3}-a^{2}x^{2}\left(2+3x^ {2}\right)A+r^{5}x^{3}A^{4}+r^{3}xA^{2}\left(4-x^{2}+a^{2}A^{2}\right)\right]\] \[+4m^{2}\left(r^{2}A\left\{x^{2}\left[3+a^{2}\left(5+x^{2}-x^{4} \right)A^{2}\right]-4\right\}-rx\left[4+a^{2}\left(2+2x^{2}-3x^{4}\right)A^{2 }\right]\right),\] \[\tilde{H}(r,x) = 2R^{2}m\left\{-r-x\left[r^{2}+a^{2}\left(2+x^{2}\right)\right]A +r\left[r^{2}-a^{2}\left(1-3x^{2}\right)\right]A^{2}+r^{2}x\left[r^{2}+a^{2} \left(2-x^{2}\right)\right]A^{3}\right\}\] (3.28b) \[-R^{4}\left\{A^{2}\left[r^{2}-a^{2}\left(1+x^{2}-2rxA\right) \right]-1\right\}.\] Observe that in some of the preceding equations we used \(m\) instead of \(\tilde{m}\); this is not a typographical error, we rather did so only for brevity, since \(\tilde{m}^{2}+\tilde{l}^{2}=m^{2}\) by virtue of Eq. (3.26). Keep in mind that, whenever we express the metric in this form, the NUT parameter and the mass are given by \(\tilde{l}\) and \(\tilde{m}\) in Eq. (3.26), respectively, not by \(c\) and \(m\). This particular form of the target metric with the redefined parameters will be more suitable for various important limits, as we will see below. In passing, we also remark that, unfortunately, the Misner string is not removable for \(\tilde{l}\neq 0\), or \(c\neq 0\) in the original form (3.20a), since \[\Delta\omega=\lim_{x\to 1}\omega-\lim_{x\to-1}\omega=-4\tilde{l}. 
\tag{3.29}\] However, switching on the seed charges, we know that they will interact with the Harrison parameter \(b_{e}\), contributing to the above discontinuity in such a way that the latter becomes eliminable via proper tuning [20]. Let us now start from this exotic enhanced Kerr metric of type I (a subfamily of the enhanced PD for \(e=0=g=l\)), and discuss some limiting cases.

#### 1. Vanishing acceleration limit

In the case of vanishing acceleration, one expects to be able to recover the Kerr-Newman-NUT metric (type D). Indeed, after proper coordinate transformations and parameter redefinitions, the metric (3.20a) acquires the form7 Footnote 7: Kerr–Newman–NUT metric as displayed in [2]. \[\mathrm{d}s^{2} = -\frac{Q}{R^{2}}\left[\mathrm{d}\bar{t}-\left(1-x\right)\left( \bar{a}+2\bar{l}+\bar{a}x\right)\mathrm{d}\varphi\right]^{2}+\frac{R^{2}}{1-x^ {2}}\left(\mathrm{d}x^{2}+\frac{\left(1-x^{2}\right)\mathrm{d}\bar{r}^{2}}{Q}\right) \tag{3.30}\] \[+\frac{\left(1-x^{2}\right)}{R^{2}}\left\{\bar{a}\,\mathrm{d} \bar{t}-\left[\bar{r}^{2}+\left(\bar{a}+\bar{l}\right)^{2}\right]\mathrm{d} \varphi\right\}^{2},\] with \[R^{2}(\bar{r},x) =\bar{r}^{2}+\left(\bar{l}+\bar{a}x\right)^{2}, \tag{3.31a}\] \[Q(\bar{r}) =\left(\bar{r}-\bar{r}_{+}\right)\left(\bar{r}-\bar{r}_{-}\right),\] (3.31b) \[\bar{r}_{\pm} =\bar{m}\pm\sqrt{\bar{m}^{2}+\bar{l}^{2}-\bar{a}^{2}-\bar{e}^{2}}, \tag{3.31c}\] where \[\bar{t} = \frac{1}{\sqrt{\left(1-b_{e}^{2}\right)^{2}+c^{2}}}\left\{t+ \left[a\left(b_{e}^{2}+c^{2}\right)-4cm-C_{1}\right]\varphi\right\}, \tag{3.32a}\] \[\bar{r} = r\sqrt{\left(1-b_{e}^{2}\right)^{2}+c^{2}}-\frac{2m\left[c^{2}-b _{e}^{2}\left(1-b_{e}^{2}\right)\right]}{\sqrt{\left(1-b_{e}^{2}\right)^{2}+c^ {2}}},\] (3.32b) \[\bar{a} = a\sqrt{\left(1-b_{e}^{2}\right)^{2}+c^{2}},\] (3.32c) \[\bar{l} = -\frac{2cm}{\sqrt{\left(1-b_{e}^{2}\right)^{2}+c^{2}}},\] (3.32d) \[\bar{m} = \frac{m\left(1-c^{2}-b_{e}^{4}\right)}{\sqrt{\left(1-b_{e}^{2} \right)^{2}+c^{2}}},\] (3.32e) \[\bar{e} = 2mb_{e}. 
\tag{3.32f}\] Clearly, the new NUT parameter and the new seed electric charge are proportional to the transformation parameters \(c\) and \(b_{e}\), respectively.8 There is no need to display the gauge field here, for it will actually be misaligned with respect to the standard form in the Kerr-Newman-NUT solution. However, this issue is known, and its resolution is given by acting with an additional duality rotation on the Ernst electromagnetic potential [24]. Footnote 8: Do not confuse \(\bar{m}\) with the \(\tilde{m}\) we previously introduced.

#### 2. Transforming the Rindler spacetime

Looking again at the original form of the metric, Eq. (3.20a), with the involved parameters being \(m,a,A,c,b_{e}\), we can proceed with killing the mass \(m\) and the angular momentum \(a\) to obtain the metric \[\Omega^{2}\mathrm{d}s^{2}=-\frac{Q}{\mathcal{R}^{2}}\left[\mathrm{d}t+\frac{2cA }{\Omega^{2}}\left(1-x^{2}\right)\mathrm{d}\varphi\right]^{2}+\mathcal{R}^{2} \left[\left(1-x^{2}\right)\mathrm{d}\varphi^{2}+\frac{\mathrm{d}r^{2}}{Q}+ \frac{\mathrm{d}x^{2}}{1-x^{2}}\right], \tag{3.33}\] where \[Q(r)=r^{2}(1-A^{2}r^{2}),\quad\Omega(r,x)=1-Axr,\quad\mathcal{R}^{2}(r,x)= \frac{c^{2}Q^{2}+\left(r^{2}\Omega^{2}-b_{e}^{2}Q\right)^{2}}{\Omega^{4}r^{2}}. \tag{3.34}\] The gauge field accompanying it reads \[A = b_{e}\frac{\left(r^{2}\Omega^{2}-b_{e}^{2}Q\right)Q}{\mathcal{R}^{2} \Omega^{4}r^{2}}\,\mathrm{d}t+\left\{C_{2}+\frac{2cA}{\Omega^{2}}\left(1-x^{2 }\right)A_{t}\right\}\mathrm{d}\varphi. \tag{3.35}\] Interestingly, this spacetime is obtainable by essentially operating with a combined Ehlers-Harrison map on a Rindler spacetime. Therefore, it is evident that, if we switch off the acceleration, we recover Minkowski spacetime. Indeed, after a Weyl rescaling of the metric, \(\mathrm{d}s^{2}\to\mathrm{d}\bar{s}^{2}=\mathrm{d}s^{2}/\left[\left(1-b_{e}^{2}\right)^{2}+c^{2}\right]\), and a time rescaling \(t\to\left[\left(1-b_{e}^{2}\right)^{2}+c^{2}\right]\bar{t}\), this utterly proves to be the case. 
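The flatness claim is quickly confirmed symbolically: setting \(A=0\) in (3.34), where \(\Omega\to 1\) and \(Q\to r^{2}\), the metric functions reduce to those of Minkowski space up to the constant factor \(D=(1-b_{e}^{2})^{2}+c^{2}\). A minimal sympy sketch (ours):

```python
# Check the A -> 0 limit of the transformed Rindler metric (3.33)-(3.34).
import sympy as sp

r, c, be = sp.symbols('r c b_e', positive=True)

Q = r**2                                   # Q at A = 0
R2 = (c**2*Q**2 + (Q - be**2*Q)**2)/r**2   # R^2 at A = 0 (Omega = 1)
D = (1 - be**2)**2 + c**2                  # the constant conformal factor

assert sp.simplify(R2 - D*r**2) == 0       # R^2 -> D r^2
assert sp.simplify(Q/R2 - 1/D) == 0        # |g_tt| -> 1/D
print("A -> 0 limit is flat up to the factor D")
```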
Of course, the Maxwell field also vanishes up to the choice of gauge.

#### 3. Vanishing rotation limit

The case of vanishing rotation corresponds to the enhanced C-metric (type I), i.e., a C-metric into which NUT and electromagnetic charges enter via the Ehlers-Harrison map. Here, we choose to consider the alternative form (3.23) of the target metric, which is the most befitting for the task at hand. Therefore, our parameters are \(\tilde{l},\tilde{m},a,A,b_{e}\). When \(a\to 0\), the metric (3.23) becomes \[\Omega^{2}\mathrm{d}s^{2} = -\frac{Q}{\mathcal{R}^{2}}\left\{\mathrm{d}t-\tilde{l}\left[2 \left(1-x\right)+A\,\frac{\tilde{P}}{\Omega^{2}\sqrt{\tilde{m}^{2}+\tilde{l}^ {2}}}\right]\mathrm{d}\varphi\right\}^{2}+\mathcal{R}^{2}\left(\tilde{P}\, \mathrm{d}\varphi^{2}+\frac{\mathrm{d}r^{2}}{Q}+\frac{\mathrm{d}x^{2}}{\tilde {P}}\right), \tag{3.36}\] where \[Q(r)=r\left(r-2\sqrt{\tilde{m}^{2}+\tilde{l}^{2}}\right)\left(1-A^{2}r^{2} \right),\quad\tilde{P}(x)=\left(1-x^{2}\right)\left(1-2Ax\sqrt{\tilde{m}^{2}+ \tilde{l}^{2}}\right),\quad\Omega(r,x)=1-Axr,\] (3.37a) and \[\mathcal{R}^{2}(r,x)=\frac{\tilde{l}^{2}Q^{2}+4\left(\tilde{m}^{2}+\tilde{l}^ {2}\right)\left(r^{2}\Omega^{2}-b_{e}^{2}Q\right)^{2}}{4\left(\tilde{m}^{2}+ \tilde{l}^{2}\right)\Omega^{4}r^{2}}. \tag{3.37b}\] The solution further contains a gauge field \[A = b_{e}\,\frac{\left(r^{2}\Omega^{2}-b_{e}^{2}Q\right)Q}{\mathcal{R}^{2} \Omega^{4}r^{2}}\,\mathrm{d}t+\left\{C_{2}-\left[2\tilde{l}\left(1-x\right)+A \,\tilde{l}\,\frac{\tilde{P}}{\Omega^{2}\sqrt{\tilde{m}^{2}+\tilde{l}^{2}}} \right]A_{t}\right\}\mathrm{d}\varphi. \tag{3.38}\] The explicit form (3.36) of the enhanced C-metric is particularly suitable for taking further limits. In fact, it is not hard to observe that by killing the Harrison parameter \(b_{e}\), the spacetime configuration assumes the form of the accelerating NUT black hole described in [13; 14]. 
On the other hand, killing \(\tilde{l}\), the solution reduces to the accelerating charged black holes described in [20]. Both limits are straightforward up to minor reparametrizations. Finally, killing the mass \(\tilde{m}\) in (3.36), we arrive at the enhanced Rindler spacetime, which features in Fig. 1 as the last descendant of the EPD spacetime in the middle column. This represents a sort of massless accelerating black hole with Ehlers NUT and Harrison electromagnetic charges.

## IV Further comments

The present study endeavours to contribute to the discussion surrounding algebraically general black holes within the framework of Einstein-Maxwell theory. These type I spacetimes have recently gained substantial attention, mainly because they arise via a highly nontrivial action of the Ehlers or Harrison transformations on spacetimes featuring accelerating horizons. Specifically, operating on an accelerating seed with an Ehlers map, or a Harrison map, or both, has the remarkable effect of altering the algebraic properties of the seed, an effect due to the transformation parameters--a NUT parameter in the case of Ehlers, or electromagnetic charges in the case of Harrison--penetrating the Rindler horizon. In this work, we presented a complete hierarchical structure for the type I solutions arising via the combined action of Ehlers and Harrison maps. The graphical form of the hierarchy was given in Fig. 1, with the graph's root node being the Enhanced Plebanski-Demianski spacetime, or EPD for short. We managed to provide an explicit form of the solution, at least up to the case of neutral seeds, a task computationally feasible only at the (minor) cost of using the original PD coordinates. 
However, in the case of a neutral NUTless PD seed, viz., the accelerating Kerr, we were able to integrate the equations directly in the physical spherical-like coordinates, thereby obtaining the explicit form of a novel type I spacetime representing accelerating and rotating black holes, endowed with both NUT and electromagnetic charges via the Ehlers-Harrison map. On top of that, we further scrutinized some limits of this solution, demonstrating how various spacetimes, previously presented in the pertinent literature, arise as limiting cases. Before bringing forth the hierarchy of these type I solutions, we engaged in a detailed investigation of the Ernst symmetries, particularly focusing on the Ehlers and Harrison maps. After reviewing how these two maps emerge from proper compositions of gravitational and electromagnetic gauge transformations with the inversion symmetry, inherent in the Ernst equations, we discussed their composition properties. It turned out that, although the Ehlers transformations form a subgroup, the Harrison ones do not, with the reason behind this failure made manifest. These insights into the transformations, as mathematical operations per se, also provided a better understanding of the physical effects they produce when acting on spacetimes. Finally, we thoughtfully included a user-friendly rederivation of the Ernst equations in Appendix B, purely for pedagogical purposes, and in order to deal with minor inconsistencies, often encountered in the literature, regarding signs in the definitions of the Ernst potentials and the twisted-potential equations. In considering ways to further enrich this type I hierarchy of solutions, an interesting prospect is the introduction of angular momentum into a given seed through a suitable solution-generating technique, as conjectured in [20]. A promising approach would involve the inverse scattering method [25], a mechanism known for generating, e.g. the Kerr spacetime from the Minkowski metric. 
If the Rindler horizon somehow interacts with the external angular momentum, one can speculate that the entire type I hierarchy, as presented here, would be further extended to allow for two distinct angular momenta, the seed one and the one introduced via the solution-generating technique. Consequently, the intriguing thought of a Rindler-Kerr background may ultimately materialize, _inter alia_ becoming a fertile soil for exploring novel interactions between all parameters within this generalized family. Although searching for ways to extend this hierarchy is definitely tempting, it is readily evident that there are numerous novel geometries within the EPD family, which need to be thoroughly examined. A comprehensive investigation of their causal structure, along with a satisfactory geometric description of how they extend beyond their type D counterparts, is imperative. In addition, given the presence of acceleration, delving into their thermodynamics constitutes an intriguing challenge. A succinct framework for understanding the thermodynamics of accelerating black holes remains as yet elusive, recent commendable contributions towards this direction [26; 27; 28; 29] notwithstanding.9 Last but not least, it is necessary to fully probe and understand the intricate mechanism behind the change in the algebraic nature of an accelerating seed, effected by Ehlers or Harrison transformations acting on the latter--namely how these operations alter the principal null directions of the Weyl tensor, etc.--, in an attempt to solidify a consistent framework for generating algebraically general black holes. Footnote 9: Recently the thermodynamics of accelerating black holes in three dimensions has also been explored, opening a new road towards exploring the holographic properties of accelerating spacetimes [30; 31; 32; 33]. ## V Acknowledgments The work of J.B. is supported by FONDECYT Postdoctorado grant No. 3230596. The work of A.C. 
is funded by Primus grant PRIMUS/23/SCI/005 from Charles University and FONDECYT Regular grant No. 1210500. K. P. acknowledges financial support provided by the European Regional Development Fund (ERDF) through the Center of Excellence TK133 "The Dark Side of the Universe" and PRG356 "Gauge gravity: unification, extensions and phenomenology." ## Appendix A EPD spacetime in the original PD coordinates: the neutral-seed case In this section, we present a detailed construction of the Enhanced Plebanski-Demianski spacetime. To facilitate this task, two key assumptions are made. First, we employ the original PD coordinates to lighten the computational burden. Second, in order to obtain an analytic non-integral form of the target twisted potentials, we shall switch off the seed electromagnetic charges, \(e\) and \(g\). Despite these simplifying assumptions, our setup is yet general enough to accommodate, for the first time, an explicit integration of the solution obtained by acting on a rotating seed with the Ehlers-Harrison map. 
Let us start by writing down the line element of the PD spacetime, and the gauge field supporting it, in the following way, \[\mathrm{d}s_{0}^{2} =-f_{0}\left(\mathrm{d}t-\omega_{0}\,\mathrm{d}\varphi\right)^{2}+\frac{\rho^{2}\,\mathrm{d}\varphi^{2}}{f_{0}}+\frac{R^{2}}{\Omega^{2}}\left(\frac{\mathrm{d}r^{2}}{Q}+\frac{\mathrm{d}x^{2}}{P}\right), \tag{30a}\] \[A_{0} =-\frac{er+\hat{\omega}gx}{R^{2}}\,\mathrm{d}t+\frac{e\hat{\omega}rx^{2}-gxr^{2}}{R^{2}}\,\mathrm{d}\varphi, \tag{30b}\] where we have introduced the functions \[f_{0}(r,x) = \frac{Q-\hat{\omega}^{2}P}{\Omega^{2}R^{2}}, \tag{31a}\] \[\omega_{0}(r,x) = \hat{\omega}\frac{x^{2}Q+r^{2}P}{Q-\hat{\omega}^{2}P},\] (31b) \[\rho(r,x) = \frac{\sqrt{PQ}}{\Omega^{2}},\] (31c) \[Q(r) = \hat{\omega}^{2}k+e^{2}+g^{2}-2mr+\epsilon r^{2}-\frac{2Anr^{3}}{\hat{\omega}}-kA^{2}r^{4},\] (31d) \[P(x) = k+\frac{2nx}{\hat{\omega}}-\epsilon x^{2}+2Amx^{3}-A^{2}\left(k\hat{\omega}^{2}+e^{2}+g^{2}\right)x^{4},\] (31e) \[R(r,x) = \sqrt{r^{2}+\hat{\omega}^{2}x^{2}},\] (31f) \[\Omega(r,x) = 1-Axr. \tag{31g}\] As it stands, the metric (30a) is already in the Lewis-Papapetrou form (1a), albeit in the chart \(\{t,r,x,\varphi\}\), with Weyl's coordinate \(\rho\) given above, and the \(z\) coordinate given by \[z(r,x)=\frac{k\hat{\omega}Ar^{2}+n\left(1+Axr\right)r-\hat{\omega}A\left[A\left(e^{2}+g^{2}+k\hat{\omega}^{2}\right)x-m\left(1+Axr\right)+\epsilon r\right]x}{\hat{\omega}\Omega^{2}}. \tag{32}\] For completeness, we also display the function \(\gamma\), \[\gamma(r,x)=\frac{1}{2}\ln\frac{Q-\hat{\omega}^{2}P}{\Omega^{4}\left[P\left(\partial_{x}\rho\right)^{2}+Q\left(\partial_{r}\rho\right)^{2}\right]}. \tag{33}\] To find the target metric we first need to solve Eqs. (4) for the twisted potentials which read \[\tilde{A}_{\varphi,0} = \frac{gr-e\hat{\omega}x}{R^{2}}, \tag{34a}\] \[\chi_{0} = 2\frac{nr-m\hat{\omega}x+\hat{\omega}A\left[kr^{2}+\left(e^{2}+g^{2}+k\hat{\omega}^{2}\right)x^{2}\right]}{\Omega R^{2}}. 
\tag{34b}\] With these at hand, the target metric functions, i.e., the ones obtained after the application of the Ehlers-Harrison map, can be expressed as follows \[f = \frac{f_{0}}{|\Lambda|^{2}}, \tag{10a}\] \[\hat{\omega}^{2}|\Lambda|^{2}R^{2}\Omega^{4}\chi = -2\hat{\omega}^{2}(-1+rxA)^{3}\big{[}nr-m\hat{\omega}x+k\hat{ \omega}(r^{2}+\hat{\omega}^{2}x^{2})A\big{]}\] (10b) \[-c\bigg{(}4n^{2}(\hat{\omega}^{2}+r^{4}A^{2})+4m^{2}(\hat{\omega}^{2}+ \hat{\omega}^{4}x^{4}A^{2})-4n\hat{\omega}\big{\{}-kA\big{[}-\hat{\omega}^{4}x ^{3}A+r^{5}A^{2}\] \[\qquad+\hat{\omega}^{2}r(2-3rxA+r^{2}x^{2}A^{2})\big{]}+(\hat{ \omega}^{2}x+r^{3}A)\varepsilon\big{\}}-4m\hat{\omega}\big{\{}-2n(r^{2}+\hat{ \omega}^{2}x^{2})A\] \[\qquad+k\hat{\omega}A\big{[}-r^{3}A+\hat{\omega}^{4}x^{5}A^{2}+ \hat{\omega}^{2}x(2-3rxA+r^{2}x^{2}A^{2})\big{]}+\hat{\omega}(r+\hat{\omega}^ {2}x^{3}A)\varepsilon\big{\}}\] \[\qquad+\hat{\omega}^{2}(r^{2}+\hat{\omega}^{2}x^{2})\big{\{}k^{2} A^{2}\big{[}r^{4}A^{2}+\hat{\omega}^{4}x^{4}A^{2}+2\hat{\omega}^{2}(2-4rxA+r^{2}x^{2} A^{2})\big{]}\] \[\qquad-2k(r^{2}-\hat{\omega}^{2}x^{2})A^{2}\varepsilon+ \varepsilon^{2}\big{\}}\bigg{)},\] \[\tilde{A}_{\varphi} = b_{e}\chi. \tag{10d}\] The rotational function \(\omega\) can be divided in two terms. The first term is the seed function \(\omega_{0}\), which stands for the rotational function of the Kerr-NUT black hole. The second term contains all the couplings with the Ehlers and Harrison parameters. 
Thus, \[\omega=\omega_{0}+\varpi+C_{1}, \tag{11}\] where \(\varpi\) reads \[\varpi = -\frac{2c(k\hat{\omega}+2nx)-be^{4}k^{2}\hat{\omega}^{2}A-c^{2}k^{2} \hat{\omega}^{2}A}{\hat{\omega}x^{2}A} \tag{108}\] \[-\frac{\big{\{}k\hat{\omega}(-1+\hat{\omega}^{2}x^{4}A^{2})+x \big{[}-2n+\hat{\omega}x(-2mxA+\varepsilon)\big{]}\big{\}}}{\hat{\omega}x^{2}A(- 1+rxA)^{3}\big{\{}-2n(\hat{\omega}^{2}x+r^{3}A)-2m\hat{\omega}(r+\hat{\omega}^{2 }x^{3}A)+\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})\big{[}k(-r^{2}+\hat{\omega}^ {2}x^{2})A^{2}+\varepsilon\big{]}\big{\}}}\] \[\times\bigg{(}2c(-1+rxA)\big{\{}2m\hat{\omega}r(-1+2rxA+\hat{ \omega}^{2}x^{4}A^{2})+2n\big{[}r^{3}A(-1+2rxA)+\hat{\omega}^{2}x(-1+rxA+r^{2} x^{2}A^{2})\big{]}\] \[\qquad+\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})\big{[}kA^{2}(-r ^{2}-\hat{\omega}^{2}x^{2}+2r^{3}xA)+\varepsilon-2rxA\varepsilon\big{]}\big{\}} \!+\!be^{4}A\big{\{}k^{2}\hat{\omega}^{2}(r^{2}+\hat{\omega}^{2}x^{2})\] \[\qquad\times A^{2}(-r^{2}-3\hat{\omega}^{2}x^{2}+3r^{3}xA+\hat{ \omega}^{2}rx^{3}A)+k\hat{\omega}\big{[}2nr^{3}A(-1+4rxA)+2n\hat{\omega}^{2}x (-1-rxA+3r^{2}x^{2}A^{2})\] \[\qquad+2m\hat{\omega}(-r+3r^{2}xA+3\hat{\omega}^{2}x^{3}A-r^{3} x^{2}A^{2})-\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})(-1+3rxA)\varepsilon \big{]}\] \[\qquad+2x(nr-m\hat{\omega}x)\big{[}2m\hat{\omega}+r(2nrA-\hat{ \omega}\varepsilon)\big{]}\big{\}}\!+\!c^{2}A\big{\{}k^{2}\hat{\omega}^{2}(r^{ 2}+\hat{\omega}^{2}x^{2})A^{2}(-r^{2}-3\hat{\omega}^{2}x^{2}+3r^{3}xA+\hat{ \omega}^{2}rx^{3}A)\] \[\qquad\times A^{2}(-r^{2}-3\hat{\omega}^{2}x^{2}+3r^{3}xA+\hat{ \omega}^{2}rx^{3}A)+k\hat{\omega}\big{[}2nr^{3}A(-1+4rxA)+2n\hat{\omega}^{2}x (-1+4rxA)+2n\hat{\omega}^{2}x(-1-rxA+3r^{2}x-1)\] \[\qquad-\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})(-1+3rxA) \varepsilon\big{]}\!+\!2x(nr-m\hat{\omega}x)\big{[}2m\hat{\omega}+r(2nrA-\hat{ \omega}\varepsilon)\big{]}\big{\}}\bigg{)}.\] Finally, it remains to integrate the magnetic component of the target gauge field, which is found to be 
\[A_{\varphi}=G-\omega A_{t}+C_{2}, \tag{109}\] with \[G(r,x) = \frac{be^{3}\big{\{}k\hat{\omega}(-1+\hat{\omega}^{2}x^{4}A^{2}) +x\big{[}-2n+\hat{\omega}x(-2mxA+\varepsilon)\big{]}\big{\}}}{\hat{\omega}x^{ 2}(-1+rxA)^{3}\big{\{}-2n(\hat{\omega}^{2}x+r^{3}A)-2m\hat{\omega}(r+\hat{ \omega}^{2}x^{3}A)+\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})\big{[}k(-r^{2}+ \hat{\omega}^{2}x^{2})A^{2}+\varepsilon\big{]}\big{\}}} \tag{110}\] \[\times\bigg{(}k^{2}\hat{\omega}^{2}(r^{2}+\hat{\omega}^{2}x^{2})A ^{2}(-r^{2}-3\hat{\omega}^{2}x^{2}+3r^{3}xA+\hat{\omega}^{2}rx^{3}A)+k\hat{ \omega}\big{[}2nr^{3}A(-1+4rxA)\] \[\qquad+2n\hat{\omega}^{2}x(-1-rxA+3r^{2}x^{2}A^{2})+2m\hat{\omega} (-r+3r^{2}xA+3\hat{\omega}^{2}x^{3}A-r^{3}x^{2}A^{2})\] \[\qquad-\hat{\omega}(r^{2}+\hat{\omega}^{2}x^{2})(-1+3rxA) \varepsilon\big{]}\!+\!2x(nr-m\hat{\omega}x)\big{[}2m\hat{\omega}+r(2nrA-\hat{ \omega}\varepsilon)\big{]}\bigg{)}-\frac{be^{3}k^{2}\hat{\omega}}{x^{2}},\] and \(C_{2}\) being yet another integration constant. ## Appendix B A user-friendly guide to the Ernst formalism In this section, we present a detailed derivation of the renowned Ernst equations. It all starts with the Einstein-Maxwell action10 Footnote 10: We use natural units, and we further have set \(G=(4\pi)^{-1}\). \[I_{\rm EM}\left[g_{\mu\nu},A_{\mu}\right]=\frac{1}{4}\int{\rm d}^{4}x\sqrt{-g} \left(R-F_{\mu\nu}F^{\mu\nu}\right). \tag{111}\] Varying with respect to the metric and the gauge field we obtain the field equations \[G_{\mu\nu} = 2\left(F_{\mu}{}^{\lambda}F_{\nu\lambda}-\frac{1}{4}F_{\lambda \sigma}F^{\lambda\sigma}g_{\mu\nu}\right), \tag{112a}\] \[\partial_{\nu}\left(\sqrt{-g}F^{\nu\mu}\right) = 0, \tag{112b}\] respectively. Since the trace of the energy-momentum tensor vanishes, the metric field equations admit a particularly simple Ricci form, \[R_{\mu\nu}=2\left(F_{\mu}{}^{\lambda}F_{\nu\lambda}-\frac{1}{4}F_{\lambda\sigma}F ^{\lambda\sigma}g_{\mu\nu}\right)=:2T_{\mu\nu}. 
\tag{101}\] ### Field equations with the "electric" LWP ansatz Now, since we are interested in stationary and axisymmetric spacetimes characterized by two commuting Killing vectors, \(\partial_{t}\) and \(\partial_{\varphi}\), we consider the LWP metric ansatz \[\mathrm{d}s^{2}=-f\left(\mathrm{d}t-\omega\,\mathrm{d}\varphi\right)^{2}+\frac{1}{f}\left[\rho^{2}\mathrm{d}\varphi^{2}+\mathrm{e}^{2\gamma}\left(\mathrm{d}\rho^{2}+\mathrm{d}z^{2}\right)\right], \tag{102}\] together with a gauge field with the same symmetries \[A=A_{t}\,\mathrm{d}t+A_{\varphi}\,\mathrm{d}\varphi, \tag{103}\] where \(f,\,\omega,\,\gamma\) and \(A_{t},\,A_{\varphi}\) are functions of \(\rho\) and \(z\). The simplest equations to tackle first are the field equations for the Maxwell field \(A_{\mu}\). One can easily show that \[F_{\mu\nu}=\delta_{\mu\nu}^{\rho t}A_{t}^{\prime}+\delta_{\mu\nu}^{zt}\dot{A}_{t}+\delta_{\mu\nu}^{\rho\varphi}A_{\varphi}^{\prime}+\delta_{\mu\nu}^{z\varphi}\dot{A}_{\varphi}, \tag{104}\] where our convention for the rank-4 skew-symmetric Kronecker delta is \(\delta_{\mu\nu}^{\lambda\sigma}=\delta_{\mu}^{\lambda}\delta_{\nu}^{\sigma}-\delta_{\nu}^{\lambda}\delta_{\mu}^{\sigma}\). A prime accent \({}^{\prime}\) denotes a derivative with respect to \(\rho\), and a dot accent \(\dot{\ }\) denotes a derivative with respect to \(z\). Since \(\partial_{t},\,\partial_{\varphi}\) are Killing vectors, we have that only the vectors \(F^{\rho\mu}\) and \(F^{z\mu}\) appear in the Maxwell field equations. 
These read \[\sqrt{-g}F^{\rho\mu} = \rho\left[-\frac{A_{t}^{\prime}}{f}+\frac{\omega f}{\rho^{2}}\left(A_{\varphi}^{\prime}+\omega A_{t}^{\prime}\right)\right]\delta_{t}^{\mu}+f\rho\frac{A_{\varphi}^{\prime}+\omega A_{t}^{\prime}}{\rho^{2}}\delta_{\varphi}^{\mu}=:\rho\hat{F}^{\mu}, \tag{105a}\] \[\sqrt{-g}F^{z\mu} = \rho\left[-\frac{\dot{A}_{t}}{f}+\frac{\omega f}{\rho^{2}}\left(\dot{A}_{\varphi}+\omega\dot{A}_{t}\right)\right]\delta_{t}^{\mu}+f\rho\frac{\dot{A}_{\varphi}+\omega\dot{A}_{t}}{\rho^{2}}\delta_{\varphi}^{\mu}=:\rho F^{\mu}. \tag{105b}\] Then, the Maxwell field equations are given by \[\frac{1}{\rho}\left(\rho\hat{F}^{\mu}\right)^{\prime}+\dot{F}^{\mu}=0. \tag{106}\] Given that the divergence of a vector \(\mathbf{V}(\rho,z)\) in cylindrical coordinates reads \[\mathbf{\nabla}\cdot\mathbf{V}=\frac{1}{\rho}(\rho V_{\rho})^{\prime}+\dot{V}_{z}, \tag{107}\] it turns out the Maxwell field equations can be written as \[\mathbf{\nabla}\cdot\mathbf{F}^{\mu}=0, \tag{108}\] where \[\mathbf{F}^{\mu}:=\left(\hat{F}^{\mu},F^{\mu},0\right)=\left[-\frac{\mathbf{\nabla}A_{t}}{f}+\frac{\omega f}{\rho^{2}}\left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)\right]\delta_{t}^{\mu}+f\frac{\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}}{\rho^{2}}\delta_{\varphi}^{\mu}. \tag{109}\] Let us now turn our attention to the Einstein field equations. 
In particular, we have the \(tt\) component \[-2f^{3}\left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)\cdot\left( \mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)-\rho^{2}\mathbf{\nabla}f \cdot\mathbf{\nabla}f+f^{4}\mathbf{\nabla}\omega\cdot\mathbf{\nabla}\omega-f\rho^{2}\left( 2\mathbf{\nabla}A_{t}\cdot\mathbf{\nabla}A_{t}-\nabla^{2}f\right)=0, \tag{110}\] and the \(t\varphi\) component \[-f^{4}\omega\mathbf{\nabla}\omega\cdot\mathbf{\nabla}\omega+2\omega f^{3} \left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)\cdot\left(\mathbf{ \nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)\] \[-f\rho^{2}\left[\omega\left(2\mathbf{\nabla}A_{t}\cdot\mathbf{\nabla}A_{t }+\nabla^{2}f\right)+2\left(2\mathbf{\nabla}A_{\varphi}\cdot\mathbf{\nabla}A_{t}+\mathbf{ \nabla}f\cdot\mathbf{\nabla}\omega\right)\right]-f^{2}\rho\left(\rho\nabla^{2} \omega-2\omega^{\prime}\right)+\omega\rho^{2}\mathbf{\nabla}f\cdot\mathbf{\nabla}f \ =0. \tag{111}\] Looking at the form of these two, it becomes apparent that, multiplying Eq. (147) by \(\omega\) and adding to Eq. (148), we can obtain a simpler equation, namely, \[\frac{2f}{\rho^{2}}\left(2\omega\mathbf{\nabla}A_{t}\cdot\mathbf{\nabla}A_{t}+2\mathbf{ \nabla}A_{\varphi}\cdot\mathbf{\nabla}A_{t}+\mathbf{\nabla}f\cdot\mathbf{\nabla}\omega \right)+\frac{f^{2}}{\rho^{2}}\left(\nabla^{2}\omega-\frac{2\omega^{\prime}}{ \rho}\right)=0. \tag{149}\] At this stage, and after cumbersome algebra, we can finally present the latter as \[\mathbf{\nabla}\cdot\left[\frac{f^{2}}{\rho^{2}}\mathbf{\nabla}\omega+\frac{4f}{\rho^{ 2}}A_{t}\left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{\nabla}A_{t}\right)\right]=0, \tag{150}\] modulo the Maxwell field equations. 
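The statement "modulo the Maxwell field equations" can be made explicit with a short symbolic computation. The sketch below (using sympy, an assumption on tooling, with the flat axisymmetric gradient and the divergence (107)) expands Eq. (150) and confirms that it differs from the left-hand side of Eq. (149) exactly by the Maxwell term \(4A_{t}\,\mathbf{\nabla}\cdot\mathbf{F}^{\varphi}\):

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
f, w, At, Ap = [sp.Function(n)(rho, z) for n in ('f', 'omega', 'A_t', 'A_phi')]

grad = lambda u: sp.Matrix([sp.diff(u, rho), sp.diff(u, z)])   # axisymmetric gradient
div = lambda V: sp.diff(rho*V[0], rho)/rho + sp.diff(V[1], z)  # the divergence (107)
lap = lambda u: div(grad(u))

Fphi = f*(grad(Ap) + w*grad(At))/rho**2   # the varphi-component vector of Eq. (109)
M = div(Fphi)                             # vanishes on-shell by Maxwell's equations

lhs150 = div(f**2/rho**2*grad(w) + 4*At*Fphi)
lhs149 = (2*f/rho**2*(2*w*grad(At).dot(grad(At)) + 2*grad(Ap).dot(grad(At))
                      + grad(f).dot(grad(w)))
          + f**2/rho**2*(lap(w) - 2*sp.diff(w, rho)/rho))

# Eq. (150) equals Eq. (149) up to a multiple of the Maxwell equation div(Fphi) = 0
assert sp.simplify(lhs150 - lhs149 - 4*At*M) == 0
```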
Knowing all the other functions, we can also determine \(\gamma\) via the equations \[r\left(R_{\rho z}-2T_{\rho z}\right)=0,\quad\frac{r}{2}\left[R_{\rho\rho}-R_{ zz}-2\left(T_{\rho\rho}-T_{zz}\right)\right]=0, \tag{151}\] which directly provide us with expressions for \(\dot{\gamma}\) and \(\gamma^{\prime}\), respectively. Do also note that if the \(tt\) and \(t\varphi\) components of the Einstein field equations are satisfied, then the \(\varphi\varphi\) component vanishes identically. ### Let's twist again Notice that if \(h\) is some function of \(\rho,z\), then \[\mathbf{\nabla}\cdot\left(\frac{1}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}h \right)=0, \tag{152}\] regardless of how our triad is ordered. Therefore, in a fashion similar to "closed is locally exact", here we may argue that \(\mathbf{\nabla}\cdot\mathbf{V}(\rho,z)=0\) implies that there exists a function \(h(\rho,z)\) such that \[\mathbf{V}=\frac{1}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}h. \tag{153}\] Such a function will be called a _twisted potential_. Now, recalling the definition of our vectors \(\mathbf{F}^{t}\) and \(\mathbf{F}^{\varphi}\) (149), and considering one of the Maxwell field equations, namely \(\mathbf{\nabla}\cdot\mathbf{F}^{\varphi}=0\), we can always write \[\rho\mathbf{F}^{\varphi}=(-)^{p}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_ {\varphi}. \tag{154}\] Here, \(p=0\), \(1\) is introduced just to keep track of the available sign freedom in the definition of the twisted potential. Since \(\hat{\mathbf{\varphi}}\times(\hat{\mathbf{\varphi}}\times\mathbf{V})=-\mathbf{V}\) for any vector \(\mathbf{V}\) with only \(\rho,z\) components, we may cross both sides of the above equation with \(\hat{\mathbf{\varphi}}\) from the left to get \[\frac{(-)^{p}}{f}\mathbf{\nabla}\tilde{A}_{\varphi}+\frac{\omega}{\rho}\hat{\mathbf{ \varphi}}\times\mathbf{\nabla}A_{t}=-\frac{1}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{ \nabla}A_{\varphi}. \tag{155}\] Using Eq. 
(152), we can easily show that \[\mathbf{\nabla}\cdot\left(\frac{(-)^{p}}{f}\mathbf{\nabla}\tilde{A}_{\varphi}+\frac{ \omega}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}A_{t}\right)=0, \tag{156}\] which can freely replace the equation \(\mathbf{\nabla}\cdot\mathbf{F}^{\varphi}=0\). Remember that the other equation in the Maxwell set reads \[\mathbf{\nabla}\cdot\left(-\frac{1}{f}\mathbf{\nabla}A_{t}+\omega\mathbf{F}^{\varphi} \right)=0. \tag{157}\] Clearly, using the definition of the twisted potential, we can present it as \[\mathbf{\nabla}\cdot\left(-\frac{1}{f}\mathbf{\nabla}A_{t}+(-)^{p}\frac{\omega}{\rho} \hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_{\varphi}\right)=0. \tag{158}\] Hence, we have managed to cast the Maxwell field equations \(\mathbf{\nabla}\cdot\mathbf{F}^{\mu}=0\) into a pair of equations comprised of (156) and (158). Multiplying Eq. (156) with \(i\) and subtracting Eq. (158) from it, we can express our pair as the single complex equation \[\mathbf{\nabla}\cdot\left(\frac{1}{f}\mathbf{\nabla}\Phi+\frac{i\omega}{\rho}\hat{\mathbf{ \varphi}}\times\mathbf{\nabla}\Phi\right)=0, \tag{159}\] where \(\Phi=A_{t}+i(-)^{p}\tilde{A}_{\varphi}\) is the first complex potential we have introduced. Let us now turn back our attention towards the gravity sector. Using the identity (177), we can show that \[(-)^{p}\mathbf{\nabla}\cdot\left(\frac{1}{\rho}\hat{\mathbf{\varphi}}\times\tilde{A}_{ \varphi}\mathbf{\nabla}A_{t}\right)=-\frac{1}{2}\mathbf{\nabla}\cdot\left[\frac{1}{ \rho}\hat{\mathbf{\varphi}}\times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right) \right]. \tag{185}\] Taking into account the above equation, then Eq. (185) can be written as \[\mathbf{\nabla}\cdot\left(\frac{f^{2}}{\rho^{2}}\mathbf{\nabla}\omega+4A_{t}\mathbf{F} ^{\varphi}\right)=0. 
\tag{186}\] Using the definition (179) together with the identity (185), and considering that \[\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)=(-)^{p}\left(A_{t}\mathbf{\nabla} \tilde{A}_{\varphi}-\tilde{A}_{\varphi}\mathbf{\nabla}A_{t}\right), \tag{187}\] one can cast Eq. (185) into \[\mathbf{\nabla}\cdot\left[\frac{f^{2}}{\rho^{2}}\mathbf{\nabla}\omega+\frac{2}{\rho} \hat{\mathbf{\varphi}}\times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right] =0. \tag{188}\] Following the same procedure as for Maxwell's equations, we introduce another twisted potential \(\chi\) such that \[\frac{f^{2}}{\rho}\mathbf{\nabla}\omega+2\hat{\mathbf{\varphi}}\times\mathsf{Im}\left( \Phi^{*}\mathbf{\nabla}\Phi\right)=(-)^{s}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\chi, \tag{189}\] where \(s=0,\,1\) is another parameter introduced to keep track of the sign freedom in the definition of the second twisted potential. If we cross now both sides of the above equation with \(\hat{\mathbf{\varphi}}\) from the left, then \[\frac{1}{f^{2}}\left[(-)^{s}\mathbf{\nabla}\chi-2\,\mathsf{Im}\left(\Phi^{*}\mathbf{ \nabla}\Phi\right)\right]=-\frac{1}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\omega. \tag{190}\] Again, using the identity (177), one can easily show that the equation \[\mathbf{\nabla}\cdot\left\{\frac{1}{f^{2}}\left[(-)^{s}\mathbf{\nabla}\chi-2\,\mathsf{ Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right]\right\}=0, \tag{191}\] may freely replace the Einstein equation (185). For later use, we denote the above as \(\mathbf{\nabla}\cdot\mathbf{G}=0\). At this stage, we need to recall that for a vector \(\mathbf{V}\) with only \(\rho,z\) components, it holds that \[(\hat{\mathbf{\varphi}}\times\mathbf{V})\cdot(\hat{\mathbf{\varphi}}\times\mathbf{V} )=\mathbf{V}\cdot\mathbf{V}. \tag{192}\] We may look at the other Einstein equation, namely Eq. 
(172), and use this knowledge together with the definition of \(\mathbf{F}^{\varphi}\), to write it as \[-2\rho^{4}f\mathbf{F}^{\varphi}\cdot\mathbf{F}^{\varphi}-\rho^{2}\mathbf{\nabla}f \cdot\mathbf{\nabla}f+f\rho^{2}\nabla^{2}f-2f\rho^{2}\mathbf{\nabla}A_{t}\cdot\mathbf{ \nabla}A_{t}+\rho^{2}f^{4}\mathbf{G}\cdot\mathbf{G}\ =\ 0. \tag{193}\] Considering that \[\rho^{2}\mathbf{F}^{\varphi}\cdot\mathbf{F}^{\varphi}=\mathbf{\nabla}\tilde{A}_{ \varphi}\cdot\mathbf{\nabla}\tilde{A}_{\varphi},\qquad\mathbf{\nabla}\Phi\cdot\mathbf{ \nabla}\Phi^{*}=\mathbf{\nabla}A_{t}\cdot\mathbf{\nabla}A_{t}+\mathbf{\nabla}\tilde{A}_{ \varphi}\cdot\mathbf{\nabla}\tilde{A}_{\varphi}, \tag{194}\] then Eq. (193) becomes \[-\mathbf{\nabla}f\cdot\mathbf{\nabla}f+f\nabla^{2}f-2f\mathbf{\nabla}\Phi\cdot\mathbf{\nabla} \Phi^{*}+f^{4}\mathbf{G}\cdot\mathbf{G}\ =\ 0. \tag{195}\] Multiplying Eq. (191) with \(if^{2}\) and subtracting it from Eq. (195), we combine the two Einstein equations into one complex gravitational equation, namely \[-\mathbf{\nabla}f\cdot\mathbf{\nabla}f+f\nabla^{2}f-2f\mathbf{\nabla}\Phi\cdot\mathbf{\nabla} \Phi^{*}+f^{4}\mathbf{G}\cdot\mathbf{G}-if^{2}\mathbf{\nabla}\cdot\mathbf{G}=0. \tag{196}\] ### The Ernst equations Recall now that the defining equation for the twisted potential \(\chi\), Eq. (100), can be written as \[\frac{1}{\rho}\mathbf{\nabla}\omega=\hat{\mathbf{\varphi}}\times\mathbf{G}, \tag{101}\] and by using the identity (100), the Maxwell equations (101) can be brought to the form \[-\frac{1}{f^{2}}\mathbf{\nabla}f\cdot\mathbf{\nabla}\Phi+\frac{1}{f}\nabla^{2}\Phi+ \frac{i}{\rho}\mathbf{\nabla}\omega\cdot\left(\hat{\mathbf{\varphi}}\times\mathbf{\nabla} \Phi\right)=0. \tag{102}\] Remembering the product property \[\mathbf{X}\cdot\left(\mathbf{Y}\times\mathbf{Z}\right)=-\mathbf{Z}\cdot\left( \mathbf{Y}\times\mathbf{X}\right), \tag{103}\] which holds true for arbitrary vectors \(\mathbf{X}\), \(\mathbf{Y}\), and \(\mathbf{Z}\), and using Eq. 
(101), we finally reach \[f\nabla^{2}\Phi=\mathbf{\nabla}f\cdot\mathbf{\nabla}\Phi-if^{2}\mathbf{\nabla}\Phi\cdot \mathbf{G}. \tag{104}\] At this stage, if we make an educated introduction of a complex gravitational potential \[\mathcal{E}=f-|\Phi|^{2}-i(-)^{s}\chi, \tag{105}\] we can show that Eq. (104) can be written as \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\Phi=\mathbf{\nabla} \Phi\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*}\mathbf{\nabla}\Phi\right). \tag{106}\] Most interesting, however, is the fact that after the introduction of the potential \(\mathcal{E}\), the complex gravitational equation (102) also takes a similar form, namely \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\mathcal{E}=\mathbf{ \nabla}\mathcal{E}\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*}\mathbf{\nabla}\Phi \right), \tag{107}\] --modulo Eq. (106). The remaining pair of Einstein equations, which gives \(\gamma\) in terms of integrals, can also be expressed in terms of the two complex potentials. The form of these equations will not bother us here since we are going to determine \(\gamma\) in a different manner via comparison. Summing up the findings, we introduced two complex potentials, \[\mathcal{E}=f-|\Phi|^{2}-i(-)^{s}\chi,\quad\Phi=A_{t}+i(-)^{p}\tilde{A}_{ \varphi}, \tag{108}\] and two twisted potentials \(\tilde{A}_{\varphi}\) and \(\chi\), which are given by the equations \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_{\varphi} = \frac{(-)^{p}f}{\rho}\left(\mathbf{\nabla}A_{\varphi}+\omega\mathbf{ \nabla}A_{t}\right), \tag{109a}\] \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\chi = (-)^{s}\left(\frac{f^{2}}{\rho}\mathbf{\nabla}\omega+2\hat{\mathbf{ \varphi}}\times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right), \tag{109b}\] respectively. 
With these at hand, we showed that the Einstein-Maxwell system assumes the form of a pair of complex equations, known as the Ernst equations [17; 18], \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\mathcal{E} = \mathbf{\nabla}\mathcal{E}\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*}\mathbf{\nabla}\Phi\right), \tag{110a}\] \[\left(\mathsf{Re}\,\mathcal{E}+|\Phi|^{2}\right)\nabla^{2}\Phi = \mathbf{\nabla}\Phi\cdot\left(\mathbf{\nabla}\mathcal{E}+2\Phi^{*}\mathbf{\nabla}\Phi\right). \tag{110b}\] ### Making the "magnetic" LWP ansatz After a double Wick rotation, the metric (108) acquires the so-called "magnetic" form \[\mathrm{d}s^{2}=f\left(\mathrm{d}\varphi-\omega\mathrm{d}t\right)^{2}+\frac{1}{f}\left[\mathrm{e}^{2\gamma}\left(\mathrm{d}\rho^{2}+\mathrm{d}z^{2}\right)-\rho^{2}\mathrm{d}t^{2}\right]. \tag{111}\] Considering a Maxwell field of the form (101), it is possible to show that the Maxwell equations read \(\mathbf{\nabla}\cdot\mathbf{F}^{\mu}=0\) again, with the difference being in the definition of \(\mathbf{F}^{\mu}\). In particular, now we have \[\mathbf{F}^{\mu}=f\frac{\mathbf{\nabla}A_{t}+\omega\mathbf{\nabla}A_{\varphi}}{\rho^{2}}\delta_{t}^{\mu}+\left[-\frac{\mathbf{\nabla}A_{\varphi}}{f}+\omega\mathbf{F}^{t}\right]\delta_{\varphi}^{\mu}. \tag{102}\] Following the method used in the "electric" case, we now define a twisted potential \(\tilde{A}_{t}\) via the equation \[\rho\mathbf{F}^{t}=(-)^{p}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_{t}. \tag{103}\] Clearly, equation \[\mathbf{\nabla}\cdot\left(\frac{(-)^{p}}{f}\mathbf{\nabla}\tilde{A}_{t}+\frac{\omega}{\rho}\hat{\mathbf{\varphi}}\times\mathbf{\nabla}A_{\varphi}\right)=0, \tag{104}\] is now the alternative form of \(\mathbf{\nabla}\cdot\mathbf{F}^{t}=0\). If we define our complex potential as \[\Phi=A_{\varphi}+i(-)^{p}\tilde{A}_{t}, \tag{105}\] we can show that the Maxwell field equations assume the form of the complex equation (101). 
If we now move our attention to the Einstein equations, we have that the \(\varphi\varphi\) component can be written as \[-2f^{3}\left(\mathbf{\nabla}A_{t}+\omega\mathbf{\nabla}A_{\varphi}\right)\cdot\left( \mathbf{\nabla}A_{t}+\omega\mathbf{\nabla}A_{\varphi}\right)+\rho^{2}\mathbf{\nabla}f \cdot\mathbf{\nabla}f-f^{4}\mathbf{\nabla}\omega\cdot\mathbf{\nabla}\omega-f\rho^{2} \left(2\mathbf{\nabla}A_{\varphi}\cdot\mathbf{\nabla}A_{\varphi}+\nabla^{2}f\right) =0, \tag{106}\] whereas the \(t\varphi\) component assumes the form \[f^{4}\omega\mathbf{\nabla}\omega\cdot\mathbf{\nabla}\omega+2\omega f^{3}\left(\mathbf{ \nabla}A_{t}+\omega\mathbf{\nabla}A_{\varphi}\right)\cdot\left(\mathbf{\nabla}A_{t}+ \omega\mathbf{\nabla}A_{\varphi}\right)\] \[-f\rho^{2}\left[\omega\left(2\mathbf{\nabla}A_{\varphi}\cdot\mathbf{ \nabla}A_{\varphi}-\nabla^{2}f\right)+2\left(2\mathbf{\nabla}A_{\varphi}\cdot \mathbf{\nabla}A_{t}-\mathbf{\nabla}f\cdot\mathbf{\nabla}\omega\right)\right]+f^{2}\rho \left(\rho\nabla^{2}\omega-2\omega^{\prime}\right)-\omega\rho^{2}\mathbf{\nabla}f \cdot\mathbf{\nabla}f =0. \tag{107}\] Multiplying Eq. (106) by \(\omega\) and adding it to Eq. (107), we obtain \[\frac{2f}{\rho^{2}}\left(\mathbf{\nabla}f\cdot\mathbf{\nabla}\omega-2\omega\mathbf{ \nabla}A_{\varphi}\cdot\mathbf{\nabla}A_{\varphi}-2\mathbf{\nabla}A_{\varphi}\cdot \mathbf{\nabla}A_{t}\right)+\frac{f^{2}}{\rho^{2}}\left(\nabla^{2}\omega-\frac{2 \omega^{\prime}}{\rho}\right)=0. \tag{108}\] Modulo the Maxwell field equations, this takes the neat form \[\mathbf{\nabla}\cdot\left[\frac{f^{2}}{\rho^{2}}\mathbf{\nabla}\omega-\frac{4f}{\rho ^{2}}A_{\varphi}\left(\mathbf{\nabla}A_{t}+\omega\mathbf{\nabla}A_{\varphi}\right) \right]=0. \tag{109}\] The \(tt\) component vanishes identically if the \(\varphi\varphi\) and \(t\varphi\) components are satisfied, and once again, \(\gamma\) is given in terms of integrals by solving Eqs. (100). Using the new definition of the complex potential \(\Phi\), Eq. 
(105), then the equation (109) can be easily cast into \[\mathbf{\nabla}\cdot\left[\frac{f^{2}}{\rho^{2}}\mathbf{\nabla}\omega-\frac{2}{\rho} \hat{\mathbf{\varphi}}\times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right) \right]=0. \tag{110}\] Therefore, our twisted potential \(\chi\) is given by \[\frac{f^{2}}{\rho}\mathbf{\nabla}\omega-2\hat{\mathbf{\varphi}}\times\mathsf{Im} \left(\Phi^{*}\mathbf{\nabla}\Phi\right)=(-)^{s}\hat{\mathbf{\varphi}}\times\mathbf{ \nabla}\chi, \tag{111}\] and using the identity (102), one can show that the equation \[\mathbf{\nabla}\cdot\left\{\frac{1}{f^{2}}\left[(-)^{s}\mathbf{\nabla}\chi+2\mathsf{Im }\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right]\right\}=0, \tag{112}\] is now the one that may replace the Einstein equation (109). Again, we denote the above as \(\mathbf{\nabla}\cdot\mathbf{G}=0\). In the same fashion as in the electric case, we can express the \(\varphi\varphi\) component of the Einstein field equations as \[-\mathbf{\nabla}f\cdot\mathbf{\nabla}f+f\nabla^{2}f+2f\mathbf{\nabla}\Phi\cdot\mathbf{\nabla} \Phi^{*}+f^{4}\mathbf{G}\cdot\mathbf{G}=0. \tag{113}\] We also see that the Maxwell field equations can be once again cast into the form (115). Introducing now the complex gravitational potential \[\mathcal{E}=-f-|\Phi|^{2}+i(-)^{s}\chi, \tag{116}\] we observe that the Einstein-Maxwell system can be brought to the form of the Ernst equations (100). 
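The analogous bookkeeping for the magnetic case can be checked in the same fashion. The sympy sketch below (an assumption on tooling; operators as in (107)) confirms that Eq. (109) differs from the left-hand side of Eq. (108) exactly by a multiple of the Maxwell equation \(\mathbf{\nabla}\cdot\mathbf{F}^{t}=0\), with the sign flips relative to the electric case made explicit:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
f, w, At, Ap = [sp.Function(n)(rho, z) for n in ('f', 'omega', 'A_t', 'A_phi')]

grad = lambda u: sp.Matrix([sp.diff(u, rho), sp.diff(u, z)])   # axisymmetric gradient
div = lambda V: sp.diff(rho*V[0], rho)/rho + sp.diff(V[1], z)  # the divergence (107)
lap = lambda u: div(grad(u))

Ft = f*(grad(At) + w*grad(Ap))/rho**2   # the t-component vector of Eq. (102)
Mt = div(Ft)                            # vanishes on-shell by Maxwell's equations

lhs109 = div(f**2/rho**2*grad(w) - 4*Ap*Ft)
lhs108 = (2*f/rho**2*(grad(f).dot(grad(w)) - 2*w*grad(Ap).dot(grad(Ap))
                      - 2*grad(Ap).dot(grad(At)))
          + f**2/rho**2*(lap(w) - 2*sp.diff(w, rho)/rho))

# Eq. (109) equals Eq. (108) up to a multiple of the Maxwell equation div(Ft) = 0
assert sp.simplify(lhs109 - lhs108 + 4*Ap*Mt) == 0
```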
Summing up the findings in the magnetic case, we introduced two complex potentials, \[\mathcal{E}=-f-|\Phi|^{2}+i(-)^{s}\chi,\quad\Phi=A_{\varphi}+i(-)^{p}\tilde{A }_{t}, \tag{117}\] and two twisted potentials \(\tilde{A}_{t}\) and \(\chi\), which are given by the equations \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\tilde{A}_{t} = \frac{(-)^{p}f}{\rho}\left(\mathbf{\nabla}A_{t}+\omega\mathbf{\nabla}A_{ \varphi}\right), \tag{118a}\] \[\hat{\mathbf{\varphi}}\times\mathbf{\nabla}\chi = (-)^{s}\left(\frac{f^{2}}{\rho}\mathbf{\nabla}\omega-2\hat{\mathbf{ \varphi}}\times\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\right), \tag{118b}\] respectively.
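As a closing cross-check of the appendix, the sympy sketch below (an assumption on tooling) verifies the elementary identities invoked throughout, namely (152), (192) and the decomposition of \(\mathsf{Im}\left(\Phi^{*}\mathbf{\nabla}\Phi\right)\) (with the orientation and sign choices fixed to \(p=s=0\)), and confirms that in the static vacuum limit (\(\Phi=0\), \(\chi=0\), \(\mathcal{E}=f\)) the Ernst equation (110a) is solved by \(f=\mathrm{e}^{2\lambda}\) for an axisymmetric harmonic \(\lambda\), here a Curzon-type seed:

```python
import sympy as sp

rho, z, m = sp.symbols('rho z m', positive=True)
h = sp.Function('h')(rho, z)

# Identity (152): div[(1/rho) phi_hat x grad h] = 0 for any axisymmetric h;
# with one triad orientation the (rho, z) components are (h_z/rho, -h_rho/rho)
Vr, Vz = sp.diff(h, z)/rho, -sp.diff(h, rho)/rho
assert sp.simplify(sp.diff(rho*Vr, rho)/rho + sp.diff(Vz, z)) == 0

# Identity (192): (phi_hat x V).(phi_hat x V) = V.V for V in the rho-z plane
vr, vz = sp.symbols('v_rho v_z', real=True)
V, phi_hat = sp.Matrix([vr, 0, vz]), sp.Matrix([0, 1, 0])
W = phi_hat.cross(V)
assert sp.simplify(W.dot(W) - V.dot(V)) == 0

# Im(Phi* dPhi) = A_t d(Atil) - Atil d(A_t) for Phi = A_t + i*Atil (case p = 0)
a, b, da, db = sp.symbols('a b da db', real=True)
assert sp.simplify(sp.im((a - sp.I*b)*(da + sp.I*db)) - (a*db - b*da)) == 0

# Static vacuum limit of (110a): f*lap(f) = grad(f).grad(f), solved by f = exp(2*lam)
lap = lambda u: sp.diff(u, rho, 2) + sp.diff(u, rho)/rho + sp.diff(u, z, 2)
gdot = lambda u, v: sp.diff(u, rho)*sp.diff(v, rho) + sp.diff(u, z)*sp.diff(v, z)
lam = -m/sp.sqrt(rho**2 + z**2)   # harmonic away from the origin: lap(lam) = 0
fst = sp.exp(2*lam)
assert sp.simplify(lap(lam)) == 0
assert sp.simplify((fst*lap(fst) - gdot(fst, fst))/fst**2) == 0
```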
2309.07623
SwitchGPT: Adapting Large Language Models for Non-Text Outputs
Large Language Models (LLMs), primarily trained on text-based datasets, exhibit exceptional proficiencies in understanding and executing complex linguistic instructions via text outputs. However, they falter when requested to generate non-text ones. Concurrently, modality conversion models, such as text-to-image, despite generating high-quality images, suffer from a lack of extensive textual pretraining. As a result, these models are only capable of accommodating specific image descriptions rather than comprehending more complex instructions. To bridge this gap, we propose a novel approach, SwitchGPT, from a modality conversion perspective that evolves a text-based LLM into a multi-modal one. We specifically employ a minimal dataset to instruct LLMs to recognize the intended output modality as directed by the instructions. Consequently, the adapted LLM can effectively summon various off-the-shelf modality conversion models from the model zoos to generate non-text responses. This circumvents the necessity for complicated pretraining that typically requires immense quantities of paired multi-modal data, while simultaneously inheriting the extensive knowledge of LLMs and the ability of high-quality generative models. To evaluate and compare the adapted multi-modal LLM with its traditional counterparts, we have constructed a multi-modal instruction benchmark that solicits diverse modality outputs. The experiment results reveal that, with minimal training, LLMs can be conveniently adapted to comprehend requests for non-text responses, thus achieving higher flexibility in multi-modal scenarios. Code and data will be made available at https://github.com/xinke-wang/SwitchGPT.
Xinyu Wang, Bohan Zhuang, Qi Wu
2023-09-14T11:38:23Z
http://arxiv.org/abs/2309.07623v1
# SwitchGPT: Adapting Large Language Models for Non-Text Outputs ###### Abstract Large Language Models (LLMs), primarily trained on text-based datasets, exhibit exceptional proficiencies in understanding and executing complex linguistic instructions via text outputs. However, they falter when requested to generate non-text ones. Concurrently, modality conversion models, such as text-to-image, despite generating high-quality images, suffer from a lack of extensive textual pretraining. As a result, these models are only capable of accommodating specific image descriptions rather than comprehending more complex instructions. To bridge this gap, we propose a novel approach, SwitchGPT, from a modality conversion perspective that evolves a text-based LLM into a multi-modal one. We specifically employ a minimal dataset to instruct LLMs to recognize the intended output modality as directed by the instructions. Consequently, the adapted LLM can effectively summon various off-the-shelf modality conversion models from the model zoos to generate non-text responses. This circumvents the necessity for complicated pretraining that typically requires immense quantities of paired multi-modal data, while simultaneously inheriting the extensive knowledge of LLMs and the ability of high-quality generative models. To evaluate and compare the adapted multi-modal LLM with its traditional counterparts, we have constructed a multi-modal instruction benchmark that solicits diverse modality outputs. The experiment results reveal that, with minimal training, LLMs can be conveniently adapted to comprehend requests for non-text responses, thus achieving higher flexibility in multi-modal scenarios. Code and data will be made available at [https://github.com/xinke-wang/SwitchGPT](https://github.com/xinke-wang/SwitchGPT). 
## Introduction The emergence of Large Language Models (LLMs) [16, 14, 15] that are capable of understanding and executing complex linguistic tasks has been a remarkable advancement in recent years. Their prowess in comprehending, processing, and generating text-based responses has paved the way for groundbreaking applications. This includes fields such as natural language understanding [17], automated question-answering systems [13], and conversational AI assistants [15]. Predominantly, these models are trained on extensive crowdsourced text-based datasets. Such datasets encompass a myriad of topics and languages, capturing the vast expanses of human knowledge [14, 15]. Nonetheless, due to their text-centric training, traditional LLMs primarily operate with textual inputs and outputs, resulting in unsatisfactory responses when tasked with generating non-textual outputs (see Figure 1). Despite not being directly exposed to non-textual data such as images and speech during their training, recent research [13, 15, 16, 17] has indicated that LLMs possess a profound potential to understand such non-textual data. This revelation opens the door to evolving pure text-based LLMs into a multi-modal paradigm. For example, Mini-GPT4 [17] trains a linear layer that connects BLIP-2 [13] with Vicuna [12], demonstrating the possibility that LLMs can understand image inputs. However, studies on enabling LLMs to produce non-textual outputs remain relatively limited, restricting the LLM's interactive capabilities in multimodal scenarios. Unlike traditional LLMs, which are primarily designed for unimodal interaction, specifically text-to-text communication, modality conversion models are adept at handling data across different modalities. 
Figure 1: Given an instruction expecting a non-text response, text-based LLMs like ChatGPT [15] are constrained to providing text responses, while popular text-to-image models such as Stable Diffusion [16] generate imagery based on direct description. In contrast, our proposed SwitchGPT comprehensively interprets the underlying intent of the instruction, accurately producing a more appropriate response. Image captioning (Vinyals et al., 2015; Hossain et al., 2019) exemplifies the route of image\(\rightarrow\)text, whereas text-conditioned image generation (Goodfellow et al., 2014; Ramesh et al., 2021) illustrates the transition from text\(\rightarrow\)image. These models signal a significant advancement in producing realistic samples across diverse data types. Yet, their training chiefly relies on paired data, such as image-text pairs. The available volume of such paired data is substantially smaller than single-modality data, with comparisons often being in the ballpark of hundreds of billion tokens (pure text) _vs._ mere hundreds of million pairs (image-text). As a result, modality conversion models often lack the richness of knowledge and depth of understanding exhibited by LLMs. This limitation also means they usually struggle with comprehending and executing more complex instructions that rely on acquired knowledge and common sense (see Figure 1). Given the aforementioned strengths and limitations of both LLMs and modality conversion models, an intriguing proposition emerges: _Can we amalgamate the profound knowledge and understanding capabilities of LLMs with the modality conversion models?_ Hence, the integrated models could potentially interpret more intricate instructions and deliver outputs in various modalities. A natural idea that emerges from this conundrum is to position the LLM as a coordinator to orchestrate and utilize modality conversion models. 
Some recent works (Schick et al., 2023; Shen et al., 2023; Lian et al., 2023) have demonstrated the immense potential of leveraging LLMs as a controller, showcasing their superior orchestration abilities to plan and execute fine-grained instructions with external tools. For instance, HuggingGPT (Shen et al., 2023) devised a workflow where ChatGPT is used as a controller to invoke HuggingFace's open-sourced models, thereby accomplishing sophisticated AI tasks. However, it suffers from a few shortcomings. **Inefficiency:** HuggingGPT operates as an online model, heavily leaning on OpenAI's ChatGPT API. This dependency necessitates frequent invocations of ChatGPT for functions such as Task Planning and Model Selection, substantially inflating both the operational cost and latency. **Instability:** HuggingGPT uses ChatGPT as a black box without any tuning, and the LLM's outputs are uncontrolled and do not always return the desired results, leading to exceptions in the workflow. Another example is LLM-Grounded Diffusion (Lian et al., 2023), which harnesses the reasoning capabilities of LLMs to produce scene layouts in the form of bounding boxes based on the input instructions. As a result, it offers more meticulous control over the positioning and layout of objects within the generated image. However, the **inflexibility** of such methods confines them to a singular modality conversion pathway, and they often falter when interpreting indirect requests. The above shortcomings highlight the challenges in current integration efforts and the full potential of combining LLMs with modality conversion models. In this paper, we endeavor to address the above challenges and present an instruction-tuned LLM to unlock its ability to generate non-text outputs, with the help of several modality conversion models. 
The primary contributions of this paper are threefold: * We present Modality-aligned Instruction Tuning, which efficiently enables LLMs to discern the intended output modality as dictated by the instructions. Thus, LLMs are empowered to summon appropriate modality conversion models for non-text responses while fully retaining their original reasoning capabilities. With this technique, any LLM can be easily adapted to produce non-text outputs with minimal training. * We introduce a new evaluation set that includes thousands of instructions targeting text, image, and audio outputs, to assess LLMs' abilities in handling multi-modal output requests, enabling a better understanding of performance variations across different LLMs in multi-modal scenarios. * We conduct comprehensive experiments to validate our approach against state-of-the-art open-source LLMs as well as OpenAI's ChatGPT API in multi-modal scenarios. The experimental results indicate that our models tuned with modality-aligned instructions are able to retain their original reasoning abilities while consistently generating accurate and appropriate non-text outputs. ## Related Work **LLM as Controller.** The AI community has witnessed a transformative surge in Natural Language Processing advancements over recent years, largely driven by the emergence of LLMs, such as GPT (Brown et al., 2020), OPT (Zhang et al., 2022), PaLM (Chowdhery et al., 2022), Bloom (Scao et al., 2022), and LLaMA (Touvron et al., 2023a). The potential of LLMs extends beyond their immediate function of text generation. With vast knowledge bases and intricate reasoning capabilities, LLMs closely emulate human-like comprehension. With these capabilities, a promising direction has emerged, which treats the LLM as a controller. 
Instead of using LLMs merely as standalone entities, they can serve as a coordinator between external tools (Shen et al., 2023; Schick et al., 2023; Li et al., 2023; Qin et al., 2023) or even manipulate robotic systems (Driess et al., 2023; Mai et al., 2023). For example, HuggingGPT (Shen et al., 2023) employs ChatGPT as a controller to manage a list of open-sourced models in HuggingFace's Hub to solve AI tasks. However, these methods often treat LLMs as a black box, utilizing custom prompts for specific invocation rules, leading to notably unstable outputs. Contrary to solely depending on pre-trained LLMs for generating control commands, our method fine-tunes the LLM to yield structured responses, thereby ensuring more consistent and reliable outcomes. **LLM Finetuning.** Given that LLMs typically possess tens to hundreds of billions of parameters and necessitate training on vast datasets, the barriers to their training and fine-tuning are significantly raised, both in terms of computational resources and data collection. Recently, many efforts have delved into more resource-efficient methods of LLM fine-tuning. From a data perspective, instruction tuning (Wang et al., 2022; Liu et al., 2023; Taori et al., 2023) leverages high-performing, large-scale LLMs such as GPT-3.5 or GPT-4 to generate vast amounts of instructions and responses, which are then used to fine-tune relatively smaller LLMs or even those large models themselves. From the view of model parameters, Parameter Efficient Fine-Tuning (PEFT) explores the use of post-training quantization (Dettmers et al., 2022; Frantar et al., 2023) or freezing LLM parameters to train an adapter (Hu et al., 2022; Dettmers et al., 2023), with the goal of reducing computational overhead. 
Based on these techniques, low-cost, customizable training of LLMs has become feasible, for example, Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023) significantly improve the performance of Llama (Touvron et al., 2023) by conducting an instruction tuning. LLaVA (Liu et al., 2023) presents visual instruction tuning, enabling LLMs to understand image contents by tuning on language-image paired instructions. However, these efforts either focus on enhancing the LLM's reasoning performance or on its image\(\rightarrow\)text understanding abilities. In contrast, our approach delves deeper into understanding the intention behind input instructions, enabling the LLM to decide the most fitting output modality. This unique capability is achieved using our proposed modality-aligned instruction tuning, which not only retains the text\(\rightarrow\)text reasoning prowess of the LLMs but also enables them to produce non-text outputs, including text\(\rightarrow\)image and text\(\rightarrow\)speech. **Modality Conversion Models.** Multi-modal applications have been a long-standing research topic in the AI community. The unique capability of these models to comprehend and translate between different data representations facilitates conversions between various modalities, such as image captioning (Vinyals et al., 2015; Hossain et al., 2019) (image\(\rightarrow\)text), text-conditioned image generation (Goodfellow et al., 2014; Ramesh et al., 2021) (text\(\rightarrow\)image), speech recognition (Ao et al., 2022) (audio\(\rightarrow\)text), or even multiple modality conversion (text\(\rightarrow\)text/image\(\rightarrow\)text/image) such as Uni-Diffuser (Bao et al., 2023). We specifically focus on text\(\rightarrow\)image and text\(\rightarrow\)speech conversion in this paper. Given a text description, a text\(\rightarrow\)image model aims at synthesizing an image that accurately depicts the content described in the text. 
Early methods were mostly based on GAN (Xu et al., 2018; Goodfellow et al., 2014) and VQ-VAE (Ramesh et al., 2021), while recently, due to their stability and improved generation quality, diffusion-based (Rombach et al., 2022) approaches have become increasingly popular. For text\(\rightarrow\)speech, the goal is to transform a piece of text content to corresponding audio. While these models can convert text into other modalities, they struggle to understand complex instructions, making it difficult to apply them to advanced interactive features, such as AI assistants. To tackle these challenges, our method synergizes LLMs with modality conversion models, thereby aligning the advanced reasoning abilities inherent to LLMs with the conversion capabilities of multi-modal models. ## SwitchGPT ### Preliminary _Why do multi-modal outputs matter?_ While text remains the predominant medium for human communication, images and sounds often assume indispensable roles in various contexts. For instance, a photograph capturing a golden sunset over a tranquil beach can elicit emotions that mere words may find challenging to express. Similarly, visually impaired individuals heavily depend on auditory cues and descriptions to comprehend their surroundings. Therefore, the value of non-text outputs cannot be understated. However, there is still limited research on adapting LLMs for non-text outputs. Figure 2: Comparison of responses to the user's instruction by different models. While traditional text-to-image models like Stable Diffusion often generate images based on superficial keywords, they might miss the underlying intent of the instruction. HuggingGPT can produce unstable results; for instance, it outputs three responses in this case. LLM-Grounded Diffusion, though adept at controlling the layout, still falls short in grasping the deeper nuances behind the user's request, as evidenced by the unwarranted inclusion of a 'book' in its image. 
In contrast, our proposed approach not only captures the true essence of the instruction but also aligns it with the desired input for the modality conversion model, resulting in a more faithful visual representation. (The figure is best viewed zoomed in.) One primary reason for the limited research in this area is the prohibitive cost associated with multi-modal LLM pre-training. A pioneering work Emu [23] aims to bridge this gap by introducing a vision-language pre-trained LLM. Emu is pre-trained on massive datasets, comprising billions of image-text pairs and millions of video-subtitle pairs, using 128 A100 GPUs. This enables Emu to not only be capable of accepting image/text inputs but also generate textual and visual outputs. Even though Emu has achieved impressive performance on various tasks, its adaptability comes at a high cost. For example, introducing a new modality such as audio necessitates the collection of corresponding paired data and complete retraining. Given the flourishing developments in recent years, the AI community has amassed countless open-source models for diverse tasks. Integrating these models into LLMs offers another solution for adapting them to non-text outputs. This not only conserves computational resources and reduces carbon emissions but also provides a cost-effective and smooth transition to newer models without the need for extensive retraining. This paper falls into this category. Existing methods such as HuggingGPT [23] can invoke external models to generate non-text responses beyond the capabilities of traditional LLMs like ChatGPT. However, its results are unstable. For example, Figure 2 depicts that HuggingGPT may produce an audio response when an image output is expected. Moreover, it inputs the full instruction as a prompt to Stable Diffusion, leading to suboptimal results. 
The reasons leading to the above issues can be summarized as follows: * Relying solely on designing prompts to utilize the zero-shot capability of LLM for invoking external models can introduce ambiguity, leading to unstable outputs. * The misalignment between LLM outputs and external model inputs often results in subpar performance. ### Modality-aligned Instruction Generation To solve the aforementioned issues, we introduce Modality-aligned Instruction Tuning (MaIT). The primary purpose of MaIT is twofold. First, MaIT aims to cheaply tune the LLM \(L\) to understand and interpret the expected output modality \(t\) from a given instruction \(I\). Formally, this can be written as: \[L(I)=(r)\stackrel{{\text{MaIT}}}{{\longrightarrow}}L^{\prime}(I)=(r,t), \tag{1}\] where \(L^{\prime}\) is the adapted LLM after instruction tuning, and \(r\) is the output response. For example, consider a simple instruction "What is the answer to the following equation 1+1=?". While \(L\) might respond with '2', \(L^{\prime}\) is expected to produce ('2', 'text'), with 'text' being a flag denoting the desired output modality. This modality type flag, \(t\), informs the LLM about _when_ to invoke _which_ modality conversion model. For instance, if \(t\) is 'image', the LLM knows to use a text\(\rightarrow\)image conversion model rather than simply giving the response \(r\). However, recognizing when and which model to invoke is not the sole requirement. It is equally crucial to guide the LLM on _how_ to use the modality conversion model. Without this guidance, the LLM might misinterpret the instruction or use a misaligned response as input when interfacing with the modality conversion model. This can be seen in the failure example of HuggingGPT depicted in Figure 2, which results in less-than-ideal outcomes. As such, a secondary goal of MaIT is to ensure the LLM's outputs align seamlessly with the inputs of the modality conversion model. 
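As an illustration of the \((r,t)\) output pair in Equation 1, the sketch below parses a structured response into a response string and a modality flag. The JSON layout and the function name are illustrative assumptions, not the paper's released code:

```python
import json

def parse_structured_response(raw: str):
    """Split the adapted LLM's structured output L'(I) into the
    (response r, modality flag t) pair of Equation 1."""
    record = json.loads(raw)
    return record["content"], record["type"]

# The flag t tells the pipeline whether a conversion model is needed.
r, t = parse_structured_response('{"type": "text", "content": "2"}')
```

For the 1+1 example in the text, this would yield `r = '2'` and `t = 'text'`, so no conversion model is invoked.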
This is built upon the fact that for a single route of modality conversion, such as text\(\rightarrow\)image, different models often share almost the same training dataset. Therefore, instead of aligning the LLM with a particular model, it is more effective to align it with the training samples themselves. In other words, the objective is to minimize the distribution gap between the LLM's outputs and the textual descriptions in the modality conversion task training dataset. We can rewrite Equation 1 to represent this purpose as follows: \[L(I)=(r)\mathop{\longrightarrow}\limits_{\min\Delta(D_{r^{\prime}},D_{\text{text}\to t})}^{\text{MaIT}}L^{\prime}(I)=(r^{\prime},t), \tag{2}\] where \(D_{r^{\prime}}\) and \(D_{\text{text}\to t}\) respectively represent the distribution of the output \(r^{\prime}\) by the adapted \(L^{\prime}\), and the text description distribution of the training dataset for the text\(\to t\) task. To achieve this, we directly use the training data from the modality conversion tasks, such as the caption of images, to construct the response. According to Equation 2, we are able to generate modality-aligned instructions. Specifically, for the modality type \(t\), we consider the three most common modalities in this paper, _i.e._, text, image, and speech. Figure 3: **Left: Flowchart illustrating the process of generating modality-aligned instructions. By embedding the text from modality datasets into prompts, our method ensures alignment between ChatGPT-generated instructions and the modality conversion models. Right: Pipeline of instruction tuning and inference. During training, the parameters of the LLM are frozen, and an adapter is trained on the generated modality-aligned instructions dataset. At the inference stage, the structured response of LLM is parsed and used to select the appropriate modality conversion route to produce the final outputs.** 
Following the approaches of previous studies [19], we initiate the process by designing seed instructions. Each record comprises three components: the instruction, the anticipated output modality, and the response. For example, a record might look like this: {"instruction": "How do you pronounce the name of the fast food brand with a yellow golden arch logo?", "response": {"type": "speech", "content": "McDonald's"}}. Notably, with an abundance of open-source text-to-text instruction datasets available, we concentrated our efforts on devising seed instructions specifically for the text\(\rightarrow\)image and text\(\rightarrow\)speech modalities. To generate instructions in larger quantities, we employed OpenAI's ChatGPT API. This involved integrating descriptions from the modality conversion task's training data into the ChatGPT prompt to produce modality-aligned instructions. An example of this process is illustrated in Figure 3. Using a picture caption like "A black metal bicycle with a clock inside the front wheel", we prompt ChatGPT to formulate a suitable instruction that solicits the generation of such an image. In this case, ChatGPT comes up with an intriguing instruction "Can you generate an image that represents the concepts of 'time travel', using everyday objects such as a bicycle and a clock?". This showcases ChatGPT's strong capability in producing diverse instructions. Additionally, the original caption, combined with the specified modality type, is employed to construct the ground-truth response. Specifically, we use LAION-aesthetic [10] and LibriTTS [13] as referenced modality datasets to sample image captions or speech contents. In this manner, we synchronize the output of the LLM with the input of the modality conversion model. Importantly, since we exclusively used textual descriptions of images for instruction creation, it is only necessary to conduct text-to-text tuning on the LLM. 
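The generation flow just described can be sketched as follows. The prompt wording and the record layout are illustrative assumptions (the actual prompts are described in the paper's appendix):

```python
import json

# Hypothetical prompt asking ChatGPT to invent an instruction whose
# ideal answer is an image matching a given dataset caption.
PROMPT_TEMPLATE = (
    "Here is the caption of an image: '{caption}'. Write an instruction "
    "a user might give that is best answered by generating such an image."
)

def build_record(instruction: str, caption: str, modality: str) -> dict:
    """Pair a ChatGPT-generated instruction with a ground-truth response
    whose content is the original dataset caption, keeping the tuned
    LLM's outputs aligned with the conversion model's training
    distribution (Equation 2)."""
    return {"instruction": instruction,
            "response": {"type": modality, "content": caption}}

caption = "A black metal bicycle with a clock inside the front wheel"
prompt = PROMPT_TEMPLATE.format(caption=caption)
record = build_record("Can you generate an image that represents the "
                      "concept of 'time travel'?", caption, "image")
```

Because the ground-truth content is the caption itself, the tuned model learns to emit text that the downstream text\(\rightarrow\)image model has effectively already seen during its own training.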
This avoids potential computational overhead from multi-modal fine-tuning. We will provide more details about the generated instructions in the appendix. ### Training and Inference Once the modality-aligned instruction dataset has been generated, we integrate it with the existing text-only instructions. This results in the training set comprising three routes of instructions, _i.e._, text\(\rightarrow\)text, text\(\rightarrow\)image, and text\(\rightarrow\)speech. In line with previous research, we maintain a comparable number of instructions for training, totaling approximately 52k, with each route accounting for roughly one-third of this total. While conducting instruction-tuning, it is essential to preserve the original reasoning and generation capabilities of the LLM. Therefore, for Equation 2, when \(t\) is text, it is desired that the distribution of responses \(r\) and \(r^{\prime}\) remain as similar as possible: \[\left\{\begin{aligned} &\min\Delta(D_{r^{\prime}},D_{r})&,t= \text{text}\\ &\min\Delta(D_{r^{\prime}},D_{\text{text}\to t})&,t \neq\text{text}\end{aligned}\right. \tag{3}\] This consistency is attained by incorporating the text\(\rightarrow\)text instructions during the instruction-tuning phase. Furthermore, the parameters of the LLM are kept constant. Only a Low-Rank adapter (LoRA) [12] is fine-tuned, which not only safeguards the pre-trained weights but also significantly reduces computational costs. As shown in Figure 2, a challenge of current methods such as HuggingGPT [10] is the unpredictable format of the output. Such variability can lead to inconsistencies during subsequent parsing stages, often resulting in exceptions. To address this issue, we encode all responses into a structured JSON format, ensuring that the LLM produces outputs with a consistent structure. Due to its extensive pre-training, the LLM is already familiar with the JSON formatting rules. 
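The structured-JSON convention above makes the downstream dispatch mechanical. A minimal sketch with stub converters (the real pipeline calls Stable Diffusion and SpeechT5; the stubs and key names are assumptions for illustration):

```python
import json

# Stubs standing in for the text->image and text->speech models.
CONVERTERS = {
    "image":  lambda text: f"<image for: {text}>",
    "speech": lambda text: f"<speech for: {text}>",
}

def route(llm_output: str) -> str:
    """Parse the structured response and select the conversion route:
    text passes through unchanged, other modalities invoke a converter
    with the LLM's aligned content as the prompt."""
    record = json.loads(llm_output)
    content, modality = record["content"], record["type"]
    if modality == "text":
        return content
    return CONVERTERS[modality](content)
```

A consistent JSON structure is what makes this parse-then-dispatch step reliable, in contrast to the free-form outputs that cause exceptions in prompt-only approaches.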
Therefore, training with a modest amount of data can effectively guide it to generate appropriately formatted outputs. As shown in the right block of Figure 3, the instructional data consists solely of text, so the LLM does not require interaction with any modality conversion models during the training phase. When moving to the inference stage, the LLM outputs a structured JSON format response for the given instruction. This determines _when_ to choose _which_ modality conversion model and _how_ to use it. In this manner, all modality conversion models remain static without any fine-tuning. ## Experiments ### Implementation In implementing the proposed pipeline, we used Llama-2 [14] as our foundation LLM. For modality conversion, StableDiffusion-v1-5 [15] and SpeechT5 [1] models were employed. The parameters of the LLM were loaded using Int8 [13] and subsequently frozen. With the modality-aligned instructions generated by GPT-3.5-turbo, we trained a LoRA adapter [16] upon the frozen LLM. Our training was executed on four Nvidia A100 (40GB) GPUs (however, training on a single GPU is possible), utilizing a per-device batch size of 4 and gradient accumulation steps of 8. Optimization of the model was carried out using the AdamW [17] optimizer for 3 epochs, with a consistent learning rate of \(3\times 10^{-4}\). The entire training procedure was efficiently completed in approximately 3 hours. ### Validation Set and Metrics Evaluating LLM performance objectively is crucial for contrasting different methods. Current benchmarks, such as The Open LLM Leaderboard [14], are primarily tailored to assess text\(\rightarrow\)text reasoning and generation capabilities. In contrast, this paper focuses on enabling the LLM to produce non-text outputs. Given this discrepancy, existing evaluation standards are not suited for a comprehensive assessment of our model. 
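Two quick sanity checks on the training budget reported above: the effective batch size follows directly from the stated GPU count, per-device batch, and accumulation steps, while the LoRA parameter count below uses an illustrative rank of 8 (the paper does not report its rank):

```python
# Effective batch size: 4 GPUs x per-device batch 4 x accumulation 8.
gpus, per_device, accum = 4, 4, 8
effective_batch = gpus * per_device * accum  # 128 sequences per update

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters of a rank-r LoRA adapter on a d x k weight:
    a d x r matrix B plus an r x k matrix A, with W' = W + B @ A."""
    return d * r + r * k

# For a 4096 x 4096 projection, rank 8 trains well under 1% of the
# weights -- one reason the whole run fits in roughly 3 hours.
fraction = lora_params(4096, 4096, 8) / (4096 * 4096)
```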
To address this gap, we introduce a new validation set (which will be released) specifically designed to facilitate the assessment of multi-modal output LLMs. This set contains 2,400 instructions, each demanding an output in one of three modalities: text, image, or speech. For the text\(\rightarrow\)text instructions, we sample questions from established benchmarks, including the TruthfulQA benchmark [13] and MMLU [1], applying a variant multiple-choice metric. The text\(\rightarrow\)image instructions are bifurcated into two categories: 200 intricate instructions crafted by humans, and 600 instructions chosen from proposals generated by ChatGPT-4 using the COCO-caption [1] as a reference. The performance of LLM on this task is evaluated using both CLIP [1] and FID [1] scores. Regarding the text\(\rightarrow\)speech tasks, a similar strategy as of the text\(\rightarrow\)image is used to construct the instructions, while LibriTTS [1] is employed as reference. As the text\(\rightarrow\)speech model translates input text verbatim, the audio quality remains unaffected by the LLM's output. We thus employ the BLEU [1] score to gauge the similarity between the LLM's output and the referenced contents. Additionally, to ascertain if the model can discern the desired output modality from the instruction, we evaluate its performance using classification accuracy, and only those predictions that match the ground-truth modality will be further assessed for their vision, language, or speech scores. Notably, since the default output modality of LLM is text, these outputs will not contribute to classification accuracy. We explain more details about evaluation metrics in the appendix. ### Quantitative Results To assess the efficacy of the proposed SwitchGPT, we compared them with a range of state-of-the-art techniques. These include pretrained LLMs like OPT [15] and Llama [16]; instruction-tuned LLMs such as Alpaca [17] and Vicuna [18]; and the commercial API, GPT-3.5-turbo. 
Additionally, we considered methods that employ LLMs as controllers, such as HuggingGPT [15] and LLM-grounded Diffusion [13]. Given that many text-based LLMs are not inherently designed to produce non-text responses, a few-shot assessment approach was adopted. Specifically, four instructions and corresponding responses are given as examples, prompting these methods to generate responses to new inputs (please refer to supplementary for more details). Notably, all of the LLMs share identical modality conversion models to produce non-text responses. Table 1 demonstrates the performance of the proposed method in comparison with existing techniques. For modality accuracy, our method achieves comparable performance with GPT-3.5-turbo and surpasses all other techniques. This highlights its capability to understand the expected modality of the instruction. Interestingly, the OPT-6.7B model lags considerably in accuracy when compared to all other methods, even its own 2.7B counterpart. We observed that it predominantly provides text output for most instructions. This discrepancy might be attributed to its heightened sensitivity to specific prompts and examples. The CLIP score indicates the congruence of the generated images with their corresponding descriptions. As all LLM models utilize the same text\(\rightarrow\)image foundation model, the score can be used to measure the quality of the prompt generated by the LLM. Our model consistently excels in this metric, highlighting the effectiveness of the modality-alignment instruction tuning. Regarding language scoring, our method achieves an accuracy comparable to the original Llama2-7B. This suggests that the foundational reasoning capability remains unaffected. Additionally, since stable diffusion and LLM-grounded diffusion can only generate image results, we compare their performance in Table 2. 
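The modality-accuracy column discussed above can be read as a simple matching rate between predicted and ground-truth output modalities; a sketch with toy data (the paper's exact protocol additionally gates the per-modality vision/language/speech scores on these matches):

```python
def modality_accuracy(predicted, expected):
    """Share of instructions whose predicted output modality matches
    the ground truth; only matching pairs would then go on to be
    scored with CLIP/FID, SLA, or BLEU for their modality."""
    hits = sum(p == e for p, e in zip(predicted, expected))
    return hits / len(expected)

predicted = ["image", "text", "speech", "text"]
expected  = ["image", "text", "speech", "image"]
acc = modality_accuracy(predicted, expected)  # 3 of 4 match -> 0.75
```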
| Method | Modality Acc. (\%) \(\uparrow\) | CLIP \(\uparrow\) | FID \(\downarrow\) | SLA \(\uparrow\) | BLEU \(\uparrow\) |
| --- | --- | --- | --- | --- | --- |
| OPT-2.7B [15] | 68.4 | 16.6 | 131.3 | 0.53 | 0.07 |
| OPT-6.7B [15] | 16.2 | 6.4 | 277.3 | 0.66 | 0.00 |
| Llama-7B [16] | 66.8 | 17.9 | 88.6 | 0.55 | 0.07 |
| Llama2-7B [16] | 76.2 | 18.7 | 86.1 | 0.69 | 0.08 |
| Alpaca-7B [16] | 72.3 | 17.2 | 94.7 | 0.58 | 0.07 |
| Vicuna-7B [17] | 73.1 | 17.0 | 95.0 | 0.59 | 0.07 |
| GPT-3.5-turbo (OpenAI 2023) | **88.1** | 21.4 | 84.1 | **0.75** | **0.17** |
| HuggingGPT [15] | 81.7 | 19.9 | 85.2 | 0.62 | 0.10 |
| SwitchGPT-7B (Ours) | 86.9 | **22.6** | **82.4** | 0.67 | 0.16 |

Table 1: Comparison of the performance between the proposed SwitchGPT and state-of-the-art models (CLIP/FID: vision; SLA: language; BLEU: speech).

| Method | CLIP | FID |
| --- | --- | --- |
| Stable Diffusion v1-5 | 15.2 | 140.3 |
| LLM-grounded Diffusion | 18.9 | 85.9 |
| SwitchGPT-7B (Ours) | **22.6** | **82.4** |

Table 2: Comparison of the performance between Stable Diffusion v1-5, LLM-grounded Diffusion, and our method for text\(\rightarrow\)image instructions.

### Qualitative Results In Figure 4, we provide a qualitative comparison of various methods based on instructions intended for image output generation. The results demonstrate that our proposed method aligns well with modality conversion models, yielding enhanced performance. Other methods, such as HuggingGPT, tend to use the instruction itself as the prompt to input into the modality conversion model, resulting in unsatisfactory outputs. Meanwhile, Figure 5 presents a demo (to be released) showcasing an example conversation between our model and users on the topic 'The Statue of Liberty'. 
The interaction highlights our model's capability to discern the underlying intentions of the instructions, despite these conversational inputs being quite different from the instructions used during training. Consequently, it is capable of producing appropriate responses involving different modalities. See more results in the appendix.

Figure 4: Qualitative results. From left to right: images produced by Stable Diffusion, LLM-grounded Diffusion, HuggingGPT, and the proposed SwitchGPT. The right side of each image showcases the intermediate artifacts such as the layout and prompts (Stable Diffusion uses the instruction as the prompt directly) generated by the corresponding methods.

Figure 5: The proposed SwitchGPT is able to generate non-text responses to fulfill the given instructions.

## Conclusion

In this paper, we have presented modality-aligned instruction tuning, a method designed to adapt text-based LLMs for generating non-text responses. This is the inaugural effort to align the outputs of LLMs and the inputs of modality conversion models from a data-driven perspective. A key advantage is that it allows LLMs to be trained without direct exposure to non-text modality data. Instead of delving into the complex realm of multi-modal pre-training, they simply undergo a standard instruction tuning process, ensuring computational efficiency. To objectively evaluate our method, we have introduced a validation set comprising instructions that request text, image, and audio outputs. Based on this dataset, we examined several state-of-the-art LLMs in a few-shot setting and benchmarked our approach against them. We anticipate that our proposed SwitchGPT will serve as a baseline for future studies in multimodal output LLMs. In future research, one promising direction could be to design adapters at the encoder stage that would enable the integration of multimodal inputs.
By seamlessly connecting this feature with the method proposed in this study, it could potentially give rise to a comprehensive Any-to-Any LLM capable of handling a wider array of tasks and functionalities.
2308.16407
General Formula for the Green's Function Approach to the Spin-1/2 Antiferromagnetic Heisenberg Model
A wide range of analytical and numerical methods are available to study quantum spin systems. However, the complexity of spin correlations and interactions limits their applicability to specific temperature ranges. The analytical approach utilizing Green's function has proved advantageous, as it allows for formulation without restrictions on the presence of long-range order and facilitates estimation of the spin excitation spectrum and thermodynamic quantities across the entire temperature range. In this work, we present a generalized formulation of the Green's function method that can be applied to diverse spin systems. As specific applications, we consider the hypercubic lattice and the $J_1$-$J_2$ model. For the cubic lattice case, the Green's function approach provides a good estimation for the transition temperature. Regarding the $J_1$-$J_2$ model, we include nematic correlations in the analysis and find no signature of such correlations, though accurate numerical calculations are required in the presence of strong frustration. Although our focus is on the spin one-half antiferromagnetic Heisenberg model on an arbitrary lattice, the Green's function approach can be generalized to incorporate other interactions and higher spin values.
Daiki Sasamoto, Takao Morinari
2023-08-31T02:23:24Z
http://arxiv.org/abs/2308.16407v2
General Formula for the Green's Function Approach to the Spin-1/2 Antiferromagnetic Heisenberg Model ###### Abstract A wide range of analytical and numerical methods are available to study quantum spin systems. However, the complexity of spin correlations and interactions limits their applicability to specific temperature ranges. The analytical approach utilizing Green's function has proved advantageous, as it allows for formulation without restrictions on the presence of long-range order and facilitates estimation of the spin excitation spectrum and thermodynamic quantities across the entire temperature range. In this work, we present a generalized formulation of the Green's function method that can be applied to diverse spin systems. As specific applications, we consider the hypercubic lattice and the \(J_{1}\)-\(J_{2}\) model. For the cubic lattice case, the Green's function approach provides a good estimation for the transition temperature. Regarding the \(J_{1}\)-\(J_{2}\) model, we include nematic correlations in the analysis and find no signature of such correlations, though accurate numerical calculations are required in the presence of strong frustration. Although our focus is on the spin one-half antiferromagnetic Heisenberg model on an arbitrary lattice, the Green's function approach can be generalized to incorporate other interactions and higher spin values. ## I Introduction Quantum antiferromagnetic Heisenberg models are of utmost importance in exploring intriguing phenomena. Of particular significance is the case of spin-1/2, where quantum correlation effects are most pronounced. A well-known example is cuprate superconductors [1], where high-temperature superconductivity emerges upon hole doping in the quasi-two-dimensional spin-1/2 Heisenberg antiferromagnet. The rich physics of cuprate superconductors is likely linked to the phenomenon of quantum spin liquid. 
Originating from Anderson's proposal of the resonating valence bond state [2], extensive theoretical and experimental research has been devoted to spin liquid states [3; 4; 5], which may host exotic quantum states. The exploration of novel quantum ground states poses a challenge due to their absence of long-range order. In cases where magnetic long-range order exists, mean-field theory can be applied, and quantum fluctuations around it can be considered. However, our particular interest lies in states lacking long-range order and exhibiting high entanglement. Analytically investigating such systems is a formidable task, necessitating the adoption of sophisticated numerical algorithms, such as the density-matrix renormalization group, tensor network algorithms, and others. Our interest in quantum spin liquid states extends beyond their ground states. We must also explore their finite-temperature properties, especially when considering high-temperature superconductivity and practical quantum devices. At finite temperatures, quantum Monte Carlo serves as a powerful numerical algorithm. However, it may encounter challenges, particularly in cases involving frustration, where the negative sign problem can arise. In this paper, we employ a double-time Green's function method [6; 7]. This approach does not rely on long-range order and can be applied to both the ground state and finite-temperature properties. By employing a decoupling scheme in the equations of motion that preserves the spin rotational symmetry [8], we derive a general formula independent of spatial dimensionality and the range of the exchange interaction. As specific applications, we investigate the three-dimensional antiferromagnetic Heisenberg model and the \(J_{1}\)-\(J_{2}\) model. Our results demonstrate that for the former case, the magnetic transition temperature is in good agreement with the quantum Monte Carlo result [9]. 
For the latter case, we develop a self-contained formulation of the theory that does not require inputs from other methods. This highlights the versatility and accuracy of our Green's function approach in studying diverse spin systems and finite-temperature properties. The rest of the paper is organized as follows. In Sec. II, we derive a general theoretical formula. In Sec. III, we apply this formula to the \(d\)-dimensional hypercubic lattice. Specifically, we estimate the transition temperature for the case of \(d=3\) and demonstrate its good agreement with the quantum Monte Carlo result. In Sec. IV, we investigate the \(J_{1}\)-\(J_{2}\) model. We compute the correlation functions at zero temperature. Despite using only a single parameter for the decoupling approximation, our results are in good agreement with previous works employing several decoupling parameters. We also present the spin-wave dispersion. ## II Formulation We consider a general spin-1/2 quantum Heisenberg antiferromagnet on an arbitrary lattice, with the Hamiltonian expressed as: \[H=\sum_{\left\langle i,j\right\rangle}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}, \tag{1}\] where \(\mathbf{S}_{i}=(S_{i}^{x},S_{i}^{y},S_{i}^{z})\) represents the spin-1/2 operator at site \(i\). The parameter \(J_{ij}=J_{ji}\) denotes the interaction strength between the spins at sites \(i\) and \(j\), and \(\left\langle i,j\right\rangle\) indicates the sum over a general pair of spins on the lattice. In the double-time Green's function method, we study the equation of motion for the spin Green's function. To facilitate this analysis, we find it convenient to work with the Matsubara Green's function rather than the real-time Green's function.
Denoting the imaginary time by \(\tau\) and introducing the imaginary time ordering operator \(T_{\tau}\), the Matsubara Green's function is defined as follows: \[\mathcal{G}_{j}(\tau)=-\langle T_{\tau}S_{0}^{+}(\tau)S_{j}^{-}(0)\rangle, \tag{2}\] where \(S_{j}^{\pm}=S_{j}^{x}\pm iS_{j}^{y}\) and \(A\left(\tau\right)\) is defined as \(A\left(\tau\right)=e^{\tau H}Ae^{-\tau H}\) for any operator \(A\). \(0\) represents a site in the bulk of the system. In order to systematically compute the equation of motion, we first present a general formula. Suppose \(A\) and \(B\) are bosonic operators. We define the following Green's function: \[\mathcal{G}_{AB}(\tau)=-\langle T_{\tau}A(\tau)B(0)\rangle\equiv\left\langle A\middle|B\right\rangle_{\tau}. \tag{3}\] The equation of motion is given by \[\frac{\partial}{\partial\tau}\mathcal{G}_{AB}(\tau)=-\langle T_{\tau}\left[H,A(\tau)\right]B(0)\rangle-\delta\left(\tau\right)\left\langle\left[A,B\right]\right\rangle. \tag{4}\] The Fourier transform of \(\mathcal{G}_{AB}\left(\tau\right)\) is defined by: \[\mathcal{G}_{AB}\left(i\omega_{n}\right)=\int_{0}^{\beta}d\tau e^{i\omega_{n}\tau}\langle A\middle|B\rangle_{\tau}\equiv\left\langle A\middle|B\right\rangle_{i\omega_{n}}. \tag{5}\] Here the bosonic Matsubara frequency \(\omega_{n}\) takes the values \(\omega_{n}=\frac{2\pi n}{\beta}\), where \(n\) is an integer, and \(\beta=\frac{1}{k_{\mathrm{B}}T}\) is the inverse temperature with \(k_{\mathrm{B}}\) being the Boltzmann constant and \(T\) being the temperature. After the Fourier transformation of Eq. (4), we obtain the following equation: \[i\omega_{n}\langle A\middle|B\rangle_{i\omega_{n}}=\left\langle\left[A,H\right]\middle|B\right\rangle_{i\omega_{n}}+\left\langle\left[A,B\right]\right\rangle. \tag{6}\] This is the general equation of motion. We can use Eq. (6) with \(A\) replaced by \(\left[A,H\right]\) to derive the equation of motion for the first term on the right-hand side.
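The commutator \(\left[S_{0}^{+},H\right]\) that drives the equation of motion can be verified directly for a single bond by representing the spin-1/2 operators as matrices. This is a verification sketch of ours, not part of the paper:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1) and S^+ = S^x + i S^y.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
sp = sx + 1j * sy

I2 = np.eye(2)
# Two-site operators: site 0 acts on the first tensor factor, site 1 on the second.
ops = {"x": sx, "y": sy, "z": sz, "+": sp}
S0 = {name: np.kron(op, I2) for name, op in ops.items()}
S1 = {name: np.kron(I2, op) for name, op in ops.items()}

J = 1.3  # arbitrary bond strength
H = J * sum(S0[a] @ S1[a] for a in "xyz")

# For this bond, [S_0^+, H] = J (S_0^z S_1^+ - S_0^+ S_1^z),
# which is the structure appearing on the right-hand side of Eq. (9).
lhs = S0["+"] @ H - H @ S0["+"]
rhs = J * (S0["z"] @ S1["+"] - S0["+"] @ S1["z"])
print(np.allclose(lhs, rhs))  # prints True
```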
By repeating similar steps, we can derive higher-order equations of motion. Furthermore, by rewriting Eq. (3) as: \[\mathcal{G}_{AB}(\tau)=-\left\langle T_{\tau}A\left(0\right)B\left(-\tau\right)\right\rangle, \tag{7}\] we can derive the following equation of motion [10]: \[i\omega_{n}\langle A\middle|B\rangle_{i\omega_{n}}=-\langle A\mid\left[B,H\right]\rangle_{i\omega_{n}}+\left\langle\left[A,B\right]\right\rangle. \tag{8}\] Applying Eq. (6) to Eq. (2) and computing \(\left[S_{0}^{+},H\right]\), we obtain the following equation of motion: \[i\omega_{n}\langle S_{0}^{+}|S_{j}^{-}\rangle_{i\omega_{n}}=J_{\mathbf{\delta}_{1}}\langle S_{0}^{z}S_{1}^{+}|S_{j}^{-}\rangle_{i\omega_{n}}-J_{\mathbf{\delta}_{1}}\langle S_{0}^{+}S_{1}^{z}|S_{j}^{-}\rangle_{i\omega_{n}}. \tag{9}\] Here, we assume that there is no long-range order, so that \[\left\langle\left[S_{0}^{+},S_{j}^{-}\right]\right\rangle=2\delta_{j,0}\left\langle S_{0}^{z}\right\rangle=0. \tag{10}\] In Eq. (9), we denote \(1\) as the site interacting with site \(0\), and \(\mathbf{\delta}_{1}\) represents the displacement vector connecting site \(0\) and site \(1\). The summation with respect to site \(1\) is implicit to simplify the notation. The equations of motion for the two terms on the right-hand side of Eq.
(9) are \[i\omega_{n}\big\langle S_{0}^{z}S_{1}^{+}|S_{j}^{-}\big\rangle_{i\omega_{n}} = J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{z}S_{1}^{z}S_{1+2}^{+}|S_{j}^{-}\big\rangle_{i\omega_{n}}-J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{z}S_{1}^{+}S_{1+2}^{z}|S_{j}^{-}\big\rangle_{i\omega_{n}}+\frac{1}{2}J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{+}S_{2}^{-}S_{1}^{+}|S_{j}^{-}\big\rangle_{i\omega_{n}} \tag{11}\] \[-\frac{1}{2}J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{-}S_{2}^{+}S_{1}^{+}|S_{j}^{-}\big\rangle_{i\omega_{n}}+2\delta_{1,j}\left\langle S_{0}^{z}S_{1}^{z}\right\rangle-\delta_{0,j}\left\langle S_{0}^{-}S_{1}^{+}\right\rangle,\] and \[i\omega_{n}\big\langle S_{0}^{+}S_{1}^{z}|S_{j}^{-}\big\rangle_{i\omega_{n}} = \frac{1}{2}J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{+}S_{1}^{+}S_{1+2}^{-}|S_{j}^{-}\big\rangle_{i\omega_{n}}-\frac{1}{2}J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{+}S_{1}^{-}S_{1+2}^{+}|S_{j}^{-}\big\rangle_{i\omega_{n}}+J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{z}S_{2}^{+}S_{1}^{z}|S_{j}^{-}\big\rangle_{i\omega_{n}} \tag{12}\] \[-J_{\mathbf{\delta}_{2}}\big\langle S_{0}^{+}S_{2}^{z}S_{1}^{z}|S_{j}^{-}\big\rangle_{i\omega_{n}}-\delta_{1,j}\left\langle S_{0}^{+}S_{1}^{-}\right\rangle+2\delta_{0,j}\left\langle S_{0}^{z}S_{1}^{z}\right\rangle.\] Here, \(1+2\) represents the site \(\mathbf{\delta}_{1}+\mathbf{\delta}_{2}\). It is important to note that sites \(1\) and \(2\) can be the same site in this expression. Now we apply the decoupling approximation introduced by Kondo and Yamaji [8]. That is, for example, \[\left\langle S_{0}^{z}S_{1}^{z}S_{2}^{+}|S_{j}^{-}\right\rangle_{i\omega_{n}}\simeq\alpha\langle S_{0}^{z}S_{1}^{z}\rangle\langle S_{2}^{+}|S_{j}^{-}\rangle_{i\omega_{n}}, \tag{13}\] where \(\alpha\) is a parameter to be determined.
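The insertion of the factor \(\alpha+2(1-\alpha)S^{z}\) used to justify this decoupling is exact for spin 1/2, because \(S^{z}S^{+}=\tfrac{1}{2}S^{+}\). A quick matrix check (our illustration):

```python
import numpy as np

# Spin-1/2 matrices: S^z = diag(1/2, -1/2), S^+ raises the spin.
sz = np.diag([0.5, -0.5]).astype(complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)

# For spin 1/2, S^z S^+ = (1/2) S^+, hence
# [alpha + 2(1 - alpha) S^z] S^+ = S^+ for ANY alpha,
# so the factor may be inserted before factorizing the expectation value.
assert np.allclose(sz @ sp, 0.5 * sp)
for alpha in (0.0, 0.7, 1.5):
    inserted = (alpha * np.eye(2) + 2 * (1 - alpha) * sz) @ sp
    assert np.allclose(inserted, sp)
print("identity holds for all tested alpha")
```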
This is understood as follows [8]: \[\left\langle S_{0}^{z}S_{1}^{z}S_{2}^{+}|S_{j}^{-}\right\rangle_{i \omega_{n}} = \left\langle S_{0}^{z}S_{1}^{z}\left[\alpha+2\left(1-\alpha\right) S_{2}^{z}\right]S_{2}^{+}|S_{j}^{-}\right\rangle_{i\omega_{n}} \tag{14}\] \[\simeq \left\langle S_{0}^{z}S_{1}^{z}\left[\alpha+2\left(1-\alpha \right)S_{2}^{z}\right]\right\rangle\left\langle S_{2}^{+}|S_{j}^{-}\right\rangle _{i\omega_{n}}\] \[= \alpha\left\langle S_{0}^{z}S_{1}^{z}\right\rangle\left\langle S_ {2}^{+}|S_{j}^{-}\right\rangle_{i\omega_{n}}.\] After applying this decoupling scheme, we obtain \[i\omega_{n}\mathcal{F}_{j}\left(i\omega_{n}\right) \tag{15}\] \[\simeq \frac{1}{2}\left[\left(1-\delta_{1,2}\right)J_{\mathbf{\delta}_{2}} \alpha c_{1-2}+\delta_{1,2}J_{\mathbf{\delta}_{1}}\right]\mathcal{G}_{j}\left(i \omega_{n}\right)-\frac{1}{2}\left[\left(1-\delta_{1+2,0}\right)J_{\mathbf{\delta }_{2}}\alpha c_{1+2}+\delta_{1+2,0}J_{\mathbf{\delta}_{2}}\right]\mathcal{G}_{j-1} \left(i\omega_{n}\right)\] \[+\frac{1}{2}\left(1-\delta_{1+2,0}\right)J_{\mathbf{\delta}_{2}} \alpha c_{1}\mathcal{G}_{j-1-2}\left(i\omega_{n}\right)-\frac{1}{2}\left(1- \delta_{1,2}\right)J_{\mathbf{\delta}_{2}}\alpha c_{1}\mathcal{G}_{j-2}\left(i \omega_{n}\right)\] \[-\frac{1}{2}\delta_{1+2,0}J_{\mathbf{\delta}_{2}}\mathcal{F}_{j} \left(i\omega_{n}\right)+\frac{1}{2}\delta_{1,2}J_{\mathbf{\delta}_{1}}\mathcal{F} _{j}\left(i\omega_{n}\right)+\left(\delta_{1,j}-\delta_{0,j}\right)c_{1},\] where \[\omega_{\mathbf{k}}^{2} = \frac{1}{2}J_{\mathbf{\delta}_{1}}\left[\left(1-\delta_{\mathbf{\delta}_ {1},\mathbf{\delta}_{2}}\right)J_{\mathbf{\delta}_{2}}\alpha c_{1-2}\right] \tag{16}\] \[- \frac{1}{2}J_{\mathbf{\delta}_{1}}\left[\left(1-\delta_{1+2,0}\right) J_{\mathbf{\delta}_{2}}\alpha c_{1+2}+\delta_{1+2,0}J_{\mathbf{\delta}_{2}}\right]e^{i\mathbf{k} \cdot\mathbf{\delta}_{1}}\] \[- \frac{1}{2}\left(1-\delta_{1,2}\right)J_{\mathbf{\delta}_{2}}\alpha c _{1}e^{i\mathbf{k}\cdot\mathbf{\delta}_{2}}\] \[+ 
\frac{1}{2}\left(1-\delta_{1+2,0}\right)J_{\mathbf{\delta}_{2}}\alpha c _{1}e^{i\mathbf{k}\cdot(\mathbf{\delta}_{1}+\mathbf{\delta}_{2})}.\] Equations (22) and (23) provide us with the general formula. When applying this formula to a specific system, we only need to perform the necessary summations with respect to \(1\) and \(2\), which are implicit in the equations above. This allows us to efficiently compute the Green's function and the corresponding frequency for the given spin system without the need to derive the equations from scratch each time. The correlation functions \(c_{i-j}\) are determined through a self-consistent calculation. The Fourier transform of \(\mathcal{G}_{\mathbf{k}}\left(i\omega_{n}\right)\) leads to \[\mathcal{G}_{\mathbf{k}}\left(\tau\right) = \frac{1}{\beta}\sum_{i\omega_{n}}e^{-i\omega_{n}\tau}\mathcal{G}_{ \mathbf{k}}\left(i\omega_{n}\right) \tag{24}\] \[= \frac{A_{\mathbf{k}}}{2\omega_{\mathbf{k}}}\left(\frac{e^{-\tau\omega_{ \mathbf{k}}}}{e^{-\beta\omega_{\mathbf{k}}}-1}-\frac{e^{\tau\omega_{\mathbf{k}}}}{e^{\beta \omega_{\mathbf{k}}}-1}\right),\] where \[A_{\mathbf{k}}=J_{\mathbf{\delta}_{1}}\left(e^{i\mathbf{k}\cdot\mathbf{\delta}_{1}}-1\right)c _{1} \tag{25}\] The correlation function is given by \[c_{j}=2\left\langle S_{0}^{+}S_{j}^{-}\right\rangle=\frac{1}{N}\sum_{\mathbf{k}}e^ {i\mathbf{k}\cdot\mathbf{R}_{j}}\frac{A_{\mathbf{k}}}{\omega_{\mathbf{k}}}\coth\left(\frac{ \beta\omega_{\mathbf{k}}}{2}\right). \tag{26}\] If we introduce a single decoupling parameter \(\alpha\) at Eq. (13), the total number of parameters to be determined, including \(\alpha\), is equal to the number of self-consistent equations that need to be solved. This can be achieved by utilizing the identity \(c_{0}=1\) in the absence of long-range order. However, if additional decoupling parameters are introduced, we must impose extra conditions or constraints on the system to guarantee a unique solution. 
These additional conditions can be other theoretical results and/or experimental findings. We do not introduce additional decoupling parameters, and employ a single decoupling parameter. Consequently, the system of equations can be solved independently without requiring any extra conditions. This self-contained nature of the formalism streamlines the solution process, making it more efficient and eliminating the need for additional constraints or conditions, which is particularly advantageous in practical applications. ## III \(d\)-dimensional hypercubic lattice In this section, we shall derive the formula for the hypercubic lattice with a spatial dimension \(d\). We make the assumption that \(J_{ij}=J\) if sites \(i\) and \(j\) are nearest neighbors, and \(J_{ij}=0\) otherwise. From Eq. (23), we obtain \[\omega_{\mathbf{k}}^{2}=\frac{1}{2}J^{2}z\left(1-\gamma_{\mathbf{k}}\right)\left[1- \left(1+z\gamma_{\mathbf{k}}\right)a_{1}+\left(z-2\right)a_{11}+a_{2}\right], \tag{27}\] where \(z=2d\), \(a_{1}=\alpha c_{1}\) etc., and \[\gamma_{\mathbf{k}}=\frac{1}{d}\sum_{\mu=1}^{d}\cos k_{\mu}. \tag{28}\] Here, we set the lattice constant to unity. The Green's function is given by \[\mathcal{G}_{\mathbf{k}}\left(i\omega_{n}\right)=-\frac{Jz\left(1-\gamma_{\mathbf{k}} \right)c_{1}}{\left(i\omega_{n}\right)^{2}-\omega_{\mathbf{k}}^{2}}. \tag{29}\] We note that this result includes the one-dimensional chain [8] and the two-dimensional square lattice [11]. Now we focus on the case of \(d=3\). 
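The structure of the dispersion (27) is easy to probe numerically. With illustrative (not self-consistently determined) values of \(a_{1}\), \(a_{11}\), and \(a_{2}\), one can check, for instance, that the \((1-\gamma_{\mathbf{k}})\) prefactor makes the \(\mathbf{k}=0\) mode gapless in any dimension, while the gap at \(\mathbf{Q}=(\pi,\pi,\pi)\) stays finite away from the transition. A sketch under these assumed parameter values:

```python
import numpy as np

def gamma(k):
    """gamma_k = (1/d) sum_mu cos(k_mu), as in Eq. (28)."""
    return np.mean(np.cos(k))

def omega2(k, J=1.0, a1=-0.2, a11=0.05, a2=0.05):
    """Squared dispersion of Eq. (27); a1, a11, a2 are illustrative values
    (a1 = alpha*c1 is negative for antiferromagnetic nearest-neighbor
    correlations), not solutions of the self-consistent equations."""
    d = len(k)
    z = 2 * d
    g = gamma(k)
    return 0.5 * J**2 * z * (1 - g) * (1 - (1 + z * g) * a1 + (z - 2) * a11 + a2)

d = 3
k0 = np.zeros(d)
kQ = np.pi * np.ones(d)
print(omega2(k0))       # 0.0: the k = 0 mode is gapless
print(omega2(kQ) > 0)   # True: the gap at Q is finite for these parameters
```

Long-range order sets in when the gap at \(\mathbf{Q}\) closes, i.e. when the bracket in Eq. (27) vanishes at \(\gamma_{\mathbf{Q}}=-1\).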
The self-consistent equations are given by \[1=-\frac{zc_{1}J}{N}\sum_{\mathbf{k}}\frac{1-\gamma_{\mathbf{k}}}{\omega_{\mathbf{k}}} \coth\bigg{(}\frac{\beta\omega_{\mathbf{k}}}{2}\bigg{)}, \tag{30}\] \[c_{1}=-\frac{zc_{1}J}{N}\sum_{\mathbf{k}}\gamma_{\mathbf{k}}\frac{1-\gamma_{\mathbf{k}}}{ \omega_{\mathbf{k}}}\coth\bigg{(}\frac{\beta\omega_{\mathbf{k}}}{2}\bigg{)}, \tag{31}\] \[\left(z-2\right)c_{11}+c_{2} = -\frac{zc_{1}J}{N}\sum_{\mathbf{k}}\left(z\gamma_{\mathbf{k}}^{2}-1\right) \tag{32}\] \[\times\frac{1-\gamma_{\mathbf{k}}}{\omega_{\mathbf{k}}}\coth\bigg{(}\frac {\beta\omega_{\mathbf{k}}}{2}\bigg{)},\] where \(c_{11}\) represents the correlation function between sites with a displacement vector of \((1,1,0)\). For the cubic lattice, \(c_{11}\) are the same for \((0,1,1)\), \((1,0,1)\), \((1,-1,0)\), and so on. Antiferromagnetic long-range order occurs when \(\omega_{\bf Q}\) vanishes, where \({\bf Q}=(\pi,\pi,\pi)\). We observe that Eqs. (30) to (32) exhibit convergence when we replace summation with an integral over the Brillouin zone. Conversely, divergence occurs for \(d=1\) and \(d=2\), indicating the absence of long-range order at \(T=0\) in these dimensions. To enhance the efficiency of evaluating Eqs. (30) to (32), we apply the integral formula primarily in the vicinity of \({\bf q}={\bf Q}\), employing a simple summation for other points in the Brillouin zone. Our result yields a transition temperature for antiferromagnetic long-range order of \(k_{\rm B}T_{c}/J=1.0689\), in good agreement with the quantum Monte Carlo result [9] of \(k_{\rm B}T_{c}/J=0.946\pm 0.001\). We note that our estimation of the transition temperature slightly exceeds the previous result [12] of \(k_{\rm B}T_{c}/J=1.039\), which was derived from the point of disappearance of the sublattice magnetization. ## IV The \(J_{1}\)-\(J_{2}\) model In the investigation of quantum spin liquid states, the influence of frustration effects is significant. 
To assess the viability of the Green's function approach for addressing frustrated spin systems, we turn our attention to the \(J_{1}\)-\(J_{2}\) model on the square lattice [13; 14; 15; 16; 17; 18; 19; 20]. In this model, \(J_{1}\) denotes the interaction between nearest neighbors, while \(J_{2}\) denotes the interaction between next-nearest neighbors. The system exhibits strong frustration when \(J_{1}\sim J_{2}\). By employing the derived formula above, we obtain the following results: \[\omega_{\mathbf{k}}^{2} = 2J_{1}^{2}\left[\left(1+a_{2}+2a_{11}\right)-\left(1+4\gamma_{\bm {k}}\right)a_{1s}\right]\left(1-\gamma_{\mathbf{k}}\right)+2J_{1}^{2}\gamma_{\bm {k}}^{a}\left(1+4\gamma_{\mathbf{k}}\right)a_{1a} \tag{33}\] \[+2J_{2}^{2}\left(1-\gamma_{\mathbf{k}}^{\prime}\right)\left[\left(1+a _{22}+2a_{2}\right)-a_{11}\left(1+4\gamma_{\mathbf{k}}^{\prime}\right)\right]\] \[+4J_{1}J_{2}\left[\left(1-\gamma_{\mathbf{k}}\right)+\left(1-\gamma_ {\mathbf{k}}^{\prime}\right)-2\gamma_{\mathbf{k}}^{\prime}\left(1-\gamma_{\mathbf{k}} \right)\right]a_{1s}+4J_{1}J_{2}\gamma_{\mathbf{k}}^{a}\left(1+2\gamma_{\mathbf{k}}^{ \prime}\right)a_{1a}\] \[+4J_{1}J_{2}\left[\left(1-\gamma_{\mathbf{k}}\right)+\left(1-\gamma_ {\mathbf{k}}^{\prime}\right)\right]a_{21s}-4J_{1}J_{2}\gamma_{\mathbf{k}}^{a}a_{21a}-8J _{1}J_{2}\gamma_{\mathbf{k}}\left(1-\gamma_{\mathbf{k}}^{\prime}\right)a_{11},\] where \(\gamma_{\mathbf{k}}=\left(\cos k_{x}+\cos k_{y}\right)/2\), \[\gamma_{\mathbf{k}}^{a}=\cos k_{x}\cos k_{y}, \tag{34}\] and \(a_{1s}=\left(a_{1x}+a_{1y}\right)/2\), \(a_{1a}=\left(a_{1x}-a_{1y}\right)/2\), \(a_{21s}=\left(a_{21x}+a_{21y}\right)/2\), \(a_{21a}=\left(a_{21x}-a_{21y}\right)/2\). \(a_{1x}\) (\(a_{1y}\)) refers to the nearest neighbor correlation function in the \(x\) (\(y\)) direction. Similarly, \(a_{21x}\) (\(a_{21y}\)) denotes the correlation function between the sites separated by the displacement vector \(\left(2,1\right)\) (\(\left(1,2\right)\)). 
Here, we consider the possibility of nematic correlation [21] through the variables \(a_{1a}\) and \(a_{21a}\). When \(a_{1a}\) and \(a_{21a}\) are non-vanishing, that indicates the presence of nematic correlations in the system. We note that Eq. (34), when excluding these nematic correlations, agrees with the previous result [22]. The self-consistent equations are given by \[\alpha = a_{1s}I_{00}^{(0)}+a_{1a}I_{00}^{(1)}+a_{11}I_{00}^{(2)}, \tag{36}\] \[a_{1s} = a_{1s}I_{1s}^{(0)}+a_{1a}I_{1s}^{(1)}+a_{11}I_{1s}^{(2)},\] (37) \[a_{1a} = a_{1s}I_{1a}^{(0)}+a_{1a}I_{1a}^{(1)}+a_{11}I_{1a}^{(2)},\] (38) \[a_{11} = a_{1s}I_{11}^{(0)}+a_{1a}I_{11}^{(1)}+a_{11}I_{11}^{(2)},\] (39) \[a_{2} = a_{1s}I_{20}^{(0)}+a_{1a}I_{20}^{(1)}+a_{11}I_{20}^{(2)},\] (40) \[a_{21s} = a_{1s}I_{21s}^{(0)}+a_{1a}I_{21s}^{(1)}+a_{11}I_{21s}^{(2)},\] (41) \[a_{21a} = a_{1s}I_{21a}^{(0)}+a_{1a}I_{21a}^{(1)}+a_{11}I_{21a}^{(2)},\] (42) \[a_{22} = a_{1s}I_{22}^{(0)}+a_{1a}I_{22}^{(1)}+a_{11}I_{22}^{(2)}, \tag{43}\] where \[I_{\eta}^{(\ell)}=-\frac{8J^{(\ell)}}{\beta N}\sum_{\mathbf{k}}f_{\eta}\left(\bm {k}\right)\sigma_{\mathbf{k}}^{(\ell)}g\left(\frac{\beta\omega_{\mathbf{k}}}{2}\right), \tag{44}\] with \(J^{(0)}=J^{(1)}=J_{1}\), \(J^{(2)}=J_{2}\), \(g(x)=x\coth x\), and \[\sigma_{\mathbf{k}}^{(0)} = \frac{1-\gamma_{\mathbf{k}}}{\omega_{\mathbf{k}}^{2}}, \tag{45}\] \[\sigma_{\mathbf{k}}^{(1)} = -\frac{\gamma_{\mathbf{k}}^{2}}{\omega_{\mathbf{k}}^{2}},\] (46) \[\sigma_{\mathbf{k}}^{(2)} = \frac{1-\gamma_{\mathbf{k}}^{2}}{\omega_{\mathbf{k}}^{2}}. 
\tag{47}\] The function \(f_{\eta}\left(\mathbf{k}\right)\) are defined by \[f_{00}\left(\mathbf{k}\right) = 1, \tag{48}\] \[f_{1s}\left(\mathbf{k}\right) = \gamma_{\mathbf{k}},\] (49) \[f_{1a}\left(\mathbf{k}\right) = \gamma_{\mathbf{k}}^{a},\] (50) \[f_{11}\left(\mathbf{k}\right) = \gamma_{\mathbf{k}}^{\prime},\] (51) \[f_{20}\left(\mathbf{k}\right) = 4\gamma_{\mathbf{k}}^{2}-2\gamma_{\mathbf{k}}^{\prime}-1,\] (52) \[f_{21s}\left(\mathbf{k}\right) = \gamma_{\mathbf{k}}\left(2\gamma_{\mathbf{k}}^{\prime}-1\right),\] (53) \[f_{21a}\left(\mathbf{k}\right) = \gamma_{\mathbf{k}}^{a}\left(2\gamma_{\mathbf{k}}^{\prime}+1\right),\] (54) \[f_{22}\left(\mathbf{k}\right) = 4\gamma_{\mathbf{k}}^{\prime}-8\gamma_{\mathbf{k}}^{2}+4\gamma_{\mathbf{k}}^ {\prime}+1. \tag{55}\] We solve the self-consistent equations (36) to (43) as follows: Initially, we solve the equations for \(r=0\), resulting in a reduced set of equations that simplifies to a single equation. This equation can be solved using the bisection method. Subsequently, we solve the differential equations for \(a_{ij}\), employing either the parameter \(r\) or the temperature \(T\) as the variable. Notably, it is unnecessary to compute \(\alpha\) at intermediary steps. The change in \(\alpha\) concerning \(r\) or \(T\) can be calculated at the end of the computation. Figure 1 presents the results at temperature \(T=0\). The correlation functions are plotted as a function of the ratio \(r=J_{2}/J_{1}\). It is worth noting that there exists a symmetry between the cases \(r=0\) and \(r=1\), as previously observed [22], due to the exchange between \(J_{1}\) and \(J_{2}\). Additionally, we observe that the nematic correlations \(c_{1a}=a_{1a}/\alpha\) and \(c_{21a}=a_{21a}/\alpha\) are both zero within the numerical error. The formula derived in this study was obtained by employing the decoupling scheme at the second-order equation of motion. 
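The solution strategy just described — bisection for the starting point at \(r=0\), then continuation of the root as \(r\) (or \(T\)) is stepped — can be sketched generically. The residual below is a toy stand-in with a known root, not the actual system of Eqs. (36)-(43):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection (root must be bracketed)."""
    flo = f(lo)
    assert flo * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def residual(x, r):
    # Toy self-consistency condition; its nontrivial root at r = 0 is x ~ 0.957.
    return math.tanh(2 * x) - x - r

# Continuation: reuse the previous root to bracket the next one as r is stepped.
x = 1.0
roots = []
for r in [0.0, 0.02, 0.04]:
    x = bisect(lambda t: residual(t, r), x - 0.4, x + 0.4)
    roots.append(x)
print(roots)  # the root drifts smoothly as r grows
```

The same skeleton applies when the parameter is the temperature instead of \(r\); only the residual changes.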
It is important to note that the resulting self-consistent equations at \(T=0\) do not directly allow us to discuss the presence of long-range order. However, within this framework, we can investigate the short-range order. Based on our analysis of the self-consistent equations, we find no indication of short-range nematic correlation in the system. The spin-wave dispersion at \(T=0\) is shown in Fig. 2. There is a systematic change in the energy dispersion as the values of \(J_{1}\) and \(J_{2}\) are varied. It is important to note that there is no need to assume any long-range order to obtain the spin-wave dispersion. However, a discrepancy arises when considering the gap at \((\pi,\pi)\) and \((\pi,0)\). When \(J_{1}\) is finite and \(J_{2}=0\), the gap at \((\pi,\pi)\) should vanish, and similarly, when \(J_{1}=0\) and \(J_{2}\) is finite, the gap at \((\pi,0)\) should also vanish. However, the decoupling approximation at the second-order equation of motion, as mentioned earlier, leads to a finite gap at these points, contradicting the expected behavior. Employing the decoupling scheme at the third-order equation of motion has the potential to yield improvements and could resolve the discrepancy observed in the gap at \((\pi,\pi)\) and \((\pi,0)\). Such investigations are left for future research. One can also compute the energy dispersion at finite temperature without long-range order but with short-range order.

Figure 1: (Color online) Spin correlation functions as a function of \(r=J_{2}/J_{1}\) at \(T=0\). There is symmetry between the correlation functions at \(r=0\) and those at \(r=1\). Specifically, \(c_{1}\), \(c_{11}\), and \(c_{2}\) at \(r=0\) correspond to \(c_{11}\), \(c_{2}\), and \(c_{22}\) at \(r=1\), respectively. The results confirm this symmetry. Additionally, the nematic correlations, \(c_{1a}\) and \(c_{21a}\), are found to be zero within the numerical error.
The advantage of this formula is that it enables the computation of dynamical quantities, which can be experimentally verified. We expect the Green's function approach to become accurate at high temperatures due to the short spin-spin correlation length. Therefore, in principle, we can utilize the high-temperature expansion result [23] to set the initial values of the correlation functions, \(c_{ij}\), when solving the self-consistent equations. This procedure works very well when the frustration effect is weak. However, extremely precise numerical computations are required [24] when the frustration effect is significant. To obtain reliable results for the \(J_{1}\)-\(J_{2}\) model, there are two possible approaches. One option is to perform a three-dimensional calculation by considering a quasi-two-dimensional system. Such a calculation was carried out [25] using several decoupling parameters and incorporating input from exact diagonalization while introducing an approximate expression for the ground state energy. If we introduce one decoupling parameter, there is no need to consider inputs from other methods. Another approach involves implementing the decoupling approximation at the third-order equation of motion. These aspects remain to be explored in future research. ## V Summary To summarize, we have presented a generalized formulation of the Green's function method that can be applied to diverse antiferromagnetic Heisenberg spin systems. We have discussed the hypercubic lattice and the \(J_{1}\)-\(J_{2}\) model as specific applications. We have derived the spin-wave dispersion formula for the hypercubic lattice in any spatial dimension. For the cubic lattice case, the Green's function approach provides a good estimation for the transition temperature. For the case of the \(J_{1}\)-\(J_{2}\) model, we computed the correlation functions at zero temperature and spin-wave dispersion. 
Although we have included the effect of nematic correlation, there is no signature of the nematic order. We note that the Green's function approach can be generalized to incorporate other interactions and higher spin values. The key features of this approach are that it does not need to assume specific correlations and enables the investigation of finite temperature and dynamical properties. Exploring higher-order approximations in the decoupling scheme is promising, and it may provide more accurate results and a better understanding of the physical behavior of the spin system. Such investigation can open up new insights into the spin system and pave the way for more accurate and comprehensive theoretical descriptions.
2309.11041
Polarization-based cyclic weak value metrology for angular velocity measurement
Weak measurement has been proven to amplify the detection of changes in meters while discarding most photons due to the low probability of post-selection. Previous power-recycling schemes enable the failed post-selection photons to be repeatedly selected, thus overcoming the inefficient post-selection and increasing the precision of detection. In this study, we focus on the polarization-based weak value angular-velocity measurement and introduce three cyclic methods to enhance the accuracy of detecting time shift in a Gaussian beam: power recycling, signal recycling, and dual recycling schemes. By incorporating one or two partially transmitting mirrors into the system, both the power and signal-to-noise ratio (SNR) of the detected light are substantially enhanced. Compared to non-polarization schemes, polarization-based approaches offer several advantages, including lower optical loss, unique cyclic directions, and a wider optimal region. These features effectively reduce crosstalk among different light paths and theoretically eliminate the walk-off effect, thus yielding improvements in both theoretical performance and application.
Zi-Rui Zhong, Yue Chen, Wei-Jun Tan, Xiang-Ming Hu, Qing-Lin Wu
2023-09-20T03:39:52Z
http://arxiv.org/abs/2309.11041v3
# Polarization-based cyclic weak value metrology for angular velocity measurement ###### Abstract The weak value has been proved to amplify the detected changes of the meter at the cost of power due to post-selection. Previous power-recycling schemes enable the failed post-selection photons to be reselected repeatedly, thus surpassing the upper noise limit and improving the precision of interferometric systems. Here we introduce three cyclic methods to improve the sensitivity of polarization-based weak-value angular velocity measurement: the power-, signal- and dual-recycling schemes. By inserting one or two partially transmitting mirrors into the system, both the power and precision of the detected signals are greatly enhanced, and the dual-recycling scheme has a wider optimal region than the power- or signal-recycling schemes. Compared to non-polarization schemes, polarization-based schemes enjoy lower optical loss and unique cyclic directions. These reduce the crosstalk among different light paths and, theoretically, eliminate the walk-off effect, thus excelling in both theoretical performance and application. ## I Introduction Since it was first introduced by Aharonov, Albert and Vaidman (AAV) in Ref. [1], weak measurement has shown numerous potential uses in precise measurements. Unlike the classical (or strong) measurements set forth by von Neumann [2], the coupling between the probe and the system is very weak, so the measurement disturbance guaranteed by the Heisenberg limit is invalidated [3]. Therefore, it can be used to reconsider many interesting quantum phenomena such as Hardy's paradox [4; 5; 6; 7], the three-box problem [8; 9; 10] and quantum Cheshire cats [11; 12; 13; 14; 15]. By preparing appropriate pre- and post-selected states, a weak measurement enables a so-called 'weak value' to record the information of the weak interaction process. 
Generally, the weak value is defined as \(A_{w}=\left\langle f\right|\hat{A}\left|i\right\rangle/\left\langle f\right|i\rangle\), where \(\left|i\right\rangle\), \(\left|f\right\rangle\) are the pre- and post-selected states and \(\hat{A}\) is the measured observable. Because \(\left\langle f\middle|i\right\rangle\) appears in the denominator of the formula, \(A_{w}\) can be very large if \(\left|i\right\rangle\) and \(\left|f\right\rangle\) are nearly orthogonal. Thus, it has the potential to detect many small physical effects such as the spin Hall effect [16; 17; 18], the Goos-Hanchen shift [19; 20], beam deflection [21], velocity [22], phase shifts [23; 24; 25], temperature [26], angular velocity [27; 28; 29] and resonance [30], to name a few. However, the weak-value-amplification (WVA) effect comes at the sacrifice of the post-selection probability, which is defined as \(P=\left|\left\langle f\middle|i\right\rangle\right|^{2}\). This results in only a small part of the incident light being detected while most of the rest is discarded. To solve this problem, the power-recycling technique [31; 32; 33; 34; 35] was introduced by placing a partially transmitting mirror (PTM) at the bright port of the interferometer. It reuses the failed post-selection photons and permits all the input light to be detected under ideal experimental conditions. Besides, this enhancement enables the amplification of the SNR itself by the large weak-value factor, thus breaking the upper limits of classical measurements. A similar conclusion is obtained for the signal-recycling weak measurement [36], which works by placing the PTM at the dark port of the interferometer. In the power- and signal-recycling systems above, the failed and successfully post-selected photons are reused, respectively. Furthermore, these two parts can be combined in one system for further improvement, denoted the dual-recycling scheme [37; 38; 39; 40]. 
The previous dual-recycled interferometric WVA setup obtains a large precision improvement while sacrificing some of the WVA effect of the pointer due to the walk-off effect. In addition, the path of cyclic photons inside the interferometer is intricate, resulting in inevitable crosstalk, which increases the system loss. Here we apply the above-mentioned cyclic schemes to a polarization-based WVA setup based on the angular velocity measurement of [41]. Compared with the non-polarization cyclic schemes, we substitute a polarization beam splitter (PBS) for the beam splitter (BS). This simplifies the direction of the light paths, which is only clockwise \(\circlearrowright\) in this article, and reduces the optical loss. In addition, this single direction permits a filter to refresh all cyclic photons before their next weak interaction, thus eliminating the walk-off effect. ## II Standard WVA setup We first review the standard WVA setup for angular velocity measurement in [41]. As shown in Fig. 1(a), a non-Fourier-limited Gaussian pulse \(I_{0}\left(t\right)=\left(N^{2}/2\pi\tau^{2}\right)^{1/2}exp\left(-t^{2}/2\tau^{2}\right)\), where \(N\) is the number of photons and \(\tau\) is the length of the pulse, is sent to a polarization-dependent system. The first Glan prism (G1) combined with the half-wave plate (HWP) provides the pre-selection, where the axis of G1 is vertical and the angle between G1 and the HWP is \(\phi_{1}\). The second Glan prism (G2), whose original axis orientation is horizontal, provides the post-selected state. The weak interaction is expressed by a unitary operator \(\hat{U}_{w}=exp\left(i2\omega t\hat{A}\right)\), where \(\omega\) is the angular velocity of the rotating HWP and \(\hat{A}\) is the Hermitian operator \(\hat{A}=\left|L\right\rangle\left\langle L\right|-\left|R\right\rangle\left\langle R\right|\) (\(\left|L\right\rangle,\ \left|R\right\rangle\) are the left and right circularly polarized states). 
Here we introduce a unitary operator \(\hat{U}_{\phi}=exp\left(i2\phi_{1}\hat{A}\right)\) to represent the polarization rotation \(\phi_{1}\) produced by the piezo-driven half-wave plate (PHWP). In this way, the pre-selected state of the system is \(\left|\psi_{pre}\right\rangle=\hat{U}_{\phi}\left|V\right\rangle=\frac{i}{\sqrt{2}}\left[exp\left(-i2\phi_{1}\right)\left|R\right\rangle-exp\left(i2\phi_{1}\right)\left|L\right\rangle\right]\) and the post-selected state is \(\left|\psi_{pos}\right\rangle=cos\phi_{2}\left|H\right\rangle-sin\phi_{2}\left|V\right\rangle=\frac{i}{\sqrt{2}}\left[exp\left(-i2\phi_{2}\right)\left|R\right\rangle+exp\left(i2\phi_{2}\right)\left|L\right\rangle\right]\). We assume \(\left|\varphi_{0}\right\rangle\) is the initial state of the probe. Therefore, the intensity of the detected light is given by \[\begin{split} I_{d}\left(t\right)&=\left|\left\langle\psi_{pos}\right|\hat{U}_{w}\left|\psi_{pre}\right\rangle\otimes\left|\varphi_{0}\right\rangle\right|^{2}\\ &\approx\frac{N}{\sqrt{2\pi\tau^{2}}}sin^{2}\phi\,exp\left[-\frac{1}{2\tau^{2}}\left(t-\frac{4\omega\tau^{2}}{\phi}\right)^{2}\right],\end{split} \tag{1}\] where we introduce the angle \(\phi=2\phi_{1}-\phi_{2}\) and assume \(2\omega\tau\ll\phi\ll 1\). The corresponding weak value is given by \(A_{w}=\left\langle\psi_{pos}\right|\hat{A}\left|\psi_{pre}\right\rangle/\left\langle\psi_{pos}|\psi_{pre}\right\rangle\), which is related to the time shift. Compared with the incident light, the time shift induced by the PHWP is \(\delta t=\frac{4\omega\tau^{2}}{\phi}=4\omega\tau^{2}\left|A_{w}\right|\). Based on Fisher information theory [42; 43], the Fisher information of the time shift \(\delta t\) is \(F\left(\delta t\right)=\int dtI_{d}\left|\frac{d}{d\delta t}lnI_{d}\right|^{2}\approx\frac{N\phi^{2}}{\tau^{2}}\). 
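The quoted approximation \(F\left(\delta t\right)\approx N\phi^{2}/\tau^{2}\), and the uncertainty bound that follows from it, can be checked by direct numerical integration of the Fisher information for the Gaussian signal above. The following Python sketch is only a sanity check; the parameter values are illustrative and not taken from the experiment.

```python
import numpy as np

# Illustrative parameters (not from the experiment): photon number N,
# pulse length tau, post-selection angle phi, angular velocity omega.
N, tau, phi, omega = 1e6, 1e-6, 0.1, 1.0

t = np.linspace(-8 * tau, 8 * tau, 200001)
dt = t[1] - t[0]
shift = 4 * omega * tau**2 / phi            # amplified time shift, delta t

# Detected intensity of Eq. (1), centred at the time shift.
I_d = (N / np.sqrt(2 * np.pi * tau**2)) * np.sin(phi)**2 \
    * np.exp(-(t - shift)**2 / (2 * tau**2))

# F = int dt I_d |d ln I_d / d(shift)|^2, using d ln I_d/d(shift) = (t-shift)/tau^2.
F = np.sum(I_d * ((t - shift) / tau**2)**2) * dt
print(F / (N * phi**2 / tau**2))            # close to 1 (exactly sin^2(phi)/phi^2)

dw_crb = phi / (4 * tau**2) / np.sqrt(F)    # Cramer-Rao bound on omega
print(dw_crb * 4 * np.sqrt(N) * tau)        # close to 1
```

Up to corrections of order \(\phi^{2}\) coming from \(\sin\phi\approx\phi\), the bound \(\phi/(4\tau^{2}\sqrt{F})\) reproduces \(1/(4\sqrt{N}\tau)\).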
So the minimum uncertainty of the angular velocity determined by the Cramér-Rao bound (CRB) satisfies \[\Delta\omega_{CRB}=\frac{\phi}{4\tau^{2}}\Delta\left(\delta t\right)\approx\frac{1}{4\sqrt{N}\tau}. \tag{2}\] Figure 1: (Color online) (a) Schematic of the weak-value-based angular velocity measurement. A laser wave is generated by an acousto-optic modulator (AOM) and enters a polarization-dependent angular velocity measurement system consisting of two Glan prisms (G1 and G2) and the PHWP. The small angular velocity \(\omega\) is induced by the PHWP and finally measured by the detector. (b) The power-recycling scheme. The PBS, which distinguishes the polarization between H and V, is combined with the PTM to reuse the failed post-selection photons repeatedly. Two QWPs near the PTM provide the initial polarization and rotate the polarization direction of the cyclic light back to V. The filter in front of the PHWP refreshes the beam profile on each pass. (c) The signal-recycling scheme. The PTM at the output is combined with the previous system to form a signal-recycling cavity, thus improving the detected signal. (d) The dual-recycling scheme. The power- and signal-recycling PTMs are combined in one scheme to further enhance the precision of detection. QWP: quarter-wave plate. HWP: half-wave plate. PBS: polarization beam splitter. PHWP: piezo-driven half-wave plate. PTM: partially transmitting mirror. H: horizontal. V: vertical. Therefore, the corresponding SNR is given by \[SNR=\frac{\omega}{\Delta\omega}=4\omega\tau\sqrt{N}=\frac{\sqrt{N}\phi}{\tau}\delta t. \tag{3}\] ## III Power-recycling The power-recycled weak-value setup is shown in Fig. 1(b). The initial state \(\left|V\right\rangle\) is provided by the Glan prism (G) and two QWPs, and we use the combination 'HWP1 PBS HWP2' to replace G2 for the post-selection. 
The PTM, whose reflection and transmission coefficients are \(r\) and \(p\) \(\left(r^{2}+p^{2}=1\right)\), is placed between two QWPs, thus reflecting the failed post-selected light while rotating the light polarization to \(\left|V\right\rangle\). In this angular velocity measurement scheme, the initial light state can be expressed as \(\left|\,\varphi_{0}\right\rangle=\sqrt{I_{0}\left(t\right)}\left|\,\alpha\right\rangle=\left(N^{2}/2\pi\tau^{2}\right)^{1/4}exp\left(-t^{2}/4\tau^{2}\right)\left|\,\alpha\right\rangle,\) where \(\left|\,\alpha\right\rangle\) is a coherent light state. Here we define two orthogonal states \(\left|\,\psi_{1}\right\rangle\) and \(\left|\,\psi_{2}\right\rangle\) to represent the input and output system states, where \(\left|\,\psi_{1}\right\rangle=\left|V\right\rangle=\frac{i}{\sqrt{2}}\left(\left|R\right\rangle-\left|L\right\rangle\right)\) and \(\left|\,\psi_{2}\right\rangle=\left|H\right\rangle=\frac{1}{\sqrt{2}}\left(\left|R\right\rangle+\left|L\right\rangle\right)\). Post-selected by the input and output ends, the meter states become \(\left|\,\varphi_{ref}\right\rangle=\left\langle V\right|\hat{U}_{w}\hat{U}_{\phi}\left|V\right\rangle\left|\,\varphi_{0}\right\rangle\) and \(\left|\,\varphi_{out}\right\rangle=\left\langle H\right|\hat{U}_{w}\hat{U}_{\phi}\left|V\right\rangle\left|\,\varphi_{0}\right\rangle\), respectively. This produces two measurement operators \(M_{11}=\left\langle\psi_{1}\right|\hat{U}_{w}\hat{U}_{\phi}\left|\psi_{1}\right\rangle=\cos\left(\phi+2\omega t\right)\) and \(M_{12}=\left\langle\psi_{2}\right|\hat{U}_{w}\hat{U}_{\phi}\left|\psi_{1}\right\rangle=\sin\left(\phi+2\omega t\right)\). We introduce the non-unitary operator \(\hat{L}=\sqrt{1-\gamma}\), where \(\gamma\) is the single-pass power loss, to express the loss from optical imperfections in one round trip. Assuming the length of one traversal is \(l_{cav}\), the pulse transit time per traversal is given by \(t_{cav}=2l_{cav}/c\). 
Generally, both the measurement operators and the meter state depend on the number of traversals. For example, \(M_{11}\) should be written as \(M_{11}^{\,\,\,\,n}=\cos\left[\phi+2\omega\left(t-nt_{cav}\right)\right]\). Ref. [35] proved that this change is small and only induces a constant delay, which can be eliminated. Therefore, with the resonance cavity, the amplitude of the detected signal is given by the sum of the amplitudes from all traversal numbers, \[\left|\,\varphi_{d}\right\rangle_{pow}=pM_{12}\sum_{n=0}^{\infty}\left(rLM_{11}\right)^{n}\left|\,\varphi_{0}\right\rangle \tag{4}\] It is the sum of a convergent series, so the summation can be truncated at a maximum traversal number \(n_{max}\). Therefore, the formula above can be simplified as \[\left|\,\varphi_{d}\right\rangle_{pow} =pM_{12}\sum_{n=0}^{n_{max}}\left(rLM_{11}\right)^{n}\left|\,\varphi_{0}\right\rangle \tag{5}\] \[\approx\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}exp\left(-\frac{t^{2}}{4\tau^{2}}\right)\frac{p\sin\left(\phi+2\omega t\right)}{1-rL\cos\left(\phi+2\omega t\right)}\] Next, we perform a Taylor expansion of the function \(f\left(t\right)=p\sin\left(\phi+2\omega t\right)/\left[1-rL\cos\left(\phi+2\omega t\right)\right]\) and make the approximation \(f\left(t\right)\approx f\left(0\right)+tf^{\prime}\left(0\right)\approx f\left(0\right)exp\left(f^{\prime}\left(0\right)t/f\left(0\right)\right)\). Then the amplitude of the detected state is \[\left|\,\varphi_{d}\right\rangle_{pow}\approx A\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}\sin\left(\phi+2\omega t\right)exp\left[-\frac{\left(t-\delta t_{p}\right)^{2}}{4\tau^{2}}\right] \tag{6}\] where \[A=\frac{p}{1-r\sqrt{1-\gamma}\cos 2\phi} \tag{7}\] and \[\delta t_{p}=\frac{2\omega\tau^{2}\left(\cos\phi-r\sqrt{1-\gamma}\right)}{\sin\phi\left(1-r\sqrt{1-\gamma}\cos 2\phi\right)}. \tag{8}\] Due to the walk-off effect, the time shift changes from \(\delta t\) to \(\delta t_{p}\). 
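The geometric-series truncation in Eqs. (4)-(5) is easy to verify numerically. In the sketch below, \(r\), \(\gamma\) and the argument \(\theta=\phi+2\omega t\) take illustrative values (not the experimental ones); the partial sums converge rapidly to the closed form:

```python
import numpy as np

# Numerical check of the truncated sum in Eqs. (4)-(5).
# r, gamma and theta = phi + 2*omega*t are illustrative values only.
r, gamma, theta = 0.9, 0.1, 0.1
L = np.sqrt(1 - gamma)                  # single-pass loss factor
p = np.sqrt(1 - r**2)                   # PTM transmission coefficient
M11, M12 = np.cos(theta), np.sin(theta)

# Partial sums p*M12*sum_n (r*L*M11)^n for n_max = 0..199.
partial = p * M12 * np.cumsum((r * L * M11) ** np.arange(200))
closed = p * M12 / (1 - r * L * M11)    # closed form appearing in Eq. (5)

# With r*L*M11 ~ 0.85, a couple of hundred traversals reproduce the
# closed form to near machine precision.
print(closed, partial[50], partial[-1])
```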
But if we place a filter in front of the PHWP, then each time the light is reflected by the PTM it passes through the filter and is projected onto \(\left|\,\varphi_{0}\right\rangle\). This leaves the pre-filter state as \(\left|\,\varphi\prime\right\rangle_{pow}=M_{11}\left|\,\varphi_{0}\right\rangle/\sqrt{\left|M_{11}\left|\,\varphi_{0}\right\rangle\right|^{2}}\). So the probability of surviving the filter is \[p_{f}=\left|\left\langle\varphi_{0}|\varphi\prime\right\rangle\right|^{2}=\frac{\cos^{2}\phi}{\sinh\left(4\omega^{2}\tau^{2}\right)+\cos^{2}\phi\,e^{-4\omega^{2}\tau^{2}}} \tag{9}\] \[\approx 1-\phi^{2}\left(4\omega^{2}\tau^{2}\right)-\left(4\omega^{2}\tau^{2}\right)^{2}/2+\cdots,\] where we make the approximation in the weak-value range, \(2\omega\tau\ll\phi\ll 1\). In this way, the time shift is refreshed every cycle, eliminating the walk-off effect while adding a minimum 'filter' loss \(\gamma_{min}\approx 4\omega^{2}\tau^{2}\phi^{2}\) to the system [31]. Therefore, the power of the detected signal is given by \[I_{pow}\approx\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{2}}A^{2}\sin^{2}\left(\phi+2\omega t\right)exp\left[-\frac{\left(t-\delta t\right)^{2}}{2\tau^{2}}\right]. \tag{10}\] Similarly, the corresponding SNR determined by the Cramér-Rao bound (CRB) is \[\text{SNR}_{pow}\approx A\frac{\sqrt{N}\phi}{\tau}\delta t, \tag{11}\] which is \(A\) times the SNR of the standard weak measurement. ## IV Signal-recycling Similar methods apply in the signal-recycled scheme. As shown in Fig. 1(c), the optical axis of the Glan prism is vertical, ensuring the input state \(\left|V\right\rangle\). The post-selection is provided by the combination 'HWP PBS QWPs'. The PTM is placed between the two QWPs to reflect the output signal. 
This post-selection process provides two measurement operators: \(M_{12}=\left\langle\psi_{2}\right|\hat{U}_{w}\hat{U}_{\phi}\left|\,\psi_{1}\right\rangle=\sin\left(\phi+2\omega t\right)\) and \(M_{22}=\left\langle\psi_{2}\right|\hat{U}_{w}\hat{U}_{\phi}\left|\,\psi_{2}\right\rangle=\cos\left(\phi+2\omega t\right)\). In this signal-recycled cavity, the amplitude of the detected signal is given by \[\left|\,\varphi_{d}\right\rangle_{sig}=pM_{12}\sum_{n=0}^{n_{max}}\left(rLM_{22}\right)^{n}\left|\,\varphi_{0}\right\rangle \tag{12}\] \[\approx\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}exp\left(-\frac{t^{2}}{4\tau^{2}}\right)\frac{p\sin\left(\phi+2\omega t\right)}{1-rL\cos\left(\phi+2\omega t\right)}\] which is the same as \(\left|\,\varphi_{d}\right\rangle_{pow}\). This means that power and signal recycling play equivalent roles in the weak-value-based signal improvement. Unlike the previous interferometric signal-recycling scheme, the purely clockwise path permits all cyclic photons to be refreshed before the next post-selection. With the filter, the pre-filter state is \(\left|\,\varphi\prime\right\rangle_{sig}=M_{22}\left|\,\varphi_{0}\right\rangle/\sqrt{\left|M_{22}\left|\,\varphi_{0}\right\rangle\right|^{2}}=\left|\,\varphi\prime\right\rangle_{pow}\). The same conclusions therefore hold for the detected power and SNR: \(I_{sig}=I_{pow}\) and \(SNR_{sig}=SNR_{pow}\). ## V Dual-recycling Because the power- and signal-recycling mirrors reuse different parts of the photons, the two PTMs can be combined in one system. In Fig. 1(d), the Glan prism together with two QWPs provides the input state \(\left|\,\psi_{1}\right\rangle\), and the output state \(\left|\,\psi_{2}\right\rangle\) is provided by the combination 'HWP PBS QWPs'. In this cyclic process, all possible post-selections are available. 
Similarly, the filter in front of the PHWP projects the meter state into \(\left|\,\varphi_{0}\right\rangle\), thus eliminating the walk-off effect and maintaining the large pointer shift associated with the WVA. This also results in a minimum optical loss \(\gamma_{min}\approx 4\omega^{2}\tau^{2}\phi^{2}\), which can be ignored in the weak-value range \(2\omega\tau\ll\phi\ll 1\). For simplicity of calculation, we assume the parameters of the PTMs are the same and introduce the measurement matrix \[U=\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}=\begin{bmatrix}\cos\left(\phi+2\omega t\right)&\sin\left(\phi+2\omega t\right)\\ -\sin\left(\phi+2\omega t\right)&\cos\left(\phi+2\omega t\right)\end{bmatrix} \tag{13}\] which is formed by the four measurement operators arranged in the order corresponding to the subscripts. \(\left(U^{n}\right)_{12}\) represents the physical process in which the incident light travels through the dual-recycling cavity \(n\) times and finally reaches the detector. Figure 2: (Color online) Comparison of the power-, signal- and dual-recycling schemes. (a), (b) and (c) correspond to A and B varying with \(r\) under different values of \(\gamma\): \(0\), \(0.1\) and \(0.2\), where \(\phi=0.1\). In (d), (e) and (f), we assume \(r=0.9\) and plot A and B varying with \(\phi\) under \(\gamma=0,\ 0.1,\ 0.2\), respectively. A and B: the improvement factors of the power- (or signal-) and dual-recycling schemes. \(r\): the reflection coefficient of the PTM. \(\gamma\): optical loss. Therefore, the steady-state amplitude detected by the meter is given by the 
sum of the amplitudes over all traversal numbers, \[\langle\alpha\mid\varphi_{d}\rangle=p^{2}\sum_{n=0}^{n_{max}}r^{n}\left(\sqrt{1-\gamma}\right)^{n+1}\left(U^{n+1}\right)_{12}\left\langle\alpha\mid\varphi_{0}\right\rangle \tag{14}\] \[\approx p^{2}\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}exp\left(-\frac{t^{2}}{4\tau^{2}}\right)\hat{L}\left(\frac{U}{I-r\sqrt{1-\gamma}U}\right)_{12}\] \[=-\frac{p^{2}\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}exp\left(-\frac{t^{2}}{4\tau^{2}}\right)\sqrt{1-\gamma}\sin\left(\phi+2\omega t\right)}{1+\left(1-\gamma\right)r^{2}-2\sqrt{1-\gamma}r\cos\left(\phi+2\omega t\right)}\] \[\approx-\frac{p^{2}\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{4}}exp\left(-\frac{t^{2}}{4\tau^{2}}\right)\sqrt{1-\gamma}\sin\left(\phi+2\omega t\right)}{1+\left(1-\gamma\right)r^{2}-2\sqrt{1-\gamma}r\cos\phi},\] where the last approximation holds with the minimum 'filter' loss \(\gamma_{min}\approx 4\omega^{2}\tau^{2}\phi^{2}\). In this way, the intensity of the detected signal is given by \[I_{dua}\approx\left(\frac{N^{2}}{2\pi\tau^{2}}\right)^{\frac{1}{2}}B^{2}\sin^{2}\left(\phi+2\omega t\right)exp\left[-\frac{\left(t-\delta t\right)^{2}}{2\tau^{2}}\right] \tag{15}\] where \[B=\frac{p^{2}}{1+\left(1-\gamma\right)r^{2}-2\sqrt{1-\gamma}r\cos\phi}. \tag{16}\] Thus, the corresponding SNR is \[\text{SNR}_{dua}\approx B\frac{\sqrt{N}\phi}{\tau}\delta t. \tag{17}\] ## VI Comparison Here \(A\) and \(B\) denote the improvement factors of the power- (or signal-) and dual-recycling schemes, respectively. It is clear that the power-, signal- and dual-recycling schemes improve the SNR of the standard WVA setup by factors of \(A\), \(A\) and \(B\), respectively. \(A^{2}\) and \(B^{2}\) also correspond to the improvement of the detected power. Therefore, as shown in Fig. 2, we plot \(A\) and \(B\) varying with \(r\) (Fig. 2(a), (b) and (c)) or \(\phi\) (Fig. 2(d), (e) and (f)) under different losses \(\gamma=0,\ 0.1,\ 0.2\), which correspond to ideal, low and regular loss. 
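The curves of Fig. 2 can be regenerated directly from the expressions for \(A\) and \(B\) as transcribed in Eqs. (7) and (16). A minimal sketch that sweeps the reflection coefficient \(r\) at \(\phi=0.1\) (grid size chosen purely for illustration):

```python
import numpy as np

# Sweep of the improvement factors A (Eq. (7)) and B (Eq. (16)) over the
# PTM reflection coefficient r, with phi = 0.1 as in Fig. 2.
phi = 0.1
r = np.linspace(0.0, 0.9999, 100001)

def A_factor(gamma):
    L = np.sqrt(1 - gamma)
    return np.sqrt(1 - r**2) / (1 - r * L * np.cos(2 * phi))

def B_factor(gamma):
    L = np.sqrt(1 - gamma)
    return (1 - r**2) / (1 + L**2 * r**2 - 2 * L * r * np.cos(phi))

for gamma in (0.0, 0.1, 0.2):           # ideal, low and regular loss
    A, B = A_factor(gamma), B_factor(gamma)
    print(f"gamma={gamma}: max A = {A.max():.3f} at r = {r[A.argmax()]:.3f}, "
          f"max B = {B.max():.3f} at r = {r[B.argmax()]:.3f}")
# In the lossless case the optimum of B over r is 1/sin(phi) ~ 1/phi,
# and all peaks shrink as the optical loss gamma grows.
```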
As expected, both \(A\) and \(B\) can reach the maximum value \(1/\phi=10\), as shown in Fig. 2(a), (b) and (c). Figure 3: (Color online) Comparison of the polarization-based and interferometric dual-recycling schemes. (a), (b) and (c) correspond to \(B\) varying with \(r\) and \(\phi\) under different values of \(\gamma\): \(0\), \(0.1\) and \(0.2\) in 3D, respectively. (d), (e) and (f) represent \(B_{non}\) varying with \(r\) and \(\phi\) under different values of \(\gamma\): \(0\), \(0.1\) and \(0.2\) in 3D. \(B\) and \(B_{non}\): the improvement factors of the polarization-based and non-polarization dual-recycling schemes. \(r\): the reflection coefficient of the PTM. \(\phi\): post-selection angle. \(\gamma\): optical loss. However, the peak of \(A\) decreases faster than that of \(B\), corresponding to a stronger limitation on the improvement of detection. Therefore, the dual-recycling cavity tolerates a wider range of \(r\) and \(\gamma\), which makes it more suitable for real experiments. In Fig. 2(d), (e) and (f), we set \(r=0.9\), a common parameter of the PTM, and draw the curves of \(A\) and \(B\) varying with \(\phi\), where \(\phi\in[0.01,\ 0.2]\). We can see that the improvement factor of dual recycling is larger over most of the weak-value range, thus surpassing power or signal recycling. In the previous dual-recycled interference-based WVA system [40], the amplification effect of the pointer is reduced by the walk-off effect, leading to a limitation of the precision gain. From equations (25) and (26) in [40], without a filter, the improvement factor \(B\) changes to \(B_{non}\), \[B_{non}=\xi\frac{p^{2}}{1+\left(1-\gamma\right)r^{2}-2\sqrt{1-\gamma}r\cos\phi}, \tag{18}\] where \[\xi=\phi\frac{\cos\phi\left[1+r^{2}\left(1-\gamma\right)\right]-2r\sqrt{1-\gamma}}{\sin\phi[1+r^{2}\left(1-\gamma\right)-2r\sqrt{1-\gamma}\cos\phi]}. 
\tag{19}\] Due to the proper use of the filter, a minimum filter loss replaces the original performance reduction in the polarization-based dual-recycling scheme. For a clear comparison, in Fig. 3 we plot \(B\) and \(B_{non}\) varying with \(r\) and \(\phi\) under different losses. The polarization-based scheme shows an obvious improvement, and the gap between \(B_{non}\) and \(B\) decreases as the loss increases. This is established on the assumption that both systems suffer the same loss \(\gamma\). Actually, replacing the BS with the PBS can effectively reduce the optical loss. The probability of photons surviving the PBS (\(\geq 95\%\)) is known to be larger than that of the BS (\(\geq 90\%\)). In addition, the PBS simplifies the propagation paths of the cyclic photons, which reduces the crosstalk among photons. All these reasons make this polarization-based scheme advantageous in both theoretical performance and experimental application. ## VII Conclusion In summary, we have proposed three polarization-based cyclic weak-measurement schemes based on the angular velocity weak-measurement setup. By inserting one or two PTMs into the system to form a resonant cavity, all the incident light can be detected in principle. In our analysis, these polarization-based schemes surpass the previous interferometric schemes through their lower theoretical loss and improved cyclic paths, which permit a filter to refresh the cyclic photons and thus eliminate the walk-off effect. In addition, we reach similar conclusions about the improvement factors: all three schemes can greatly improve the power and SNR of the detected signal, and the dual-recycling scheme in particular enjoys a wider optimal region. However, in practice the beam will be a diffracting Gaussian beam with a waist instead of the parallel beam treated above. Similar to the solutions of [32], several well-designed lenses should be placed to form a stable self-reproducing Gaussian mode. 
In addition, the cavity is unstable due to the influence of optical-platform jitter, temperature, pressure and so on. Therefore, a Pound-Drever-Hall (PDH) system is also essential to feed back against the instability and lock the cavity length. This cyclic mode can also be applied to other WVA experimental realizations, because post-selection exists in every weak-value setup. In addition, quantum resources can be used to increase the precision beyond the standard quantum limit [43; 44], which is a promising way to further improve the performance of weak-value metrology. ## VIII Acknowledgements This work was supported by the National Natural Science Foundation of China (Grant No. 11734015).
2309.17356
On particular integrability for (co)symplectic and (co)contact Hamiltonian systems
As a generalization and extension of our previous paper [Escobar-Ruiz and Azuaje, J. Phys. A: Math. Theor. 57, 105202 (2024)], in this work, the notions of particular integral and particular integrability in classical mechanics are extended to the formalisms of cosymplectic, contact and cocontact geometries. This represents a natural scheme to study nonintegrable time-dependent systems where only a part of the whole dynamics satisfies the conditions for integrability. Specifically, for Hamiltonian systems on cosymplectic, contact and cocontact manifolds, it is demonstrated that the existence of a particular integral allows us to find certain integral curves from a reduced, lower dimensional, set of Hamilton equations. In the case of particular integrability, these trajectories can be obtained by quadratures. Notably, for dissipative systems described by contact geometry, a particular integral can be viewed as a generalization of the important concept of dissipated quantity as well.
R. Azuaje, A. M. Escobar-Ruiz
2023-09-29T16:02:19Z
http://arxiv.org/abs/2309.17356v3
Particular integrals and particular integrability for (co)symplectic and (co)contact Hamiltonian systems ###### Abstract In this paper we study the notions of particular integral and particular integrability, in classical mechanics, under a geometrical approach. For Hamiltonian systems on cosymplectic, contact and cocontact manifolds, it is demonstrated that the existence of a particular integral implies that some of the trajectories can be found from a simpler reduced set of Hamilton equations. In the case of particular integrability, these trajectories can be obtained by quadratures. For dissipative systems described using contact geometry, a particular integral can be viewed as a generalization of the important concept of dissipated quantity as well. _Keywords:_ particular integral, integrability by quadratures, symmetric reduction, contact Hamiltonian systems, dissipated quantities. ## 1 Introduction It is well-known that the existence of symmetries for a classical mechanical system simplifies the problem of solving the equations of motion. Also, it is common knowledge that in the symplectic framework the constants of motion for a Hamiltonian system are related to symmetries [1, 2, 3]. The famous Liouville theorem on integrability asserts that for a Hamiltonian system with \(n\) degrees of freedom, the knowledge of \(n\) functionally independent constants of motion (or integrals of motion) in involution allows us to find the solutions of the Hamilton equations of motion by quadratures, i.e., by using finitely many algebraic operations (including taking inverse functions) and calculating the integrals of known functions [1, 2, 4, 5, 6, 7, 8, 9, 10]. Physically, a constant of motion is viewed as a conserved quantity during time evolution for any set of initial conditions in the corresponding domain. For this reason, they can also be called global integrals. The concept of particular integral introduced in [11] generalizes the notion of a global integral. 
Specifically, a particular integral is a conserved quantity for possibly certain (sub)sets of initial conditions only. One important rationale to investigate particular integrals is that they allow us to study non-integrable systems in certain points and regions where the dynamics of the system satisfies the regularity requirements for integrability which, in turn, leads to the notion of particular integrability. In [12] the concepts of particular integral and particular integrability were introduced within the formalism of symplectic geometry, i.e., they were considered for Hamiltonian systems defined on symplectic manifolds. Symplectic geometry very often is regarded as the natural geometric formalism where the Hamiltonian theory of classical mechanics is developed. Nevertheless, only autonomous conservative systems can be described under this framework. Other geometric constructions such as cosymplectic geometry, contact geometry and cocontact geometry allow alternative formulations of Hamiltonian mechanics. They have been proven to be instrumental in describing non-autonomous and dissipative mechanical systems within the Hamiltonian theory. For completeness, in section 3 of this paper, we present a brief review of symplectic geometry, cosymplectic geometry, contact geometry, cocontact geometry and the corresponding geometric formulations of Hamiltonian Mechanics. Currently, contact and cocontact Hamiltonian mechanics are subjects of intense active research, see for example [13, 14, 15, 16, 17]. For instance, in the case of dissipative systems, the so called dissipated quantities were studied in relation with Noether symmetries [13, 14, 18], giving as a result dissipation laws analogous to the conservation theorems occurring in conservative systems. In fact, the notion of particular integral generalizes the concept of dissipated quantity. In [12] the concept of particular integrability has been presented as a more general notion than Liouville integrability. 
Accordingly, they share important features. For example, a common key element is the property of functional independence of the involved functions. It is worth recalling this property and the corresponding reduction of the relevant manifold: let \(M\) be a smooth manifold; we say that the functions \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) with \(k\leq dim(M)\) are functionally independent if for any \(p\) in a dense subset of \(M\) the differential maps \(df_{1}|_{p},\cdots,df_{k}|_{p}\) are linearly independent; this means that \(p\) is a regular point of the differentiable function \(F:M\longrightarrow\mathbb{R}^{k}\) defined by \(F=(f_{1},\cdots,f_{k})\). Let us consider the level set \(M_{f}=\{x\in M:f_{i}(x)=\alpha_{i},\alpha_{i}\in\mathbb{R}\}\) and suppose that \(f_{1},\ldots,f_{k}\) are functionally independent; then, from the Regular Level Set Theorem [19], \(M_{f}\) is a smooth submanifold of \(M\) of dimension \(dim(M_{f})=dim(M)-k\). The aim of this paper is to present in detail the notions of particular integral and particular integrability in Hamiltonian mechanics under the formalisms of cosymplectic geometry, contact geometry and cocontact geometry. The associated reduction of the equations of motion, using particular integrals, is investigated in each formalism. This paper is organized as follows. In section 2, we present the notion of particular integral for a classical mechanical system defined by a smooth vector field on a generic smooth manifold; in this section we emphasize the similarities and differences between constants of motion (also called integrals of motion or first Liouville integrals) and particular integrals. In section 3, in order to set up the notation and language used in this study, a brief review of symplectic, cosymplectic, contact and cocontact geometry as well as their corresponding formulations of Hamiltonian mechanics is provided. 
Afterwards, in section 4 we describe the notions of particular integral and particular integrability for Hamiltonian systems defined on a smooth manifold equipped with different geometric structures. This section is divided into four parts: in the first one we give a summary of the symplectic case as presented in [12], whilst in the second, third and fourth subsections we introduce the notions of particular integral and particular integrability for Hamiltonian systems within the framework of cosymplectic, contact and cocontact geometry, respectively. ## 2 Particular integrals for classical mechanical systems Classical mechanics considers the motion of dynamical systems whose future and past are uniquely determined by the initial positions and initial velocities of all points of the system; such systems are called (classical) mechanical systems. The phase space of a mechanical system is the set whose elements are the sets of positions and velocities of all points of the given system [20]. Poincaré visualized a mechanical system as a field of vectors on phase space, in which a trajectory is a smooth curve tangent at each of its points to the vector based at that point; this led him to the notion of a smooth manifold as the phase space in classical mechanics [1]. Let \(M\) be a smooth manifold of dimension \(n\) and \(V\) a smooth vector field on \(M\). \(V\) defines a (classical) mechanical system on \(M\) whose trajectories are the integral curves of \(V\)[21]. Let us remember that an integral curve of \(V\) is a curve \(\gamma:I\subset\mathbb{R}\longrightarrow M\) such that \(\frac{d}{dt}\gamma(t)=V_{\gamma(t)}\ \forall t\in I\). 
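For instance (an illustrative computation of ours, not taken from the references), the rotation field \(V=y\,\frac{\partial}{\partial x}-x\,\frac{\partial}{\partial y}\) on \(\mathbb{R}^{2}\) has as integral curve through \((1,0)\) the circle \(t\mapsto(\cos t,-\sin t)\); a short numerical integration sketches this:

```python
import math

# The rotation field V = y d/dx - x d/dy on R^2: its integral curve through
# (1, 0) is t -> (cos t, -sin t).  We integrate with a classical RK4 step;
# the squared radius f = x**2 + y**2 stays constant along the curve, since
# Vf = 2*x*y + 2*y*(-x) = 0.

def V(s):
    x, y = s
    return (y, -x)

def rk4_step(F, s, dt):
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = F(s)
    k2 = F(add(s, k1, dt / 2))
    k3 = F(add(s, k2, dt / 2))
    k4 = F(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 0.0)
f0 = s[0] ** 2 + s[1] ** 2
dt = 0.01
for _ in range(1000):            # integrate up to t = 10
    s = rk4_step(V, s, dt)
f_final = s[0] ** 2 + s[1] ** 2  # equals f0 up to integration error
```

The invariance of the squared radius along these curves anticipates the notion of constant of motion discussed next.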
Let \((x^{1},\cdots,x^{n})\) be local coordinates on \(M\); the local expression of \(V\) is \(V=V^{1}(x^{1},\cdots,x^{n})\frac{\partial}{\partial x^{1}}+\cdots+V^{n}(x^{1},\cdots,x^{n})\frac{\partial}{\partial x^{n}}\), and the integral curves \(\gamma(t)=(x^{1}(t),\cdots,x^{n}(t))\) of \(V\) satisfy the system of equations \[\left\{\begin{array}{l}\dot{x}^{1}=V^{1}(x^{1},\cdots,x^{n}),\\ \vdots\\ \dot{x}^{n}=V^{n}(x^{1},\cdots,x^{n}).\end{array}\right. \tag{1}\] Let us consider the mechanical system \((M,V)\): \(M\) is the phase space of the system and \(V\) is the dynamical vector field. The system of \(n\) first-order differential equations (1) is the system of (local) equations of motion (the motion of a system in classical mechanics is described using ordinary differential equations [20]). The evolution of a function \(f\in C^{\infty}(M)\) (an observable) along the trajectories of the system is given by \[\frac{d}{dt}f(\gamma(t))=df_{\gamma(t)}(\frac{d}{dt}\gamma(t))=(\frac{d}{dt}\gamma(t))(f)=V_{\gamma(t)}(f)=(Vf)(\gamma(t))\, \tag{2}\] so the temporal evolution of \(f\) is given by \[\dot{f}\ =\ Vf. \tag{3}\] A function \(f\in C^{\infty}(M)\) is called a constant of motion (or an integral of motion) of the mechanical system \((M,V)\) if \(f\) is constant along the trajectories of the system, which is equivalent to \(Vf=0\). The existence of a constant of motion enables us to construct a reduced mechanical system whose dynamics is contained in the dynamics of the bigger one, i.e., we can construct a reduced system such that its trajectories are trajectories of the original system. So the existence of a constant of motion allows us to look for trajectories of the system that are (locally) solutions of a reduced system of differential equations of motion. 
Indeed, let us suppose that \(f\) is a constant of motion of the system \((M,V)\); we have that the level set \[M_{f}\ =\ \{\,x\in M:\ f(x)\,=\,c,\,c\in\mathbb{R}\}\,\] is a smooth submanifold of \(M\) of codimension \(1\) for regular values of \(f\)[19] and it is closed under the dynamics of \(V\), i.e., if \(\gamma:I\longrightarrow M\) is an integral curve of \(V\) such that \(\gamma(t_{0})\in M_{f}\) for some \(t_{0}\in I\) then \(\gamma(t)\in M_{f}\) for every \(t\in I\). We can prove this last statement as follows: let us suppose that \(\gamma:I\longrightarrow M\) is an integral curve of \(V\) such that \(\gamma(t_{0})\in M_{f}\) for some \(t_{0}\in I\); then \(f(\gamma(t_{0}))=c\) and, since \(f\) is a constant of motion, it is constant along the whole trajectory, i.e., \(f(\gamma(t))=c\) for every \(t\in I\), so we conclude that \(\gamma(t)\in M_{f}\) for every \(t\in I\). Now we can look for trajectories of the system \((M,V)\) that live in \(M_{f}\). It is worth remarking that the set of trajectories of the system \((M,V)\) that live in \(M_{f}\) is not the whole set of trajectories of the system, but the elements of this set can be found by solving a reduced system of differential equations. Indeed, if \(f\) is a constant of motion of \((M,V)\) then the integral curves that live in \(M_{f}\) are (locally) solutions of a system of \(n-1\) differential equations: the integral curves of \(V\) that live in \(M_{f}\) are integral curves of the restriction \(V|_{M_{f}}\) of the vector field \(V\) to \(M_{f}\), so if \((y^{1},\cdots,y^{n-1})\) are local coordinates on \(M_{f}\) then such integral curves are solutions of the vector differential equation \(\dot{Y}=V|_{M_{f}}(Y)\), where \(Y=(y^{1},\cdots,y^{n-1})\), which is equivalent to a system of \(n-1\) differential equations. Now we present the concept of particular integral, explaining how it is related to that of constant of motion. 
**Definition 1**.: _We say that \(p\in C^{\infty}(M)\) is a particular integral of \((M,V)\) if \(Vp=a\,p\) for some function \(a\in C^{\infty}(M)\) such that_ \[\lim_{x\to x_{0}}a(x)\ =\ a(x_{0})\in\mathbb{R}\,\quad\forall\,x_{0}\in M_{p}=\{\,x\in M:\ p(x)\,=\,0\,\}\.\] _If so, we say that \(a\in C^{\infty}(M)\) is a function with real values on \(p=0\)._ Hence, the special case \(a=0\) corresponds to the definition of a constant of motion. Similarly to the previous case where a constant of motion is involved, the existence of a particular integral allows us to look for certain trajectories of the system that are (locally) solutions of a reduced system of differential equations of motion. We have the following result. **Lemma 1**.: _If \(p\) is a particular integral of \((M,V)\) then \(M_{p}\) is closed under the dynamics of \(V\)._ Proof.: Let \(\gamma:I\longrightarrow M\) be an integral curve of \(V\) such that \(\gamma(t_{0})\in M_{p}\) for some \(t_{0}\in I\), so we have \(p(\gamma(t_{0}))=0\) and \[\begin{split}\frac{d}{dt}p(\gamma(t))&=(Vp)(\gamma(t))\\ &=(ap)(\gamma(t))\\ &=a(\gamma(t))p(\gamma(t)),\end{split} \tag{4}\] i.e., \(\frac{d}{dt}p(\gamma(t))\) is proportional to \(p(\gamma(t))\); by induction we can see that \(\frac{d^{k}}{dt^{k}}p(\gamma(t))\) is proportional to \(p(\gamma(t))\) for each \(k=1,2,\ldots\), so we have that \[\frac{d^{k}}{dt^{k}}p(\gamma(t))|_{t=t_{0}}\ =\ 0\, \tag{5}\] for every \(k=1,2,\ldots\), i.e., \(p(\gamma(t))\) and all its derivatives vanish at \(t=t_{0}\); therefore, since we assume that \(p(\gamma(t))\) is an analytic function, \(p(\gamma(t))=0\ \forall t\in I\)[22, 23, 24], i.e., \(\gamma(t)\in M_{p}\ \forall t\in I\). 
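As a simple illustration of Definition 1 and Lemma 1 (our example, not taken from the references), consider on \(\mathbb{R}^{2}\) the system \(\dot{x}=1+y\), \(\dot{y}=xy\). The function \(p(x,y)=y\) satisfies \(Vp=xy=a\,p\) with \(a(x,y)=x\), so it is a particular integral; \(M_{p}=\{y=0\}\) is invariant and on it the motion reduces to \(\dot{x}=1\), while off \(M_{p}\) the function \(p\) is not conserved. A numerical sketch:

```python
# Illustrative mechanical system on R^2:  xdot = 1 + y,  ydot = x*y.
# p(x, y) = y is a particular integral: Vp = x*y = a*p with a(x, y) = x,
# which is real-valued on M_p = {y = 0}.  M_p is invariant, and on it the
# dynamics reduces to the single equation xdot = 1.

def V(s):
    x, y = s
    return (1.0 + y, x * y)

def rk4_step(F, s, dt):
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = F(s)
    k2 = F(add(s, k1, dt / 2))
    k3 = F(add(s, k2, dt / 2))
    k4 = F(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt = 0.01

# Start on M_p: the trajectory never leaves y = 0 and x(t) = x(0) + t.
s_on = (0.0, 0.0)
for _ in range(1000):           # integrate up to t = 10
    s_on = rk4_step(V, s_on, dt)

# Start off M_p: p = y is not conserved (p is not a constant of motion).
s_off = (0.0, 0.5)
for _ in range(100):            # integrate up to t = 1
    s_off = rk4_step(V, s_off, dt)
```

Note that \(p\) is conserved only on the invariant set \(p=0\), which is exactly the sense in which a particular integral is weaker than a constant of motion.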
Now given a particular integral we can look for the trajectories of the system \((M,V)\) that live in \(M_{p}\), and as for a constant of motion, if \(p\) is a particular integral of \((M,V)\) then the integral curves that live in \(M_{p}\) are (locally) solutions of a reduced system of \(n-1\) differential equations. Up to this point general classical mechanical systems have been considered. We are interested in Hamiltonian systems; these are mechanical systems on smooth manifolds where the dynamical vector fields are the Hamiltonian vector fields for distinguished functions called the Hamiltonian functions. In order to construct a Hamiltonian system, a smooth manifold equipped with a geometric structure is required. The corresponding Hamiltonian vector field for a given function is assigned according to a one-to-one correspondence between vector fields and \(1\)-forms on such a manifold, defined by means of the geometric structure. So the geometric structure plays a fundamental role in the construction of the Hamiltonian system. Symplectic geometry is considered the natural geometric framework where the theory of Hamiltonian mechanics is developed; nevertheless, only autonomous conservative systems can be described under this formalism. Other geometric frameworks allow alternative formulations of Hamiltonian mechanics. They are instrumental in describing non-autonomous and dissipative mechanical systems as Hamiltonian systems. These formalisms are cosymplectic geometry, contact geometry and cocontact geometry, which we briefly review in the next section. ## 3 Geometric formalisms for Hamiltonian mechanics ### Symplectic geometry In this subsection, in order to set up the notation and language used in this paper, we present a brief review of symplectic geometry and the formulation of symplectic (time-independent conservative) Hamiltonian mechanics (for details see [1, 6, 19, 25, 26]). Let \(M\) be a smooth manifold. 
A symplectic structure on \(M\) is a closed non-degenerate 2-form \(\omega\) on \(M\). Closed means that \(d\omega=0\), and non-degenerate implies that for each 1-form \(\alpha\) on \(M\) there is one and only one vector field \(X\) which obeys \(X\lrcorner\omega=\alpha\). A symplectic manifold is a pair \((M,\omega)\) where \(\omega\) is a symplectic structure on \(M\). By definition, each symplectic manifold is of even dimension. Let \((M,\omega)\) be a symplectic manifold of dimension \(2n\). Around any point \(x\in M\) there exist local coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n})\), called canonical coordinates or Darboux coordinates, such that \[\omega=dq^{i}\wedge dp_{i}. \tag{6}\] In this paper we adopt the Einstein summation convention (i.e., summation over repeated indices is assumed). To each \(f\in C^{\infty}(M)\) is assigned a vector field \(X_{f}\) on \(M\), called the Hamiltonian vector field for \(f\), according to \[X_{f}\lrcorner\omega\ =\ df. \tag{7}\] In canonical coordinates, \(X_{f}\) reads \[X_{f}=\frac{\partial f}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\frac{\partial f}{\partial q^{i}}\frac{\partial}{\partial p_{i}}. \tag{8}\] The assignment \(f\longmapsto X_{f}\) is linear, that is \[X_{f+\alpha g}\ =\ X_{f}+\alpha X_{g}\, \tag{9}\] \(\forall f,g\in C^{\infty}(M)\) and \(\forall\alpha\in\mathbb{R}\). The symplectic structure \(\omega\) on \(M\) defines a Poisson structure on \(C^{\infty}(M)\) as follows: given \(f,g\in C^{\infty}(M)\) the Poisson bracket of \(f\) and \(g\) is defined by \[\{f,g\}=X_{g}f, \tag{10}\] or equivalently \[\{f,g\}=\omega(X_{f},X_{g}). \tag{11}\] In canonical coordinates we have \[\{f,g\}\ =\ \frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}\ -\ \frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}. 
\tag{12}\] The assignment \(f\mapsto X_{f}\) is a Lie algebra antihomomorphism between the Lie algebras \((C^{\infty}(M),\{,\})\) and \((\mathfrak{X}(M),[,])\)[19], i.e., \(X_{\{f,g\}}=-[X_{f},X_{g}]\). The theory of time-independent conservative Hamiltonian systems is naturally constructed within the mathematical formalism of symplectic geometry. Given \(H\in C^{\infty}(M)\), the dynamical system defined on \(M\) by the Hamiltonian vector field \(X_{H}\) is a Hamiltonian system denoted by \((M,\omega,H)\). \((M,\omega)\) is called the phase space of the system, \(H\) is the Hamiltonian function and \(n=\frac{dim(M)}{2}\) is the number of degrees of freedom of the system. The trajectories of the system are the integral curves of \(X_{H}\); in canonical coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n})\) their expression \(\psi(t)=(q^{1}(t),\cdots,q^{n}(t),p_{1}(t),\cdots,p_{n}(t))\) satisfies the Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p_{i}}=-\frac{\partial H}{\partial q^{i}}\hskip 28.452756pt;\hskip 28.452756pti=1,2,3,\ldots,n. \tag{13}\] The evolution (the temporal evolution) of a function \(f\in C^{\infty}(M)\) (a physical observable) along the trajectories of the system is given by \[\dot{f}=X_{H}f=\{f,H\}\, \tag{14}\] so that \(f\) is a constant of motion if and only if \(\{f,H\}=0\). ### Cosymplectic geometry In this subsection, a review of cosymplectic geometry and the associated formulation of time-dependent Hamiltonian systems is presented. **Definition 2**.: _Let \(M\) be a \(2n+1\) dimensional smooth manifold. A cosymplectic structure on \(M\) is a couple \((\Omega,\eta)\), where \(\Omega\) is a closed 2-form on \(M\) and \(\eta\) is a closed 1-form on \(M\) such that \(\eta\wedge\Omega^{n}\neq 0\). 
If \((\Omega,\eta)\) is a cosymplectic structure on \(M\) we say that \((M,\Omega,\eta)\) is a cosymplectic manifold._ Let \((M,\Omega,\eta)\) be a cosymplectic manifold of dimension \(2n+1\). Around any point \(x\in M\) there exist local coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},t)\), called canonical coordinates or Darboux coordinates, such that \[\Omega=dq^{i}\wedge dp_{i}\hskip 28.452756pt\mbox{and}\hskip 28.452756pt\eta=dt. \tag{15}\] There exists a distinguished vector field \(R\) on \(M\), called the Reeb vector field, which obeys \[R\lrcorner\Omega=0\qquad\quad\text{and}\qquad R\lrcorner\eta=1. \tag{16}\] In canonical coordinates we have \(R=\frac{\partial}{\partial t}\). To each \(f\in C^{\infty}(M)\) is assigned a vector field \(X_{f}\) on \(M\), called the Hamiltonian vector field for \(f\), according to \[X_{f}\lrcorner\Omega=df-(Rf)\eta\qquad\text{and}\qquad X_{f}\lrcorner\eta=0. \tag{17}\] Or equivalently, since \(\mathcal{X}_{\eta\Omega}:TM\longrightarrow T^{*}M\) defined by \(\mathcal{X}_{\eta\Omega}(X)=X\lrcorner\Omega+(X\lrcorner\eta)\eta\) is a bundle isomorphism [27], \(X_{f}\) is the only vector field such that \[X_{f}\lrcorner\Omega+(X_{f}\lrcorner\eta)\eta=df-(Rf)\eta. \tag{18}\] In canonical coordinates we have \[X_{f}=\frac{\partial f}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\frac{\partial f}{\partial q^{i}}\frac{\partial}{\partial p_{i}}. \tag{19}\] The assignment \(f\longmapsto X_{f}\) is linear, that is \[X_{f+\alpha g}=X_{f}+\alpha X_{g}, \tag{20}\] \(\forall f,g\in C^{\infty}(M)\) and \(\forall\alpha\in\mathbb{R}\). As in the symplectic case, we have that the cosymplectic structure \((\Omega,\eta)\) on \(M\) defines a Poisson structure on \(C^{\infty}(M)\) as follows: given \(f,g\in C^{\infty}(M)\) the Poisson bracket of \(f\) and \(g\) is defined by \[\{f,g\}=X_{g}f, \tag{21}\] or equivalently \[\{f,g\}=\Omega(X_{f},X_{g}). 
\tag{22}\] In canonical coordinates, it reads \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}-\frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}. \tag{23}\] As in the symplectic framework, we have that the assignment \(f\mapsto X_{f}\) is a Lie algebra antihomomorphism between the Lie algebras \((C^{\infty}(M),\{,\})\) and \((\mathfrak{X}(M),[,])\)[27], i.e., \(X_{\{f,g\}}=-[X_{f},X_{g}]\). In addition \(X_{Rf}=-[X_{f},R]\). The theory of time-dependent conservative Hamiltonian systems is naturally constructed within the mathematical formalism of cosymplectic geometry (for details see [27, 6, 28]). Given \(H\in C^{\infty}(M)\), the dynamical system defined on \(M\) by the so-called evolution vector field \(E_{H}=X_{H}+R\) is a Hamiltonian system denoted by \((M,\Omega,\eta,H)\). \((M,\Omega,\eta)\) is called the phase space of the system, \(H\) is the Hamiltonian function and \(n=\frac{dim(M)-1}{2}\) is the number of degrees of freedom of the system. The trajectories of the system are the integral curves of \(E_{H}\); in canonical coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},t)\) their expression \(\psi(t)=(q^{1}(t),\cdots,q^{n}(t),p_{1}(t),\cdots,p_{n}(t),t)\) satisfies the Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\qquad\quad\dot{p_{i}}=-\frac{\partial H}{\partial q^{i}}. \tag{24}\] The evolution (the temporal evolution) of a function \(f\in C^{\infty}(M)\) (a physical observable) along the trajectories of the system is given by \[\dot{f}=E_{H}f=X_{H}f+Rf=\{f,H\}+Rf. \tag{25}\] So that \(f\) is a constant of motion if and only if \(\{f,H\}+Rf=0\). ### Contact geometry In this subsection we move to the formalism of contact Hamiltonian systems, which naturally describe dissipative systems (for details see [28, 29, 30]) and also have applications in thermodynamics, statistical mechanics and other areas [31]. **Definition 3**.: _Let \(M\) be a \(2n+1\) dimensional smooth manifold. 
A contact structure on \(M\) is a 1-form \(\theta\) on \(M\) such that \(\theta\wedge(d\theta)^{n}\neq 0\). If \(\theta\) is a contact structure on \(M\) we say that \((M,\theta)\) is a contact manifold._ There is a wider notion of contact manifolds: some authors define a contact structure on a manifold as a one-codimensional maximally non-integrable distribution [13, 32, 33, 34]. Locally a contact structure can be expressed as the kernel of a one-form \(\theta\) satisfying \(\theta\wedge(d\theta)^{n}\neq 0\); \(\theta\) is called a local contact form. However, not every contact structure admits a global contact form. When a contact structure admits a global contact form, the contact manifold is called co-oriented. Every contact manifold \((M,\theta)\) from definition 3 is a co-oriented contact manifold in this wider sense by taking the distribution \(\mathrm{Ker}(\theta)\). In this paper we restrict ourselves to co-oriented contact manifolds and we refer to them as contact manifolds. Let \((M,\theta)\) be a contact manifold of dimension \(2n+1\). Around any point \(x\in M\) there exist local coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},z)\), again called canonical or Darboux coordinates, such that \[\theta=dz-p_{i}dq^{i}. \tag{26}\] There exists a distinguished vector field \(R\) on \(M\), called the Reeb vector field, which obeys \[R\lrcorner\theta=1\hskip 28.452756pt\mathrm{and}\hskip 28.452756ptR\lrcorner d\theta=0. \tag{27}\] In canonical coordinates we simply have \(R=\frac{\partial}{\partial z}\). To each \(f\in C^{\infty}(M)\) is assigned a vector field \(X_{f}\) on \(M\), called the contact Hamiltonian vector field for \(f\), according to \[X_{f}\lrcorner\theta=-f\hskip 28.452756pt\mathrm{and}\hskip 28.452756ptX_{f}\lrcorner d\theta=df-(Rf)\theta. 
\tag{28}\] Or equivalently, since \(\mathcal{X}_{\theta}:TM\longrightarrow T^{*}M\) defined by \(\mathcal{X}_{\theta}(X)=X\lrcorner d\theta+(X\lrcorner\theta)\theta\) is a bundle isomorphism [29], \(X_{f}\) is the only vector field such that \[X_{f}\lrcorner d\theta+(X_{f}\lrcorner\theta)\theta=df-(Rf+f)\theta. \tag{29}\] In canonical coordinates we have \[X_{f}=\frac{\partial f}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial f}{\partial q^{i}}+p_{i}\frac{\partial f}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial f}{\partial p_{i}}-f\right)\frac{\partial}{\partial z}. \tag{30}\] It can be checked that the assignment \(f\longmapsto X_{f}\) is linear, that is \[X_{f+\alpha g}=X_{f}+\alpha X_{g}, \tag{31}\] \(\forall f,g\in C^{\infty}(M)\) and \(\forall\alpha\in\mathbb{R}\). Symplectic and cosymplectic manifolds are Poisson manifolds (symplectic and cosymplectic structures define Poisson brackets), but a contact manifold is strictly a Jacobi manifold, i.e., a contact structure on a manifold defines a Jacobi bracket. Given \(f,g\in C^{\infty}(M)\) the Jacobi bracket of \(f\) and \(g\) is defined by \[\{f,g\}=X_{g}f+fRg, \tag{32}\] or equivalently \[\{f,g\}=\theta([X_{f},X_{g}]). \tag{33}\] In canonical coordinates we have \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}-\frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}+\frac{\partial f}{\partial z}\left(p_{i}\frac{\partial g}{\partial p_{i}}-g\right)-\frac{\partial g}{\partial z}\left(p_{i}\frac{\partial f}{\partial p_{i}}-f\right). \tag{34}\] In the contact case the assignment \(f\mapsto X_{f}\) also defines a Lie algebra antihomomorphism between the Lie algebras \((C^{\infty}(M),\{,\})\) and \((\mathfrak{X}(M),[,])\); indeed, for \(f,g\in C^{\infty}(M)\) we have \[-[X_{f},X_{g}]=X_{\{f,g\}}. \tag{35}\] For details see [29]. Hamiltonian systems on contact manifolds are called contact Hamiltonian systems. 
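A standard illustration, sketched here numerically (our sketch, assuming a constant damping parameter \(\gamma\)): for \(H=\frac{p^{2}}{2}+\frac{q^{2}}{2}+\gamma z\) on \(\mathbb{R}^{3}\), formula (30) with \(f=H\) yields \(\dot{q}=p\), \(\dot{p}=-q-\gamma p\), \(\dot{z}=\frac{p^{2}}{2}-\frac{q^{2}}{2}-\gamma z\), i.e., the damped harmonic oscillator; moreover, since \(RH=\gamma\) and \(\{H,H\}=0\) by (33), the bracket (32) gives \(X_{H}H=\{H,H\}-HRH=-\gamma H\), so \(H\) decays exponentially along the flow:

```python
import math

# Contact Hamiltonian system on R^3 in Darboux coordinates (q, p, z) with the
# (assumed) Hamiltonian H = p**2/2 + q**2/2 + gamma*z.  Formula (30) applied
# to f = H gives the damped-oscillator field
#   qdot = p,  pdot = -q - gamma*p,  zdot = p**2/2 - q**2/2 - gamma*z,
# and along the flow H decays as H(t) = H(0)*exp(-gamma*t).

gamma = 0.3

def H(s):
    q, p, z = s
    return p ** 2 / 2 + q ** 2 / 2 + gamma * z

def X_H(s):
    q, p, z = s
    return (p, -q - gamma * p, p ** 2 / 2 - q ** 2 / 2 - gamma * z)

def rk4_step(F, s, dt):
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = F(s)
    k2 = F(add(s, k1, dt / 2))
    k3 = F(add(s, k2, dt / 2))
    k4 = F(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

s = (1.0, 0.5, 0.2)
H0, dt, T = H(s), 0.001, 2.0
for _ in range(2000):        # 2000 steps of dt = 0.001, i.e. up to t = T = 2
    s = rk4_step(X_H, s, dt)
H_T = H(s)                   # matches H0 * exp(-gamma * T) to RK4 accuracy
```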
Given \(H\in C^{\infty}(M)\), the dynamics of the dynamical system on \((M,\theta)\) (the phase space) with Hamiltonian function \(H\) is defined by the Hamiltonian vector field \(X_{H}\). \((M,\theta,H)\) is a contact Hamiltonian system with \(n=\frac{dim(M)-1}{2}\) degrees of freedom. In canonical coordinates we have \[X_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}-H\right)\frac{\partial}{\partial z}. \tag{36}\] The trajectories \(\psi(t)=(q^{1}(t),\cdots,q^{n}(t),p_{1}(t),\cdots,p_{n}(t),z(t))\) of the system are the integral curves of \(X_{H}\); they satisfy the dissipative Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p_{i}}=-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right),\hskip 28.452756pt\dot{z}=p_{i}\frac{\partial H}{\partial p_{i}}-H. \tag{37}\] The evolution of a function \(f\in C^{\infty}(M)\) along the trajectories of the system is given by \[\dot{f}=\{f,H\}-fRH. \tag{38}\] So that \(f\) is a constant of motion if and only if \(\{f,H\}-fRH=0\). ### Cocontact geometry This subsection, together with the previous ones, completes the notation and language employed in this paper. Here we briefly review the formalism of time-dependent contact Hamiltonian systems introduced in [35]. **Definition 4**.: _Let \(M\) be a \(2n+2\) dimensional smooth manifold. A cocontact structure on \(M\) is a couple \((\theta,\eta)\) of 1-forms on \(M\) such that \(\eta\) is closed and \(\eta\wedge\theta\wedge(d\theta)^{n}\neq 0\). If \((\theta,\eta)\) is a cocontact structure on \(M\) we say that \((M,\theta,\eta)\) is a cocontact manifold._ Let \((M,\theta,\eta)\) be a cocontact manifold of dimension \(2n+2\). 
Around any point \(x\in M\) there exist local coordinates \((t,q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},z)\), called canonical coordinates or Darboux coordinates, such that \[\theta=dz-p_{i}dq^{i}\hskip 28.452756pt\mbox{and}\hskip 28.452756pt\eta=dt. \tag{39}\] There exist two distinguished vector fields \(R_{z}\) and \(R_{t}\) on \(M\), called the contact Reeb vector field and the time Reeb vector field respectively, such that \[\left\{\begin{array}{l}R_{z}\lrcorner\eta=0\\ R_{z}\lrcorner\theta=1\\ R_{z}\lrcorner d\theta=0\end{array}\right. \tag{40}\] and \[\left\{\begin{array}{l}R_{t}\lrcorner\eta=1\\ R_{t}\lrcorner\theta=0\\ R_{t}\lrcorner d\theta=0\end{array}\right.. \tag{41}\] In canonical coordinates, we have \(R_{z}=\frac{\partial}{\partial z}\) and \(R_{t}=\frac{\partial}{\partial t}\). To each \(f\in C^{\infty}(M)\) is assigned a vector field \(X_{f}\) on \(M\), called again the contact Hamiltonian vector field for \(f\), according to \[X_{f}\lrcorner\theta=-f,\hskip 28.452756ptX_{f}\lrcorner d\theta=df-(R_{z}f)\theta-(R_{t}f)\eta\hskip 28.452756pt\mbox{and}\hskip 28.452756ptX_{f}\lrcorner\eta=0. \tag{42}\] Or equivalently, since \(\mathcal{X}_{\theta,\eta}:TM\longrightarrow T^{*}M\) defined by \(\mathcal{X}_{\theta,\eta}(X)=X\lrcorner d\theta+(X\lrcorner\theta)\theta+(X\lrcorner\eta)\eta\) is a bundle isomorphism, \(X_{f}\) is the only vector field such that \[X_{f}\lrcorner d\theta+(X_{f}\lrcorner\theta)\theta+(X_{f}\lrcorner\eta)\eta=df-(R_{z}f+f)\theta-(R_{t}f)\eta. \tag{43}\] In canonical coordinates, we have \[X_{f}=\frac{\partial f}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial f}{\partial q^{i}}+p_{i}\frac{\partial f}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial f}{\partial p_{i}}-f\right)\frac{\partial}{\partial z}. \tag{44}\] We can observe that the assignment \(f\longmapsto X_{f}\) is linear, that is \[X_{f+\alpha g}=X_{f}+\alpha X_{g}, \tag{45}\] \(\forall f,g\in C^{\infty}(M)\) and \(\forall\alpha\in\mathbb{R}\). 
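The defining relations (42) and the coordinate expression (44) can be made concrete numerically; the following sketch (with an arbitrarily chosen test function, our illustration and not from [35]) verifies \(X_{f}\lrcorner\theta=-f\) and \(X_{f}\lrcorner\eta=0\) at a sample point:

```python
# Numerical sanity check of eq. (42) in Darboux coordinates (t, q, p, z) with
# n = 1, for the (arbitrarily chosen) test function f = t*q*p + z**2.
# Eq. (44) gives the components of X_f (no d/dt part, since X_f . eta = 0);
# with theta = dz - p*dq and eta = dt we verify
#   theta(X_f) = X_f(z) - p*X_f(q) = -f    and    eta(X_f) = X_f(t) = 0.

def f(t, q, p, z):
    return t * q * p + z ** 2

def partials(t, q, p, z, h=1e-6):
    """Central-difference partial derivatives (f_t, f_q, f_p, f_z)."""
    return ((f(t + h, q, p, z) - f(t - h, q, p, z)) / (2 * h),
            (f(t, q + h, p, z) - f(t, q - h, p, z)) / (2 * h),
            (f(t, q, p + h, z) - f(t, q, p - h, z)) / (2 * h),
            (f(t, q, p, z + h) - f(t, q, p, z - h)) / (2 * h))

t0, q0, p0, z0 = 0.7, 1.3, -0.4, 2.1
f_t, f_q, f_p, f_z = partials(t0, q0, p0, z0)

# Components of X_f from eq. (44):
Xq = f_p
Xp = -(f_q + p0 * f_z)
Xz = p0 * f_p - f(t0, q0, p0, z0)
Xt = 0.0

theta_of_X = Xz - p0 * Xq   # should equal -f at the sample point
eta_of_X = Xt               # should equal 0
```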
Like contact manifolds, cocontact manifolds are Jacobi manifolds; given \(f,g\in C^{\infty}(M)\) the Jacobi bracket of \(f\) and \(g\) is defined by \[\{f,g\}=X_{g}f+fR_{z}g. \tag{46}\] In canonical coordinates, we have \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}-\frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}+\frac{\partial f}{\partial z}\left(p_{i}\frac{\partial g}{\partial p_{i}}-g\right)-\frac{\partial g}{\partial z}\left(p_{i}\frac{\partial f}{\partial p_{i}}-f\right). \tag{47}\] Of course, the assignment \(f\mapsto X_{f}\) also defines a Lie algebra antihomomorphism between the Lie algebras \((C^{\infty}(M),\{,\})\) and \((\mathfrak{X}(M),[,])\); indeed, for \(f,g\in C^{\infty}(M)\) we have \[-[X_{f},X_{g}]=X_{\{f,g\}}. \tag{48}\] Time-dependent contact Hamiltonian systems are defined under the formalism of cocontact geometry. Given \(H\in C^{\infty}(M)\), the dynamics of the Hamiltonian system on \((M,\theta,\eta)\) (the phase space) with Hamiltonian function \(H\) is defined by the evolution vector field \(E_{H}=X_{H}+R_{t}\). \((M,\theta,\eta,H)\) is a cocontact Hamiltonian system with \(n=\frac{dim(M)-2}{2}\) degrees of freedom. In canonical coordinates \[E_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}-H\right)\frac{\partial}{\partial z}+\frac{\partial}{\partial t}. \tag{49}\] The trajectories \(\psi(s)=(t(s),q^{1}(s),\cdots,q^{n}(s),p_{1}(s),\cdots,p_{n}(s),z(s))\) of the system are the integral curves of \(E_{H}\); they satisfy the equations \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p}_{i}=-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right),\hskip 28.452756pt\dot{z}=p_{i}\frac{\partial H}{\partial p_{i}}-H,\hskip 28.452756pt\dot{t}=1. 
\tag{50}\] Since \(\dot{t}=1\), we can take \(t=s\), which implies that the temporal parameter for the system is \(t\); that is, the trajectories of the system are parametrized by \(t\), \[\psi(t)=(t,q^{1}(t),\cdots,q^{n}(t),p_{1}(t),\cdots,p_{n}(t),z(t)) \tag{51}\] and they obey the dissipative Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p}_{i}=-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right),\hskip 28.452756pt\dot{z}=p_{i}\frac{\partial H}{\partial p_{i}}-H. \tag{52}\] The evolution of a function \(f\in C^{\infty}(M)\) along the trajectories of the system is given by \[\dot{f}=E_{H}f=X_{H}f+R_{t}f=\{f,H\}-fR_{z}H+R_{t}f. \tag{53}\] So that \(f\) is a constant of motion if and only if \(\{f,H\}-fR_{z}H+R_{t}f=0\). ## 4 Particular integrals and particular integrability for Hamiltonian systems ### Symplectic Hamiltonian systems Let \((M,\omega,H)\) be a Hamiltonian system with \(n\) degrees of freedom (\(\dim(M)=2n\)). The dynamical vector field is the Hamiltonian vector field \(X_{H}\), so a particular integral of \((M,\omega,H)\) is a function \(f\in C^{\infty}(M)\) such that \(X_{H}f=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\); in terms of the Poisson bracket we have that \(f\) is a particular integral if and only if \(\{f,H\}=af\) (this is the original notion of particular integral introduced in [11] and presented under the symplectic framework in [12]). Let us suppose that \(f\in C^{\infty}(M)\) is a particular integral of \((M,\omega,H)\); we know that we can look for the trajectories of the system that live in \(M_{f}=\{\,x\in M:\;f(x)=0\,\}\), which, provided that \(0\) is a regular value of \(f\), is a smooth submanifold of \(M\) of dimension \(2n-1\), and these trajectories are (locally) solutions of a system of \(2n-1\) differential equations. 
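A minimal worked example (ours, not taken from [12]): for one degree of freedom take \(H=\frac{p^{2}-q^{2}}{2}\) and \(f=p-q\); then \(\{f,H\}=(-1)p-(1)(-q)=q-p=-f\), so \(f\) is a particular integral with \(a=-1\). Along any trajectory \(f(t)=f(0)e^{-t}\), and trajectories starting on \(M_{f}=\{p=q\}\) remain there, where the reduced dynamics is \(\dot{q}=q\):

```python
import math

# Symplectic example with one degree of freedom: H = (p**2 - q**2)/2 on R^2,
# f = p - q.  Then {f, H} = (-1)*p - (1)*(-q) = q - p = -f, so f is a
# particular integral with a = -1, and fdot = -f gives f(t) = f(0)*exp(-t).

def X_H(s):
    q, p = s
    return (p, q)   # qdot = dH/dp = p,  pdot = -dH/dq = q

def rk4_step(F, s, dt):
    def add(a, b, c):
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = F(s)
    k2 = F(add(s, k1, dt / 2))
    k3 = F(add(s, k2, dt / 2))
    k4 = F(add(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

dt = 0.001

# Generic initial data: f decays exponentially.
s = (1.0, 3.0)                   # f(0) = 2
for _ in range(1000):            # integrate up to t = 1
    s = rk4_step(X_H, s, dt)
f_T = s[1] - s[0]                # close to 2*exp(-1)

# Initial data on M_f = {p = q}: the trajectory stays on M_f and the
# reduced motion is qdot = q, i.e. q(t) = q(0)*exp(t).
s2 = (0.5, 0.5)
for _ in range(1000):
    s2 = rk4_step(X_H, s2, dt)
```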
In canonical coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n})\) on \((M,\omega)\), the trajectories of the system are the solutions of the Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p}_{i}=-\frac{\partial H}{\partial q^{i}}\hskip 28.452756pt;\hskip 28.452756pti=1,2,3,\ldots,n. \tag{54}\] In [12] it is shown that the trajectories of \((M,\omega,H)\) that live in \(M_{f}=\{\,x\in M:\;f(x)=0\,\}\) are solutions of the system of \(2n-1\) differential equations \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\big{|}_{f=0}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\big{|}_{f=0}\\ \dot{Q}^{n}=\frac{\partial K}{\partial f}\big{|}_{f=0}\\ \dot{P}_{1}=-\frac{\partial K}{\partial Q^{1}}\big{|}_{f=0}\\ \vdots\\ \dot{P}_{n-1}=-\frac{\partial K}{\partial Q^{n-1}}\big{|}_{f=0}\end{array}\right., \tag{55}\] where \((Q,P)\) are canonical coordinates on \((M,\omega)\) such that \(P_{n}=f\) (such a canonical transformation can be described by a generating function of the form \(F=F_{2}(q^{1},\cdots,q^{n},P_{1},\cdots,P_{n-1},f)-Q^{i}P_{i}\)[4, 36]), \(K\) is the Hamiltonian function expressed in the coordinates \((Q,P)\), and \((Q^{1},\cdots,Q^{n},P_{1},\cdots,P_{n-1})\) are local coordinates on \(M_{f}\). As it is remarked in [12], system (55) is not a system of Hamilton equations of motion since it is a system of an odd number of differential equations. Nevertheless, in [12] it is shown that the solutions of the system (55) can be found by solving a system of \(2n-2\) Hamilton equations of motion and quadratures; we present the most important aspects of this reduction process, for details see [12]. The system (55) is the system of equations of motion for the dynamical system defined by \(X_{H}\) on \(M_{f}\), which is not Hamiltonian. 
Some remarks are in order: * Since we are taking \(P_{n}=f=0\), we have \(\dot{P}_{n}=-\frac{\partial K}{\partial Q^{n}}\big{|}_{{}_{f=0}}=0\), so the function \(K\big{|}_{{}_{M_{f}}}\) does not depend on the coordinate \(Q^{n}\). Therefore, the time evolution of each coordinate \(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1}\) on \(M_{f}\) is independent of \(Q^{n}\). * Because \(P_{n}=f\) is not a coordinate on \(M_{f}\), formally the term \(\frac{\partial K}{\partial f}\big{|}_{{}_{f=0}}\) cannot be calculated as a derivative on \(M_{f}\). In fact, it is a function defined on \(M\) restricted to \(M_{f}\). Hence, it follows that the time evolution of \(Q^{n}\), \(\dot{Q}^{n}=\frac{\partial K}{\partial f}\big{|}_{{}_{f=0}}\), may depend on \(Q^{n}\) as well. * One can solve the original system (55) by solving first the reduced system \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\big{|}_{{}_{f=0}}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\big{|}_{{}_{f=0}}\\ \dot{P}_{1}=-\frac{\partial K}{\partial Q^{1}}\big{|}_{{}_{f=0}}\\ \vdots\\ \dot{P}_{n-1}=-\frac{\partial K}{\partial Q^{n-1}}\big{|}_{{}_{f=0}}\end{array}\right., \tag{56}\] and afterwards integrating the equation \(\dot{Q}^{n}=\frac{\partial K}{\partial f}(Q^{1},\cdots,Q^{n-1},Q^{n},P_{1},\cdots,P_{n-1})\). System (56) is a system of Hamilton equations of motion in the coordinates \((Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1})\), which are coordinates on the level submanifold \(M_{f,Q}=\{x\in M:\ f(x)=0,\ Q^{n}(x)=\mbox{constant}\}\). Indeed, \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}})\) is a symplectic manifold of dimension \(2n-2\) and system (56) is the system of Hamilton equations of motion in the lower-dimensional phase space \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}})\) with Hamiltonian function \(K\big{|}_{{}_{M_{f}}}=K\big{|}_{{}_{M_{f}}}(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1})\). 
The dynamics defined by the Hamiltonian system \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}},K\big{|}_{{}_{M_{f}}})\) is the projection of the dynamics of \(X_{H}\big{|}_{{}_{M_{f}}}\) onto the submanifold \(M_{f,Q}\) in the sense that if \(\pi:M_{f}\longrightarrow M_{f,Q}\) is the projection map from \(M_{f}\) to the submanifold \(M_{f,Q}\) defined in the canonical coordinates \((Q,P)\) by \[\pi(Q^{1},\cdots,Q^{n-1},Q^{n},P_{1},\cdots,P_{n-1})=(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1}), \tag{57}\] then the trajectories of the system \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}},K\big{|}_{{}_{M_{f}}})\) are images by \(\pi\) of trajectories of the system defined by \(X_{H}\) on \(M_{f}\), i.e., \(\gamma(t)=(Q^{1}(t),\cdots,Q^{n-1}(t),P_{1}(t),\cdots,P_{n-1}(t))\) is a trajectory of the system \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}},K\big{|}_{{}_{M_{f}}})\) if and only if there is a trajectory \(\overline{\gamma(t)}=(Q^{1}(t),\cdots,Q^{n-1}(t),Q^{n}(t),P_{1}(t),\cdots,P_{n-1}(t))\) such that \(\pi(\overline{\gamma(t)})=\gamma(t)\) for every \(t\). To lift the dynamics from \(M_{f,Q}\) to \(M_{f}\) we consider the trajectories \(\gamma(t)=(Q^{1}(t),\cdots,Q^{n-1}(t),P_{1}(t),\cdots,P_{n-1}(t))\) of the system \((M_{f,Q},\omega\big{|}_{{}_{M_{f,Q}}},K\big{|}_{{}_{M_{f}}})\) and construct the trajectories \(\overline{\gamma(t)}=(Q^{1}(t),\cdots,Q^{n-1}(t),Q^{n}(t),P_{1}(t),\cdots,P_{n-1}(t))\) on \(M_{f}\) by integrating the equation \(\dot{Q}^{n}=\frac{\partial K}{\partial f}(Q^{1},\cdots,Q^{n},P_{1},\cdots,P_{n-1})\). It is worth mentioning that this reduction process is possible due to the fact that the time evolution of each coordinate \(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1}\) is independent of \(Q^{n}\). On the other hand, the notion of particular integrability as a more general notion than Liouville integrability has been presented in [12]. Let us see a brief review. 
**Definition 5**.: _We say that the functions \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) are in particular involution if_ \[\{f_{i},f_{j}\}=a_{1}^{ij}f_{1}+\cdots+a_{k}^{ij}f_{k} \tag{58}\] _for some functions \(a_{1}^{ij},\cdots,a_{k}^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{k}=0\)._ It is worth remarking that the functions \(a_{1}^{ij},\cdots,a_{k}^{ij}\in C^{\infty}(M)\) in the previous definition are such that \[\lim_{x\to x_{0}}a_{s}^{ij}(x)\ =\ a_{s}^{ij}(x_{0})\in\mathbb{R}\quad,\qquad \forall\ x_{0}\in M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{k}(x)=0\}\.\] The following theorem is presented as a (partial) generalization of the Liouville theorem on integrability by quadratures. **Theorem 1**.: _If \(H=f_{1},\cdots,f_{n}\in C^{\infty}(M)\) are functionally independent and in particular involution functions, then for the Hamiltonian system \((M,\omega,H)\), the trajectories of the system that live in \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) can be found by quadratures._ **Definition 6**.: _A Hamiltonian system \((M,\omega,H)\) with \(n\) degrees of freedom is said to be particularly integrable if there exist \(n\) independent functions \(H=f_{1},\cdots,f_{n}\in C^{\infty}(M)\) in particular involution._ The proof of Theorem 1 is based on the fact that the Hamiltonian vector fields \(X_{f_{1}|M_{f}},\cdots,X_{f_{n}|M_{f}}\), where \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) is a smooth submanifold of \(M\) of dimension \(dim(M)-n=2n-n=n\), are symmetries of \(X_{H}|_{M_{f}}\) and generate an Abelian (therefore solvable) Lie algebra with the Lie bracket of vector fields; then, by Lie's theorem on integrability, the solutions of the equations of motion of the dynamical system defined by \(X_{H}|_{M_{f}}\) can be found by quadratures. The concept of solvable Lie algebra is more general than that of Abelian Lie algebra; indeed, every Abelian Lie algebra is trivially a solvable Lie algebra [37, 38]. 
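Definition 5 can be checked numerically for concrete functions. The following sketch uses the made-up pair \(f_{1}=p\), \(f_{2}=qp\) on \(\mathbb{R}^{2}\) (an illustration, not taken from the text): since \(\{f_{1},f_{2}\}=-p=(-1)f_{1}+0\cdot f_{2}\), the two functions are in particular involution with \(a_{1}^{12}=-1\), \(a_{2}^{12}=0\), both real-valued on \(f_{1}=f_{2}=0\). The canonical Poisson bracket is approximated by central differences:

```python
def poisson_bracket(f, g, q, p, h=1e-5):
    # Central-difference approximation of the canonical Poisson bracket
    # on R^2: {f, g} = (df/dq)(dg/dp) - (df/dp)(dg/dq).
    dfq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    dfp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dgq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dgp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return dfq * dgp - dfp * dgq

f1 = lambda q, p: p        # f1 = p
f2 = lambda q, p: q * p    # f2 = q*p  (hypothetical pair for illustration)

# Verify {f1, f2} = (-1)*f1 + 0*f2 at a few sample points.
for q, p in [(0.3, -1.2), (2.0, 0.5), (-1.0, 3.0)]:
    lhs = poisson_bracket(f1, f2, q, p)
    rhs = -1.0 * f1(q, p)
    assert abs(lhs - rhs) < 1e-8
```

Since both functions are polynomials of low degree, the central differences are essentially exact, and the identity holds at every sample point, not just on \(f_{1}=f_{2}=0\).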
Let \(v\) be a smooth vector field on \(\mathbb{R}^{n}\); Lie's theorem on integrability [2, 17, 39, 40, 41] establishes sufficient conditions for the dynamical system defined by the vector differential equation \(\dot{x}=v(x,t)\) to be integrable by quadratures. It reads as follows: **Theorem 2**.: _Let \(u_{1},\ldots,u_{n}\) be linearly independent, possibly time-dependent smooth vector fields on \(\mathbb{R}^{n}\). If \(u_{1},\ldots,u_{n}\) are symmetries of the possibly time-dependent smooth vector field \(v\) (i.e., \([u_{i},v]=0\)) and they generate a solvable Lie algebra with the Lie bracket \([,]\) of vector fields, then the solutions of the vector differential equation \(\dot{x}=v(x,t)\) can be found by quadratures._ Finally, in [12] it is shown that, like the involution condition for Liouville integrability, the condition for particular integrability is also a maximal condition in the following sense. **Proposition 1**.: _For a given symplectic Hamiltonian system we have that the maximum number of functionally independent functions in particular involution (satisfying condition (58)) is equal to the number of degrees of freedom._ ### Cosymplectic Hamiltonian systems Let \((M,\Omega,\eta,H)\) be a cosymplectic Hamiltonian system with \(n\) degrees of freedom (\(\dim(M)=2n+1\)). In order to introduce the notion of particular integrability in the cosymplectic formalism, we follow the spirit of the original notion, i.e., a particular integral must be characterized by the condition \(\{f,H\}=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\). So we consider a particular integral of \((M,\Omega,\eta,H)\) as a function \(f\in C^{\infty}(M)\) such that \(X_{H}f=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\); in terms of the Poisson bracket we have that \(f\) is a particular integral if and only if \(\{f,H\}=af\). 
It is important to remark that in the cosymplectic framework, the dynamical vector field of the Hamiltonian system \((M,\Omega,\eta,H)\) is the evolution vector field \(E_{H}=X_{H}+R\) (for details see section 3.2), so in order to be able to look for the trajectories of the system that live in \(M_{f}=\{\ x\in M:\ f(x)\,=\,0\,\}\) we must have \(E_{H}f|_{f=0}=0\), which is fulfilled by requiring that \(Rf=0\) (\(f\) is time-independent). In conclusion, we propose the following definition. **Definition 7**.: _A particular integral of \((M,\Omega,\eta,H)\) is a function \(f\in C^{\infty}(M)\) such that \(Rf=0\) and \(X_{H}f=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\)._ Let us suppose that \(f\in C^{\infty}(M)\) is a particular integral of \((M,\Omega,\eta,H)\); we know that we can look for the trajectories of the system that live in \(M_{f}=\{\,x\in M:\ f(x)\,=\,0\,\}\), which, provided that \(0\) is a regular value of \(f\), is a smooth submanifold of dimension \(2n+1-1=2n\). In canonical coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},t)\) on \((M,\Omega,\eta)\), the trajectories of the system are the solutions of the Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p_{i}}=- \frac{\partial H}{\partial q^{i}}\hskip 28.452756pt;\hskip 28.452756pt i=1,2, \ldots,n. 
\tag{59}\] Let us suppose we can find a canonical transformation \((q,p,t)\mapsto(Q,P,t)\) such that \(P_{n}=f\); then the Hamilton equations of motion in the new coordinates \((Q,P,t)\) are \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\\ \dot{Q}^{n}=\frac{\partial K}{\partial f}\\ \dot{P}_{1}=-\frac{\partial K}{\partial Q^{1}}\\ \vdots\\ \dot{P}_{n-1}=-\frac{\partial K}{\partial Q^{n-1}}\\ \dot{P}_{n}=\dot{f}=-\frac{\partial K}{\partial Q^{n}}\\ \end{array}\right., \tag{60}\] with \(K=K(Q,P,t)\) the new Hamiltonian function. Such a canonical transformation can be described by a generating function of the form \(F=F_{2}(q^{1},\cdots,q^{n},P_{1},\cdots,P_{n-1},f,t)-Q^{i}P_{i}\)[4, 36]. Since we are looking for the trajectories that live in \(M_{f}\), we set up \(f=0\). Therefore, at \(f=0\) the system (60) reduces to \(P_{n}=f=0\) and system (55) (formally the reduced system is not exactly system (55), since in this case the function \(K\) is a possibly time-dependent function; nevertheless it has exactly the same form, so we keep referring to it as system (55)). In this case, system (55) is the non-autonomous system of equations of motion for the dynamical system defined by \(X_{H}|_{M_{f}}\), i.e., the mechanical system defined by \(X_{H}\) on \(M_{f}\), which is not Hamiltonian. The same remarks as in the symplectic case are valid in the cosymplectic framework: the function \(K\big{|}_{M_{f}}\) does not depend on the coordinate \(Q^{n}\), therefore the time evolution of each coordinate \(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1}\) on \(M_{f}\) is independent of \(Q^{n}\); the time evolution of \(Q^{n}\), \(\dot{Q}^{n}=\frac{\partial K}{\partial f}\big{|}_{f=0}\), may depend on \(Q^{n}\) as well. 
So one can solve the original system (55) by solving first the reduced system (56) and afterwards integrating the equation \(\dot{Q}^{n}=\frac{\partial K}{\partial f}(Q^{1},\cdots,Q^{n-1},Q^{n},P_{1}, \cdots,P_{n-1},t)\). \((Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1},t)\) are coordinates on the level submanifold \(M_{f,Q}=\{x\in M:\ f(x)=0,\ Q^{n}(x)=\text{constant}\}\), and system (56) is the system of Hamilton equations of motion for the Hamiltonian system with phase space \((M_{f,Q},\Omega\big{|}_{M_{f,Q}},\eta\big{|}_{M_{f,Q}})\) and Hamiltonian function \(K\big{|}_{M_{f}}=K\big{|}_{M_{f}}(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1},t)\). As in the symplectic framework, the dynamics defined by the cosymplectic Hamiltonian system \((M_{f,Q},\Omega\big{|}_{M_{f,Q}},\eta\big{|}_{M_{f,Q}},K\big{|}_{M_{f}})\) with \(n-1\) degrees of freedom is the projection of the dynamics of \(X_{H}\big{|}_{M_{f}}\) into the submanifold \(M_{f,Q}\). For cosymplectic Hamiltonian systems the notion of Liouville integrability is completely analogous to the one in the symplectic case, namely, for a cosymplectic Hamiltonian system with \(n\) degrees of freedom, the existence of \(n\) functionally independent constants of motion in involution allows one to find the solutions of the Hamilton equations of motion by quadratures [42, 43, 44]. Now we introduce the notion of particular integrability in the framework of cosymplectic geometry. Observe that the concept of particular involution only involves the Poisson bracket, so in the cosymplectic case we have the same notion, i.e., the functions \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) are in particular involution if they satisfy condition (58) for some functions \(a_{1}^{ij},\cdots,a_{k}^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{k}=0\). Eventually, we arrive at the following theorem analogous to Theorem 1. 
**Theorem 3**.: _Let \(f_{1},\cdots,f_{n}\in C^{\infty}(M)\) be functionally independent and in particular involution functions such that \(Rf_{i}=0\). If for each \(i\in\{1,\cdots,n\}\), \(\{f_{i},H\}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) for some \(c^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{n}=0\), then the trajectories of the Hamiltonian system \((M,\Omega,\eta,H)\) that live in \(M_{f}=\{x\in M:f_{1}(x)=\cdots=f_{n}(x)=0\}\) can be found by quadratures._ Observe that \(f_{1},\cdots,f_{n}\) in theorem 3 may be particular integrals or even constants of motion in particular involution; of course, we can see that the condition \(\{f_{i},H\}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) is more general than the condition for a particular integral. We can also observe that we have not taken \(H\) as one of the functions \(f_{1},\cdots,f_{n}\); this is because in the cosymplectic framework \(H\) is not necessarily a particular integral (we may have \(RH\neq 0\)). Before giving the proof of theorem 3 we present a brief review of the Lie integrability by quadratures for a special kind of time-dependent smooth vector fields. For our purposes we need to consider smooth vector fields \(v\) on \(\mathbb{R}^{n}\times\mathbb{R}\) of the form \(v(x,t)=v^{1}(x,t)\frac{\partial}{\partial x^{1}}+\cdots+v^{n}(x,t)\frac{ \partial}{\partial x^{n}}+\frac{\partial}{\partial t}\) (observe that this is the local form of the evolution vector field \(E_{H}\)), with \(t\) the coordinate in \(\mathbb{R}\). If we consider the dynamical system on \(\mathbb{R}^{n}\times\mathbb{R}\) defined by the vector field \(v\), then the equations of motion are \[\left\{\begin{array}{c}\dot{x}^{1}=v^{1}(x^{1},\ldots,x^{n},t),\\ \quad\quad\quad\quad\quad\vdots\\ \dot{x}^{n}=v^{n}(x^{1},\ldots,x^{n},t),\\ \quad\quad\quad\quad\dot{t}=1\end{array}\right. 
\tag{61}\] By integrating the last equation we have the solution for the variable \(t\); then we only have to solve a system of \(n\) differential equations \[\left\{\begin{array}{c}\dot{x}^{1}=v^{1}(x^{1},\ldots,x^{n},t),\\ \quad\quad\quad\quad\quad\vdots\\ \dot{x}^{n}=v^{n}(x^{1},\ldots,x^{n},t),\end{array}\right., \tag{62}\] which can be solved by quadratures provided the existence of \(n\) linearly independent vector fields of the form \(u^{1}(x,t)\frac{\partial}{\partial x^{1}}+\cdots+u^{n}(x,t)\frac{\partial}{ \partial x^{n}}\) that form a solvable Lie algebra of symmetries of the vector field \(\overline{v}=v^{1}(x,t)\frac{\partial}{\partial x^{1}}+\cdots+v^{n}(x,t) \frac{\partial}{\partial x^{n}}\). The following result, well known in the symplectic framework, is fundamental for the proof of theorem 3. **Lemma 2**.: _If \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) are functionally independent functions such that \(Rf_{i}=0\) for \(i=1,\cdots,k\) then the Hamiltonian vector fields \(X_{f_{1}},\ldots,X_{f_{k}}\) are linearly independent._ Proof.: Let us suppose that \(c_{1}X_{f_{1}}+\cdots+c_{k}X_{f_{k}}=0\) for some \(c_{1},\cdots,c_{k}\in\mathbb{R}\). We have that \((c_{1}X_{f_{1}}+\cdots+c_{k}X_{f_{k}})\lrcorner\Omega=c_{1}df_{1}+\cdots+c_{k}df_{k}\) (since \(Rf_{i}=0\)), then \[c_{1}df_{1}+\cdots+c_{k}df_{k}=0, \tag{63}\] which means that at regular points (points where the differentials \(df_{1},\cdots,df_{k}\) are linearly independent) we must have \(c_{1}=\cdots=c_{k}=0\), i.e., the vector fields \(X_{f_{1}},\ldots,X_{f_{k}}\) are linearly independent. Of course, in the symplectic framework this result is obtained from the fact that \(X_{f}\lrcorner\omega=df\) for each \(f\in C^{\infty}(M)\). Now we present the proof of theorem 3. 
Proof.: Since the functions \(f_{1},\cdots,f_{n}\) are functionally independent on \(M\) then \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) is a smooth submanifold of \(M\) of dimension \(dim(M)-n=2n+1-n=n+1\), and since \(Rf_{1}=\cdots=Rf_{n}=0\), i.e., the functions \(f_{1},\cdots,f_{n}\) are time-independent, then around any point in \(M_{f}\) we can find local coordinates \((y^{1},\cdots,y^{n},t)\). On the other hand we have that the evolution vector field is tangent to \(M_{f}\), indeed, \(E_{H}f_{i}=\{f_{i},H\}+Rf_{i}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) which is zero on \(M_{f}\); so the trajectories of the Hamiltonian system \((M,\Omega,\eta,H)\) that live in \(M_{f}\) are integral curves of \(E_{H}|_{M_{f}}\) which in local coordinates \((y^{1},\cdots,y^{n},t)\) on \(M_{f}\) has the form \[E_{H}(y,t)=v^{1}(y,t)\frac{\partial}{\partial y^{1}}+\cdots+v^{n}(y,t)\frac{ \partial}{\partial y^{n}}+\frac{\partial}{\partial t}, \tag{64}\] i.e., the trajectories of the Hamiltonian system \((M,\Omega,\eta,H)\) that live in \(M_{f}\) have the local form \(\gamma(t)=(y^{1}(t),\cdots,y^{n}(t),t)\) where the functions \(y^{1}(t),\cdots,y^{n}(t)\) are solutions of a system of differential equations of the form (62), namely, \[\left\{\begin{array}{c}\dot{y}^{1}=v^{1}(y^{1},\ldots,y^{n},t),\\ \vdots\\ \dot{y}^{n}=v^{n}(y^{1},\ldots,y^{n},t),\end{array}\right., \tag{65}\] which can be solved by quadratures provided the existence of \(n\) linearly independent vector fields of the form \(u^{1}(y,t)\frac{\partial}{\partial y^{1}}+\cdots+u^{n}(y,t)\frac{\partial}{ \partial y^{n}}\) that form a solvable Lie algebra of symmetries of the vector field \(\overline{E}_{H}|_{M_{f}}=v^{1}(y,t)\frac{\partial}{\partial y^{1}}+\cdots+v ^{n}(y,t)\frac{\partial}{\partial y^{n}}\). 
We have that the vector fields \(X_{f_{1}},\ldots,X_{f_{n}}\) are tangent to the submanifold \(M_{f}\) because \(X_{f_{i}}f_{j}=\{f_{j},f_{i}\}\) which is zero on \(M_{f}\), so we can consider the vector fields \(X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) on \(M_{f}\), which have the local form \(u^{1}_{i}(y,t)\frac{\partial}{\partial y^{1}}+\cdots+u^{n}_{i}(y,t)\frac{ \partial}{\partial y^{n}}\) and since \(X_{\{f,g\}}=-[X_{f},X_{g}]\), we have that they generate an Abelian (therefore solvable) Lie algebra with the Lie bracket of vector fields. In addition we have that \(X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) are symmetries of \(\overline{E}_{H}|_{M_{f}}\), indeed, we can observe that \(\overline{E}_{H}|_{M_{f}}=X_{H}|_{M_{f}}\) so \[[\overline{E}_{H}|_{M_{f}},X_{f_{i}}|_{M_{f}}]=[X_{H}|_{M_{f}},X_{f_{i}}|_{M_ {f}}]=X_{\{f_{i},H\}}|_{M_{f}}=0. \tag{66}\] We conclude that the vector fields \(X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) on \(M_{f}\), which have the local form \(u^{1}_{i}(y,t)\frac{\partial}{\partial y^{1}}+\cdots+u^{n}_{i}(y,t)\frac{ \partial}{\partial y^{n}}\), form a solvable Lie algebra of symmetries of the vector field \(\overline{E}_{H}|_{M_{f}}\), therefore the trajectories of the Hamiltonian system \((M,\Omega,\eta,H)\) that live in \(M_{f}\) can be found by quadratures. **Definition 8**.: _A cosymplectic Hamiltonian system \((M,\Omega,\eta,H)\) with \(n\) degrees of freedom is said to be particularly integrable if there exist \(n\) functionally independent functions \(f_{1},\cdots,f_{n}\in C^{\infty}(M)\) in particular involution such that \(Rf_{i}=0\) and for each \(i\in\{1,\cdots,n\}\)\(\{f_{i},H\}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) for some \(c^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{n}=0\)._ We finish this subsection by showing that as in the symplectic framework, the condition for particular integrability is also a maximal condition. 
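A toy time-dependent example in the spirit of Theorem 3 (the Hamiltonian below is a hypothetical illustration, not from the text): take \(H(q,p,t)=p^{2}/2+tqp\) with one degree of freedom. Then \(f=p\) satisfies \(Rf=\partial p/\partial t=0\) and \(\{p,H\}=-tp=(-t)f\), with \(-t\) real-valued on \(p=0\), so the hypotheses hold; on \(M_{f}=\{p=0\}\) the trajectories obey \(\dot{q}=tq\), i.e., \(q(t)=q(0)e^{t^{2}/2}\). A numerical check:

```python
import math

def rhs(t, state):
    # Toy time-dependent Hamiltonian H(q, p, t) = p**2/2 + t*q*p.
    # f = p is time-independent (R p = 0) and {p, H} = -dH/dq = -t*p,
    # so f satisfies the hypothesis of the theorem with c = -t.
    q, p = state
    return [p + t * q,   # dq/dt = dH/dp
            -t * p]      # dp/dt = -dH/dq

def rk4_nonautonomous(rhs, t, state, dt, steps):
    # Fourth-order Runge-Kutta for a non-autonomous system.
    for _ in range(steps):
        k1 = rhs(t, state)
        k2 = rhs(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
        k3 = rhs(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
        k4 = rhs(t + dt, [s + dt * k for s, k in zip(state, k3)])
        state = [s + dt * (a + 2 * b + 2 * c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
        t += dt
    return state

# On M_f = {p = 0} the reduced equation is dq/dt = t*q, so q(t) = q0*exp(t**2/2).
q, p = rk4_nonautonomous(rhs, 0.0, [1.0, 0.0], 1e-3, 1000)  # integrate to t = 1
assert abs(p) < 1e-12                       # the flow stays on M_f
assert abs(q - math.exp(0.5)) < 1e-6        # q(1) = exp(1/2)
```

Note that \(p\) is not a constant of motion away from \(p=0\) (it decays or grows depending on the sign of \(t\)); only the trajectories living in \(M_{f}\) are captured, which is precisely the "particular" character of the construction.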
**Proposition 2**.: _For a given cosymplectic Hamiltonian system we have that the maximum number of functionally independent functions \(f_{i}\) in particular involution with \(Rf_{i}=0\) is equal to the number of degrees of freedom of the system._ Proof.: The proof is completely analogous to the one given in [12] for the symplectic case. Let us suppose that for a cosymplectic Hamiltonian system with \(n\) degrees of freedom there exist more than \(n\) functionally independent functions \(f_{1},\cdots,f_{s}\) with \(s>n\) and \(Rf_{i}=0\) that satisfy condition (58). Since the functions \(f_{1},\cdots,f_{s}\) are functionally independent on \(M\) then \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{s}(x)=0\}\) is a smooth submanifold of \(M\) of dimension \(dim(M)-s=2n+1-s<s\). The vector fields \(X_{f_{1}},\ldots,X_{f_{s}}\) are tangent to the submanifold \(M_{f}\) because \(X_{f_{i}}f_{j}=\{f_{j},f_{i}\}\) which is zero on \(M_{f}\), so we can consider the vector fields \(X_{f_{1}}|_{M_{f}},\ldots,X_{f_{s}}|_{M_{f}}\) on \(M_{f}\); we have that they are linearly independent at any regular point \(p\in M_{f}\), so they generate a vector subspace of \(T_{p}M_{f}\) of dimension \(s\), but the dimension of \(T_{p}M_{f}\) is \(2n+1-s\), which is less than \(s\), so we have a contradiction; we conclude that the maximum number of functionally independent functions in particular involution is \(n\) (the number of degrees of freedom). ### Contact Hamiltonian systems Let \((M,\theta,H)\) be a contact Hamiltonian system with \(n\) degrees of freedom (\(\dim(M)=2n+1\)). We introduce the notion of particular integral as follows. 
**Definition 9**.: _A particular integral of \((M,\theta,H)\) is a function \(f\in C^{\infty}(M)\) such that \(X_{H}f=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\)._ We can observe that with this definition we still have the original characterization \(\{f,H\}=af\) (where in this case \(\{,\}\) defines a Jacobi bracket, see section 3.3 for details). Indeed, we know that \(X_{H}f=\{f,H\}-fRH\), so \(f\in C^{\infty}(M)\) is a particular integral if and only if \(af=\{f,H\}-fRH\) for some function \(a\in C^{\infty}(M)\), if and only if \(\{f,H\}=af+fRH=bf\) with \(b=a+RH\in C^{\infty}(M)\). In [13, 14, 18] the concept of constant of motion (conserved quantity) for the case of dissipative systems is generalized to the concept of dissipated quantity as follows. **Definition 10**.: _A dissipated quantity of the contact Hamiltonian system \((M,\theta,H)\) is a function \(f\in C^{\infty}(M)\) that is dissipated at the same rate as the contact Hamiltonian \(H\), i.e., \(X_{H}f=-fRH\)._ The present notion of particular integral generalizes both concepts of constant of motion and dissipated quantity. Indeed, it is clear that a particular integral \(f\) such that \(X_{H}f=af\) with \(a=0\) is a constant of motion and with \(a=-RH\) is a dissipated quantity. Let us suppose that \(f\in C^{\infty}(M)\) is a particular integral of \((M,\theta,H)\), we know that we can look for the trajectories of the system that live in \(M_{f}=\{\,x\in M:\ f(x)\,=\,0\,\}\), which provided that \(0\) is a regular value of \(f\), is a smooth submanifold of dimension \(2n+1-1=2n\). 
In canonical coordinates \((q^{1},\cdots,q^{n},p_{1},\cdots,p_{n},z)\) on \((M,\theta)\), the trajectories of the system are the solutions of the dissipative Hamilton equations of motion \[\dot{q^{i}}=\frac{\partial H}{\partial p_{i}},\hskip 28.452756pt\dot{p_{i}}=- \left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z} \right),\hskip 28.452756pt\dot{z}=p_{i}\frac{\partial H}{\partial p_{i}}-H. \tag{67}\] Canonical transformations in contact mechanics are defined as transformations leaving the contact form invariant [16]; they are a particular case of the so-called contact transformations, which are transformations that leave the contact form invariant up to multiplication by a conformal factor [30, 45]. A canonical transformation of the form \((q,p,z)\mapsto(Q,P,Z)\) such that \(Q_{n}=f\) can be obtained by a generating function of the form \(Z=z-F_{1}(q^{1},\cdots,q^{n},Q^{1},\cdots,Q^{n-1},f)\), where \(F_{1}(q^{1},\cdots,q^{n},Q^{1},\cdots,Q^{n-1},f)\) is the generating function of a symplectic canonical transformation [30]. Let us suppose that we can find a canonical transformation \((q,p,z)\mapsto(Q,P,Z)\) such that \(P_{n}=f\); then the Hamilton equations of motion in the new coordinates \((Q,P,Z)\) are \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\\ \dot{Q}^{n}=\frac{\partial K}{\partial f}\\ \dot{P}_{1}=-\left(\frac{\partial K}{\partial Q^{1}}+P_{1}\frac{\partial K}{ \partial Z}\right)\\ \vdots\\ \dot{P}_{n-1}=-\left(\frac{\partial K}{\partial Q^{n-1}}+P_{n-1}\frac{\partial K }{\partial Z}\right)\\ \dot{P}_{n}=\dot{f}=-\left(\frac{\partial K}{\partial Q^{n}}+f\frac{\partial K }{\partial Z}\right)\\ \dot{Z}=P_{i}\frac{\partial K}{\partial P_{i}}-K\end{array}\right., \tag{68}\] with \(K=K(Q,P,Z)\) the new Hamiltonian function. Since we are looking for the solutions of the Hamilton equations of motion that live in \(M_{f}\), we set up \(f=0\). 
Therefore, at \(f=0\) the system (68) reduces to \(P_{n}=f=0\) and the system \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\big{|}_ {f=0}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\big{|}_{f=0}\\ \dot{Q}^{n}=\frac{\partial K}{\partial f}\big{|}_{f=0}\\ \dot{P}_{1}=-\left(\frac{\partial K}{\partial Q^{1}}+P_{1}\frac{\partial K}{ \partial Z}\right)\big{|}_{f=0}\\ \vdots\\ \dot{P}_{n-1}=-\left(\frac{\partial K}{\partial Q^{n-1}}+P_{n-1}\frac{\partial K }{\partial Z}\right)\big{|}_{f=0}\\ \dot{Z}=\left(P_{i}\frac{\partial K}{\partial P_{i}}-K\right)\big{|}_{f=0}. \end{array}\right. \tag{69}\] So as in the symplectic and cosymplectic frameworks, by means of a particular integral we can find solutions of the Hamilton equations of motion that are solutions of a reduced system of differential equations, i.e., we can find trajectories of a contact Hamiltonian system that are trajectories of a reduced dynamical system. Analogous remarks as in the symplectic framework are in order: * Since we are taking \(P_{n}=f=0\), we have \(\dot{P}_{n}=-\frac{\partial K}{\partial Q^{n}}\big{|}_{{}_{f=0}}=0\), so the function \(K\big{|}_{{}_{M_{f}}}\) does not depend on the coordinate \(Q^{n}\). Therefore, the time evolution of each coordinate \(Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1},Z\) on \(M_{f}\) is independent of \(Q^{n}\). * The time evolution of \(Q^{n}\), \(\dot{Q}^{n}=\frac{\partial K}{\partial f}\big{|}_{{}_{f=0}}\), may depend on \(Q^{n}\) as well. 
* One can solve the original system (69) by solving first the reduced system \[\left\{\begin{array}{c}\dot{Q}^{1}=\frac{\partial K}{\partial P_{1}}\big{|}_ {{}_{f=0}}\\ \vdots\\ \dot{Q}^{n-1}=\frac{\partial K}{\partial P_{n-1}}\big{|}_{{}_{f=0}}\\ \dot{P}_{1}=-\left(\frac{\partial K}{\partial Q^{1}}+P_{1}\frac{\partial K}{ \partial Z}\right)\big{|}_{{}_{f=0}}\\ \vdots\\ \dot{P}_{n-1}=-\left(\frac{\partial K}{\partial Q^{n-1}}+P_{n-1}\frac{ \partial K}{\partial Z}\right)\big{|}_{{}_{f=0}}\\ \dot{Z}=\left(P_{i}\frac{\partial K}{\partial P_{i}}-K\right)\big{|}_{{}_{f=0 }}\end{array}\right.\qquad(i=1,\cdots,n-1),\] (70) and afterwards integrating the equation \(\dot{Q}^{n}=\frac{\partial K}{\partial f}(Q^{1},\cdots,Q^{n-1},Q^{n},P_{1}, \cdots,P_{n-1},Z)\). System (70) is the system of Hamilton equations of motion for the Hamiltonian function \(K\big{|}_{{}_{M_{f}}}=K\big{|}_{{}_{M_{f}}}(Q^{1},\cdots,Q^{n-1},P_{1},\cdots, P_{n-1},Z)\) in the coordinates \((Q^{1},\cdots,Q^{n-1},P_{1},\cdots,P_{n-1},Z)\), which are coordinates on the level submanifold \(M_{f,Q}=\left\{x\in M:\ f(x)=0,\ Q^{n}(x)=\text{constant}\right\}\). The dynamics defined by the Hamiltonian system \((M_{f,Q},\theta\big{|}_{{}_{M_{f,Q}}},K\big{|}_{{}_{M_{f}}})\) is the projection of the dynamics of \(X_{H}\big{|}_{{}_{M_{f}}}\) into the submanifold \(M_{f,Q}\) in the same sense as in the symplectic case. **Example 1**.: _Let us consider the motion of a particle in a vertical plane under the action of constant gravity with air friction. In canonical coordinates \((x,y,p_{x},p_{y},z)\) the Hamiltonian function is \(H=\frac{1}{2m}(p_{x}^{2}+p_{y}^{2})+mgy+\gamma z\)[18]. We have that \(f=p_{x}\) is a dissipated quantity, therefore a particular integral; indeed, \(\{p_{x},H\}=0\). 
By taking \(p_{x}=0\) we have that the system of Hamilton equations of motion reduces to_ \[\left\{\begin{array}{c}\dot{y}=\frac{\partial K}{\partial p_{y}}\\ \dot{p}_{y}=-\left(\frac{\partial K}{\partial y}+p_{y}\frac{\partial K}{ \partial z}\right)\\ \dot{z}=p_{y}\frac{\partial K}{\partial p_{y}}-K\end{array}\right., \tag{71}\] _and afterwards integrating the equation \(\dot{x}=\frac{\partial H}{\partial p_{x}}\), where \(K=H\big{|}_{{}_{p_{x}=0}}=\frac{1}{2m}p_{y}^{2}+mgy+\gamma z\)._ Now we address the problem of introducing the notion of particular integrability in the framework of contact geometry. The notion of Liouville integrability in contact Hamiltonian mechanics is a bit different from the one in the symplectic or cosymplectic framework; we know that there are important geometric differences between the phase spaces of contact Hamiltonian systems and symplectic or cosymplectic Hamiltonian systems, namely, symplectic and cosymplectic manifolds are Poisson manifolds, but a contact manifold is strictly a Jacobi manifold. In [46, 47] the following definition is presented: **Definition 11**.: _A contact Hamiltonian system \((M,\theta,H)\) with \(n\) degrees of freedom is said to be completely integrable (in the Liouville sense) if there exist \(n+1\) constants of motion \(H,f_{1},\cdots,f_{n}\) that are independent and in involution._ In this definition the functions \(f_{1},\cdots,f_{k}\) are said to be independent if the corresponding Hamiltonian vector fields \(X_{f_{1}},\cdots,X_{f_{k}}\) are linearly independent. This is not equivalent to the condition that the differentials \(df_{1},\cdots,df_{k}\) are linearly independent, since the latter does not hold when one of the Hamiltonian vector fields is \(\mathbb{R}\)-proportional to the Reeb field \(R\), whose Hamiltonian is the function \(1\)[46], i.e., under this context, the independence of functions is not equivalent to our notion of functional independence. 
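Returning to Example 1, the relation \(X_{H}p_{x}=\{p_{x},H\}-p_{x}RH=-\gamma p_{x}\) can be verified numerically by integrating equations (67) directly. The sketch below uses arbitrarily chosen values \(m=1\), \(g=9.8\), \(\gamma=0.5\) (assumptions for illustration only) and checks that \(p_{x}(t)=p_{x}(0)e^{-\gamma t}\) along the full flow, so in particular \(p_{x}\) stays zero on \(M_{f}\):

```python
import math

m, g, gamma = 1.0, 9.8, 0.5  # arbitrary illustrative values

def contact_rhs(state):
    # Contact Hamilton equations (67) for the Hamiltonian of Example 1:
    # H = (px**2 + py**2)/(2m) + m*g*y + gamma*z.
    x, y, px, py, z = state
    H = (px**2 + py**2) / (2 * m) + m * g * y + gamma * z
    return [px / m,                    # dx/dt  = dH/dpx
            py / m,                    # dy/dt  = dH/dpy
            -gamma * px,               # dpx/dt = -(dH/dx + px*dH/dz)
            -(m * g + gamma * py),     # dpy/dt = -(dH/dy + py*dH/dz)
            (px**2 + py**2) / m - H]   # dz/dt  = p_i dH/dp_i - H

def rk4(rhs, state, dt, steps):
    # Classical fourth-order Runge-Kutta integrator.
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs([s + dt / 2 * k for s, k in zip(state, k1)])
        k3 = rhs([s + dt / 2 * k for s, k in zip(state, k2)])
        k4 = rhs([s + dt * k for s, k in zip(state, k3)])
        state = [s + dt * (a + 2 * b + 2 * c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4)]
    return state

# X_H px = -gamma*px holds everywhere, so px decays exponentially even
# away from M_f; on px = 0 it stays zero, as the reduction requires.
x, y, px, py, z = rk4(contact_rhs, [0.0, 0.0, 3.0, 0.0, 0.0], 1e-3, 2000)  # t = 2
assert abs(px - 3.0 * math.exp(-gamma * 2.0)) < 1e-6
```

This makes concrete the sense in which the dissipated quantity \(p_{x}\) generalizes a constant of motion: it is not conserved, but it decays at exactly the rate \(RH=\gamma\) imposed by the contact structure.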
Contact Hamiltonian systems with Hamiltonian function \(H\) constant are called of Reeb type; in these cases the Hamiltonian vector field \(X_{H}\) is \(\mathbb{R}\)-proportional to the Reeb field \(R\) and the Hamilton equations of motion are trivial, so we exclude this case from the notion of particular integrability. We have the following fundamental result relating functionally independent functions with linearly independent vector fields. **Lemma 3**.: _If \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) are functionally independent functions then the Hamiltonian vector fields \(X_{f_{1}},\ldots,X_{f_{k}}\) are linearly independent._ Proof.: Let us suppose that \(c_{1}X_{f_{1}}+\cdots+c_{k}X_{f_{k}}=0\) for some \(c_{1},\cdots,c_{k}\in\mathbb{R}\). We know that \[(c_{1}X_{f_{1}}+\cdots+c_{k}X_{f_{k}})\lrcorner d\theta=d(c_{1}f_{1}+\cdots+c_{ k}f_{k})-R(c_{1}f_{1}+\cdots+c_{k}f_{k})\theta, \tag{72}\] then \[d(c_{1}f_{1}+\cdots+c_{k}f_{k})-R(c_{1}f_{1}+\cdots+c_{k}f_{k})\theta=0, \tag{73}\] or equivalently \[d(c_{1}f_{1}+\cdots+c_{k}f_{k})=R(c_{1}f_{1}+\cdots+c_{k}f_{k})\theta. \tag{74}\] On the other hand, for any \(g\in C^{\infty}(M)\) we have \(g\theta\wedge d(g\theta)=g^{2}\theta\wedge d\theta\), then \[g\theta\wedge d(g\theta)^{n}=g^{n+1}\theta\wedge d\theta^{n}. \tag{75}\] So if \(g\neq 0\) in an open subset of \(M\) then \(g\theta\) is a contact form on that open submanifold. 
Now if we suppose that \(R(c_{1}f_{1}+\cdots+c_{k}f_{k})\neq 0\) in an open subset of \(M\) then \(R(c_{1}f_{1}+\cdots+c_{k}f_{k})\theta\) is a contact form on that open submanifold, but it cannot be, since \(d(R(c_{1}f_{1}+\cdots+c_{k}f_{k})\theta)=d(d(c_{1}f_{1}+\cdots+c_{k}f_{k}))=0\); so we must have that \(R(c_{1}f_{1}+\cdots+c_{k}f_{k})\) is zero except possibly in a zero measure set, therefore \(d(c_{1}f_{1}+\cdots+c_{k}f_{k})=0\), which implies that \(c_{1}=\cdots=c_{k}=0\), i.e., at regular points (points where the differentials \(df_{1},\cdots,df_{k}\) are linearly independent) the vector fields \(X_{f_{1}},\ldots,X_{f_{k}}\) are linearly independent. We can observe that in the definition of completely integrable contact Hamiltonian systems (in the Liouville sense) the existence of \(n+1\) constants of motion is required and, in addition, the Hamiltonian function is one of them. In general, the Hamiltonian function is not a constant of motion. In fact, contact Hamiltonian systems where the Hamiltonian function is a constant of motion (the Hamiltonian function is invariant under the flow of the Reeb vector field) are called good contact Hamiltonian systems; and completely integrable contact Hamiltonian systems where the constants of motion are also invariant under the flow of the Reeb vector field are called completely good [46]. Even for the notion of non-commutative integrability in the contact case, it is required that the Hamiltonian function is a constant of motion [48, 49]. The solutions of the Hamilton equations of motion of a completely good contact Hamiltonian system can be found by quadratures provided that the constants of motion form a solvable Lie algebra under the Jacobi bracket [17]. We have the following theorem on integrability involving particular involution in the contact framework. **Theorem 4**.: _Let \((M,\theta,H)\) be a good contact Hamiltonian system. 
If \(H=f_{1},\cdots,f_{n}\in C^{\infty}(M)\) are functionally independent and in particular involution functions such that \(Rf_{i}=0\) then the trajectories of the system \((M,\theta,H)\) that live in \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) can be found by quadratures._ Proof.: Since the functions \(f_{1},\cdots,f_{n}\) are functionally independent on \(M\) then \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) is a smooth submanifold of \(M\) of dimension \(dim(M)-n=2n+1-n=n+1\), and since \(Rf_{1}=\cdots=Rf_{n}=0\), i.e., the functions \(f_{1},\cdots,f_{n}\) are \(z\)-independent, then around any point in \(M_{f}\) we can find local coordinates \((y^{1},\cdots,y^{n},z)\). On the other hand we have that the vector fields \(X_{f_{1}},\cdots,X_{f_{n}}\) are tangent to \(M_{f}\), indeed, \(X_{f_{j}}f_{i}=\{f_{i},f_{j}\}-f_{i}Rf_{j}\) which is zero on \(M_{f}\); so the trajectories of the Hamiltonian system \((M,\theta,H)\) that live in \(M_{f}\) are integral curves of \(X_{H=f_{1}}|_{M_{f}}\), i.e., the trajectories of the Hamiltonian system \((M,\theta,H)\) that live in \(M_{f}\) have the local form \(\gamma(t)=(y^{1}(t),\cdots,y^{n}(t),z(t))\) and are solutions of the vector differential equation \(\dot{x}=X_{H}(x)\) with \(x=(y^{1},\ldots,y^{n},z)\) local coordinates on \(M_{f}\), which provided the existence of \(n+1\) linearly independent vector fields on \(M_{f}\) that form a solvable Lie algebra of symmetries of the vector field \(X_{H}(x)\), can be solved by quadratures. In the contact case the assignment \(f\mapsto X_{f}\) defines a Lie algebra antihomomorphism between the Lie algebras \((C^{\infty}(M),\{,\})\) and \((\mathfrak{X}(M),[,])\)[29], indeed, for \(f,g\in C^{\infty}(M)\) we have \[-[X_{f},X_{g}]=X_{\{f,g\}}. 
\tag{76}\] So we have that \(X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) generate an Abelian (therefore solvable) Lie algebra of symmetries of \(X_{H=f_{1}}|_{M_{f}}\) with the Lie bracket of vector fields; in addition, the vector field \(R\) is also tangent to \(M_{f}\) and is a symmetry of \(X_{H=f_{1}}|_{M_{f}}\), indeed, we know that \([R,X_{H}]=X_{\{1,H\}}=0\). We conclude that the vector fields \(R,X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) are linearly independent and generate an Abelian (therefore solvable) Lie algebra of symmetries of \(X_{H=f_{1}}|_{M_{f}}\) with the Lie bracket of vector fields, therefore the trajectories of the Hamiltonian system \((M,\theta,H)\) that live in \(M_{f}\) can be found by quadratures. **Definition 12**.: _A good contact Hamiltonian system \((M,\theta,H)\) with \(n\) degrees of freedom is said to be particularly integrable if there exist \(n\) functionally independent functions \(H=f_{1},\cdots,f_{n}\in C^{\infty}(M)\) in particular involution such that \(Rf_{i}=0\)._ The condition for particular integrability is also a maximal condition. **Proposition 3**.: _For a given good contact Hamiltonian system we have that the maximum number of functionally independent functions \(f_{i}\) in particular involution with \(Rf_{i}=0\) is equal to the number of degrees of freedom of the system._ The proof is completely analogous to the one given in the previous section for the cosymplectic case. ### Cocontact Hamiltonian systems We finish this section by introducing the notions of particular integral and particular integrability in the cocontact framework. The development of this subsection follows the guide of the previous ones. Let \((M,\theta,\eta,H)\) be a cocontact Hamiltonian system with \(n\) degrees of freedom (\(\dim(M)=2n+2\)). We introduce the notion of particular integral as follows. 
**Definition 13**.: _A particular integral of \((M,\theta,\eta,H)\) is a function \(f\in C^{\infty}(M)\) such that \(R_{t}f=0\) and \(X_{H}f=af\) for some function \(a\in C^{\infty}(M)\) with real values on \(f=0\)._ We can observe that, as in the previous subsection, in terms of the Jacobi bracket \(f\) is a particular integral if and only if \(\{f,H\}=af\) for some function \(a\in C^{\infty}(M)\). As in the cosymplectic case, we require \(R_{t}f=0\) in order to be able to look for the trajectories of the system that live in \(M_{f}=\{\,x\in M:\ f(x)\,=\,0\,\}\); indeed, we must have \(E_{H}f|_{f=0}=0\), where \(E_{H}=X_{H}+R_{t}\) is the dynamical vector field (for details see section 3.4). In [15] dissipated quantities are studied for cocontact Hamiltonian systems. **Definition 14**.: _A dissipated quantity of the cocontact Hamiltonian system \((M,\theta,\eta,H)\) is a function \(f\in C^{\infty}(M)\) such that \(E_{H}f=-fR_{z}H\)._ As pointed out in [14, 15], the most general case is \(E_{H}H=-HR_{z}H+R_{t}H\). Thus, whenever \(H\) depends explicitly on \(t\) it is not a dissipated quantity itself. A dissipated quantity \(f\) such that \(R_{t}f=0\) is a particular integral. Let us suppose that \(f\in C^{\infty}(M)\) is a particular integral of \((M,\theta,\eta,H)\). We can look for the trajectories of the system that live in \(M_{f}=\{\,x\in M:\ f(x)\,=\,0\,\}\), which, provided that \(0\) is a regular value of \(f\), is a smooth submanifold of dimension \(2n+2-1=2n+1\). Using a similar procedure as in the contact framework, by means of a particular integral, we can find solutions of the Hamilton equations of motion that are solutions of a reduced system of differential equations, i.e., we can find trajectories of a cocontact Hamiltonian system that are trajectories of a reduced dynamical system. As in the contact case, the notion of Liouville integrability for cocontact Hamiltonian systems is restricted to good Hamiltonian systems.
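The observation above, that a dissipated quantity \(f\) with \(R_{t}f=0\) is a particular integral, can be checked directly from Definitions 13 and 14, using only the decomposition \(E_{H}=X_{H}+R_{t}\):

```latex
% From Definition 14, E_H f = -f R_z H; with R_t f = 0 this gives
\[
X_{H}f \;=\; E_{H}f - R_{t}f \;=\; -fR_{z}H \;=\; af,
\qquad a := -R_{z}H \in C^{\infty}(M),
\]
% and a is real-valued everywhere, in particular on f = 0,
% so f satisfies Definition 13.
```

Hence \(a=-R_{z}H\) plays the role of the multiplier required by Definition 13.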
A cocontact Hamiltonian system is good if the Hamiltonian function is invariant under the flow of the contact Reeb vector field; in this case the Hamiltonian function is not necessarily a constant of motion, indeed, let \((M,\theta,\eta,H)\) be a good cocontact Hamiltonian system, i.e., \(R_{z}H=0\), then \(E_{H}H=R_{t}H\) which is not necessarily zero. We say that a good cocontact Hamiltonian system with \(n\) degrees of freedom is a completely good cocontact Hamiltonian system if there exist \(n\) functionally independent constants of motion \(f_{1},\cdots,f_{n}\) with \(R_{z}f_{i}=0\) that are in involution. The solutions of the Hamilton equations of motion of a completely good cocontact Hamiltonian system can be found by quadratures provided that the constants of motion form a solvable Lie algebra under the Jacobi bracket [17]. We have the following theorem analogous to theorems 1, 3 and 4. **Theorem 5**.: _Let \((M,\theta,\eta,H)\) be a good cocontact Hamiltonian system. Let \(f_{1},\cdots,f_{n}\in C^{\infty}(M)\) be functionally independent functions in particular involution such that \(R_{z}f_{i}=R_{t}f_{i}=0\). If for each \(i\in\{1,\cdots,n\}\), \(\{f_{i},H\}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) for some \(c^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{n}=0\), then the trajectories of the Hamiltonian system \((M,\theta,\eta,H)\) that live in \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) can be found by quadratures._ As in the cosymplectic case, we have not taken \(H\) as one of the functions \(f_{1},\cdots,f_{n}\) because \(R_{t}H\) is not necessarily zero. We have the following result analogous to lemmas 2 and 3. **Lemma 4**.: _If \(f_{1},\cdots,f_{k}\in C^{\infty}(M)\) are functionally independent functions such that \(R_{t}f_{i}=0\) for \(i=1,\cdots,k\), then the Hamiltonian vector fields \(X_{f_{1}},\ldots,X_{f_{k}}\) are linearly independent._ The proof is also analogous to the previous lemmas. Finally, we present the proof of theorem 5.
Proof.: Since the functions \(f_{1},\cdots,f_{n}\) are functionally independent on \(M\) then \(M_{f}=\{x\in M:\ f_{1}(x)=\cdots=f_{n}(x)=0\}\) is a smooth submanifold of \(M\) of dimension \(\dim(M)-n=2n+2-n=n+2\), and since \(R_{z}f_{1}=\cdots=R_{z}f_{n}=R_{t}f_{1}=\cdots=R_{t}f_{n}=0\), i.e., the functions \(f_{1},\cdots,f_{n}\) are \(z\)-independent and time-independent, then around any point in \(M_{f}\) we can find local coordinates \((t,y^{1},\cdots,y^{n},z)\). On the other hand we have that the evolution vector field is tangent to \(M_{f}\), indeed, \(E_{H}f_{i}=\{f_{i},H\}+R_{t}f_{i}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) which is zero on \(M_{f}\); so the trajectories of the Hamiltonian system \((M,\theta,\eta,H)\) that live in \(M_{f}\) are integral curves of \(E_{H}|_{M_{f}}\) which in local coordinates \((t,y^{1},\cdots,y^{n},z)\) on \(M_{f}\) has the form \[E_{H}(t,y,z)=v^{1}(t,y,z)\frac{\partial}{\partial y^{1}}+\cdots+v^{n}(t,y,z)\frac{\partial}{\partial y^{n}}+v^{z}(t,y,z)\frac{\partial}{\partial z}+\frac{\partial}{\partial t}, \tag{77}\] i.e., the trajectories of the Hamiltonian system \((M,\theta,\eta,H)\) that live in \(M_{f}\) have the local form \(\gamma(t)=(t,y^{1}(t),\cdots,y^{n}(t),z(t))\) where the functions \(y^{1}(t),\cdots,y^{n}(t),z(t)\) are solutions of the system of differential equations \[\left\{\begin{array}{l}\dot{y}^{1}=v^{1}(t,y^{1},\ldots,y^{n},z),\\ \vdots\\ \dot{y}^{n}=v^{n}(t,y^{1},\ldots,y^{n},z),\\ \dot{z}=v^{z}(t,y^{1},\ldots,y^{n},z),\end{array}\right. \tag{78}\] which can be solved by quadratures provided the existence of \(n+1\) linearly independent vector fields of the form \(u^{1}(t,y,z)\frac{\partial}{\partial y^{1}}+\cdots+u^{n}(t,y,z)\frac{\partial}{\partial y^{n}}+u^{z}(t,y,z)\frac{\partial}{\partial z}\) that form a solvable Lie algebra of symmetries of the vector field \(\overline{E}_{H}|_{M_{f}}=v^{1}(t,y,z)\frac{\partial}{\partial y^{1}}+\cdots +v^{n}(t,y,z)\frac{\partial}{\partial
y^{n}}+v^{z}(t,y,z)\frac{\partial}{\partial z}\). As in the contact case, we have that \(R_{z},X_{f_{1}}|_{M_{f}},\cdots,X_{f_{n}}|_{M_{f}}\) are linearly independent and generate an Abelian (therefore solvable) Lie algebra of symmetries of \(\overline{E}_{H}|_{M_{f}}=X_{H}|_{M_{f}}\) with the Lie bracket of vector fields. Therefore, the trajectories of the Hamiltonian system \((M,\theta,\eta,H)\) that live in \(M_{f}\) can be found by quadratures. **Definition 15**.: _A good cocontact Hamiltonian system \((M,\theta,\eta,H)\) with \(n\) degrees of freedom is said to be particularly integrable if there exist \(n\) functionally independent functions \(f_{1},\cdots,f_{n}\in C^{\infty}(M)\) in particular involution such that for each \(i\in\{1,\cdots,n\}\), \(R_{z}f_{i}=R_{t}f_{i}=0\) and \(\{f_{i},H\}=c^{i1}f_{1}+\cdots+c^{in}f_{n}\) for some \(c^{ij}\in C^{\infty}(M)\) with real values on \(f_{1}=\cdots=f_{n}=0\)._ Of course, like in the contact case, the condition for particular integrability in the cocontact framework is a maximal condition. ## 5 Conclusions We have introduced the notions of particular integral and particular integrability for classical Hamiltonian systems on cosymplectic, contact and cocontact manifolds. These results extend those presented in [12] for the symplectic case. A constant of motion (a first Liouville integral) can be regarded as a special case of the concept of particular integral. In particular, in the contact framework the present study generalizes the notion of a dissipated quantity. For a not necessarily integrable (in the Liouville sense) cosymplectic, contact or cocontact Hamiltonian system we have shown that, provided the existence of a sufficient number of functionally independent particular integrals, it is possible to find some of the trajectories by quadratures.
If we do not have the number of particular integrals required to achieve particular integrability, we are at least able to reduce the Hamilton equations of motion to a simpler form. For future work, we plan to investigate the existence of infinitesimal symmetries related to particular integrals in the above-mentioned geometric frameworks. ## Acknowledgments The author R. Azuaje wishes to thank CONAHCYT (Mexico) for the financial support through a postdoctoral fellowship in the program Estancias Posdoctorales por Mexico 2022. In addition, R. Azuaje thanks Dr. Alessandro Bravetti for his helpful comments on some aspects of contact geometry.
2309.07145
ETP: Learning Transferable ECG Representations via ECG-Text Pre-training
In the domain of cardiovascular healthcare, the Electrocardiogram (ECG) serves as a critical, non-invasive diagnostic tool. Although recent strides in self-supervised learning (SSL) have been promising for ECG representation learning, these techniques often require annotated samples and struggle with classes not present in the fine-tuning stages. To address these limitations, we introduce ECG-Text Pre-training (ETP), an innovative framework designed to learn cross-modal representations that link ECG signals with textual reports. For the first time, this framework leverages the zero-shot classification task in the ECG domain. ETP employs an ECG encoder along with a pre-trained language model to align ECG signals with their corresponding textual reports. The proposed framework excels in both linear evaluation and zero-shot classification tasks, as demonstrated on the PTB-XL and CPSC2018 datasets, showcasing its ability for robust and generalizable cross-modal ECG feature learning.
Che Liu, Zhongwei Wan, Sibo Cheng, Mi Zhang, Rossella Arcucci
2023-09-06T19:19:26Z
http://arxiv.org/abs/2309.07145v1
# ETP: Learning Transferable ECG Representations via ECG-Text Pre-Training ###### Abstract In the domain of cardiovascular healthcare, the Electrocardiogram (ECG) serves as a critical, non-invasive diagnostic tool. Although recent strides in self-supervised learning (SSL) have been promising for ECG representation learning, these techniques often require annotated samples and struggle with classes not present in the fine-tuning stages. To address these limitations, we introduce ECG-Text Pre-training (ETP), an innovative framework designed to learn cross-modal representations that link ECG signals with textual reports. For the first time, this framework leverages the zero-shot classification task in the ECG domain. ETP employs an ECG encoder along with a pre-trained language model to align ECG signals with their corresponding textual reports. The proposed framework excels in both linear evaluation and zero-shot classification tasks, as demonstrated on the PTB-XL and CPSC2018 datasets, showcasing its ability for robust and generalizable cross-modal ECG feature learning. Che Liu\({}^{1,\star}\), Zhongwei Wan\({}^{2,\star}\), Sibo Cheng\({}^{1}\), Mi Zhang\({}^{2}\), Rossella Arcucci\({}^{1}\)\({}^{1}\) Imperial College London, UK \({}^{2}\) The Ohio State University, USA {che.liu21, sibo.cheng, r.arcucci}@imperial.ac.uk, {wan.512, mizhang.1}@osu.edu Electrocardiogram, ECG-Text Pre-training, Self-supervised Learning ## 1 Introduction The Electrocardiogram (ECG) is a crucial clinical diagnostic tool for various cardiac conditions. While deep learning has shown promise in ECG classification, its effectiveness often depends on the availability of high-quality labels and expert review, making the process labor-intensive and costly. In the quest to circumvent the pitfalls of extensive annotation, self-supervised learning (SSL) has emerged as a promising avenue, excelling particularly with datasets harboring limited annotations [1, 2, 3].
SSL paves the way for harnessing ECG representations beneficial for a spectrum of downstream tasks like abnormality detection and arrhythmia classification [4, 5]. However, a significant bottleneck remains: extant ECG SSL strategies [6, 7, 8, 9] still lean heavily on substantial annotated data for fine-tuning on downstream applications as shown in Fig 2b. Such a dependency becomes particularly limiting for rare cardiac conditions, steering research attention toward zero-shot classification. This paradigm aims to negate the need for annotated samples of unseen categories by leveraging cross-modal representations of ECG signals and disease-related textual prompts, using the ECG-text similarity to determine the predicted disease without requiring annotated data in downstream tasks, as depicted in Fig 2a. The path to zero-shot learning for ECG isn't devoid of obstacles. Primarily, there exists a semantic disjunction between the continuous numerical nature of ECG and the discrete clinical terminologies in textual reports [10, 11, 12]. Further complications arise from domain adaptation issues and scalability concerns, with zero-shot models often requiring considerable computational resources [13]. While recent studies, such as those by [14] and [15], have made headway in ECG zero-shot classification, they remain tethered to supervised learning during pre-training, demanding extensive annotated ECG data. Witnessing the potential of vision-language pre-training in broader contexts, as evidenced by works like CLIP [16], we introduce **ECG-Text Pre-training (ETP)**. This innovative approach seeks to leverage 12-lead ECGs and their corresponding textual reports within a cross-modal learning paradigm. ETP features a language model paired with an ECG encoder to yield text and ECG embeddings. Leveraging a priori clinical knowledge, the text is channeled through a sizeable frozen language model, with a 1D CNN serving as the ECG encoder's backbone.
Both components possess linear projection heads, ensuring the harmonization of text and ECG dimensions. Following this, the concordance between ECG and text embeddings becomes the focal point to minimize the contrastive learning loss and yield classification probabilities for diverse ECG categories. The key contributions from our research are outlined as follows: * We are the pioneers in delving into and unveiling the potential of ECG-Text Pre-training (ETP) specifically for ECG signals. * Our approach not only achieves state-of-the-art (SOTA) results in the fine-tuning phase but also becomes the first to demonstrate the viability of zero-shot tasks. Furthermore, compared to uni-modal SSL, our method exhibits enhanced robustness and transferability. * We have established a comprehensive benchmark for ETP, focusing on the confluence of ECG-Text pre-training and ECG signals. ## 2 Methodology ### ECG-Text Pre-training Incorporating both ECG signals and paired textual descriptions, we employ the following modifications based on the CLIP framework: Given the CLIP framework as a reference [17], we integrate a contrastive learning objective aiming to predict the associated pair \((e_{ecg,i},t_{ecg,i})\) among the \(N\times N\) probable ECG-text combinations, while strategically positioning the remaining \(N^{2}-N\) negative combinations at a distance. In this context, two distinct encoders for ECG signals and text, denoted as \(\mathcal{F}_{ecg}\) and \(\mathcal{F}_{text}\) respectively, transform \(\mathbf{e}_{ecg,i}\) and \(\mathbf{t}_{ecg,i}\) into a latent embedding space, represented as \([\hat{\mathbf{e}}]_{i=1}^{N}\). Subsequently, two separate non-linear projectors for ECG signals and text, denoted as \(\mathcal{P}_{ecg}\) and \(\mathcal{P}_{text}\) respectively, convert \(\mathbf{e}_{ecg,i}\) and \(\mathbf{t}_{ecg,i}\) into a consistent dimension, termed \(d\).
This process can be represented as: \[\hat{\mathbf{e}}_{ecg,i}=\mathcal{P}_{ecg}(\mathcal{F}_{ecg}(\mathbf{e}_{ecg,i})), \tag{1}\] \[\hat{\mathbf{t}}_{ecg,i}=\mathcal{P}_{text}(\mathcal{F}_{text}(\mathbf{t}_{ecg,i})), \tag{2}\] with both \(\hat{\mathbf{e}}_{ecg,i}\) and \(\hat{\mathbf{t}}_{ecg,i}\) belonging to the set \(\mathbb{R}^{d}\). From the training set, we extract ECG feature vectors denoted by \([\hat{\mathbf{e}}_{ecg,i}]_{i=1}^{N}\) and text feature vectors represented by \([\hat{\mathbf{t}}_{ecg,i}]_{i=1}^{N}\). Following this, we calculate the cosine similarities as \(r_{i,i}^{e2t}=\hat{\mathbf{e}}_{ecg,i}^{\top}\hat{\mathbf{t}}_{ecg,i}\) and \(r_{i,i}^{t2e}=\hat{\mathbf{t}}_{ecg,i}^{\top}\hat{\mathbf{e}}_{ecg,i}\), which represent the ECG-to-text and text-to-ECG similarities respectively. The loss function, \(\mathcal{L}_{\text{total}}\), is then expressed as: \[\mathcal{L}_{i}^{e2t}=-\log\frac{\exp(r_{i,i}^{e2t}/\sigma_{1})}{\sum_{j=1}^{K}\exp(r_{i,j}^{e2t}/\sigma_{1})}, \tag{3}\] \[\mathcal{L}_{i}^{t2e}=-\log\frac{\exp(r_{i,i}^{t2e}/\sigma_{1})}{\sum_{j=1}^{K}\exp(r_{i,j}^{t2e}/\sigma_{1})} \tag{4}\] \[\mathcal{L}_{\text{total}}=\frac{1}{2K}\sum_{i=1}^{K}\left(\mathcal{L}_{i}^{e2t}+\mathcal{L}_{i}^{t2e}\right), \tag{5}\] Here, \(\mathcal{L}_{i}^{e2t}\) and \(\mathcal{L}_{i}^{t2e}\) are the ECG-to-text and text-to-ECG cross-modal contrastive losses respectively. \(\sigma_{1}\) denotes the temperature hyper-parameter, which in our research was fixed at 0.07. Meanwhile, \(K\) symbolizes the batch size per step, with \(K\leq N\). Through the total loss, \(\mathcal{L}_{\text{total}}\), our model gets trained to maximize mutual information between the aligned ECG-text pairs that encompass cross-referential attributes in a batch.
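As a concrete illustration of Eqs. (3)-(5), the symmetric objective can be sketched in a few lines of NumPy; the encoders and projectors are abstracted away as precomputed embedding matrices, and all function and variable names here are illustrative rather than taken from the ETP implementation:

```python
import numpy as np

def etp_contrastive_loss(ecg_emb, txt_emb, sigma=0.07):
    """Symmetric ECG-text contrastive loss over a batch of K aligned pairs.

    ecg_emb, txt_emb: (K, d) arrays of projected embeddings; row i of each
    is a matched ECG-text pair. Rows are L2-normalized so the dot product
    gives the cosine similarity r_{i,j}.
    """
    ecg = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = ecg @ txt.T / sigma           # r^{e2t}_{i,j} / sigma, shape (K, K)
    K = logits.shape[0]

    def cross_entropy(l):
        # -log softmax evaluated at the diagonal (matched-pair) entries
        l = l - l.max(axis=1, keepdims=True)           # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(K), np.arange(K)]

    # e2t uses rows of `logits`; t2e uses rows of its transpose
    return (cross_entropy(logits) + cross_entropy(logits.T)).sum() / (2 * K)
```

In practice the same objective is computed with learned encoders on GPU; this NumPy version is only meant to make the indexing in Eqs. (3)-(5) explicit.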
### Self-supervised Contrastive Learning Conventional contrastive-based SSL methods [18, 19, 20, 7, 6] rely on strong augmentations, such as random segmentation and inversion, applied to the original ECG signals to generate two distinct views of the input data. This creates augmented views that serve as positive pairs \([(e_{ecg,i},e_{ecg,i}^{\prime})]_{i=1}^{N}\), with the other ECG signals in the mini-batch being considered as negative examples. The pipeline is depicted in Fig 1b. This data augmentation approach aligns with the strategy outlined in [7]. Next, we derive the representations of these augmented views, represented as \([\hat{\mathbf{e}}^{\prime}]_{i=1}^{N}\), using the ECG projector \(\mathcal{P}_{ecg}\) and ECG encoder \(\mathcal{F}_{ecg}\). This is analogous to obtaining the representations \([\hat{\mathbf{e}}]_{i=1}^{N}\). Consequently, our ECG invariant learning goal is defined as: \[\mathcal{L}_{SSL}=-\frac{1}{K}\sum_{i=1}^{K}\log\frac{\exp(r_{i,i}^{e2e^{\prime}}/\sigma_{2})}{\sum_{j=1}^{K}\exp(r_{i,j}^{e2e^{\prime}}/\sigma_{2})} \tag{6}\] \[\hat{\mathbf{e}}_{ecg,i}=\mathcal{F}_{ecg}(e_{ecg,i}),\hat{\mathbf{e}}_{ecg,i}^{\prime}=\mathcal{F}_{ecg}(e_{ecg,i}^{\prime}) \tag{7}\] \[r_{i,i}^{e2e^{\prime}}=\hat{\mathbf{e}}_{ecg,i}^{\top}\hat{\mathbf{e}}_{ecg,i}^{\prime} \tag{8}\] In Eq 6, the temperature hyper-parameter \(\sigma_{2}\) retains its value of 0.07 when considering the overall loss objective \(\mathcal{L}_{SSL}\). ## 3 Experiments and Analysis ### Datasets **PTB-XL** The ECG dataset under examination is substantial, encompassing 21,837 ECG signals that were accumulated from 18,885 patients during the period of October 1989 to June 1996. The collected data consists of 12-lead ECGs, each sampled at a rate of 500 Hz with a duration of 10 seconds, where each ECG signal is paired with its corresponding ECG report. The reports are generated by a standard protocol and only describe the ECG without a final diagnosis.
The original ECG reports were written in 70.89% German, 27.9% English, and 1.21% Swedish, and were converted into structured SCP-ECG statements. For downstream tasks, we follow the official split from [6] to build the train/val/test split only with samples of a single category. Furthermore, each record in the downstream task setting is classified under one of five primary diagnostic categories: Normal (NORM), Myocardial Infarction (MI), ST/T Change (STTC), Conduction Disturbance (CD), and Hypertrophy (HYP). **CPSC2018** This dataset, which is publicly accessible, comprises 6,877 standard 12-lead ECG records, each sampled at a rate of 500 Hz, and the duration of these records ranges from 6 to 60 seconds. The dataset is annotated with nine distinct labels, which include Atrial fibrillation (AF), First-degree atrioventricular block (I-AVB), Left bundle branch block (LBBB), Right bundle branch block (RBBB), Premature atrial contraction (PAC), Premature ventricular contraction (PVC), ST-segment depression (STD), ST-segment elevation (STE), and normal (Normal). For both datasets, we adhere to the official split as outlined in [6] and only select samples that belong to a single category. ### Implementation The ECG encoder we utilize is ResNet18-1D. This is adapted from its two-dimensional version, ResNet18-2D [21], by transitioning to 1D convolutional layers. For text encoding, we employ BioClinicalBERT [22], pre-trained on clinical notes and bio-clinical articles. Our model integrates two linear projection heads: one for the ECG encoder and another for the text encoder. Both are characterized by an output dimension of 512 and utilize a temperature parameter \(\tau\) initialized to 0.07. The ECG encoder's optimization is handled using the Adam optimizer, set with a learning rate and weight decay of \(2\times 10^{-3}\) and \(1\times 10^{-5}\). During pre-training, we operate over 50 epochs with a batch size of 128, while all subsequent downstream tasks are processed with a batch size of 32.
All experimental procedures are executed using PyTorch 2.0 on a single NVIDIA A100-40GB GPU. ### Results on Linear Evaluation In the task of linear evaluation, we rigorously test the quality and robustness of the ECG representations generated by our ETP framework. To do this, we keep the pre-trained ECG encoder fixed and only update a linear classifier that is initialized randomly. This evaluation methodology is applied to two large-scale public ECG datasets with disease-level annotation, PTB-XL and CPSC2018, using Area Under the Curve (AUC) score and F1-score as the primary metrics for performance assessment. As clearly indicated in Table 1, the ETP framework sets new performance standards, outclassing all existing baseline methods. Specifically, it achieves an AUC of 83.5 and an F1-score of 61.3 on the PTB-XL dataset. Similarly, on the CPSC2018 dataset, ETP registers an AUC of 86.1 and an F1-score of 63.4. These compelling results not only validate the effectiveness of ETP but also firmly establish it as the leading methodology for learning ECG representations. ### Results on Zero-shot Classification To delve deeper into the capabilities of the cross-modal representations learned by the proposed ETP framework, we conducted zero-shot classification tasks on both the PTB-XL and CPSC2018 datasets. The results are presented in Tables 2 and 3. Our zero-shot classification pipeline is inspired by the CLIP framework [17]. We employ the phrase 'this ECG indicates disease name' as a positive prompt and calculate the cosine similarity between the ECG and prompt embeddings. The prompt with the highest similarity is selected as the predicted category.

Figure 1: Comparison between ETP and SSL.

Figure 2: The pipeline of zero-shot classification and fine-tune classification.

**PTB-XL** As shown in Table 2, the ETP pre-trained model consistently surpasses models with random initialization across various metrics, including AUC, ACC, and F1-score.
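The prompt-matching step described above can be sketched as follows; `encode_text` stands in for the frozen language model plus its projection head, and the helper name and class list are hypothetical, not part of the ETP codebase:

```python
import numpy as np

def zero_shot_predict(ecg_emb, class_names, encode_text):
    """Assign each ECG to the class whose prompt embedding is most similar.

    ecg_emb: (K, d) projected ECG embeddings; encode_text: any callable
    mapping a list of strings to a (C, d) array of text embeddings.
    """
    prompts = [f"this ECG indicates {name}" for name in class_names]
    txt = encode_text(prompts)                       # (C, d)
    ecg = ecg_emb / np.linalg.norm(ecg_emb, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    sims = ecg @ txt.T                               # cosine similarities (K, C)
    return sims.argmax(axis=1)                       # index of best-matching prompt
```

Accuracy then follows by comparing the predicted indices against the ground-truth labels; no annotated ECG is needed to form the prompts themselves.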
For example, in the 'NORM' category, ETP achieves an AUC of 71.8, compared to 52.7 for random initialization. It also attains an ACC of 87.4 in the 'CD' category, as opposed to 10.6 with random initialization. However, it's important to note that the AUC score for ETP is lower in the 'MI' category, indicating potential areas for improvement in specific disease classifications. Overall, ETP demonstrates significant enhancements, with average scores of 54.6 for AUC, 60.8 for ACC, and 33.1 for F1-score. **CPSC2018** Table 3 shows similar trends. The ETP pre-trained model consistently outperforms models initialized randomly across various metrics, such as AUC, ACC, and F1-score. Specifically, in categories like 'LBBB,' ETP achieves an AUC of 81.3, compared to 33.3 from a randomly initialized model. Additionally, ETP attains an ACC of 72.3 in the 'PAC' category, as opposed to 9.6 from a randomly initialized model. The average scores across all categories further underscore ETP's superiority, with an average AUC of 57.1, ACC of 60.9, and F1-score of 27.1. These substantial improvements across all disease categories highlight the effectiveness of the cross-modal representation learned by ETP. The results affirm the efficacy of ETP in learning robust cross-modal representations for ECG and paired reports. While ETP shows promising results in most categories, there are specific areas, such as the 'MI' category in the PTB-XL dataset, where further refinement could be beneficial. Overall, the ETP framework demonstrates a compelling advantage over random initialization in zero-shot classification tasks, thereby validating its potential for practical applications in cardiovascular healthcare. ## 4 Conclusion In this work, we propose ETP, a novel framework for learning cross-modal representations from unannotated ECGs and their associated reports. We also build the first comprehensive benchmark for linear evaluation and zero-shot classification with ECG cross-modal learning and SSL.
ETP surpasses all SSL methods on the linear evaluation task and endows the ECG community with zero-shot capability via the proposed framework, evaluated on two large-scale public datasets, PTB-XL and CPSC2018. Overall, this work establishes the first comprehensive benchmark for ECG zero-shot classification and cross-modal learning, demonstrating the capability and potential of jointly learning ECG signals and paired reports. \begin{table} \begin{tabular}{|c|c c|c c|} \hline \hline & \multicolumn{2}{c|}{PTB-XL} & \multicolumn{2}{c|}{CPSC2018} \\ \hline Method & AUC & F1 & AUC & F1 \\ \hline Random init & 71.5 & 52.3 & 72.1 & 59.9 \\ \hline CPC [23] & 70.3 & 54.2 & 74.6 & 53.6 \\ \hline SimCLR [18] & 67.5 & 55.5 & 73.2 & 56.8 \\ \hline BYOL [19] & 76.1 & 56.8 & 77.4 & 61.3 \\ \hline SimSiam [20] & 71.4 & 56.8 & 75.5 & 62.0 \\ \hline TS-TCC [7] & 81.8 & 56.4 & 83.5 & 62.2 \\ \hline CLOCS [24] & 81.7 & 55.8 & 82.0 & 61.3 \\ \hline ASTCL [6] & 82.0 & 57.4 & 84.2 & 62.8 \\ \hline \hline **ETP** & **83.5** & **61.3** & **86.1** & **63.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Linear evaluation results on PTB-XL and CPSC2018. Best results are in bold.
\begin{table} \begin{tabular}{|c|c|c c c|} \hline \hline & & \multicolumn{2}{c|}{CPSC2018} \\ \hline Category & Method & AUC & ACC & F1 \\ \hline \multirow{2}{*}{Normal} & Random init & 51.6 & 23.1 & 21.9 \\ & **ETP** & **55.1** & **60.6** & **26.8** \\ \hline \multirow{2}{*}{AF} & Random init & 49.9 & 15.9 & 27.4 \\ & **ETP** & **50.9** & **52.7** & **32.5** \\ \hline \multirow{2}{*}{I-AVB} & Random init & 46.3 & 47.4 & 22.7 \\ & **ETP** & **50.8** & **51.8** & **23.4** \\ \hline \multirow{2}{*}{LBBB} & Random init & 33.3 & 4.9 & 6.0 \\ & **ETP** & **81.3** & **94.0** & **35.1** \\ \hline \multirow{2}{*}{RBBB} & Random init & 45.3 & 24.3 & 39.3 \\ & **ETP** & **55.3** & **24.3** & **39.3** \\ \hline \multirow{2}{*}{PAC} & Random init & 39.9 & 9.6 & 16.4 \\ & **ETP** & **46.3** & **72.3** & **18.8** \\ \hline \multirow{2}{*}{PVC} & Random init & 43.8 & 10.8 & 19.4 \\ & **ETP** & **65.9** & **78.3** & **27.0** \\ \hline \multirow{2}{*}{STD} & Random init & 39.5 & **35.1** & 22.7 \\ & **ETP** & **47.0** & 19.6 & **24.3** \\ \hline \multirow{2}{*}{STE} & Random init & 45.0 & 27.0 & 8.1 \\ & **ETP** & **61.2** & **95.0** & **16.2** \\ \hline \multirow{2}{*}{Average} & Random init & 43.8 & 24.2 & 18.2 \\ & **ETP** & **57.1** & **60.9** & **27.1** \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot classification results on CPSC2018. Best results are in bold.
\begin{table} \begin{tabular}{|c|c c|c c|} \hline \hline & & \multicolumn{2}{c|}{PTB-XL} \\ \hline Category & Method & AUC & ACC & F1 \\ \hline \multirow{2}{*}{NORM} & Random init & 52.7 & 58.8 & 72.8 \\ & **ETP** & **71.8** & **56.8** & **73.4** \\ \hline \multirow{2}{*}{MI} & Random init & **57.6** & **54.4** & **28.7** \\ & **ETP** & 46.4 & 15.5 & 26.6 \\ \hline \multirow{2}{*}{STTC} & Random init & 55.3 & 43.7 & 26.4 \\ & **ETP** & **56.3** & **57.8** & **24.8** \\ \hline \multirow{2}{*}{CD} & Random init & 35.5 & 10.6 & 19.3 \\ & **ETP** & **52.6** & **87.4** & **28.1** \\ \hline \multirow{2}{*}{HYP} & Random init & 25.4 & 3.5 & 6.3 \\ & **ETP** & **45.8** & **86.4** & **12.2** \\ \hline \multirow{2}{*}{Average} & Random init & 45.3 & 34.2 & 30.7 \\ & **ETP** & **54.6** & **60.8** & **33.1** \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot classification results on PTB-XL. Best results are in bold.
2301.00008
Effects of Data Geometry in Early Deep Learning
Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure. This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piece-wise linear activation splits the data manifold into regions where the neural network behaves as a linear function. We derive bounds on the density of boundary of linear regions and the distance to these boundaries on the data manifold. This leads to insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results using a toy supervised learning problem. Our experiments demonstrate that number of linear regions varies across manifolds and the results hold with changing neural network architectures. We further demonstrate how the complexity of linear regions is different on the low dimensional manifold of images as compared to the Euclidean space, using the MetFaces dataset.
Saket Tiwari, George Konidaris
2022-12-29T17:32:05Z
http://arxiv.org/abs/2301.00008v1
# Effects of Data Geometry in Early Deep Learning ###### Abstract Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure. This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piece-wise linear activation splits the data manifold into _regions_ where the neural network behaves as a linear function. We derive bounds on the density of boundary of linear regions and the distance to these boundaries on the data manifold. This leads to insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results using a toy supervised learning problem. Our experiments demonstrate that number of linear regions varies across manifolds and the results hold with changing neural network architectures. We further demonstrate how the complexity of linear regions is different on the low dimensional manifold of images as compared to the Euclidean space, using the MetFaces dataset. ## 1 Introduction The capacity of Deep Neural Networks (DNNs) to approximate arbitrary functions given sufficient training data in the supervised learning setting is well known (Cybenko, 1989; Hornik et al., 1989; Anthony and Bartlett, 1999). Several different theoretical approaches have emerged that study the effectiveness and pitfalls of deep learning. 
These studies vary in their treatment of neural networks and the aspects they study range from convergence (Allen-Zhu et al., 2019; Goodfellow and Vinyals, 2015), generalization (Kawaguchi et al., 2017; Zhang et al., 2017; Jacot et al., 2018; Sagun et al., 2018), function complexity (Montufar et al., 2014; Mhaskar and Poggio, 2016), adversarial attacks (Szegedy et al., 2014; Goodfellow et al., 2015) to representation capacity (Arpit et al., 2017). Some recent theories have also been shown to closely match empirical observations (Poole et al., 2016; Hanin and Rolnick, 2019; Kunin et al., 2020). One approach to studying DNNs is to examine how the underlying structure, or geometry, of the data interacts with learning dynamics. The manifold hypothesis states that high-dimensional real world data typically lies on a low dimensional manifold (Tenenbaum, 1997; Carlsson et al., 2007; Fefferman et al., 2013). Empirical studies have shown that DNNs are highly effective in deciphering this underlying structure by learning intermediate latent representations (Poole et al., 2016). The ability of DNNs to "flatten" complex data manifolds, using composition of seemingly simple piece-wise linear functions, appears to be unique (Brahma et al., 2016; Hauser and Ray, 2017). DNNs with piece-wise linear activations, such as ReLU (Nair and Hinton, 2010), divide the input space into linear regions, wherein the DNN behaves as a linear function (Montufar et al., 2014). The density of these linear regions serves as a proxy for the DNN's ability to interpolate a complex data landscape and has been the subject of detailed studies (Montufar et al., 2014; Telgarsky, 2015; Serra et al., 2018; Raghu et al., 2017). The work by Hanin and Rolnick (2019a) on this topic stands out because they derive bounds on the average number of linear regions and verify the tightness of these bounds empirically for deep ReLU networks, instead of larger bounds that rarely materialize. 
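Counting linear regions, as done by Hanin and Rolnick (2019a), can be made concrete: a ReLU network computes a single affine map wherever the on/off pattern of its units is fixed, so the number of distinct activation patterns encountered along a path through input space lower-bounds the number of linear regions the path crosses. A minimal self-contained sketch (network sizes, seed, and sampling density are illustrative choices, not those of the paper):

```python
import numpy as np

def activation_pattern(x, weights, biases):
    """Binary on/off pattern of every ReLU unit at input x."""
    pattern = []
    h = x
    for W, b in zip(weights, biases):
        pre = W @ h + b
        pattern.append(tuple(pre > 0))
        h = np.maximum(pre, 0.0)          # ReLU
    return tuple(pattern)

def count_regions_on_segment(weights, biases, x0, x1, samples=2000):
    """Distinct linear regions met along the segment x0 -> x1.

    Dense sampling gives a lower bound on the true count: the pattern is
    constant inside a region and changes when a boundary is crossed.
    """
    ts = np.linspace(0.0, 1.0, samples)
    patterns = {activation_pattern((1 - t) * x0 + t * x1, weights, biases)
                for t in ts}
    return len(patterns)

rng = np.random.default_rng(0)
sizes = [2, 16, 16]                        # input dim 2, two hidden layers
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) / np.sqrt(sizes[i])
           for i in range(len(sizes) - 1)]
biases = [rng.normal(size=s) * 0.1 for s in sizes[1:]]
n = count_regions_on_segment(weights, biases,
                             np.array([-2.0, 0.0]), np.array([2.0, 0.0]))
```

On such a one-dimensional slice the count depends on the initialization, which is exactly the kind of average-case quantity the bounds discussed above address.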
Hanin and Rolnick (2019a) conjecture that the number of linear regions correlates to the expressive power of randomly initialized DNNs with piece-wise linear activations. However, they assume that the data is uniformly sampled from the Euclidean space \(\mathbb{R}^{d}\), for some \(d\). By combining the manifold hypothesis with insights from Hanin and Rolnick (2019a), we are able to go further in estimating the number of linear regions and the average distance from _linear boundaries_. We derive bounds on how the geometry of the data manifold affects the aforementioned quantities.

To corroborate our theoretical bounds with empirical results, we design a toy problem where the input data is sampled from two distinct manifolds that can be represented in a closed form. We count the exact number of linear regions into which a neural network divides these two manifolds, along with the average distance to the boundaries of those linear regions. We demonstrate how the number of linear regions and the average distance vary across these two distinct manifolds. These results show that the number of linear regions on the manifold does not grow exponentially with the dimension of the input data. Our experiments do not provide estimates for theoretical constants, as in most deep learning theory, but demonstrate that the number of linear regions changes as a consequence of these constants. We also study linear regions of deep ReLU networks for high dimensional data that lies on a low dimensional manifold with unknown structure, and how the number of linear regions varies on and off this manifold, which is a more realistic setting. To achieve this we present experiments performed on the manifold of natural face images. We sample data from the image manifold using a generative adversarial network (GAN) (Goodfellow et al., 2014) trained on curated images of paintings.
Specifically, we generate images using the pre-trained StyleGAN (Karras et al., 2019, 2020b) trained on the curated MetFaces dataset (Karras et al., 2020a). We generate _curves_ on the image manifold of faces, using StyleGAN, and report how the density of linear regions varies on and off the manifold. These results shed new light on the geometry of deep learning over structured data sets by taking a data intrinsic approach to understanding the expressive power of DNNs.

## 2 Preliminaries and Background

Our goal is to understand how the underlying structure of real world data matters for deep learning. We first provide the mathematical background required to model this underlying structure as the geometry of data. We then provide a summary of previous work on understanding the approximation capacity of deep ReLU networks via the complexity of linear regions. For the details on how our work fits into one of the two main approaches within the theory of DNNs, from the expressive power perspective or from the learning dynamics perspective, we refer the reader to Appendix C.

### 2.1 Data Manifold and Definitions

We use the example of the MetFaces dataset (Karras et al., 2020a) to illustrate how data lies on a low dimensional manifold. The images in the dataset are \(1028\times 1028\times 3\) dimensional. By contrast, the number of _realistic_ dimensions along which they vary is limited, e.g. painting style, artist, size and shape of the nose, jaw and eyes, background, clothing style; in fact, very few such high dimensional images correspond to realistic faces. We illustrate how this affects the possible variations in the data in Figure 1.

Figure 1: A 2D surface, here represented by a 2-torus, is embedded in a larger input space, \(\mathbb{R}^{3}\). Suppose each point corresponds to an image of a face on this 2-torus. We can chart two curves: one straight line cutting across the 3D space and another curve that stays on the torus. Images corresponding to the points on the torus will have a smoother variation in style and shape, whereas there will be images corresponding to points on the straight line that are not faces.

A manifold formalises the notion of limited variations in high dimensional data. One can imagine that there exists an unknown function \(f:X\to Y\) from a low dimensional space of variations to a high dimensional space of the actual data points. Such a function \(f:X\to Y\), from one open subset \(X\subset\mathbb{R}^{m}\) to another open subset \(Y\subset\mathbb{R}^{k}\), is a _diffeomorphism_ if \(f\) is bijective, and both \(f\) and \(f^{-1}\) are differentiable (or smooth). Therefore, a manifold is defined as follows.

**Definition 2.1**.: _Let \(k,m\in\mathbb{N}_{0}\). A subset \(M\subset\mathbb{R}^{k}\) is called a smooth \(m\)-dimensional submanifold of \(\mathbb{R}^{k}\) (or \(m\)-manifold in \(\mathbb{R}^{k}\)) iff every point \(x\in M\) has an open neighborhood \(U\subset\mathbb{R}^{k}\) such that \(U\cap M\) is diffeomorphic to an open subset \(\Omega\subset\mathbb{R}^{m}\). A diffeomorphism (i.e. differentiable mapping),_

\[f:U\cap M\to\Omega\]

_is called a coordinate chart of \(M\) and the inverse,_

\[h:=f^{-1}:\Omega\to U\cap M\]

_is called a smooth parametrization of \(U\cap M\)._

For the MetFaces dataset example, suppose there are 10 dimensions along which the images vary. Further assume that each variation can take a value continuously in some interval of \(\mathbb{R}\). Then the smooth parametrization would map an open subset \(\Omega\subset\mathbb{R}^{10}\) to \(U\cap M\) with \(M\subset\mathbb{R}^{1028\times 1028\times 3}\). This parametrization and its inverse are unknown in general and computationally very difficult to estimate in practice. There are similarities in how geometric elements are defined for manifolds and Euclidean spaces.
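Definition 2.1 can be made concrete with a small numerical sketch (ours, not the paper's; the function names are our own). It parametrizes the upper hemisphere of \(S^{2}\subset\mathbb{R}^{3}\) by the open unit disk, checks the chart/parametrization round trip, and differentiates a curve on the sphere:

```python
import numpy as np

# Smooth parametrization h: Omega -> U ∩ S^2 of the upper hemisphere,
# with Omega the open unit disk in R^2 (Definition 2.1); f = h^{-1} is the chart.
def h(u, v):
    return np.array([u, v, np.sqrt(1.0 - u**2 - v**2)])

def f(p):
    return p[:2]

u, v = 0.3, -0.4
p = h(u, v)
assert np.isclose(np.linalg.norm(p), 1.0)   # h(u, v) really lands on the sphere
assert np.allclose(f(p), [u, v])            # the chart inverts the parametrization

# A smooth curve gamma: R -> S^2 (a great circle) and its derivative at t0,
# estimated by central finite differences.
gamma = lambda t: np.array([np.sin(t), 0.0, np.cos(t)])
t0, eps = 0.5, 1e-6
velocity = (gamma(t0 + eps) - gamma(t0 - eps)) / (2 * eps)
# On the sphere, the velocity of any curve is perpendicular to the position vector.
print(np.dot(velocity, gamma(t0)))
```

The final check anticipates the discussion that follows: velocities of curves through a point \(x\) on the sphere all lie in the plane perpendicular to \(x\).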
A smooth curve on a manifold \(M\), \(\gamma:I\to M\), is defined from an interval \(I\) to the manifold \(M\) as a function that is differentiable for all \(t\in I\), just as for Euclidean spaces. The shortest such curve between two points on a manifold is no longer a straight line, but is instead a _geodesic_. One recurring geometric element, which is unique to manifolds and stems from the definition of smooth curves, is that of a _tangent space_, defined as follows.

**Definition 2.2**.: _Let \(M\) be an \(m\)-manifold in \(\mathbb{R}^{k}\) and \(x\in M\) be a fixed point. A vector \(v\in\mathbb{R}^{k}\) is called a tangent vector of \(M\) at \(x\) if there exists a smooth curve \(\gamma:I\to M\) such that \(\gamma(0)=x\) and \(\dot{\gamma}(0)=v\), where \(\dot{\gamma}(t)\) is the derivative of \(\gamma\) at \(t\). The set_

\[T_{x}M:=\{\dot{\gamma}(0)\,|\,\gamma:\mathbb{R}\to M\text{ is smooth},\ \gamma(0)=x\}\]

_of tangent vectors of \(M\) at \(x\) is called the tangent space of \(M\) at \(x\)._

In simpler terms, the plane tangent to the manifold \(M\) at point \(x\) is called the tangent space and denoted by \(T_{x}M\). Consider the upper half of a 2-sphere, \(S^{2}\subset\mathbb{R}^{3}\), which is a 2-manifold in \(\mathbb{R}^{3}\). The tangent space at a fixed point \(x\in S^{2}\) is the 2D plane perpendicular to the vector \(x\) and tangential to the surface of the sphere that contains the point \(x\). For additional background on manifolds we refer the reader to Appendix B.

### 2.2 Linear Regions of Deep ReLU Networks

The higher the density of linear regions, the more complex a function a DNN can approximate. For example, a \(\sin\) curve in the range \([0,2\pi]\) is better approximated by 4 piece-wise linear regions as opposed to 2. To clarify this further, with the 4 "optimal" linear regions \([0,\pi/2),[\pi/2,\pi),[\pi,3\pi/2),\) and \([3\pi/2,2\pi]\) a function could approximate the \(\sin\) curve better than with any 2 linear regions.
In other words, higher density of linear regions allows a DNN to approximate the variation in the curve better. We define the notion of boundaries of linear regions in this section and provide an overview of previous results.

We consider a neural network, \(F\), which is a composition of affine maps and activation functions. Inputs at each layer are multiplied by a matrix, referred to as the weight matrix, with an additional bias vector that is added to this product. We limit our study to the ReLU activation function (Nair and Hinton, 2010), which is piece-wise linear and one of the most popular activation functions, applied to various learning tasks on different types of data like text, images, and signals. We further consider DNNs that map inputs, of dimension \(n_{\text{in}}\), to scalar values. Therefore, \(F:\mathbb{R}^{n_{\text{in}}}\to\mathbb{R}\) is defined as,

\[F(x)=W_{L}\sigma(B_{L-1}+W_{L-1}\sigma(...\sigma(B_{1}+W_{1}x))), \tag{1}\]

where \(W_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\) is the weight matrix for the \(l^{\text{th}}\) hidden layer, \(n_{l}\) is the number of neurons in the \(l^{\text{th}}\) hidden layer, \(B_{l}\in\mathbb{R}^{n_{l}}\) is the vector of biases for the \(l^{\text{th}}\) hidden layer, \(n_{0}=n_{\text{in}}\) and \(\sigma:\mathbb{R}\to\mathbb{R}\) is the activation function. For a neuron \(z\) in the \(l^{\text{th}}\) layer we denote the _pre-activation_ of this neuron (its output before the bias and nonlinearity are applied), for a given input \(x\in\mathbb{R}^{n_{\text{in}}}\), as \(z(x)\). For a neuron \(z\) in the layer \(l\) we have

\[z(x)=W_{l,z}\sigma(B_{l-1}+W_{l-1}\sigma(...\sigma(B_{1}+W_{1}x))), \tag{2}\]

for \(l>1\) (for the base case \(l=1\) we have \(z(x)=W_{1,z}x\)), where \(W_{l,z}\) is the row of weights, in the weight matrix of the \(l^{\text{th}}\) layer, \(W_{l}\), corresponding to the neuron \(z\). We use \(W_{z}\) to denote the weight vector for brevity, omitting the layer index \(l\) in the subscript. We also use \(b_{z}\) to denote the bias term for the neuron \(z\).
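Equations (1) and (2) can be sketched in a few lines of NumPy (our illustration; the layer sizes and initialization scale are arbitrary choices, not the paper's). The helper returns \(F(x)\) together with the values \(z(x)\) of every hidden neuron, and, for first-layer neurons (where \(\nabla z(x)\) is just the weight row), the margin of each neuron to its activation boundary:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = lambda t: np.maximum(t, 0.0)  # ReLU

# Random weights/biases for a network with two hidden layers of 8 neurons each.
n_in, widths = 2, [8, 8]
sizes = [n_in] + widths
Ws = [rng.normal(0, 1 / np.sqrt(m), (n, m)) for m, n in zip(sizes, sizes[1:])]
Bs = [rng.normal(0, 1, n) for n in widths]
W_out = rng.normal(0, 1, (1, widths[-1]))  # final linear layer W_L

def forward(x):
    """Return F(x) as in Eq. (1) and the per-layer values z(x) of Eq. (2)."""
    zs, a = [], np.asarray(x, float)
    for W, B in zip(Ws, Bs):
        z = W @ a           # z(x) for every neuron in this layer (bias excluded)
        zs.append(z)
        a = sigma(B + z)    # sigma(B_l + W_l a), as in Eq. (1)
    return float(W_out @ a), zs

x = np.array([0.3, -0.7])
Fx, zs = forward(x)

# For first-layer neurons z(x) = W_{1,z} x, so grad z(x) is the weight row and
# |z(x) + b_z| / ||grad z(x)|| is that neuron's distance to its boundary.
margins = np.abs(zs[0] + Bs[0]) / np.linalg.norm(Ws[0], axis=1)
print(Fx, margins.min())
```

The per-neuron margin computed at the end is the quantity that reappears below when the boundary set \(\mathcal{B}_{F}\) and the distance to it are defined.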
Neural networks with piece-wise linear activations are piece-wise linear on the input space (Montufar et al., 2014). Suppose for some fixed \(y\in\mathbb{R}^{n_{\text{in}}}\), as \(x\to y\), we have \(z(x)\to-b_{z}\); then we observe a discontinuity in the gradient \(\nabla_{x}\sigma(b_{z}+z(x))\) at \(y\). Intuitively, this is because \(x\) is approaching the boundary of the linear region of the function defined by the output of \(z\). Therefore, the boundary of linear regions, for a feed forward neural network \(F\), is defined as:

\[\mathcal{B}_{F}=\{x|\nabla F(x)\text{ is not continuous at }x\}.\]

Hanin and Rolnick (2019a) argue that an important measure of the approximation capacity of a neural network \(F\) is the \((n_{\text{in}}-1)-\)dimensional volume density of linear regions defined as \(\text{vol}_{n_{\text{in}}-1}(\mathcal{B}_{F}\cap K)/\text{vol}_{n_{\text{in}}}(K),\) for a bounded set \(K\subset\mathbb{R}^{n_{\text{in}}}\). This quantity serves as a proxy for the density of linear regions and therefore the expressive capacity of DNNs. Intuitively, a higher density of linear boundaries means a higher capacity of the DNN to approximate complex non-linear functions. The quantity is applied to lower bound the distance between a point \(x\in K\) and the set \(\mathcal{B}_{F}\), which is

\[\text{distance}(x,\mathcal{B}_{F})=\min_{\text{neurons }z}|z(x)+b_{z}|/||\nabla z(x)||,\]

which measures the sensitivity over neurons at a given input. The above quantity measures how "far" the input is from flipping any neuron from inactive to active or vice-versa. Informally, Hanin and Rolnick (2019a) provide two main results for a randomly initialized DNN \(F\) with a reasonable initialization.
Firstly, they show that

\[\mathbb{E}\Big[\frac{\text{vol}_{n_{\text{in}}-1}(\mathcal{B}_{F}\cap K)}{\text{vol}_{n_{\text{in}}}(K)}\Big]\approx\#\{\text{neurons}\},\]

meaning the density of linear regions is bounded above and below by some constant times the number of neurons. Secondly, for \(x\in[0,1]^{n_{\text{in}}}\),

\[\mathbb{E}\Big[\text{distance}(x,\mathcal{B}_{F})\Big]\geq C\#\{\text{neurons}\}^{-1},\]

where \(C>0\) depends on the distribution of biases and weights, in addition to other factors. In other words, the distance to the nearest boundary is bounded above and below by a constant times the inverse of the number of neurons. These results stand in contrast to earlier worst case bounds that are exponential in the number of neurons. Hanin and Rolnick (2019) also verify these results empirically and note that the constants lie in the vicinity of 1 throughout training.

## 3 Linear Regions on the Data Manifold

One important assumption in the results presented by Hanin and Rolnick (2019) is that the input, \(x\), lies in a compact set \(K\subset\mathbb{R}^{n_{\text{in}}}\) and that \(\text{vol}_{n_{\text{in}}}(K)\) is greater than 0. Also, the theorem pertaining to the lower bound on the average distance of \(x\) to linear boundaries assumes that the input is uniformly distributed in \([0,1]^{n_{\text{in}}}\). As noted earlier, high-dimensional real world datasets, like images, lie on low dimensional manifolds, so both these assumptions are false in practice. This motivates us to study the case where the data lies on some \(m-\)dimensional submanifold of \(\mathbb{R}^{n_{\text{in}}}\), i.e. \(M\subset\mathbb{R}^{n_{\text{in}}}\) where \(m\ll n_{\text{in}}\). We illustrate how this constraint affects the study of linear regions in Figure 2. As introduced by Hanin and Rolnick (2019a), we denote the "\((n_{\text{in}}-k)-\)dimensional piece" of \(\mathcal{B}_{F}\) as \(\mathcal{B}_{F,k}\).
More precisely, \(\mathcal{B}_{F,0}=\emptyset\) and \(\mathcal{B}_{F,k}\) is recursively defined to be the set of points \(x\in\mathcal{B}_{F}\setminus\{\mathcal{B}_{F,0}\cup...\cup\mathcal{B}_{F,k-1}\}\) with the added condition that in a neighbourhood of \(x\) the set \(\mathcal{B}_{F,k}\) coincides with a hyperplane of dimension \(n_{\text{in}}-k\). We provide a detailed and formal definition for \(\mathcal{B}_{F,k}\) with intuition in Appendix E. In our setting, where the data lies on a manifold \(M\), we define \(\mathcal{B}^{\prime}_{F,k}:=\mathcal{B}_{F,k}\cap M\), and note that \(\dim(\mathcal{B}^{\prime}_{F,k})=m-k\) (Appendix E Proposition E.4). For example, the _transverse_ intersection (see Definition E.3) of a plane in 3D with the 2D manifold \(S^{2}\) is a 1D curve in \(S^{2}\) and therefore has dimension \(1\). Therefore, \(\mathcal{B}^{\prime}_{F,k}\) is a submanifold of dimension \(m-k=2-1=1\). This imposes the restriction \(k\leq m\) for the intersection \(\mathcal{B}_{F,k}\cap M\) to have a well defined volume.

We first note that the definition of the determinant of the Jacobian, for a collection of neurons \(z_{1},...,z_{k}\), is different in the case when the data lies on a manifold \(M\) as opposed to in a compact set of dimension \(n_{\text{in}}\) in \(\mathbb{R}^{n_{\text{in}}}\). Since the determinant of the Jacobian is the quantity we utilise in our proofs and theorems repeatedly, we will use the term Jacobian to refer to it for succinctness. Intuitively, this follows from the Jacobian of a function being defined differently in the ambient space \(\mathbb{R}^{n_{\text{in}}}\) as opposed to the manifold \(M\). In the case of the former it is the volume of the parallelepiped determined by the vectors corresponding to the directions of steepest ascent along each one of the \(n_{\text{in}}\) axes. In the case of the latter it is more complex and defined below.
Let \(\mathcal{H}^{m}\) be the \(m-\)dimensional Hausdorff measure (we refer the reader to Appendix B for background on the Hausdorff measure). The Jacobian of a function on the manifold \(M\), as defined by Krantz and Parks (2008) (Chapter 5), is as follows.

**Definition 3.1**.: _The (determinant of the) Jacobian of a function \(H:M\to\mathbb{R}^{k}\), where \(k\leq\dim(M)=m\), is defined as_

\[J^{M}_{k,H}(x)=\sup\Big{\{}\frac{\mathcal{H}^{k}(D_{M}H(P))}{\mathcal{H}^{k}(P)}\Big{|}P\text{ is a $k$-dimensional parallelepiped contained in $T_{x}M$}\Big{\}},\]

_where \(D_{M}:T_{x}M\to\mathbb{R}^{k}\) is the differential map (see Appendix B) and we use \(D_{M}H(P)\) to denote the mapping of the set \(P\) in \(T_{x}M\), which is a parallelepiped, to \(\mathbb{R}^{k}\). The supremum is taken over all parallelepipeds \(P\)._

We also say that neurons \(z_{1},...,z_{k}\) are good at \(x\) if, for each \(z_{j}\), there exists a path of neurons from \(z_{j}\) to the output in the computational graph of \(F\) such that each neuron along the path is activated. Our three main results, which hold under the assumptions listed in Appendix A and each of which extends and improves upon the theoretical results by Hanin and Rolnick (2019a), are:

**Theorem 3.2**.: _Given \(F\) a feed-forward ReLU network with input dimension \(n_{\text{in}}\), output dimension \(1\), and random weights and biases. Then for any bounded measurable submanifold \(M\subset\mathbb{R}^{n_{\text{in}}}\) and any \(k=1,...,m\) the average \((m-k)-\)dimensional volume of \(\mathcal{B}_{F,k}\) inside \(M\) is_

\[\mathbb{E}[\text{vol}_{m-k}(\mathcal{B}_{F,k}\cap M)]=\sum_{\text{distinct neurons $z_{1},...,z_{k}$ in $F$}}\int_{M}\mathbb{E}[Y_{z_{1},...,z_{k}}]\text{dvol}_{m}(x), \tag{3}\]

_where \(Y_{z_{1},...,z_{k}}\) is \(J^{M}_{k,H_{k}}(x)\rho_{b_{z_{1}},...,b_{z_{k}}}(z_{1}(x),...,z_{k}(x)),\) times the indicator function of the event that \(z_{j}\) is good at \(x\) for each \(j=1,...,k\), and \(H_{k}\) denotes the map \(x\mapsto(z_{1}(x),...,z_{k}(x))\) from \(M\) to \(\mathbb{R}^{k}\).
Here the function \(\rho_{b_{z_{1}},...,b_{z_{k}}}\) is the density of the joint distribution of the biases \(b_{z_{1}},...,b_{z_{k}}\)._

This change in the formula, from Theorem 3.4 by Hanin and Rolnick (2019a), is a result of the fact that \(z_{j}(x)\) has a different direction of steepest ascent when it is restricted to the data manifold \(M\), for any \(j\). The proof is presented in Appendix E. Formula 3 also makes explicit the fact that the data manifold has dimension \(m\leq n_{\text{in}}\) and therefore the \((m-k)\)-dimensional volume is a more representative measure of the linear boundaries. Equipped with Theorem 3.2, we provide a result for the density of boundary regions on the manifold \(M\).

Figure 2: A circle is an example of a 1D manifold in a 2D Euclidean space. The effective number of linear regions on the manifold, the upper half of the circle, is the number of linear regions on the arc from \(-\pi\) to \(\pi\). In the diagram above, each color in the 2D space corresponds to a linear region. When the upper half of the circle is flattened into a 1D space we obtain a line. Each color on the line corresponds to a linear region of the 2D space.

**Theorem 3.3**.: _For data sampled uniformly from a compact and measurable \(m\) dimensional manifold \(M\) we have the following result for all \(k\leq m\):_

\[\frac{\text{vol}_{m-k}(\mathcal{B}_{F,k}\cap M)}{\text{vol}_{m}(M)}\leq\binom{\text{\# neurons}}{k}\,(2C_{\text{grad}}C_{\text{bias}}C_{M})^{k},\]

_where \(C_{\text{grad}}\) depends on \(||\nabla z(x)||\) and the DNN's architecture, \(C_{M}\) depends on the geometry of \(M\), and \(C_{\text{bias}}\) on the distribution of biases \(\rho_{b}\)._

The constant \(C_{M}\) is the supremum, over points \(x\in M\), of the matrix norm of the projection matrix onto the tangent space \(T_{x}M\).
For the Euclidean space \(C_{M}\) is always equal to 1, and therefore the term does not appear in the work by Hanin and Rolnick (2019), but we cannot say the same for our setting. We refer the reader to Appendix F for the proof, further details, and interpretation. Finally, under the added assumptions that the diameter of the manifold \(M\) is finite and \(M\) has polynomial volume growth, we provide a lower bound on the average distance to the linear boundary for points on the manifold and show how it depends on the geometry and dimensionality of the manifold.

**Theorem 3.4**.: _For any point, \(x\), chosen randomly from \(M\), we have:_

\[\mathbb{E}[\text{distance}_{M}(x,\mathcal{B}_{F}\cap M)]\geq\frac{C_{M,\kappa}}{C_{\text{grad}}C_{\text{bias}}C_{M}\#\text{neurons}},\]

_where \(C_{M,\kappa}\) depends on the scalar curvature, the input dimension and the dimensionality of the manifold \(M\). The function distance\({}_{M}\) is the distance on the manifold \(M\)._

This result gives us intuition on how the density of linear regions around a point depends on the geometry of the manifold. The constant \(C_{M,\kappa}\) captures how volumes are distorted on the manifold \(M\) as compared to the Euclidean space; for the exact definition we refer the reader to the proof in Appendix G. The constant \(C_{M,\kappa}\) is higher for a manifold on which the average volume of a unit ball is larger than in the Euclidean space, and lower when that average volume is smaller. For background on the curvature of manifolds and a proof sketch we refer the reader to Appendices B and D, respectively. Note that the constant \(C_{M}\) is the same as in Theorem 3.3. Another difference to note is that we derive a lower bound on the geodesic distance on the manifold \(M\), and not the Euclidean distance in the ambient space as done by Hanin and Rolnick (2019).
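The distinction between the two distances is easy to see numerically. The following sketch (our illustration, not the paper's code) compares the geodesic distance and the Euclidean chord distance between two points on the unit circle:

```python
import numpy as np

def geodesic_distance_circle(p, q):
    """Arc-length (geodesic) distance between two points on the unit circle."""
    cos_angle = np.clip(np.dot(p, q), -1.0, 1.0)  # clip guards against round-off
    return np.arccos(cos_angle)

theta1, theta2 = 0.0, np.pi / 2
p = np.array([np.cos(theta1), np.sin(theta1)])
q = np.array([np.cos(theta2), np.sin(theta2)])

d_geo = geodesic_distance_circle(p, q)  # walk along the manifold: pi/2
d_euc = np.linalg.norm(p - q)           # cut through the ambient plane: sqrt(2)
print(d_geo, d_euc)
```

The geodesic distance is never smaller than the chord length, which is why a bound on it is the quantity that respects the manifold structure.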
This distance better captures the distance between data points on a manifold while incorporating the underlying structure. In other words, this distance can be understood as how much a data point must change to reach a linear boundary while ensuring that all the individual points on the curve, tracing this change, are "valid" data points.

### 3.1 Intuition For Theoretical Results

One of the key ingredients of the proofs by Hanin and Rolnick (2019) is the _co-area formula_ (Krantz and Parks, 2008). The co-area formula is applied to get a closed form representation of the \(k-\)dimensional volume of the region where any set of \(k\) neurons, \(z_{1},z_{2},...,z_{k}\), is "good", in terms of the expectation over the Jacobian, in the Euclidean space. Instead of the co-area formula we use the _smooth co-area formula_ (Krantz and Parks, 2008) to get a closed form representation of the \((m-k)-\)dimensional volume of the region intersected with the manifold, \(M\), in terms of the Jacobian defined on a manifold (Definition 3.1). The key difference between the two formulas is that in the smooth co-area formula the Jacobian (of a function from the manifold \(M\)) is restricted to the tangent plane. While the determinant of the "vanilla" Jacobian measures the distortion of volume around a point in Euclidean space, the determinant of the Jacobian defined as above (Definition 3.1) instead measures the distortion of volume on the manifold, for the function with the same domain: the function that is 1 if the set of neurons is good and 0 otherwise. The value of the Jacobian as defined in Definition 3.1 equals the volume of the projection of the parallelepiped defined by the gradients \(\nabla z(x)\) onto the tangent space (see Proposition F.1 in the Appendix). This introduces the constant \(C_{M}\), defined above. Essentially, the constant captures how the magnitudes of the gradients, \(\nabla z(x)\), are modified upon being projected to the tangent plane.
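The projection step can be illustrated directly (our sketch; the chosen point and gradient are arbitrary). For \(S^{2}\), the orthogonal projection onto \(T_{x}M\) is \(P=I-xx^{T}\), and projecting an ambient gradient can only shrink its norm; how much it shrinks depends on the angle between the gradient and the tangent plane:

```python
import numpy as np

x = np.array([0.0, 0.0, 1.0])       # a point on S^2 (the north pole), ||x|| = 1
P = np.eye(3) - np.outer(x, x)      # orthogonal projection onto T_x S^2

grad = np.array([0.6, 0.0, 0.8])    # an ambient "gradient" direction in R^3
grad_tangent = P @ grad             # its component in the tangent plane

# The norm drops from 1.0 to 0.6: the part of the gradient normal to the
# manifold is discarded, exactly the effect captured by the constant C_M.
print(np.linalg.norm(grad), np.linalg.norm(grad_tangent))
assert np.isclose(grad_tangent @ x, 0.0)  # the projection lies in T_x S^2
```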
Certain manifolds "shrink" vectors upon projection to the tangent plane more than others, on average, which is a function of their geometry. We illustrate how two distinct 1D manifolds "shrink" the gradients differently upon projection to the tangent plane, as reflected in the number of linear regions on the manifolds (see Figure 11 in the appendix). We provide intuition for the curvature of a manifold in Appendix B, due to space constraints, which is used in the lower bound for the average distance in Theorem 3.4. The constant \(C_{M,\kappa}\) depends on the curvature as the supremum of a polynomial whose coefficients depend on the curvature, with order at most \(n_{\text{in}}\) and at least \(n_{\text{in}}-m\). Note that despite this dependence on the ambient dimension, there are other geometric constants in this polynomial (see Appendix G). Finally, we also show how this constant varies with \(n_{\text{in}}\) and \(m\), for a simple and contrived example, in Appendix G.1.

## 4 Experiments

### 4.1 Linear Regions on a 1D Curve

To empirically corroborate our theoretical results, we calculate the number of linear regions and the average distance to the linear boundary on 1D curves for regression tasks in two settings. The first is for 1D manifolds embedded in 2D and higher dimensions and the second is for the high-dimensional data using the MetFaces dataset. We use the same algorithm, for the toy problem and the high-dimensional dataset, to find linear regions on 1D curves. We calculate the exact number of linear regions for a 1D curve in the input space, \(x:I\to\mathbb{R}^{n_{\text{in}}}\) where \(I\) is an interval in the real numbers, by finding the points where \(z(x(t))=-b_{z}\) for every neuron \(z\). The solutions thus obtained give us the boundaries for neurons on the curve \(x\).
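A lightweight, approximate version of this count (our sketch, not the paper's solver; the one-hidden-layer network and sample count are arbitrary choices) scans the curve densely and counts changes in the activation pattern, each of which crosses a point where some neuron's pre-activation \(b_{z}+z(x)\) changes sign:

```python
import numpy as np

rng = np.random.default_rng(2)
# A small random one-hidden-layer ReLU net on R^2.
W1 = rng.normal(0.0, 1.0, (16, 2))
b1 = rng.normal(0.0, 0.5, 16)

def activation_pattern(x):
    """Which neurons are active: the sign pattern of the pre-activations b_z + z(x)."""
    return tuple((b1 + W1 @ x) > 0)

# Sample the 1D curve x: (-pi, pi) -> R^2 tracing the unit circle.
ts = np.linspace(-np.pi, np.pi, 20000)
pts = np.stack([np.cos(ts), np.sin(ts)], axis=1)

# Every change of activation pattern between consecutive samples crosses a
# neuron boundary, so counting changes counts linear regions along the curve.
patterns = [activation_pattern(p) for p in pts]
n_regions = 1 + sum(p != q for p, q in zip(patterns, patterns[1:]))
print(n_regions)
```

Since each first-layer boundary is a line, which meets the circle in at most two points, the count here can never exceed \(2\times 16+1\); exact root-finding, as described next, replaces the dense scan in the actual experiments.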
We obtain these solutions by using the programmatic activation of every neuron and using the sequential least squares programming (SLSQP) algorithm (Kraft, 1988) to solve \(|z(x(t))+b_{z}|=0\) for \(t\in I\). In order to obtain the programmatic activation of a neuron we construct a deep ReLU network as defined in Equation 2. We do so for all the neurons for a given DNN with fixed weights.

### 4.2 Supervised Learning on Toy Dataset

We define two similar regression tasks where the data is sampled from two different manifolds with different geometries. We parameterize the first task, a unit circle with one point removed, by \(\psi_{\text{circle}}:(-\pi,\pi)\to\mathbb{R}^{2}\) where \(\psi_{\text{circle}}(\theta)=(\cos\theta,\sin\theta)\) and \(\theta\) is the angle made by the vector from the origin to the point with respect to the x-axis. We set the target function for the regression task to be a periodic function in \(\theta\). The target is defined as \(z(\theta)=a\sin(\nu\theta)\) where \(a\) is the amplitude and \(\nu\) is the frequency (Figure 3). DNNs have difficulty learning periodic functions (Ziyin et al., 2020). The motivation behind this choice is to present the DNN with a challenging task where it has to learn the underlying structure of the data. Moreover, the DNN will have to split the circle into linear regions. For the second regression task, a tractrix is parametrized by \(\psi_{\text{tractrix}}:\mathbb{R}^{1}\to\mathbb{R}^{2}\) where \(\psi_{\text{tractrix}}(y)=(y-\tanh y,\operatorname{sech}y)\) (see Figure 3). We assign a target function \(z(t)=a\sin(\nu t)\). For the purposes of our study we restrict the domain of \(\psi_{\text{tractrix}}\) to \((-3,3)\). We choose \(\nu\) so as to ensure that the number of peaks and troughs, 6, in the periodic target function is the same for both manifolds. This ensures that the domains of both problems have length close to 6.28. Further experimental details are in Appendix H.
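The two parametrizations and the shared target can be written down directly. This is our own sketch; the amplitude \(a=1\) and frequency \(\nu=3\) (three sine periods, i.e. 6 peaks and troughs over a domain of length roughly \(2\pi\)) are assumed values consistent with, but not stated in, the text:

```python
import numpy as np

# The two toy 1D manifolds of Section 4.2 and the shared periodic target.
def psi_circle(theta):
    return np.array([np.cos(theta), np.sin(theta)])

def psi_tractrix(y):
    return np.array([y - np.tanh(y), 1.0 / np.cosh(y)])  # sech(y) = 1/cosh(y)

def target(t, a=1.0, nu=3.0):
    # z(t) = a*sin(nu*t); a and nu are assumed illustrative values.
    return a * np.sin(nu * t)

# Sample training data on each manifold over the domains used in the text.
rng = np.random.default_rng(3)
thetas = rng.uniform(-np.pi, np.pi, 256)
X_circle = np.stack([psi_circle(t) for t in thetas])
y_circle = target(thetas)

ys = rng.uniform(-3.0, 3.0, 256)
X_tractrix = np.stack([psi_tractrix(v) for v in ys])
y_tractrix = target(ys)
print(X_circle.shape, X_tractrix.shape)
```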
The results, averaged over 20 runs, are presented in Figures 4 and 5. We note that \(C_{M}\) is smaller for the sphere (based on Figure 4) and the curvature is positive, whilst \(C_{M}\) is larger for the tractrix and the curvature is negative. Both of these constants (curvature and \(C_{M}\)) contribute to the lower bound in Theorem 3.4.

Figure 3: The tractrix (a) and circle (b) are plotted in grey and the target function is in blue. This is for illustration purposes and does not match the actual function or domains used in our experiments.

Similarly, we show the number of linear regions divided by the number of neurons upon changing architectures, and consequently the number of neurons, for the two manifolds in Figure 8, averaged over 30 runs. Note that this experiment observes the effect of \(C_{M}\times C_{\text{grad}}\), since changing the architecture also changes \(C_{\text{grad}}\); the variation in \(C_{\text{grad}}\) is quite low in magnitude, as observed empirically by Hanin and Rolnick (2019). The empirical observations are consistent with our theoretical results. We observe that the number of linear regions starts off close to \(\#\)neurons and remains close throughout the training process for both manifolds. This supports our theoretical results (Theorem 3.3) that the constant \(C_{M}\), which is distinct across the two manifolds, affects the number of linear regions throughout training. The tractrix has a higher value of \(C_{M}\) and that is reflected in both Figures 4 and 5. Note that its relationship to the average distance to the boundary region is inverse, as per Theorem 3.4, and this is reflected as training progresses in Figure 5. This is due to the different "shrinking" of vectors upon being projected to the tangent space (Section 3.1).

### 4.3 Varying Input Dimensions

To empirically corroborate the results of Theorems 3.3 and 3.4, we vary the dimension \(n_{\text{in}}\) while keeping \(m\) constant.
We achieve this by counting the number of linear regions and the average distance to the boundary region on the 1D circle as we vary the input dimension in steps of 5. We draw samples of 1D circles in \(\mathbb{R}^{n_{\text{in}}}\) by randomly choosing two perpendicular basis vectors. We then train a network with the same architecture as in the previous section on the periodic target function (\(a\sin(\nu\theta)\)) as defined above. The results in Figure 6 show that the number of linear regions stays proportional to \(\#\)neurons and does not vary as \(n_{\text{in}}\) is increased, and Figure 7 shows the same for the average distance, as predicted by our theoretical results. Our empirical study indicates that, when data lies on a low-dimensional manifold, the relevant upper and lower bounds on the density of linear regions do not grow exponentially with \(n_{\text{in}}\), as they would for a compact set of \(\mathbb{R}^{n_{\text{in}}}\), but instead depend on the intrinsic dimension. Further details are in Appendix H.

### 4.4 MetFaces: High Dimensional Dataset

Our goal with this experiment is to study how the density of linear regions varies across a low dimensional manifold and the input space. To discover the latent low dimensional underlying structure of the data we employ a GAN. Adversarial training of GANs can be effectively applied to learn a mapping from a low dimensional latent space to high dimensional data (Goodfellow et al., 2014). The generator is a neural network that maps \(g:\mathbb{R}^{k}\rightarrow\mathbb{R}^{n_{\text{in}}}\). We train a deep ReLU network on the MetFaces dataset with random labels (chosen from \(\{0,1\}\)) with cross entropy loss. As noted by Zhang et al. (2017), training with random labels can lead to the DNN memorizing the entire dataset. We compare the log density of the number of linear regions on a curve on the manifold with that on a straight line off the manifold. We generate these curves using data sampled from the StyleGAN (Karras et al., 2020b).
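The random embedding of a 1D circle into \(\mathbb{R}^{n_{\text{in}}}\) can be sketched as follows (our illustration; QR decomposition is one convenient way to obtain two random orthonormal vectors):

```python
import numpy as np

def random_circle(n_in, n_samples=512, seed=0):
    """Sample points on a unit circle embedded in R^{n_in} via two random orthonormal vectors."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.normal(size=(n_in, 2)))  # Q has two orthonormal columns
    thetas = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    # cos(theta) e1 + sin(theta) e2 traces a 1D circle in R^{n_in}.
    X = np.cos(thetas)[:, None] * Q[:, 0] + np.sin(thetas)[:, None] * Q[:, 1]
    return X, thetas

for n_in in range(5, 30, 5):  # vary the ambient dimension in steps of 5; m stays 1
    X, thetas = random_circle(n_in)
    assert np.allclose(np.linalg.norm(X, axis=1), 1.0)  # still a unit circle
    print(n_in, X.shape)
```

The manifold dimension is always 1 here regardless of \(n_{\text{in}}\), which is what lets the experiment isolate the effect of the ambient dimension.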
Specifically, for each curve we sample a random pair of latent vectors: \(z_{1},z_{2}\in\mathbb{R}^{k}\), this gives us the start and end point of the curve using the generator \(g(z_{1})\) and \(g(z_{2})\). We then generate 100 images to approximate a curve connecting the two images on the image manifold in a piece-wise manner. We do so by taking 100 points on the line connecting \(z_{1}\) and \(z_{2}\) in the latent space that are evenly spaced and generate an image from each one of them. Therefore, the \(i^{\text{th}}\) image is generated as: \(z_{i}^{\prime}=g((100-i)\times z_{1}+i\times z_{2})/100)\), using the StyleGAN generator \(g\). We qualitatively verify the images to ensure that they lie on the manifold of images of faces. The straight line, with two fixed points \(g(z_{1})\) and \(g(z_{2})\), is defined as \(x(t)=(1-t)g(z_{1})+tg(z_{2})\) with \(t\in[0,1]\). The approximated curve on the manifold is defined as \(x^{\prime}(t)=(1-t)g(z_{i}^{\prime})+tg(z_{i+1}^{\prime})\) where \(i=\texttt{floor}(100t)\). We then apply the method from Section 4.1 to obtain the number of linear regions on these curves. The results are presented in Figure 9. This leads us to the key observation: the density of linear regions is significantly lower on the data manifold and devising methods to "concentrate" these linear regions on the manifold is a promising research direction. That could lead to increased expressivity for the same number of parameters. We provide further experimental details in Appendix I. ## 5 Discussion and Conclusions There is significant work in both supervised and unsupervised learning settings for non-Euclidean data (Bronstein et al., 2017). Despite these empirical results most theoretical analysis is agnostic to data geometry, with a few prominent exceptions (Cloninger and Klock, 2020; Shaham et al., Figure 4: Graph of number of linear regions for tractrix (blue) and sphere (orange). The shaded regions represent one standard deviation. 
Note that the number of neurons is 26 and the number of linear regions is comparable to 26 but different for both manifolds throughout training. Figure 8: The effects of changing the architecture on the number of linear regions. We observe that the value of \(C_{M}\) affects the number of linear regions proportionally. The numbers of hidden units for the three-layer networks are in the legend along with the data manifold. Figure 6: We observe that as the dimension \(n_{\text{in}}\) is increased, while keeping the manifold dimension constant, the number of linear regions remains proportional to the number of neurons (26). Figure 7: We observe that as the dimension \(n_{\text{in}}\) is increased, while keeping the manifold dimension constant, the average distance varies very little. Figure 9: We observe that the log density of the number of linear regions is lower on the manifold (blue) as compared to off the manifold (green). This is for the MetFaces dataset. 2015; Schmidt-Hieber, 2019). We incorporate the idea of data geometry into measuring the effective approximation capacity of DNNs, deriving average bounds on the density of boundary regions and the distance from the boundary when the data is sampled from a low dimensional manifold. Our experimental results corroborate our theoretical results. We also present insights into the expressivity of DNNs on low dimensional manifolds for the case of high dimensional datasets. Estimating the geometry, dimensionality, and curvature of these image manifolds accurately is a problem that remains largely unsolved (Brehmer and Cranmer, 2020; Perraul-Joncas and Meila, 2013), which limits our inferences on high dimensional datasets to observations that guide future research. We note that proving a lower bound on the number of linear regions, as done by Hanin and Rolnick (2019), for the manifold setting remains open. 
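The on-/off-manifold curve construction used in the MetFaces experiment can be sketched as follows; here `g` is a toy stand-in for the StyleGAN generator (an arbitrary smooth map used only for illustration), and the latent and image dimensions are hypothetical:

```python
import math

def g(z):
    """Toy stand-in for a generator R^k -> R^{n_in} (NOT the real StyleGAN)."""
    return [math.sin(sum(z) + i) for i in range(8)]

def lerp(a, b, s):
    """Linear interpolation between two vectors."""
    return [(1.0 - s) * x + s * y for x, y in zip(a, b)]

z1, z2 = [0.1, -0.4], [0.9, 0.3]
# 100 evenly spaced latent points between z1 and z2; one image from each.
imgs = [g(lerp(z1, z2, i / 99.0)) for i in range(100)]

def off_manifold(t):
    """Straight line in image space between the endpoint images."""
    return lerp(g(z1), g(z2), t)

def on_manifold(t):
    """Piecewise-linear curve through the generated images."""
    i = min(int(t * 99.0), 98)
    s = t * 99.0 - i
    return lerp(imgs[i], imgs[i + 1], s)
```

Both curves share the same endpoints \(g(z_1)\) and \(g(z_2)\); only the on-manifold one stays close to the generator's image.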
Our work opens up avenues for further research that combines model geometry and data geometry, and can lead to empirical research geared towards developing DNN architectures for high dimensional datasets that lie on a low dimensional manifold. ## 6 Acknowledgements This work was funded by L2M (DARPA Lifelong Learning Machines program under grant number FA8750-18-2-0117), the Penn MURI (ONR under the PERISCOPE MURI Contract N00014-17-1-2699), and the ONR Swarm (the ONR under grant number N00014-21-1-2200). This research was conducted using computational resources and services at the Center for Computation and Visualization, Brown University. We would like to thank Sam Lobel, Rafael Rodriguez Sanchez, and Akhil Bagaria for refining our work, multiple technical discussions, and their helpful feedback on the implementation details. We also thank Tejas Kotwal for assistance in deriving the mathematical details related to the 1D Tractrix and for sources for various citations. We thank Professor Pedro Lopes de Almeida, Nihal Nayak, Cameron Allen and Aarushi Kalra for their valuable comments on the writing and presentation of our work. We thank all the members of the Brown robotics lab for their guidance and support at various stages of our work. Finally, we are indebted to, and graciously thank, the numerous anonymous reviewers for their time and labor, as their valuable feedback and thoughtful engagement have shaped and vastly refined our work.
2309.15541
Classical conformal blocks, Coulomb gas integrals, and quantum integrable models
In this paper, we recall Richardson's solution of the reduced BCS model, its relationship with the Gaudin model, and the known implementation of these models in conformal field theory. The CFT techniques applied here are based on the use of the free field realization, or more precisely, on the calculation of saddle-point values of Coulomb gas integrals representing certain (perturbed) WZW conformal blocks. We identify the saddle-point limit as the classical limit of conformal blocks. We show that this observation implies a new method for calculating classical conformal blocks and can be further used in the study of quantum integrable models.
Marcin R. Piatek
2023-09-27T10:01:44Z
http://arxiv.org/abs/2309.15541v2
# Classical conformal blocks, Coulomb gas integrals, ###### Abstract In this paper, we recall Richardson's solution of the reduced BCS model, its relationship with the Gaudin model, and the known implementation of these models in conformal field theory. The CFT techniques applied here are based on the use of the free field realization, or more precisely, on the calculation of saddle-point values of Coulomb gas integrals representing certain (perturbed) WZW conformal blocks. We identify the saddle-point limit as the classical limit of conformal blocks. We show that this observation implies a new method for calculating classical conformal blocks and can be further used in the study of quantum integrable models. ## 1 Introduction In this note, we discuss the Richardson and Gaudin quantum integrable models and their implementation in conformal field theory (CFT). We point out that the latter is related to the classical limit of conformal blocks. Exploring this relationship in depth may lead to new methods for analyzing quantum many-body systems, on the one hand, and for obtaining novel results concerning conformal blocks, on the other hand. We give examples of these possibilities. Conformal blocks \(\mathcal{F}(\{\Delta_{i}\}_{i=1}^{n},\{\Delta_{p}\}_{p=1}^{3g-3+n},c\,|\,\cdot\,)\) represent holomorphic contributions to physical correlation functions. Although they are fully determined by conformal symmetry, they are not known in a closed form except for a few examples. These functions depend on the central charge \(c\) of the Virasoro algebra, external conformal weights \(\{\Delta_{i}\}_{i=1}^{n}\), conformal weights \(\{\Delta_{p}\}_{p=1}^{3g-3+n}\) in the intermediate channels, vertex operator locations, and modular parameters in the case of conformal field theories living on surfaces with genus \(g>0\). 
Lately, a central issue concerning conformal blocks is the problem of calculating their classical limit: \[c\longrightarrow\infty\quad\Longleftrightarrow\quad b\longrightarrow 0\quad\text{or}\quad b\longrightarrow\infty\quad\text{for}\;\;c=1+6\left(b+\frac{1}{b}\right)^{2}. \tag{1}\] Based on concrete examples one can conjecture how conformal blocks behave in the classical limit. If all the weights are heavy, i.e., \((\Delta_{i},\Delta_{p})=b^{2}(\delta_{i},\delta_{p})\) and \(\delta_{i},\delta_{p}=\mathcal{O}(b^{0})\), then in the limit (1) blocks exponentiate to functions known as Zamolodchikovs' classical conformal blocks [1]:2 Footnote 2: Analogously for \(b\longrightarrow 0\) and \((\Delta_{i},\Delta_{p})=b^{-2}(\delta_{i},\delta_{p})\). \[\mathcal{F}\Big{(}\{\Delta_{i}\},\{\Delta_{p}\},c\,|\,\cdot\,\Big{)}\stackrel{{ b\to\infty}}{{\sim}}\,\mathrm{e}^{b^{2}f\left(\{\delta_{i}\},\{\delta_{p}\}\,|\,\cdot\,\right)}. \tag{2}\] If the external weights are heavy and light (\(\lim_{b\to\infty}b^{-2}\Delta_{k}^{\rm light}=0\)) then in the classical limit conformal blocks decompose into a product of the "light contribution" \(\psi_{\rm light}(\cdot)\) and the exponent of the classical block:3 Footnote 3: Analogously for \(b\longrightarrow 0\) and \(\lim_{b\to 0}b^{2}\Delta_{k}^{\rm light}=0\). \[{\cal F}\Big{(}\{\Delta_{I}\}\cup\{\Delta_{k}^{\rm light}\},\{\Delta_{p}\},c\,|\,\cdot\,\Big{)}\ \stackrel{{ b\to\infty}}{{\sim}}\psi_{\rm light}(\cdot)\ {\rm e}^{b^{2}f\big{(}\{\delta_{i}\},\{\delta_{p}\}\,|\,\cdot\,\big{)}}. \tag{3}\] If all the weights are fixed then in the limit \(c\longrightarrow\infty\) conformal blocks reduce to the so-called global blocks, i.e., contributions to the correlation functions from representations of the \({\rm sl}(2,{\bf C})\) algebra. It turns out that semiclassical asymptotics of conformal blocks have fascinating mathematical and physical applications. 
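For orientation, the exponentiation (2) can be checked order by order in the expansion variable. For the spherical four-point block, keeping only the level-one term of the standard BPZ series (a textbook first-order computation, quoted here as a sketch):

```latex
\mathcal{F}(\Delta_i,\Delta,c\,|\,x)
  = x^{\Delta-\Delta_1-\Delta_2}
    \Big(1+\frac{(\Delta+\Delta_2-\Delta_1)(\Delta+\Delta_3-\Delta_4)}{2\Delta}\,x
      +\mathcal{O}(x^2)\Big),
```

so that for heavy weights \(\Delta_i=b^2\delta_i\), \(\Delta=b^2\delta\) the limit \(b\to\infty\) gives, term by term,

```latex
f(\delta_i,\delta\,|\,x)
  = (\delta-\delta_1-\delta_2)\log x
    +\frac{(\delta+\delta_2-\delta_1)(\delta+\delta_3-\delta_4)}{2\delta}\,x
    +\mathcal{O}(x^2).
```

Each coefficient of the classical block is the \(b\to\infty\) limit of \(b^{-2}\) times the corresponding coefficient of \(\log\mathcal{F}\).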
Monodromy problems, uniformization, hyperbolic geometry, string field theory, Bethe/gauge and AGT correspondences, entanglement, quantum chaos, thermalization, AdS\({}_{3}\)/CFT\({}_{2}\) holography, and perturbation theory of black holes are just some of the topics in which the classical limit of conformal blocks is used. The present article discusses yet one more such area of application, i.e., the field of quantum integrable systems. This exposition is partially based on the work [2]. ## 2 Models of Richardson and Gaudin The Richardson model, also known as the reduced BCS model, is defined by the Hamiltonian, \[\hat{\sf H}_{\sf BCS}=\sum_{j,\sigma=\pm}\varepsilon_{j\sigma}c_{j\sigma}^{\dagger}c_{j\sigma}-gd\sum_{j,j^{\prime}}c_{j+}^{\dagger}c_{j-}^{\dagger}c_{j^{\prime}-}c_{j^{\prime}+} \tag{4}\] which consists of a kinetic term and an interaction term describing the attraction between Cooper pairs. Here, \(c_{j\sigma}^{\dagger}\), \(c_{j\sigma}\) are the fermion creation and annihilation operators in time-reversed states \(|\,j,\pm\,\rangle\) with energies \(\varepsilon_{j\pm}\), \(j=1,\ldots,\Omega\). The Hamiltonian (4) is a simplified version of the Bardeen-Cooper-Schrieffer (BCS) Hamiltonian, where all couplings have been set equal to a single one, namely \(g\). The constant \(d\) is a mean level spacing. \(\hat{\sf H}_{\sf BCS}\) can be written in terms of the "hard-core" boson operators \(b_{j}^{\dagger}=c_{j+}^{\dagger}c_{j-}^{\dagger}\), \(b_{j}=c_{j-}c_{j+}\), which create and annihilate fermion pairs, respectively, and obey the following commutation rules: \([b_{j},b_{j^{\prime}}^{\dagger}]=\delta_{j,j^{\prime}}(1-2\hat{\sf N}_{j})\), \(\hat{\sf N}_{j}=b_{j}^{\dagger}\,b_{j}\). The Hamiltonian (4) rewritten in terms of these operators reads as follows: \[\hat{\sf H}_{\sf BCS}=\sum_{j}2\varepsilon_{j}b_{j}^{\dagger}b_{j}-gd\sum_{j,j^{\prime}}b_{j}^{\dagger}b_{j^{\prime}}. 
\tag{5}\] As above, the sums are taken over a set \(\Omega\) of doubly degenerate energy levels \(\varepsilon_{j\pm}\). In the 1960s Richardson exactly solved an eigenvalue problem for (5) through the Bethe ansatz [3, 4]. Richardson proposed an ansatz for an exact eigenstate, namely, \(|\,N\,\rangle=\prod_{\nu=1}^{N}B_{\nu}^{\dagger}|\,0\,\rangle\), where the pair operators \(B_{\nu}^{\dagger}=\sum_{j=1}^{\Omega}b_{j}^{\dagger}/(2\varepsilon_{j}-u_{\nu})\) have the form appropriate to the solution of the one-pair problem. The quantities \(u_{\nu}\) are pair energies. They are understood as auxiliary parameters which are then chosen to fulfill the eigenvalue equation \(\hat{\sf H}_{\sf BCS}\,|\,N\,\rangle={\sf E}_{\sf BCS}(N)\,|\,N\,\rangle\), where \({\sf E}_{\sf BCS}(N)=\sum_{\nu=1}^{N}u_{\nu}\). The state \(|\,N\,\rangle\) is an eigenstate of \(\hat{\sf H}_{\sf BCS}\) if the \(N\) pair energies \(u_{\nu}\) are, in general complex, solutions of the (Bethe ansatz) equations: \[\frac{1}{gd}+\sum_{i=1}^{\Omega}\frac{1}{u_{\nu}-z_{i}}=\sum_{\mu\neq\nu}^{N}\frac{2}{u_{\nu}-u_{\mu}}\ \ \ \ {\rm for}\ \ \nu=1,\ldots,N,\ \ \ z_{i}=2\varepsilon_{i}. \tag{6}\] There is a connection between the Richardson model and a class of integrable spin models obtained by Gaudin. Indeed, in 1976 Gaudin proposed the so-called rational, trigonometric and elliptic integrable models based on sets of certain commuting Hamiltonians [5, 6]. The simplest model in this family is the rational model defined by a collection of the following Hamiltonians: \[\hat{\sf H}_{\sf G,i}=\sum_{j\neq i}^{\Omega}\frac{1}{\varepsilon_{i}-\varepsilon_{j}}\left[t_{i}^{0}t_{j}^{0}+\frac{1}{2}\left(t_{i}^{+}t_{j}^{-}+t_{i}^{-}t_{j}^{+}\right)\right]=:\sum_{j\neq i}^{\Omega}\frac{{\bf t}_{i}\cdot{\bf t}_{j}}{\varepsilon_{i}-\varepsilon_{j}}. 
\tag{7}\] Each separate spin corresponds to a spin-\(\frac{1}{2}\) realization of the su(2) algebra generated by \(\mathfrak{t}^{0}\), \(\mathfrak{t}^{+}\), \(\mathfrak{t}^{-}\). The spin-\(\frac{1}{2}\) generators can be written in terms of the hard-core boson operators: \(\mathfrak{t}^{+}_{j}=b^{\dagger}_{j}\), \(\mathfrak{t}^{-}_{j}=b_{j}\), \(\mathfrak{t}^{0}_{j}=\frac{1}{2}-\hat{\mathsf{N}}_{j}\). Therefore, \(\hat{\mathsf{H}}_{\mathsf{G},i}\) can be diagonalized by means of the Richardson method. As before, the energy eigenvalue is given by \(\mathsf{E}_{\mathsf{G},i}(N)=\sum_{\nu=1}^{N}u_{\nu}\), but this time the parameters \(u_{\nu}\) satisfy equations which are nothing but the Richardson equations (6) in the limit \(g\longrightarrow\infty\). In 1997 Cambiagio, Rivas and Saraceno (CRS) uncovered [7] that the conserved charges of the reduced BCS model are given in terms of the rational Gaudin Hamiltonians, i.e., \(\hat{\mathsf{R}}_{i}=-\mathfrak{t}^{0}_{i}-gd\,\hat{\mathsf{H}}_{\mathsf{G},i}\). The quantum integrals of motion \(\hat{\mathsf{R}}_{i}\) themselves can be seen as a set of commuting Hamiltonians. This is the famous Gaudin model of magnets, also known as the central spin model.4 Knowing \(\hat{\mathsf{R}}_{i}\) one can express \(\hat{\mathsf{H}}_{\mathsf{BCS}}\) in terms of these quantum integrals of motion. As a result one gets: Footnote 4: Actually, it describes a central spin at position “\(0\)” which is coupled to bath spins through long-range interactions, \(\hat{\mathsf{H}}=\mathsf{B}\,s^{z}_{0}+2\sum_{j=1}^{\Omega}(\mathbf{s}_{0}\cdot\mathbf{s}_{j})/(\varepsilon_{0}-\varepsilon_{j})\). Here, \(\varepsilon_{0}=0\) and \(\varepsilon_{j}\) are the energy levels of the Richardson-BCS model. The magnetic field has been chosen as \(\mathrm{B}=-2/g\) and \(d=1\). 
\[\hat{\mathsf{H}}_{\mathsf{BCS}}=\hat{\mathsf{H}}_{\mathsf{XY}}+\sum_{j=1}^{\Omega}\varepsilon_{j}+gd\left(\frac{1}{2}\Omega-N\right),\quad\hat{\mathsf{H}}_{\mathsf{XY}}=\sum_{j=1}^{\Omega}2\varepsilon_{j}\hat{\mathsf{R}}_{j}+gd\Big{(}\sum_{j=1}^{\Omega}\hat{\mathsf{R}}_{j}\Big{)}^{2}-\frac{3}{4}gd\,\Omega. \tag{8}\] Eq. (8) opens a possibility to calculate eigenvalues of \(\hat{\mathsf{R}}_{i}\) by applying Richardson's solution of the spectral problem for \(\hat{\mathsf{H}}_{\mathsf{BCS}}\). However, the eigenvalues of the CRS operators have been computed in a different way. More specifically, in 2000 Sierra found [8] a closed expression for them, i.e., \[\lambda_{i}=\frac{gd}{2}\frac{\partial U(\mathbf{z},\mathbf{u}^{c})}{\partial z_{i}}\Big{|}_{z_{i}=2\varepsilon_{i}}=-\frac{1}{2}+gd\left(\sum_{\nu=1}^{N}\frac{1}{2\varepsilon_{i}-u^{c}_{\nu}}-\frac{1}{4}\sum_{j\neq i}^{\Omega}\frac{1}{\varepsilon_{i}-\varepsilon_{j}}\right), \tag{9}\] using methods of CFT. The quantity \(U(\mathbf{z},\mathbf{u}^{c})\), named "Coulomb energy" in [8], is the critical value of the "potential": \[U(\mathbf{z},\mathbf{u}) = -\sum_{i<j}^{\Omega}\log(z_{i}-z_{j})-4\sum_{\nu<\mu}^{N}\log(u_{\nu}-u_{\mu}) \tag{10}\] \[+ 2\sum_{i=1}^{\Omega}\sum_{\nu=1}^{N}\log(z_{i}-u_{\nu})+\frac{1}{gd}\left(-\sum_{i=1}^{\Omega}z_{i}+2\sum_{\nu=1}^{N}u_{\nu}\right).\] Here, \(\mathbf{u}^{c}=(u^{c}_{1},\ldots,u^{c}_{N})\) is a solution of the conditions \(\partial U(\mathbf{z},\mathbf{u})/\partial u_{\nu}=0\), \(\nu=1,\ldots,N\), which are nothing but the Richardson equations (6). 
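Numerically, the Richardson equations (6) and Sierra's formula (9) are easy to realize in the simplest case of a single pair (\(N=1\)), where the \(u_{\nu}-u_{\mu}\) term is absent. A sketch with arbitrary illustrative levels and coupling (not values from the text); the ground-state root is sought below the lowest level \(2\varepsilon_{1}\):

```python
import math

g_d = 0.5                       # hypothetical coupling g*d
eps = [1.0, 2.0, 3.0, 4.5]      # hypothetical single-particle levels eps_i
z = [2.0 * e for e in eps]      # z_i = 2 eps_i

def richardson_lhs(u):
    """For N = 1 the pair-pair term of (6) drops out."""
    return 1.0 / g_d + sum(1.0 / (u - zi) for zi in z)

# Bisect on an interval below z_1 where the function changes sign.
lo, hi = z[0] - 100.0, z[0] - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if richardson_lhs(lo) * richardson_lhs(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
u_c = 0.5 * (lo + hi)           # the single Richardson parameter u_1^c

def lam(i):
    """Sierra's eigenvalue (9) of the CRS charge R_i, evaluated for N = 1."""
    return -0.5 + g_d * (1.0 / (2.0 * eps[i] - u_c)
                         - 0.25 * sum(1.0 / (eps[i] - eps[j])
                                      for j in range(len(eps)) if j != i))

print(u_c, [lam(i) for i in range(len(eps))])
```

For larger \(N\) one would instead solve the coupled system (6), e.g. with a Newton iteration; the bisection above only covers the one-pair case.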
To solve the eigenproblems for the Richardson model conserved charges, Sierra has shown in [8] that the Knizhnik-Zamolodchikov (KZ) equation obeyed by the \(\widetilde{\mathrm{su}(2)}_{k}\) WZW block, i.e., \[\left(\kappa\partial_{z_{i}}-\sum_{j\neq i}^{\Omega+1}(\mathbf{t}_{i}\cdot\mathbf{t}_{j})/(z_{i}-z_{j})\right)\psi^{\mathsf{WZW}}(z_{1},\ldots,z_{\Omega+1})=0,\qquad\kappa=(k+2)/2 \tag{11}\] is completely equivalent to the following: \[(2gd)^{-1}\hat{\mathsf{R}}_{i}\psi=-\kappa\partial_{z_{i}}\psi,\qquad\psi^{\mathsf{WZW}}=\exp\left[(2gd\kappa)^{-1}\hat{\mathsf{H}}_{\mathsf{XY}}\right]\psi. \tag{12}\] Here, \(\psi=\psi^{\mathrm{CG}}_{\mathbf{m}}(\mathbf{z})\) is a certain perturbed WZW conformal block in the free field (Coulomb gas) representation. More precisely, \(\psi^{\mathrm{CG}}_{\mathbf{m}}(\mathbf{z})\) consists of (i) the \(\widetilde{\mathrm{su}(2)}_{k}\) WZW chiral primary fields \(\Phi^{j}_{m}(z)=(\gamma(z))^{j-m}\,\mathsf{V}_{\alpha}(z)\) built out of the \(\gamma\)-field of the \(\beta\gamma\)-system and Virasoro chiral vertex operators \(\mathsf{V}_{\alpha}(z)\) represented as normal ordered exponentials with conformal weights \(\Delta_{\alpha}=\alpha(\alpha-2\alpha_{0})=j(j+1)/(k+2)\);5 (ii) WZW screening charges; (iii) an additional operator \({\sf V}_{gd}\) which breaks conformal invariance. Within this realization, to every energy level \(z_{i}=2\varepsilon_{i}\) there corresponds the field \(\Phi_{m_{i}}^{j}(z_{i})\) with the spin \(j=\frac{1}{2}\) and the "third component" \(m_{i}=\frac{1}{2}\) (or \(m_{i}=-\frac{1}{2}\)) if the corresponding energy level is empty (or occupied) by a pair of fermions. The integration variables \(u_{\nu}\) in the screening operators are the Richardson parameters. The operator \({\sf V}_{gd}\) implements the coupling \(gd\) and is the source of the term \(\frac{1}{gd}\) in the Richardson equations (6). 
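Stepping back to the operator side, the hard-core boson rule \([b_{j},b_{j'}^{\dagger}]=\delta_{j,j'}(1-2\hat{\mathsf{N}}_{j})\) and the commutativity of the CRS charges \(\hat{\mathsf{R}}_{i}\) appearing in (12) can be verified directly for a small system. A sketch with \(\Omega=3\) sites and arbitrary nondegenerate levels (exact rational arithmetic, so the commutators vanish identically):

```python
from fractions import Fraction as F

# Single-site operators in the basis (|empty>, |pair>):
# t^+ = b_dag creates a pair, t^- = b destroys it, t^0 = 1/2 - N.
TP = [[F(0), F(0)], [F(1), F(0)]]
TM = [[F(0), F(1)], [F(0), F(0)]]
T0 = [[F(1, 2), F(0)], [F(0), F(-1, 2)]]
I2 = [[F(1), F(0)], [F(0), F(1)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scal(c, A):
    return [[c * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def kron(A, B):
    m = len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(len(A) * m)] for i in range(len(A) * m)]

def site(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    M = [[F(1)]]
    for k in range(n):
        M = kron(M, op if k == i else I2)
    return M

n = 3
eps = [F(1), F(2), F(4)]   # arbitrary nondegenerate levels (illustrative)
g_d = F(3, 10)             # arbitrary coupling g*d (illustrative)

def tdot(i, j):
    """t_i . t_j = t0_i t0_j + (t+_i t-_j + t-_i t+_j)/2."""
    return add(mul(site(T0, i, n), site(T0, j, n)),
               scal(F(1, 2), add(mul(site(TP, i, n), site(TM, j, n)),
                                 mul(site(TM, i, n), site(TP, j, n)))))

def R(i):
    """CRS charge R_i = -t0_i - g d * sum_{j != i} t_i.t_j / (eps_i - eps_j)."""
    out = scal(F(-1), site(T0, i, n))
    for j in range(n):
        if j != i:
            out = add(out, scal(-g_d / (eps[i] - eps[j]), tdot(i, j)))
    return out

def comm(A, B):
    return add(mul(A, B), mul(B, A), s=-1)

# [b, b_dag] = 1 - 2 N on a single site ...
assert comm(TM, TP) == add(I2, scal(F(-2), mul(TP, TM)))
# ... and the CRS charges mutually commute.
zero = [[F(0)] * 2**n for _ in range(2**n)]
assert comm(R(0), R(1)) == zero
```

The check uses dense \(8\times 8\) matrices, which is enough to exhibit the algebra; for realistic \(\Omega\) one would of course not build the Hilbert space explicitly.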
After ordering, \(\psi_{\bf m}^{\rm CG}({\bf z})\) has the structure of a multidimensional contour integral, Footnote 5: Here, \(\alpha=(k+2)^{-\frac{1}{2}}j=-2\alpha_{0}j\). \[\psi_{\bf m}^{\rm CG}({\bf z})=\oint\limits_{C_{1}}{\rm d}u_{1}\ldots\oint\limits_{C_{N}}{\rm d}u_{N}\,\psi_{\bf m}^{\beta\gamma}({\bf z},{\bf u}){\rm e}^{-\alpha_{0}^{2}U({\bf z},{\bf u})}, \tag{13}\] and in the limit \(\alpha_{0}\longrightarrow\infty\Leftrightarrow k\longrightarrow-2\Leftrightarrow\kappa\longrightarrow 0\) it can be calculated using the saddle point method. The stationary points of \(U({\bf z},{\bf u})\) are then given by the solutions of the Richardson equations. After all one gets \(\psi_{\bf m}^{\rm CG}({\bf z})\sim\psi^{\rm R}{\rm e}^{-\alpha_{0}^{2}U({\bf z},{\bf u}^{\rm c})}\) for \(\alpha_{0}\longrightarrow\infty\), where \(\psi^{\rm R}\) is the Richardson wave function. Applying this asymptotic limit to equation (12) one obtains \(\hat{\sf R}_{i}\psi^{\rm R}=\lambda_{i}\psi^{\rm R}\) in the limit \(\kappa\longrightarrow 0\), where the \(\lambda_{i}\) are given by (9). As a final remark in this section let us note that the Coulomb energy \(U({\bf z},{\bf u}^{\rm c})\) and the eigenvalues \(\lambda_{i}\) depend on the Richardson parameters \({\bf u}^{\rm c}=(u_{1}^{\rm c},\ldots,u_{N}^{\rm c})\). It would be nice to have techniques that allow one to calculate functions such as \(U({\bf z},{\bf u}^{\rm c})\) without the need to solve the Bethe ansatz equations. In our opinion, it is possible to develop such a method. 
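The mechanism behind \(\psi_{\bf m}^{\rm CG}({\bf z})\sim\psi^{\rm R}\,{\rm e}^{-\alpha_{0}^{2}U({\bf z},{\bf u}^{\rm c})}\) is ordinary Laplace/saddle-point asymptotics: \(-\alpha_{0}^{-2}\log\int{\rm e}^{-\alpha_{0}^{2}U}\to U(u^{c})\) as \(\alpha_{0}\to\infty\). A one-variable toy illustration (the quadratic \(U\) below is a stand-in, not the actual potential (10)):

```python
import math

def U(u):
    """Toy single-variable 'potential' with minimum at u_c = 1, U(u_c) = 0.25."""
    return (u - 1.0) ** 2 + 0.25

def cg_integral(alpha, a=-5.0, b=7.0, n=12000):
    """Midpoint approximation of the integral of exp(-alpha^2 U(u)) over [a, b]."""
    h = (b - a) / n
    return h * sum(math.exp(-alpha ** 2 * U(a + (k + 0.5) * h)) for k in range(n))

for alpha in (5.0, 10.0, 30.0):
    print(alpha, -math.log(cg_integral(alpha)) / alpha ** 2)
    # the printed value approaches U(u_c) = 0.25 as alpha grows
```

The subleading \(\mathcal{O}(\alpha^{-2}\log\alpha)\) drift visible in the output corresponds to the fluctuation determinant around the saddle.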
## 3 Virasoro analogues of the Coulomb energy As an example of the last statement in the previous section let us first consider the Coulomb gas representation of some spherical four-point block, namely, \[{\sf Z}(\,\cdot\,|{\bf z}_{f}) = \left\langle:{\rm e}^{\hat{\alpha}_{1}\phi(0)}::{\rm e}^{\hat{\alpha}_{2}\phi(x)}::{\rm e}^{\hat{\alpha}_{3}\phi(1)}::{\rm e}^{\hat{\alpha}_{4}\phi(\infty)}:\left[\int\limits_{0}^{x}:{\rm e}^{b\phi(u)}:{\rm d}u\right]^{N_{1}}\left[\int\limits_{0}^{1}:{\rm e}^{b\phi(u)}:{\rm d}u\right]^{N_{2}}\right\rangle\] \[=x^{\frac{\alpha_{1}\alpha_{2}}{2\beta}}(1-x)^{\frac{\alpha_{2}\alpha_{3}}{2\beta}}\prod_{\mu=1}^{N_{1}}\int\limits_{0}^{x}{\rm d}u_{\mu}\prod_{\mu=N_{1}+1}^{N_{1}+N_{2}}\int\limits_{0}^{1}{\rm d}u_{\mu}\prod_{\mu<\nu}\left(u_{\nu}-u_{\mu}\right)^{2\beta}\prod_{\mu}u_{\mu}^{\alpha_{1}}\left(u_{\mu}-x\right)^{\alpha_{2}}\left(u_{\mu}-1\right)^{\alpha_{3}},\] where \({\bf z}_{f}:=(0,x,1,\infty)\). It was not clear for a long time how to choose integration contours to get an integral representation of the four-point block consistent with the historically first Belavin-Polyakov-Zamolodchikov (BPZ) power series representation [9]:6 Footnote 6: In Eq. (14) the symbols \(V_{\Delta_{i}}(z_{i})\) stand for Virasoro chiral vertex operators; \(\left[G_{c,\Delta}^{n}\right]^{IJ}\) is the inverse of the Gram matrix \(\left[G_{c,\Delta}^{n}\right]_{IJ}=\left\langle\,\Delta_{I}^{n}\,|\Delta_{J}^{n}\,\right\rangle\) calculated in the basis \(\left\{\left|\Delta_{I}^{n}\,\right\rangle\right\}\) of the subspace \({\cal V}_{c,\Delta}^{n}\) of the Verma module \(\bigoplus\limits_{n=0}^{\infty}{\cal V}_{c,\Delta}^{n}\) with basis vectors labeled by partitions \(I=(i_{k}\geq\ldots\geq i_{1}\geq 1)\) with the length \(n=i_{k}+\ldots+i_{1}=:|I|\). 
\[{\cal F}(\,\cdot\,|\,x\,)=\,x^{\Delta-\Delta_{2}-\Delta_{1}}\left(1+\sum_{n=1}^{\infty}x^{n}\sum_{|I|=|J|=n}\left\langle\,\Delta_{4}\,|V_{\Delta_{3}}(1)|\,\Delta_{I}^{n}\,\right\rangle\left[G_{c,\Delta}^{n}\right]^{IJ}\left\langle\,\Delta_{J}^{n}\,|V_{\Delta_{2}}(1)|\,\Delta_{1}\,\right\rangle\right). \tag{14}\] Mironov, Morozov and Shakirov (MMS) showed [10] that \({\sf Z}(\,\cdot\,|{\bf z}_{f})\) precisely reproduces the BPZ four-point block expansion. Thus, there are two ways to compute the \(b\longrightarrow\infty\) asymptotic of \({\sf Z}(\,\cdot\,|{\bf z}_{f})\). On the one hand, it is just the saddle point limit of the integral. On the other hand, it is the classical limit of the BPZ four-point block, \[{\cal F}(\Delta_{i},\Delta,c\,|\,x)\ \stackrel{{ b\to\infty}}{{\sim}}\mathrm{e}^{b^{2}f(\delta_{i},\delta\,|\,x)}\quad\Leftrightarrow\quad f(\delta_{i},\delta\,|\,x)\ =\ \lim_{b\to\infty}\frac{1}{b^{2}}\log{\cal F}(\Delta_{i},\Delta,c\,|\,x)\ =\] \[\ \ where \[W(N,a,z_{1},\ldots,z_{N})=-\sum_{r<s}2\log\theta_{*}(z_{r}-z_{s})+2N\sum_{r=1}^{N}\log\theta_{*}(z_{r})-\sum_{r=1}^{N}iaz_{r}, \tag{17}\] and \(\theta_{*}(z):=\sum_{n=0}^{\infty}(-1)^{n}q^{\frac{1}{2}n(n+1)}\sin\frac{(2n+1)z}{2}\). 2. The parameters \(\mathbf{x}^{c}=(z_{1}^{c},\ldots,z_{N}^{c})\) are solutions of the saddle point equations \(\partial W/\partial z_{r}=0\), \(r=1,\ldots,N\). The above result is new and will be discussed in detail in a separate paper. Here, we will only announce that, based on this observation, one can connect the integral representation of the torus block and its classical/saddle point limit with the Bethe ansatz approach to the elliptic Calogero-Moser (eCM) model. 
The latter is a quantum many-body system with the \(M\)-particle Hamiltonian of the form [12]: \[\hat{\mathsf{H}}_{M}^{\tau,\ell}:=-\frac{1}{2}\sum_{i=1}^{M}\frac{\partial^{2}}{\partial z_{i}^{2}}+\ell(\ell+1)\sum_{1\leq i<j\leq M}\left(\wp(z_{i}-z_{j},\tau)+2\eta\right), \tag{18}\] where \(\ell\in\mathbf{Z}_{>0}\) is the coupling constant and \(\wp(z,\tau)\) is the Weierstrass elliptic function. In the 2-particle case the Hamiltonian (18) reads as \(\hat{\mathsf{H}}_{2}^{\tau,\ell}=-\frac{d^{2}}{dz^{2}}+\ell(\ell+1)\left(\wp(z,\tau)+2\eta\right)\), where \(z=z_{1}-z_{2}\), and (cf. [12]): 1. the Bethe ansatz equations are given by \(\partial\Phi_{\tau}/\partial t_{i}=0\), \(i=1,\ldots,\ell\), where \[\Phi_{\tau}(\ell,m_{1},t_{1},\ldots,t_{\ell})=\mathrm{e}^{i\pi\sum_{j=1}^{\ell}m_{1}t_{j}}\prod_{1\leq j\leq\ell}\theta(t_{j})^{-2\ell}\prod_{1\leq i<j\leq\ell}\theta(t_{i}-t_{j})^{2},\] \[\theta(x):=\frac{\hat{\theta}_{1}(x)}{\hat{\theta}_{1}^{\prime}(0)},\qquad\hat{\theta}_{1}(x):=2q^{\frac{1}{8}}\sum_{n=0}^{\infty}(-1)^{n}q^{\frac{1}{2}n(n+1)}\sin((2n+1)\pi x);\] 2. the eigenfunction (Bethe vector) of the operator \(\hat{\mathsf{H}}_{2}^{\tau,\ell}\) is equal to \(\mathrm{e}^{i\pi z}\theta(z-t_{1})\ldots\theta(z-t_{\ell})/\theta(z)^{\ell}\) up to a constant; 3. the eigenvalue of the operator \(\hat{\mathsf{H}}_{2}^{\tau,\ell}\) is equal to \[\mathrm{const.}-2\pi i\partial_{\tau}S\left(t_{1}^{0},\ldots,t_{\ell}^{0};\tau\right), \tag{19}\] where \((t_{1}^{0},\ldots,t_{\ell}^{0})\) satisfy the Bethe ansatz equations and \[S\left(t_{1},\ldots,t_{\ell};\tau\right)=\mathrm{const.}+2\sum_{i<j}\log\theta(t_{i}-t_{j})-2\ell\sum_{i}\log\theta(t_{i}).\] The eigenvalue equation for the 2-particle eCM Hamiltonian is nothing but the famous Lamé equation, \(\psi^{\prime\prime}(z)-\left[\,\kappa\,\wp(z)+\mathsf{B}\,\right]\psi=0\). 
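As a small numerical check of these Bethe equations: for \(\ell=1\) the critical-point condition \(\partial S/\partial t_{1}=0\) reduces to \(\theta^{\prime}(t_{1})=0\), which the series for \(\hat\theta_{1}\) solves exactly at \(t_{1}=\tfrac{1}{2}\), since every cosine \(\cos((2n+1)\pi/2)\) in the differentiated series vanishes. A sketch with an arbitrary illustrative nome \(q\):

```python
import math

q = 0.1  # elliptic nome, arbitrary illustrative value

def theta1hat(x, nmax=40):
    """hat-theta_1(x) = 2 q^{1/8} sum_n (-1)^n q^{n(n+1)/2} sin((2n+1) pi x)."""
    return 2.0 * q ** 0.125 * sum((-1) ** n * q ** (0.5 * n * (n + 1))
                                  * math.sin((2 * n + 1) * math.pi * x)
                                  for n in range(nmax))

def dtheta1hat(x, nmax=40):
    """Term-by-term derivative of the series above."""
    return 2.0 * q ** 0.125 * sum((-1) ** n * q ** (0.5 * n * (n + 1))
                                  * (2 * n + 1) * math.pi
                                  * math.cos((2 * n + 1) * math.pi * x)
                                  for n in range(nmax))

# For l = 1, dS/dt_1 = -2 theta'(t_1)/theta(t_1) vanishes at t_1 = 1/2:
print(dtheta1hat(0.5), theta1hat(0.5))  # derivative ~ 0, value nonzero
```

For \(\ell>1\) the critical points of \(S\) couple through the \(\log\theta(t_i-t_j)\) terms and must be found numerically.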
In CFT, one gets the latter from the classical limit of the null vector decoupling equation for the torus 2-point function with a light degenerate operator: \[\left[\frac{1}{b^{2}}\,\frac{\partial^{2}}{\partial z^{2}}+\left(2\Delta_{+}\eta_{1}+2\eta_{1}z\frac{\partial}{\partial z}\right)+\Delta_{\beta}\left(\wp(z-w)+2\eta_{1}\right)\right.\] \[\left.\qquad\qquad+\left(\zeta(z-w)+2\eta_{1}w\right)\frac{\partial}{\partial w}\right]\left\langle\,\mathsf{V}_{+}(z)\mathsf{V}_{\beta}(w)\right\rangle_{\tau}=-\frac{2\pi i}{Z(\tau)}\,\frac{\partial}{\partial\tau}\left[Z(\tau)\left\langle\,\mathsf{V}_{+}(z)\mathsf{V}_{\beta}(w)\right\rangle_{\tau}\right].\] In this way one can show that the Lamé eigenvalue \(\mathsf{B}\) is given in terms of the classical torus block [13]: \[\frac{\mathsf{B}}{4\pi^{2}}=q\frac{\partial}{\partial q}\,f_{\delta}^{\tilde{\delta}}(q)-\frac{\tilde{\delta}}{12}\,\mathrm{E}_{2}(\tau),\qquad\tilde{\delta}=-\kappa,\qquad q=\mathrm{e}^{2\pi i\tau}. \tag{20}\] Combining (16), (19) and (20) one can expect that the critical value of \(S(t_{1},\ldots,t_{\ell};\tau)\) in (19) is nothing but a certain classical torus one-point block. It would be interesting to investigate how this works in the case of the \(M\)-particle operator (18). ## 4 Discussion As a conclusion, we will share our thoughts on the possibility of using the classical limit of conformal blocks in further research on quantum many-body systems. The Coulomb energy calculated in [8] can be seen as the "perturbed su(2)\({}_{k}\) WZW classical block". It should be computable from the quantum block expansion. The success of this idea would open the possibility of developing new techniques for finding energy spectra of quantum integrable systems, which are alternative to the Bethe ansatz approach. Our preliminary calculations indicate that the classical block on the torus and the classical irregular block can also be represented as critical values of the corresponding Coulomb gas integrals. 
The saddle point equations for the torus classical block are very similar to the Bethe ansatz equations for the eCM model. We expect that in the case of the classical irregular block the corresponding integrable model will be the periodic Toda chain. Finally, one can apply the KZ/BPZ correspondence [14] in the limit \(c\longrightarrow\infty\) to integrable systems whose integrability follows from the KZ equation (e.g., the Richardson model). We suppose that in this way it will be possible to show that the classical Virasoro blocks determine the energy spectra of these models. There is one more formulation of the relationship between the Richardson model and CFT. This is an approach proposed by Sedrakyan and Galitski in [15] (see also [16]), which is close in spirit but technically different from the BCS/CFT correspondence discussed in [8]. The authors of [15] asked whether there is a deformation of the SU(2) WZW model such that its correlation functions are solutions of a modified KZ equation, which contains the integrals of motion of the Richardson model instead of just the Hamiltonians of the rational Gaudin model. This deformed theory was identified in [15] as the boundary WZW model. The authors of [15] have shown that the generalized KZ equation can be solved exactly using the so-called off-shell Bethe ansatz technique. The solution of the latter can be given in an integral form. Analysis of this solution shows that this integral has a saddle point defined by the Richardson equations. Here, the same question arises as before. If the saddle point value of the chiral correlation function represented by the appropriate integral solves the eigenvalue equations of the Richardson conserved charges (which was not fully shown in [15]), then _does a certain "classical block" correspond to this saddle point value?_ Moreover, one can ask directly about the limit \(c\longrightarrow\infty\) of the modified KZ equation. 
To understand what might happen here, we would like to use the correspondence between the BPZ and KZ equations [14]. It turns out that the correspondence found by SG [15] concerns a variety of dynamical systems that can be mapped onto the boundary WZW model and solved exactly in many cases. Such an example is the two-level laser with pumping and damping. Moreover, within the SG approach one can study the nonequilibrium dynamics of various multi-level systems, such as models with time-dependent interaction strength, multi-level Landau-Zener models, and some many-body generalizations. An understanding of the nonequilibrium dynamics of quantum systems is important in connection with quantum information problems and the idea of the quantum computer (see refs. in [16]). The Coulomb gas integral (13) of the Richardson model is known in the theory of random matrices as the so-called multi-Penner type \(\beta\)-ensemble with sources. So, in parallel, it is possible to use matrix model technology in this context. Precisely, we would like to use a well-known calculation scheme within matrix models -- their semiclassical ('t Hooft) limit corresponding to the large-\(c\) limit. This tool can be applied to investigate distributions of eigenvalues, i.e., of the Richardson parameters (pair energies) of the reduced BCS model. It would be interesting to compare this approach with the continuum limit of the Richardson equations, cf. [17]. There is at least one reason to go in this direction. Gaudin proposed a continuum version of the Richardson equations. The assumption he made is that the solutions organize themselves into arcs \(\Gamma_{k}\), \(k=1,\ldots,\mathrm{K}\), which are symmetric under reflection on the real axis. For the ground state all the roots form a single arc (K = 1). An open problem remains [17]: "Study of solutions of Richardson equations with several arcs, i.e., K \(>\) 1. ... 
they must describe very high excited states formed by separate condensates in interaction. This case may be relevant to systems such as arrays of superconducting grains or quantum dots ..., the cases with K \(>\) 1 seem to be related to the theory of hyperelliptic curves and higher genus Riemann surfaces, which may shed some light on this physical problem." The matrix model framework seems natural for these kinds of problems. A fascinating open question concerning isolated quantum many-body systems is how they evolve after a sudden perturbation or quench. For instance, in the paper [18] the authors study the relaxation dynamics of the central spin model. Precisely, they analyze the time evolution of several quantities analytically and numerically. It has been observed that the quantum dynamics of Gaudin magnets reveals a break-down of thermalization. The methods used in [18] (the algebraic Bethe ansatz) do not go beyond those known from the Richardson solution and its implementation in CFT. Moreover, it is suggested in [18] to investigate scrambling and out-of-time-ordered correlators (OTOCs) for the Gaudin magnets. It should be stressed that OTOCs have recently been actively studied in the framework of CFT, and these studies use the limit \(c\longrightarrow\infty\) of conformal blocks. If it is possible to apply the large-\(c\) limit of CFT to analyze OTOCs for the Gaudin magnets, it would be a very interesting field for further exploration.
2309.13124
Investigating nonflow contribution subtraction in d-Au collisions with AMPT model
This paper presents research that focuses on nonflow contribution subtraction in heavy-ion collisions, using a multiphase transport model (AMPT). Specifically, the study aims to investigate the behavior of charged particle elliptic flow ($v_{\rm 2}$) in d-Au collisions at a collision energy of $\sqrt{s_{\rm NN}} = 200$ GeV and to determine the impact of nonflow sources, such as jet correlations and resonance decays, in small collision systems. To reduce nonflow effects, the per-trigger yield distribution in peripheral d-Au collisions or pp collisions with the same collision energy is subtracted. Our results show that the nonflow effects in central and mid-central collisions are not strongly dependent on subtracting the per-trigger yield distribution in peripheral d-Au collisions or pp collisions. Furthermore, the elliptic flow of charged particles, after removing nonflow effects through two subtracting methods from this work, exhibits consistency in various collision centrality classes. We also discuss comparisons with measurements from d-Au collisions at $\sqrt{s_{\rm NN}} = 200$ GeV. Overall, this work provides valuable insights and serves as a reference for researchers studying nonflow contribution subtraction in experiments with small collision systems.
Zuman Zhang, Sha Li, Ning Yu, Qiao Wu
2023-09-22T18:15:56Z
http://arxiv.org/abs/2309.13124v1
# Investigating nonflow contribution subtraction in d-Au collisions with AMPT model ###### Abstract This paper presents research that focuses on nonflow contribution subtraction in heavy-ion collisions, using a multiphase transport model (AMPT). Specifically, the study aims to investigate the behavior of charged particle elliptic flow (\(v_{2}\)) in d-Au collisions at a collision energy of \(\sqrt{s_{\rm NN}}=200\) GeV and to determine the impact of nonflow sources, such as jet correlations and resonance decays, in small collision systems. To reduce nonflow effects, the per-trigger yield distribution in peripheral d-Au collisions or pp collisions with the same collision energy is subtracted. Our results show that the nonflow effects in central and mid-central collisions are not strongly dependent on subtracting the per-trigger yield distribution in peripheral d-Au collisions or pp collisions. Furthermore, the elliptic flow of charged particles, after removing nonflow effects through two subtracting methods from this work, exhibits consistency in various collision centrality classes. We also discuss comparisons with measurements from d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV. Overall, this work provides valuable insights and serves as a reference for researchers studying nonflow contribution subtraction in experiments with small collision systems. **PACS numbers:** 25.75.Ld Collective flow, 24.10.Lx Monte Carlo simulations (including hadron and parton cascades and string breaking models) ## I Introduction In high-energy heavy-ion collision research, the goal is to understand the properties of the quark-gluon plasma (QGP), a state of matter characterized by high energy density and temperature [1; 2]. One of the key observables in this field is the azimuthal anisotropy of final state particles, which provides information about the transport features of the QGP [3]. 
The elliptic flow (\(v_{2}\)) is a crucial observable in studying the collective motion of the QGP and is obtained through the Fourier expansion of the azimuthal distribution of emitted particles in transverse momentum space. The second-order coefficient in this expansion provides valuable insights into the behavior of the QGP and its properties. The study of elliptic flow is an important aspect of high-energy heavy-ion collision research, as it helps to deepen our understanding of the quark-gluon plasma and its characteristics. In order to better understand the effects of cold nuclear matter on the interpretation of measurements in heavy-ion collisions, we conducted a study of small collision systems such as p(d)+A collisions. Our research focused on key aspects of cold nuclear matter effects, such as the modification of parton distribution functions [4], the broadening of \(k_{\rm T}\)[5], and energy loss in cold nuclear matter [6]. Surprisingly, in small collision systems, long-range structures in two-particle azimuthal correlations associated with a positive \(v_{2}\) for hadrons were discovered in high-multiplicity p+A collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV, at midrapidity by the ALICE, ATLAS, and CMS collaborations [7; 8; 9; 10; 11] and at forward rapidity by the LHCb collaboration [12]. In addition, at lower beam energies, long-range correlations were also observed in d-Au [13; 14; 15] and \({}^{3}\)He-Au collisions [16] by the PHENIX and STAR collaborations at RHIC. To address nonflow effects, various strategies have been developed to eliminate correlations that are not associated with collectivity and arise from sources such as jet interactions and resonance decays. 
Typically, in small-collision-system experiments, these nonflow contributions were suppressed by requiring a separation in pseudorapidity between paired particles or by subtracting correlations measured in low-multiplicity [8; 17] or pp collisions [18]. The long-range correlations were then isolated using a standard template fit procedure [19]. In the experimental analysis, the nonflow contribution has not been calculated with both methods for the same collision system. Therefore, we analyze the \(p_{\rm T}\) distribution of anisotropic flow (\(v_{2}\)) in d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV, using the multiphase transport (AMPT) model [20] to investigate the nonflow contribution subtraction from both peripheral and pp collisions. The paper starts with a brief introduction to the AMPT model. Then, the method of nonflow contribution subtraction is detailed. Finally, the results are presented, along with a discussion and conclusion. ## II Event generation and definition of anisotropic flow ### A Multi-Phase Transport (AMPT) model The AMPT model [20] is a hybrid transport model used to study collective behavior in heavy ion collisions. It consists of four components: initial conditions, partonic interactions, conversion from partonic to hadronic matter, and hadronic interactions. The Lund string fragmentation function is determined by the parameters \(a\) and \(b\) in HIJING [21] as \(f(z)\propto z^{-1}(1-z)^{a}\exp{(-bm_{\perp}^{2}/z)}\), where \(z\) is the light-cone momentum fraction of the produced hadron of transverse mass \(m_{\perp}\) with respect to the fragmenting string. Zhang's Parton Cascade (ZPC) model [22], which calculates parton-parton scattering using cross sections \(\sigma\approx\frac{9\pi\alpha_{s}^{2}}{2\mu^{2}}\) based on a Debye screening mass \(\mu\), is used to simulate the evolution of the partonic phase. 
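The two formulas just quoted can be evaluated directly. The sketch below only reproduces their functional forms; the parameter values are illustrative, not the tuned AMPT defaults.

```python
import numpy as np

def lund_frag(z, a, b, m_perp):
    """Lund fragmentation function, f(z) ∝ z^-1 (1 - z)^a exp(-b m_perp^2 / z)."""
    return (1.0 / z) * (1.0 - z) ** a * np.exp(-b * m_perp ** 2 / z)

def zpc_cross_section(alpha_s, mu):
    """ZPC parton-parton cross section, sigma ≈ 9 pi alpha_s^2 / (2 mu^2)."""
    return 9.0 * np.pi * alpha_s ** 2 / (2.0 * mu ** 2)

# Illustrative parameter values, NOT the tuned AMPT defaults:
z = np.linspace(0.05, 0.95, 19)
f = lund_frag(z, a=0.5, b=0.9, m_perp=1.0)       # unnormalized shape of f(z)
sigma = zpc_cross_section(alpha_s=0.33, mu=3.2)  # with mu in GeV, sigma is in GeV^-2
```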
After the partons stop interacting in ZPC, the quarks are subjected to a hadronization process based on the quark coalescence model, which combines the nearest quarks in coordinate space into hadrons. The hadrons formed during quark coalescence are then subjected to hadronic-stage evolution, which is handled by a relativistic transport (ART) model [23] with input cross sections for various hadron-hadron scattering channels. In this work we use the string-melting version of the AMPT model. We have generated approximately 10 million AMPT events for d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV and pp collisions at \(\sqrt{s}=200\) GeV. The event centrality in \(d\)+Au is determined by the impact parameter \(b\) of the AMPT events. We use central, mid-central and peripheral event samples comprising the top 5%, 10-20%, 20-30%, 30-40%, 40-50% and 50-88% collision centrality intervals. ### Definition of anisotropic flow Typically, the magnitude of azimuthal anisotropies is quantified using a Fourier decomposition of the particle azimuthal distribution given by \[\frac{\mathrm{d}^{2}N}{\mathrm{d}p_{\mathrm{T}}\mathrm{d}\varphi}=\frac{1}{2\pi} \frac{\mathrm{d}N}{\mathrm{d}p_{\mathrm{T}}}\bigg{(}1+2\sum_{n=1}^{\infty}v_{n}(p_{\mathrm{T}})\cos[n(\varphi-\Psi_{n})]\bigg{)}, \tag{1}\] where \(\varphi\) and \(p_{\mathrm{T}}\) are the particle azimuthal angle and transverse momentum, respectively. The anisotropy of produced particles is defined by the Fourier coefficients \(v_{n}\)[24], and the azimuthal angle of the symmetry plane for the \(n^{\mathrm{th}}\) harmonic is denoted by \(\Psi_{n}\). The largest contribution to the asymmetry of collisions is provided by the second Fourier coefficient \(v_{2}\), referred to as elliptic flow [3; 24]. 
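Equation (1) implies \(v_{n}=\langle\cos[n(\varphi-\Psi_{n})]\rangle\), averaged over particles. A minimal sketch of this inversion on a toy sample (the input \(v_{2}=0.1\) and \(\Psi_{2}=0\) are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample drawn from the distribution of Eq. (1), with an assumed
# v2 = 0.1 and event-plane angle Psi_2 = 0 (both values are illustrative).
v2_true, psi2 = 0.1, 0.0
phi_grid = np.linspace(-np.pi, np.pi, 100000)
weights = 1 + 2 * v2_true * np.cos(2 * (phi_grid - psi2))
phi = rng.choice(phi_grid, size=200000, p=weights / weights.sum())

# Inverting the Fourier series gives v_n = <cos[n(phi - Psi_n)]>
v2_est = np.mean(np.cos(2 * (phi - psi2)))  # statistically close to the input v2
```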
## III Nonflow contribution subtraction method The method using two-particle correlations to extract the azimuthal anisotropy is extensively discussed in Refs. [7; 8; 9; 10; 11; 25; 26; 27]. The correlation between two particles (denoted trigger and associated particle) is measured as a function of the azimuthal angle difference \(\Delta\phi\) (defined within \(-\pi/2\) and \(3\pi/2\)) and the pseudorapidity difference \(\Delta\eta\). While the trigger particles are charged particles, the analysis is done for charged associated particles (denoted \(h-h\)). In this work, we follow the analysis in experiments at the RHIC energy [13]. In AMPT events, charged hadrons with \(0.5<p_{T}<3.5\) GeV/\(c\) are used, and each pair includes at least one particle at low \(p_{T}\) (\(0.5<p_{T}<0.75\) GeV/\(c\)). To minimize the contribution from small-angle correlations, pairs are restricted to pseudorapidity separations of \(0.48<|\Delta\eta|<0.7\). The correlation is expressed in terms of \(Y\), the associated yield per trigger particle, defined as: \[Y=\frac{1}{N_{\mathrm{trig}}}\frac{\mathrm{d}N_{\mathrm{assoc}}}{\mathrm{d} \Delta\phi}, \tag{2}\] where \(N_{\mathrm{trig}}\) is the total number of trigger particles in the event class and \(p_{\mathrm{T}}\) interval, and \(N_{\mathrm{assoc}}\) is the total number of associated particles in the event class and \(p_{\mathrm{T}}\) interval. We use the zero-yield-at-minimum (ZYAM) method [28], in which one assumes that the number of correlated pairs is zero at the correlation function minimum. This background contribution is obtained for the central, mid-central, peripheral and pp collision samples by performing fits to the conditional yields using a functional form composed of a constant pedestal and two Gaussian peaks, centered at \(\Delta\phi=0\) and \(\pi\). 
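The per-trigger yield of Eq. (2) and the ZYAM pedestal fit can be sketched on toy pair data; the yields, peak widths, and initial-guess parameters below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
n_trig = 5000

# Toy dphi sample: uncorrelated pedestal pairs plus near-side (dphi ~ 0)
# and away-side (dphi ~ pi) jet-like peaks (all yields are illustrative).
dphi = np.concatenate([
    rng.uniform(-np.pi / 2, 3 * np.pi / 2, 40000),
    rng.normal(0.0, 0.3, 4000),
    rng.normal(np.pi, 0.4, 3000),
])

counts, edges = np.histogram(dphi, bins=32, range=(-np.pi / 2, 3 * np.pi / 2))
centers = 0.5 * (edges[:-1] + edges[1:])
Y = counts / (n_trig * (edges[1] - edges[0]))   # Eq. (2): per-trigger yield

# ZYAM: fit a constant pedestal plus Gaussians at dphi = 0 and pi,
# then subtract the minimum of the fitted template.
def template(x, ped, a0, s0, a1, s1):
    return (ped + a0 * np.exp(-x**2 / (2 * s0**2))
                + a1 * np.exp(-(x - np.pi)**2 / (2 * s1**2)))

popt, _ = curve_fit(template, centers, Y, p0=[Y.min(), 0.5, 0.3, 0.3, 0.4])
b_zyam = template(np.linspace(-np.pi / 2, 3 * np.pi / 2, 1000), *popt).min()
Y_sub = Y - b_zyam                              # ZYAM-subtracted yield
```

On real correlations the fit is performed per event class and \(p_{\mathrm{T}}\) interval; a single toy sample suffices here to show the mechanics.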
The minimum of this function, \(b_{\rm ZYAM}\), is subtracted from the conditional yields, and the result is \(Y(\Delta\phi)=\frac{1}{N_{\rm trig}}\frac{\mathrm{d}N_{\rm assoc}}{\mathrm{d}\Delta\phi}-b_{\rm ZYAM}\). The conditional yields \(Y_{\rm c}(\Delta\phi)\), \(Y_{\rm mc}(\Delta\phi)\), \(Y_{\rm p}(\Delta\phi)\) and \(Y_{\rm pp}(\Delta\phi)\) correspond to central, mid-central, peripheral and pp collision events, respectively. Their differences, \(\Delta Y_{\rm cp}(\Delta\phi)=Y_{\rm c}(\Delta\phi)-Y_{\rm p}(\Delta\phi)\), \(\Delta Y_{\rm mcp}(\Delta\phi)=Y_{\rm mc}(\Delta\phi)-Y_{\rm p}(\Delta\phi)\), \(\Delta Y_{\rm cpp}(\Delta\phi)=Y_{\rm c}(\Delta\phi)-Y_{\rm pp}(\Delta\phi)\) and \(\Delta Y_{\rm mcpp}(\Delta\phi)=Y_{\rm mc}(\Delta\phi)-Y_{\rm pp}(\Delta\phi)\), correspond to subtracting the per-trigger yield distribution measured in peripheral d-Au collisions or in pp collisions, respectively. These subtractions remove any centrality-independent correlations, such as effects from jet correlations and resonance decays. Fourier coefficients can be extracted from the \(\Delta\phi\) projection of the per-trigger yield by a fit with: \[\frac{1}{N_{\rm trig}}\frac{\mathrm{d}N_{\rm assoc}}{\mathrm{d}\Delta\phi}=a_{0}+\sum_{n=1}^{3}2a_{n}\cos(n\Delta\phi) \tag{3}\] To quantify the relative amplitude of the azimuthal modulation, we define \[c_{n}=a_{n}/\left(b_{\rm ZYAM}^{c}+a_{0}\right), \tag{4}\] where \(b_{\rm ZYAM}^{c}\) is \(b_{\rm ZYAM}\) in central events [28]. The two-particle-correlation coefficient \(v_{n}^{h}\{2\mathrm{PC}\}\) of order \(n\) for a particle \(h\) is defined as [10]: \[v_{n}^{h}\{2\mathrm{PC}\}=\sqrt{c_{n}^{h-h}} \tag{5}\] ## IV Results and Discussions From Eq. (2), the charged-particle conditional yields \(Y_{\rm c}(\Delta\phi)\), \(Y_{\rm p}(\Delta\phi)\) and \(Y_{\rm pp}(\Delta\phi)\) (for the 0-5% most central, peripheral and pp collision events, respectively) are shown in Fig. 1 and Fig. 2, along with their differences \(\Delta Y_{\rm cp}(\Delta\phi)=Y_{\rm c}(\Delta\phi)-Y_{\rm p}(\Delta\phi)\) and \(\Delta Y_{\rm cpp}(\Delta\phi)=Y_{\rm c}(\Delta\phi)-Y_{\rm pp}(\Delta\phi)\), which correspond to subtracting the per-trigger yield distribution in peripheral d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV or in pp collisions at \(\sqrt{s}=200\) GeV, respectively. It is worth noting that any signal in the peripheral events and pp events is subtracted from the signal in the central events. For \(\Delta\phi\) near 0 and \(\pi\), \(Y_{\rm c}(\Delta\phi)\) is significantly larger than \(Y_{\rm p}(\Delta\phi)\) and \(Y_{\rm pp}(\Delta\phi)\). We find that the differences between the 0-5% most central events and the 50-85% peripheral or pp events are well described by Eq. (3), as demonstrated in Fig. 1 and Fig. 2. The charged-particle coefficients \(a_{n}\) are computed from the \(\Delta Y(\Delta\phi)\) distributions as \(a_{n}=\left\langle\Delta Y(\Delta\phi)\cos(n\Delta\phi)\right\rangle\). The bracket \(\left\langle...\right\rangle\) denotes an average over particles in the event sample with \(\Delta\phi\). From Eq. (4), the charged-particle \(c_{2}\) is shown as a function of associated \(p_{\mathrm{T}}\) in Fig. 3 for central (0-5%) d-Au collisions. The blue solid line (with the nonflow effect) is higher than the red dashed and green dotted lines (without the nonflow effect). The nonflow effects for the 0-5% most central d-Au collisions reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions (green dotted line) or pp collisions (red dashed line) are very similar. The charged-particle \(c_{2}\) from AMPT events and from data are consistent from 0.5 to 2.0 GeV/\(c\); above this, \(c_{2}\) from AMPT events is higher than that from data. From Eq. (5), the charged-particle \(v_{2}\) is shown as a function of associated \(p_{\mathrm{T}}\) in Fig. 4 for central (0-5%) d-Au collisions. 
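The extraction chain \(a_{n}=\langle\Delta Y(\Delta\phi)\cos(n\Delta\phi)\rangle\), \(c_{n}=a_{n}/(b_{\rm ZYAM}^{c}+a_{0})\), \(v_{n}\{2\mathrm{PC}\}=\sqrt{c_{n}}\) can be checked on a toy \(\Delta Y\) with a known second harmonic (all numbers below are assumed for the sketch):

```python
import numpy as np

# Illustrative per-trigger yield difference DY(dphi) with a known
# second-harmonic modulation (amplitudes are invented for the sketch).
dphi = np.linspace(-np.pi / 2, 3 * np.pi / 2, 360, endpoint=False)
a0_true, a2_true, b_zyam_c = 0.50, 0.02, 1.0
DY = a0_true + 2 * a2_true * np.cos(2 * dphi)

# Fourier coefficients a_n = <DY(dphi) cos(n dphi)> over the dphi bins
a_n = {n: np.mean(DY * np.cos(n * dphi)) for n in (1, 2, 3)}
a0 = np.mean(DY)

# Relative modulation and the two-particle-correlation flow coefficient
c2 = a_n[2] / (b_zyam_c + a0)
v2 = np.sqrt(c2)
```

By orthogonality of the cosines over a full \(2\pi\) period, the extraction returns \(a_{2}\) equal to the input amplitude while \(a_{1}\) and \(a_{3}\) vanish.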
We observe that the charged-particle \(v_{2}\) increases with \(p_{\mathrm{T}}\); at higher \(p_{\mathrm{T}}\), the particles more strongly reflect the initial spatial anisotropy of the collision geometry. The blue solid line (with the nonflow effect) is higher than the red dashed or green dotted line (without the nonflow effect). The nonflow effects for the 0-5% most central d-Au collisions reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions (green dotted line) or pp collisions (red dashed line) are very similar. This could indicate that the nonflow effects (such as jet correlations and resonance decays) in the 0-5% most central d-Au collisions are not strongly dependent on whether the per-trigger yield distribution is subtracted using peripheral d-Au collisions or the pp collision system. The charged-particle \(v_{2}\) from AMPT events agrees with data from 0.5 to 1.2 GeV/\(c\); from 1.2 to 2.5 GeV/\(c\), the \(v_{2}\) from AMPT events is lower than the data. In the bottom panel of Fig. 4, the ratio between the charged-particle \(v_{2}\) of the most central d-Au collisions obtained by subtracting the per-trigger yield distribution in peripheral d-Au collisions and that obtained using pp collisions is shown by the cyan short-dash-dotted line; the maximum deviation is less than 2%. The ratio between the \(v_{2}\) obtained by subtracting the per-trigger yield distribution in peripheral d-Au collisions and that without nonflow subtraction is shown by the pink long-dash-dotted line; the ratio is larger at higher \(p_{\rm T}\) than at lower \(p_{\rm T}\), and the maximum deviation is less than 6%. It is worth noting that the nonflow contribution can vary with \(p_{\rm T}\) due to the different physical mechanisms that contribute to particle production in different \(p_{\rm T}\) ranges. 
For example, at low \(p_{\rm T}\), the nonflow contribution may be dominated by resonance decays, while at high \(p_{\rm T}\) it may be dominated by jet-like correlations. Therefore, studying the nonflow contribution ratio as a function of \(p_{\rm T}\) can provide important insights into the underlying physics of particle production in heavy-ion collisions. The charged-particle \(v_{2}\) is shown as a function of associated \(p_{\rm T}\) in Fig. 5 for the 10-20%, 20-30%, 30-40% and 40-50% d-Au collision centrality intervals. We find that the nonflow effects in mid-central d-Au collisions are not strongly dependent on whether the per-trigger yield distribution is subtracted using peripheral d-Au collisions or the pp collision system. On the other hand, we observe that the nonflow effects at different collision centrality intervals, reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions (green dotted line) or pp collisions (red dashed line), are very similar. 
Figure 4: In the upper panel, the charged-particle elliptic flow \(v_{2}\) of the 0–5% most central d-Au collision excess as a function of associated particle \(p_{\rm T}\). The blue solid line is the \(v_{2}\) of the 0–5% most central d-Au collisions with nonflow effects. The nonflow effects for the 0–5% most central d-Au collisions are reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions (green dotted line) or pp collisions (red dashed line). The \(v_{2}\) of the central d-Au collisions in the data (black circles) from Ref. [13] is also shown, which was obtained under the assumption of factorization: \(c_{2}\left(p_{\rm T}^{t},p_{\rm T}^{a}\right)=v_{2}\left(p_{\rm T}^{t}\right) v_{2}\left(p_{\rm T}^{a}\right)\). In the bottom panel, the ratio between the charged-particle \(v_{2}\) of the most central collisions obtained by subtracting the per-trigger yield distribution in peripheral d-Au collisions and in pp collisions is shown by the cyan short-dash-dotted line; the ratio between the \(v_{2}\) obtained by subtracting the per-trigger yield distribution in peripheral d-Au collisions and that without nonflow subtraction is shown by the pink long-dash-dotted line. 
## V Summary In this work, a multiphase transport model (AMPT) has been used to comprehensively study the behavior of elliptic flow (\(v_{2}\)) in d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV. Two-particle angular correlations of charged particles have been calculated in d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV and expressed as associated yields per trigger particle. The Fourier coefficient \(v_{2}\) was extracted from these correlations and studied as a function of \(p_{\rm T}\). The nonflow effects for central and mid-central d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV were reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV or pp collisions at \(\sqrt{s}=200\) GeV. Both techniques give compatible results. Discussions about comparisons with measurements from d-Au collisions at \(\sqrt{s_{\rm NN}}=200\) GeV are included. We find that the nonflow effects (such as jet correlations and resonance decays) in central and mid-central d-Au collisions are not strongly dependent on whether the per-trigger yield distribution is subtracted using peripheral d-Au collisions or the pp collision system. Comparing subtraction of the per-trigger yield distribution in peripheral d-Au collisions with no nonflow subtraction, the ratio of \(v_{2}\) for the most central d-Au collisions is larger at higher \(p_{\rm T}\) than at lower \(p_{\rm T}\); the maximum deviation is less than 6%. By analyzing the nonflow contribution ratio as a function of \(p_{\rm T}\), we can gain valuable insights into the underlying physics of particle production in heavy-ion collisions. 
At low \(p_{\rm T}\), resonance decays may dominate the nonflow contribution, whereas at high \(p_{\rm T}\), jet-like correlations may be the main contributor. We have also observed the nonflow effects in different collision centrality intervals. These results are consistent with those of previous studies. Figure 5: The charged-particle elliptic flow \(v_{2}\) of the 10–20%, 20–30%, 30–40% and 40–50% collision excess as a function of associated particle \(p_{\rm T}\). The blue solid line is the charged-particle \(v_{2}\) of the different centrality collisions with nonflow effects. The nonflow effects are reduced by subtracting the per-trigger yield distribution in peripheral d-Au collisions (green dotted line) or pp collisions (red dashed line).
2309.03337
Leveraging Geometrical Acoustic Simulations of Spatial Room Impulse Responses for Improved Sound Event Detection and Localization
As deeper and more complex models are developed for the task of sound event localization and detection (SELD), the demand for annotated spatial audio data continues to increase. Annotating field recordings with 360$^{\circ}$ video takes many hours from trained annotators, while recording events within motion-tracked laboratories are bounded by cost and expertise. Because of this, localization models rely on a relatively limited amount of spatial audio data in the form of spatial room impulse response (SRIR) datasets, which limits the progress of increasingly deep neural network based approaches. In this work, we demonstrate that simulated geometrical acoustics can provide an appealing solution to this problem. We use simulated geometrical acoustics to generate a novel SRIR dataset that can train a SELD model to provide similar performance to that of a real SRIR dataset. Furthermore, we demonstrate using simulated data to augment existing datasets, improving on benchmarks set by state of the art SELD models. We explore the potential and limitations of geometric acoustic simulation for localization and event detection. We also propose further studies to verify the limitations of this method, as well as further methods to generate synthetic data for SELD tasks without the need to record more data.
Christopher Ick, Brian McFee
2023-09-06T19:34:30Z
http://arxiv.org/abs/2309.03337v1
# Leveraging Geometrical Acoustic Simulations of Spatial Room Impulse Responses for Improved Sound Event Detection and Localization ###### Abstract As deeper and more complex models are developed for the task of sound event localization and detection (SELD), the demand for annotated spatial audio data continues to increase. Annotating field recordings with 360\({}^{\circ}\) video takes many hours from trained annotators, while recording events within motion-tracked laboratories is bounded by cost and expertise. Because of this, localization models rely on a relatively limited amount of spatial audio data in the form of spatial room impulse response (SRIR) datasets, which limits the progress of increasingly deep neural network based approaches. In this work, we demonstrate that simulated geometrical acoustics can provide an appealing solution to this problem. We use simulated geometrical acoustics to generate a novel SRIR dataset that can train a SELD model to provide similar performance to that of a real SRIR dataset. Furthermore, we demonstrate using simulated data to augment existing datasets, improving on benchmarks set by state of the art SELD models. We explore the potential and limitations of geometric acoustic simulation for localization and event detection. We also propose further studies to verify the limitations of this method, as well as further methods to generate synthetic data for SELD tasks without the need to record more data. Christopher Ick Brian McFee Music and Audio Research Laboratory, New York University Brooklyn, NY 11201, United States Acoustic Simulation, Localization, Data Augmentation ## 1 Introduction Sound event detection and localization (SELD) is the union of two active fields of research: sound event detection (SED) and localization, or direction-of-arrival (DoA) estimation. 
Expanding the scene-description capabilities of SED with the spatiotemporal characterization of localization sees applications ranging from autonomous robot navigation [1] and urban monitoring [2], to speaker diarization [3] and immersive experiences in virtual and augmented reality devices. While earlier techniques for SELD have focused on traditional signal-processing or parametric models such as [4, 5, 6, 7], recent literature is dominated by deep neural network (DNN) approaches, which have shown superior performance in both pure localization [8, 9] as well as joint SELD tasks [10]. A surge of interest in this field can be attributed to the introduction of SELD as a task in the DCASE2019 challenge [11]. This challenge included the release of several 4-channel audio datasets with spatial and temporal annotations for sound events. The audio was generated by convolving sound events with spatial room impulse responses (SRIRs) recorded in 5 separate rooms at 504 unique azimuth-elevation-distance combinations. This was further iterated upon by the SELD challenge in DCASE2020 [12] with 13 unique rooms. Recently, DCASE2022 reintroduced the challenge with hand-annotated real-world recordings in the STARSS22 dataset [13], providing one of the first datasets with real-world data upon which to evaluate SELD models. This dataset uses a combination of 360\({}^{\circ}\) video and motion capture to extract spatiotemporal annotations that were manually validated. In addition to this, the DCASE2022 dataset also included a release of the SRIRs used to generate the training data for the SELD task, as well as the code for the generator itself, allowing users to generate their own annotated spatial data [14]. This data includes SRIRs measured over a wide range of positions across 9 different rooms on Tampere University's campus. These datasets are unique in the density of SRIR measurements across particular paths, the variety of acoustic enclosures, and the large number of SRIRs. 
Because of their scale, visibility, and quality, these datasets have become some of the most cited SRIR datasets for DNN-based approaches to SELD, owing to their ability to meet the data requirements of these highly parametrized models. Despite this, these datasets are still severely limited by the recording procedure for SRIRs, which requires time, expertise, and a low-noise environment to produce at high quality. Increasing the spatial density, the variety of trajectory types, and the number of trajectory paths becomes multiplicatively time-consuming. Furthermore, the range of rooms in which these measurements can be recorded is inherently constrained by the recording facilities, usually limited to a dozen rooms or so in the best of cases. However, without a wide range of acoustic environments in which to perform these measurements, generalization to a variety of unseen acoustic environments becomes impossible. Physical acoustic simulation provides an attractive solution to the limitations of field-recorded SRIRs. Acoustic simulation is typically split into two categories: wave-based methods, which simulate the propagation of sound waves through physical media, and geometric modeling methods, which model the transport of acoustic energy through acoustic rays, mimicking popular methods for modeling optical rays. Geometrical acoustics assumes the wavelength of the propagating sound to be relatively small compared to the room geometries of interest, neglecting wave effects such as diffraction and scattering. Nevertheless, because of their ease of implementation and computational efficiency, geometrical acoustic modeling methods have seen wide success in several tasks, including modeling architectural acoustics [15] and room parameter estimation [16]. In this work, we propose utilizing one method of geometrical acoustics modeling, the image source method, to generate simulated SRIRs for training DNN models for SELD. 
We demonstrate the effectiveness of this simulated method for SRIR generation using the framework and data provided in previous DCASE SELD challenges. By creating an audio dataset from simulated SRIRs, we train a SELD model with similar performance to one utilizing real-world SRIRs. By directly comparing simulated SRIRs with a dataset of recorded SRIRs of similar size, room geometries, and DoA distributions, we demonstrate that the downstream effects of simulation in place of recording are relatively minimal, distinguishing our work from prior studies [17]. Furthermore, we augment a typical SRIR dataset with simulated SRIRs, training models that outperform those trained solely on recorded SRIRs. Finally, we propose further experiments to explore the use of simulated SRIRs for training SELD models. The code associated with this work is released in an open-source github repository 1 to further work in using synthetic SRIRs for training DNN models. Footnote 1: [https://github.com/ChrisIck/DCASE_Synth_Data](https://github.com/ChrisIck/DCASE_Synth_Data) ## 2 Acoustic Simulations ### The Image Source Method The image source method (ISM) is a technique used in architectural acoustics and room modeling to predict the sound field in enclosed spaces [18]. The ISM considers the primary sound source and virtual images reflected by the room's boundaries. These virtual sources are assumed to emit sound with the same magnitude and phase as the primary source, but with a delay due to the additional path length traveled. Typically, this starts with defining the room geometry, including the positions and shapes of the walls, ceiling, and floor. For each reflecting surface, virtual image sources are "mirrored" across the boundary. The number of virtual sources depends on the order of reflections considered. 
From here, the interaction between the primary sound source and the virtual image sources is calculated by determining the path lengths, time delays, and attenuation factors associated with each source-receiver combination, as well as the material properties of each surface off which the sound path is reflected. By summing the contributions of the primary source and its image sources, the sound field at various locations in the room can be predicted, providing an estimate of the sound pressure level, arrival times, and directivity patterns. It is important to note several limitations of the ISM. The ISM implicitly assumes that all surfaces are perfectly reflective and flat, with idealized acoustic properties, which fails to account for acoustic effects such as scattering or diffraction. Furthermore, the number of virtual sources scales exponentially with the order of reflections, making the computation of late-stage reflections in an SRIR prohibitively expensive. Despite its limitations, the image source method is widely used due to its computational efficiency and effectiveness in predicting sound fields in enclosed spaces. Because localization relies more on direct sound and early reflections, the limitations caused by use of the ISM for computing SRIRs can be expected to be relatively minimal. ### The TAU-SRIR Dataset To validate the use of ISM-generated SRIRs in a direct comparison, we take the existing TAU-SRIR database [14] as an example database for which well-established metrics for SELD have been measured. The TAU-SRIR database contains SRIRs recorded in 9 different rooms throughout Tampere University's campus. Each SRIR is computed by recording a maximum-length sequence (MLS) played through a loudspeaker, recorded on an Eigenmike spherical microphone array. Each SRIR was downsampled to 24 kHz and truncated at 300 ms, resulting in \(7200\) samples per RIR. 
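The image-source construction described in Section 2.1 can be sketched in a few lines of NumPy. This is a hedged toy version, assuming a shoebox room, a single frequency-independent reflection coefficient, nearest-sample delays (no fractional-delay interpolation), and an omnidirectional source and receiver; it is not the simulation pipeline used in this paper.

```python
import numpy as np
from itertools import product

def ism_rir(room_dim, src, mic, fs=24000, c=343.0, beta=0.9,
            order=2, length=7200):
    """Toy shoebox image-source RIR: every image source adds an impulse
    delayed by distance/c and attenuated by 1/(4*pi*distance) times
    beta per wall reflection."""
    rir = np.zeros(length)
    L = np.asarray(room_dim, float)
    src, mic = np.asarray(src, float), np.asarray(mic, float)
    for n in product(range(-order, order + 1), repeat=3):  # room lattice shifts
        for q in product((0, 1), repeat=3):                # mirror parity per axis
            # image-source position and number of wall reflections per axis
            img = np.array([2 * n[i] * L[i] + (1 - 2 * q[i]) * src[i]
                            for i in range(3)])
            refl = sum(abs(2 * n[i] - q[i]) for i in range(3))
            dist = np.linalg.norm(img - mic)
            sample = int(round(dist / c * fs))
            if sample < length:
                rir[sample] += beta ** refl / (4 * np.pi * dist)
    return rir

# Illustrative geometry (metres); positions are invented for the sketch.
rir = ism_rir([6.0, 5.0, 3.0], src=[2.0, 3.0, 1.5], mic=[4.0, 2.0, 1.2])
```

Repeating this per microphone capsule, with a directivity weight per image, would yield multichannel SRIRs; full simulators such as _pyroomacoustics_ additionally handle fractional delays and directive receivers.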
The data is stored in a 4-channel audio corresponding to a tetrahedral microphone array with the geometry in spherical coordinates \((\phi,\theta)\), specified in Table 1. For each room, the position of the microphone array was provided. SRIRs were measured along either circular or linear traces at fixed distance from the microphone array along the z-axis at a number of trajectory groups, separated by distance and reflection across the axis of the microphone array in the case of linear traces. Circular trajectory groups had a specified radius of orbit, whereas linear trajectories had a specified start and end point in 3D space. Each trajectory was repeated at a number of different heights, and each trajectory had a fixed number SRIR measurements and corresponding DoA measurements recorded as Cartesian components of a unit vector. The number of SRIR measurements vary across different trajectories/heights, spaced in roughly \(1^{\circ}\) increments. The total number measurements can be seen in Table 2 ### Room Simulation We recreate this dataset using the python package _pyroomacoustics_[19], a pythonic implementation of the ISM, that has demonstrated use in implementations of various algorithms for beamforming, direction finding, adaptive filtering, source separation, and single channel denoising. To replicate the acoustic conditions of each of the rooms in the TAU-SRIR dataset, we randomly sampled the RIRs uniformly in each room until we had a sample of 5 single-channel RIRs. Using the Schroeder method [20], we estimated the RT\({}_{60}\) of each room by \begin{table} \begin{tabular}{c|r r} & Azimuth \((\phi)\) & Elevation \((\theta)\) \\ \hline M1 & \(45^{\circ}\) & \(35^{\circ}\) \\ M2 & \(-45^{\circ}\) & \(-35^{\circ}\) \\ M3 & \(135^{\circ}\) & \(-35^{\circ}\) \\ M4 & \(-135^{\circ}\) & \(35^{\circ}\) \\ \end{tabular} \end{table} Table 1: Microphone Geometry for TAU-SRIR dataset. 
Each microphone is 4.2cm from the center, and is modeled with a hypercardioid response. \begin{table} \begin{tabular}{l|c c c c} Room Name & Traj. type & \(N_{t}\) & \(N_{h}\) & \(N_{\text{SRIRs}}\) \\ \hline Bomb shelter & Circular & 2 & 9 & 6480 \\ Gym & Circular & 2 & 9 & 6480 \\ PB132 & Circular & 2 & 9 & 6480 \\ PC226 & Circular & 2 & 9 & 6480 \\ SA203 & Linear & 6 & 3 & 1594 \\ SC203 & Linear & 4 & 5 & 1592 \\ SE203 & Linear & 4 & 4 & 1760 \\ TB103 & Linear & 4 & 3 & 1184 \\ TC352 & Circular & 2 & 9 & 6480 \\ \end{tabular} \end{table} Table 2: Trajectory information for rooms contained in the TAU-SRIR dataset [14]. Each room contains trajectories across a number of trajectory groups (\(N_{t}\)) and a number of heights (\(N_{h}\)), for a total of \(N_{t}\times N_{h}\) trajectories per room. Each trajectory is sampled in roughly \(1^{\circ}\) increments. hand-selecting the early decay of the energy-decay function of the RIR samples and computing the linear fit. This can be seen in Figure 1. Using the inverse Sabine formula, we then estimated the mean absorption coefficient of the rooms and the number of reflection orders required to approximate a room of a similar RT\({}_{60}\). We combined these parameters with the geometry estimations from the TAU-SRIR dataset to construct virtual rooms matching those of the 9 rooms in the TAU-SRIR dataset. To this room, we added a virtual tetrahedral microphone with the geometry specified in Table 1, with each virtual microphone using a hypercardioid response pattern centered at the position specified in the TAU-SRIR dataset. To estimate the positions of the SRIRs in space corresponding to the DoA measurements in the TAU-SRIR dataset, we chose points along the DoA that most closely matched the corresponding path on the line specified by the trajectory in the dataset. Circular trajectories specified their height and radius, and were centered along the same z-line as the microphone array.
Linear trajectories specified the start and end points of their traces. These matched points were estimated by projecting the DoA vectors onto a cylinder that matched the radius of the trajectory, after translation by the position of the microphone array and the height of the trajectory of interest (see Figure 2). Once these points were estimated, they were placed into the simulated room in a position relative to the virtual microphone array, and the 4-channel SRIR was computed with the ISM for each point, at the sample rate of 24kHz. To match the dimensions of the original TAU-SRIR dataset, we truncate the SRIRs to 300ms, providing us with an SRIR with dimensions of \(7200\times 4\) for each point. ## 3 Methodology To evaluate the performance of this dataset in SELD tasks, we generated a dataset of audio events consistent with the methodology of dataset generation for training the baseline SELD model in the DCASE 2022 challenge [21]. We generated 3 datasets, one using the original SRIR database, which we will refer to as the TAU-SRIR dataset. We also generated a dataset using only the synthetic SRIRs, which we refer to as the SIM-SRIR dataset. Finally, we generated a third dataset that equally samples both the original and simulated SRIRs, which we will refer to as the augmented SRIR dataset, or AUG-SRIR. ### Data Generation To generate our annotated spatialized audio, we followed the procedure used in DCASE2019-2021, by convolving various sound events with SRIRs. The sound events were drawn from the FSD50k audioset [22], a subset containing over 20k sound events of 13 classes selected for the DCASE challenge. These sound events were spatialized into virtual recordings, each corresponding to a single room, allowing for up to three concurrently active sources. The sources can be static or dynamic, with equal probability, and the dynamic sources can move at slow (\(10^{\circ}\)/sec), moderate (\(20^{\circ}\)/sec), or fast (\(40^{\circ}\)/sec) angular speeds.
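The core spatialization step just described — convolving a sound event with an SRIR — can be sketched as follows for the static-source case (a minimal illustration with hypothetical names, not the challenge generator itself):

```python
import numpy as np

def spatialize(event, srir):
    """Convolve a mono sound event with a multichannel SRIR of shape
    (length, channels), yielding a spatialized multichannel recording.
    Moving sources would instead require time-varying convolution across
    a sequence of SRIRs along the trajectory."""
    return np.stack(
        [np.convolve(event, srir[:, ch]) for ch in range(srir.shape[1])],
        axis=1,
    )
```

For a 4-channel SRIR of \(7200\) samples and an event of length \(T\), the output has shape \((T + 7199)\times 4\).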
Each sample lasted 60 seconds, 40 of which had at least one active event class. For each of the 3 SRIR datasets, 1200 recordings were created in separate folds, 900 for training and 300 for validation. The training and validation sets used 6 and 3 rooms respectively, such that none of the same rooms overlapped both folds. ### Model The model architecture is identical to the one used in the DCASE2022 Task 3 challenge baseline [13]; a SELDnet-style CRNN with multi-ACCDOA representation for co-occurring events [23]. The model takes \(T\) frames of an STFT time-frequency representation of the multichannel features, and outputs \(T/5\times N\times C\times 3\) vector coordinates, where \(C\) is the number of classes and \(N\) is the assumed maximum number of co-occurring events, in our case \(3\). The input features are 4-channel 64-band log-mel spectrograms combined with SALSA-lite spatial features, all of which are truncated to include bins up to 9kHz, without mel-band aggregation, following [24]. Each model was trained on the training folds generated from the SRIR datasets described above. In addition to this, data from the Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22) [13] dataset was added for training, using the 54 development sound mixtures for training, but withholding the remaining 52 clips for evaluation of the models, ensuring the results were exclusively on real recorded data from unseen rooms. The models were trained for 100 epochs each.

Figure 1: The log-scale energy decay from a sample of RIRs from a singular room. The region in the dashed lines is roughly linear, suggesting it corresponds to mid-late reflections, and is used for an estimate of RT\({}_{60}\).

Figure 2: SRIR measurement positions reconstructed from the DoAs provided in the TAU-SRIR dataset (blue), compared to the path specified by the height, radius, and position labels provided (red).

### Evaluation Evaluation was completed using joint localization-detection metrics established in the DCASE 2020 challenge.
The detection metrics used were error-rate and F1 score for a spatial threshold within \(20^{\circ}\) (\(ER_{20^{\circ}}\) and \(F_{20^{\circ}}\)). F1 score was macro-averaged to account for class distribution differences in the FSD50k audio subset used. Localization metrics are class-dependent localization error and recall (\(LE_{\text{CD}}\) and \(LR_{\text{CD}}\)). ## 4 Results Despite the coarse physical approximations made by the ISM, the entirely synthetic SRIRs generated with this process performed nearly as well as the SRIRs recorded in real-world settings. Furthermore, the dataset of real SRIRs augmented by synthetic SRIRs outperformed both by a narrow margin, showing the benefits of geometrical acoustics simulation for data augmentation in SELD tasks. Regarding the cross-class average performance of the models in Table 3, we can see that for our classification metrics, all three models perform relatively similarly, with a slight performance edge for the AUG-SRIR dataset trained models. Looking into the per-class results in Figure 3, we can see that generally, all three models struggle with similar classes (telephone, laughter, door), but the AUG-SRIR dataset outperforms both in certain classes on which both other models perform poorly (Water tap/Faucet and Knock). Looking at the localization-based results, it appears that some amount of the performance differences between the SIM-SRIR-trained models and the TAU-SRIR-trained models can possibly be attributed to model fine-tuning; while SIM-SRIR-trained models had poor localization performance on certain classes (Water tap/Faucet and Knock), the AUG-SRIR model outperformed the baseline trained on the TAU-SRIR dataset. This suggests that the SIM-SRIR datasets are actually providing beneficial information for these sound classes that is missing from the TAU-SRIR datasets. With more thorough model tuning, it is possible that the performance of SIM-SRIR-trained models is even closer to that of the baseline.
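For reference, the \(20^{\circ}\) spatial threshold behind \(ER_{20^{\circ}}\) and \(F_{20^{\circ}}\) compares a predicted DoA against the reference by their great-circle angle; a minimal sketch (our own helper, not the challenge evaluation toolkit):

```python
import numpy as np

def angular_error_deg(pred, ref):
    """Angle in degrees between two DoA vectors (need not be unit length).
    A detection counts as spatially correct when this is below 20 degrees."""
    p = np.asarray(pred, float)
    r = np.asarray(ref, float)
    p, r = p / np.linalg.norm(p), r / np.linalg.norm(r)
    # Clip guards against arccos domain errors from floating-point round-off.
    return np.degrees(np.arccos(np.clip(np.dot(p, r), -1.0, 1.0)))
```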
## 5 Conclusion In this work we demonstrated the potential of using acoustic simulation to generate spatial audio data for training SELD models. We have shown that simulated SRIR data can improve the performance of SELD models as a form of data augmentation. In addition to this, we have shown that simulated SRIRs, while not as effective as those recorded in real acoustic environments, can be used to effectively train SELD models, removing the relatively high cost of producing additional data for similarly performing results in a relatively limited setting. Generating larger volumes of SRIRs over a wider range of acoustic conditions could provide even better results than these baselines, potentially demonstrating greater robustness over varying acoustic environments. Furthermore, using a high volume of simulated SRIRs to train a model, and using a hold-out of limited high-quality real-world data to fine-tune the model, could produce SoTA results. This result is promising for future experiments involving simulated SRIRs as training data. Understanding the requirements for angular density in dynamic SRIR recordings can help inform future dataset collection practices, as well as the robustness of these models to noise; limited work was done exploring the effect of noise on the models trained with simulated SRIRs. Further ablation studies are necessary to understand the limitations of geometrical acoustic methods for SELD-based tasks, but these early experiments suggest that they can provide a low-resource alternative to real-world SRIR recordings.
2309.11733
A renewal approach to prove the Four Color Theorem unplugged, Part I: RGB-tilings on maximal planar graphs
This is the first part of three episodes to demonstrate a renewal approach for proving the Four Color Theorem without checking by a computer. The second and the third episodes have subtitles: ``R/G/B Kempe chains in an extremum non-4-colorable MPG'' and ``Diamond routes, canal lines and $\Sigma$-adjustments,'' where R/G/B stand for red, green and blue colors to paint on edges and an MPG stands for a maximal planar graph. In this first part, we introduce R/G/B-tilings as well as their tri-coexisting version RGB-tiling on an MPG or a semi-MPG. We associate these four kinds of edge-colorings with 4-colorings by 1/2/3/4 on vertices in MPG's or semi-MPG's. Several basic properties for tilings on MPG's and semi-MPG's are developed. Especially the idea of R/G/B-canal lines, as well as canal system, is a cornerstone. This work started on May 31, 2018 and was first announced by the author~\cite{Liu2020} at the Institute of Mathematics, Academia Sinica, Taipei, Taiwan, on Jan.\ 22, 2020, when the pandemic just occurred.
Shu-Chung Liu
2023-09-21T02:13:33Z
http://arxiv.org/abs/2309.11733v2
# A renewal approach to prove the Four Color Theorem unplugged, Part I: RGB-tilings on maximal planar graphs ###### Abstract. This is the first part of three episodes to demonstrate a renewal approach for proving the Four Color Theorem without checking by a computer. The second and the third episodes have subtitles: "R/G/B Kempe chains in an extremum non-4-colorable MPG" and "Diamond routes, canal lines and \(\Sigma\)-adjustments," where R/G/B stand for red, green and blue colors to paint on edges and an MPG stands for a maximal planar graph. In this first part, we introduce R/G/B-tilings as well as their tri-coexisting version RGB-tiling on an MPG or a semi-MPG. We associate these four kinds of edge-colorings with 4-colorings by 1/2/3/4 on vertices in MPG's or semi-MPG's. Several basic properties for tilings on MPG's and semi-MPG's are developed. Especially the idea of R/G/B-canal lines, as well as the canal system, is a cornerstone. This work started on May 31, 2018 and was first announced by the author [5] at the Institute of Mathematics, Academia Sinica, Taipei, Taiwan, on Jan. 22, 2020, when the pandemic just occurred. Key words and phrases:Four Color Theorem; Kempe chain; triangulation; edge-coloring; RGB-tiling; \(e\)-diamond 2020 Mathematics Subject Classification: Primary 05C10; 05C15 ## 1. Introduction The famous Four Color Theorem has been challenging and soaking up the minds of many mathematicians for 170 years. The original four-color conjecture was first proposed by Francis Guthrie in 1852, due to his working experience: trying to color the map of counties of England [6]. No doubt a good application was to color the map of the world, assuming no enclaves or making some adjustments to fix the very few enclave lands. Graph theory experts soon translated this problem into a new mathematical model: set every county (or region, country) to be a vertex, and the relation of two neighboring countries to be an edge. A map without enclave lands then turns into a planar graph.
In graph-theoretic terms, the Four Color Theorem states that any simple planar graph \(G\) has its chromatic number \(\chi(G)\leq 4\). The chromatic number \(\chi\) is defined to be the minimum number of colors assigned individually to every vertex of \(G\) such that adjacent vertices have different colors. The first proof of the Four Color Theorem came out on June 21, 1976, by Kenneth Appel and Wolfgang Haken at the University of Illinois [1, 2]: if the four-color conjecture were false, there would be at least one map with the smallest possible number of regions that requires five colors; the proof showed that such a minimal counterexample cannot exist. They wrote a program and checked by computer that a minimal counterexample to the four-color conjecture could not exist. In their check, 1,834 possible reducible configurations were each shown to achieve their own 4-coloring, one by one. Since then, several different computer-assisted proofs of the Four Color Theorem have been contributed to the mathematical community. However, the mathematicians in the field of graph theory never gave up the hope of finding some kind of artificial proof. A proof of the Four Color Theorem unplugged is still one of the most-wanted pieces of academic research in the math world. Also well known, and even widely acclaimed as a textbook proof, is the proof of the five color theorem given by Alfred Kempe in 1879 [4] and Percy Heawood in 1890 [3]. Kempe's proof was once claimed to settle the Four Color Theorem until 1890, when Percy Heawood indicated an error. In addition to exposing the flaw in Kempe's proof, Heawood proved the five color theorem and generalized the four color conjecture to topological surfaces of arbitrary genus [3]. Because this proof of the five color theorem is so classical and easy to comprehend in graph theory, it is presented widely in the math world. "Kempe chains," mentioned in the title of this article, are one of the important tools invented by Kempe in that paper.
The title also says "a renewal approach," which means we will provide modifications of Kempe chains, and offer a new point of view to deal with the (pseudo) extremum as a non-4-colorable planar graph. Another early proposed proof was given by Peter Guthrie Tait in 1880 [8]. It was not until 1891 that Tait's proof was shown incorrect by Julius Petersen. In fact, over his lifetime Tait gave several incomplete or incorrect proofs of the Four Color Theorem. In graph theory, Petersen made two famous contributions: the Petersen graph, exhibited in 1898, which served as a counterexample to Tait's claim on the Four Color problem that a bridgeless 3-regular graph is factorable into three 1-factors [7], and the theorem that a connected 3-regular graph with at most two bridges contains a 1-factor. The purpose of this study is to offer a renewal method improved from Kempe's classical proof. We try to write this series of three parts self-contained, so the reader needs only basic knowledge of graph theory. ## 2. Vertex-colorings and RGB-edge-colorings Normally we use the natural numbers 1, 2, 3, 4, 5,... to color vertices such that two adjacent vertices have different colors (different numbers). As a renewal method, we introduce the corresponding edge-coloring after a vertex-coloring has been done. Let us start with \(K_{4}\), which is a basic graph colored by all of \(\{1,2,3,4\}\). Originally, the leftmost graph \(K_{4}\) has only vertices and edges; so far no vertex-coloring and no edge-coloring. We provide two different vertex-colorings of \(K_{4}\) with Co[\(v_{i}\):i] and Co[\(v_{1}\):4, \(v_{2}\):3, \(v_{3}\):1, \(v_{4}\):2], which are shown as the middle and the right graphs in Figure 1 respectively. For convenience, we will also use a vertex-coloring function \(f:V\rightarrow\{1,2,\ldots\}\). Their corresponding edge-colorings are demonstrated at the same time.
Basically, red edges (R-edges) are for those edges incident to colors 1 and 3, and those edges incident to colors 2 and 4; green edges (G-edges) are for 1 and 4, as well as 2 and 3; blue edges (B-edges) are for 1 and 2, as well as 3 and 4. In other words, edges shall be painted as follows: 1-3 and 2-4 edges by red (R, r); 1-4 and 2-3 edges by green (G, g); 1-2 and 3-4 edges by blue (B, b); and edges uncertain in RGB by black (bl), sometimes gray. Among these colored edges, we draw those incident to color 1 with a thick line, because all red edges (as well as green and blue edges respectively) are majorly separated into two sub-classes. For a same edge-color, the edges of the two sub-classes are never incident to each other. To draw those edges incident to 1 "thick" red, green, blue is not mandatory, because our purpose is just to distinguish the two disconnected sub-classes. The property of disconnecting is the main idea of Kempe's proof. For any graph with a 4-coloring function, we can immediately reach an RGB-edge-coloring. However, the reverse direction is not correct unless we set up some special circumstances. By the way, we draw the rightmost \(K_{4}\) differently to emphasize that it is planar. Most of the time we shall draw planar graphs in this way. Figure 1. The corresponding RGB-edge-colorings ## 3. \(\mathcal{N}4\), \(m\mathcal{N}4\) and \(e\mathcal{N}4\) From now on all graphs in this article are simple, planar and connected, i.e., no self-loops, no multiple edges and at least a path between any two vertices. Actually multiple edges change nothing for coloring, but they are not only annoying but also disturb the structure of our planar graphs. Also disconnected graphs cause no difference for the Four Color Map Theorem. Trying to color the maps of counties of England and Taiwan at the same time is actually two independent and irrelevant processes. If the four-color conjecture were false, there would be at least one planar graph \(G\) which is non-4-colorable.
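The edge-coloring rule of Section 2 is purely mechanical: a proper 4-coloring of the vertices determines the R/G/B color of every edge. A small illustration (a hypothetical Python helper, not part of the paper's machinery):

```python
def rgb_edge_coloring(coloring, edges):
    """Given a proper 4-coloring (vertex -> 1..4) and edges as frozensets
    of vertex pairs, paint each edge by the Section 2 rule:
    1-3 & 2-4 -> red, 1-4 & 2-3 -> green, 1-2 & 3-4 -> blue."""
    rule = {frozenset({1, 3}): "R", frozenset({2, 4}): "R",
            frozenset({1, 4}): "G", frozenset({2, 3}): "G",
            frozenset({1, 2}): "B", frozenset({3, 4}): "B"}
    return {e: rule[frozenset(coloring[v] for v in e)] for e in edges}
```

Applied to \(K_{4}\) with the coloring Co[\(v_{i}\):i], this reproduces the middle graph of Figure 1.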
Let us denote the following three sets under the assumption that the four-color conjecture were false: \[\mathcal{N}4 = \{G\mid G\text{ is a non-4-colorable simple connected planar graph.}\}\] \[m\mathcal{N}4 = \{G\mid G\text{ is minimal in $\mathcal{N}4$ according to the partially ordered}\] \[\text{ set with the inclusion relation of graphs.}\}\] \[e\mathcal{N}4 = \{G\in m\mathcal{N}4\mid G\text{ is a minimum w.r.t. the number of vertices.}\}\] Clearly, \(e\mathcal{N}4\subseteq m\mathcal{N}4\subseteq\mathcal{N}4\). Also, by the theory of partially ordered sets, \[e\mathcal{N}4=\emptyset\ \Leftrightarrow\ m\mathcal{N}4=\emptyset\ \Leftrightarrow\ \mathcal{N}4=\emptyset.\] Of course, the set \(e\mathcal{N}4\) turns out to be the target of the mathematicians' investigation. If a planar \(G\) has a bridge \(ab\) such that \(G=G_{1}\biguplus\{ab\}\biguplus G_{2}\), then \(G\) is not in \(m\mathcal{N}4\), even though \(G\) might be non-4-colorable. This idea also works for a planar \(G\) with a cut vertex. Therefore, any graph \(M\in m\mathcal{N}4\) is at least 2-connected and then any vertex \(u\in M\) lies on at least one cycle of \(M\). Furthermore, there exist several minimal cycles passing through \(u\) and these cycles are called _facets_. In geometry, the name "facet" means a triangle, a rectangle, a pentagon, etc., which is a 2-dimensional substructure belonging to a 3-dimensional polyhedron or polytope. Clearly, the number of facets involving \(u\) equals \(\deg(u)\) because \(M\) is a planar graph. **Theorem 3.1**.: _Let \(M\in m\mathcal{N}4\). (a) Any supergraph of \(M\) is non-4-colorable and any non-trivial subgraph of \(M\) is 4-colorable. (b) Any vertex \(v\in M\) must have \(\deg(v)\geq 5\)._ Proof.: (a): For the non-trivial subgraph, this is the basic minimality property in a partially ordered set. As for any supergraph of \(M\), say \(M^{\prime}\), to color \(M^{\prime}\) we must first color \(M\); already at this stage, 4 different colors are not enough. So \(\chi(M^{\prime})>4\).
(b): By the definition of \(m\mathcal{N}4\), we know \(M\) is non-4-colorable and \(M-\{v\}\) is 4-colorable for any vertex \(v\in M\). Being at least 2-connected, \(M\) has no vertices of degree 1. If there were a vertex \(v\in M\) with \(\deg(v)=2\) or \(\deg(v)=3\), then the 4-colorability of \(M-\{v\}\) would make \(M\) 4-colorable. Suppose there is a vertex \(v\in M\) with \(\deg(v)=4\). Let \(v_{1},v_{2},v_{3},v_{4}\) be the four neighbors of \(v\), clockwise around. Due to the fact that \(M\) is non-4-colorable and \(M-\{v\}\) is 4-colorable, we shall have a 4-coloring function \(f:V(M-\{v\})\to\{1,2,3,4\}\) such that \(f(v_{i})=i\) for \(i=1,2,3,4\), and then \(f(v)=5\) is inevitable. In \(M-\{v\}\), let us finish the corresponding RGB-edge-coloring. Between \(v_{1}\) and \(v_{3}\) or between \(v_{2}\) and \(v_{4}\), there is at least one pair which is red-edge-disconnected, because one pair being red-edge-connected (or simply red-connected) will block the other pair's connection. Without loss of generality, suppose \(v_{2}\) and \(v_{4}\) are red-disconnected. Now by switching colors 2 and 4 over the red-connected component containing \(v_{4}\), we get a new 4-coloring function \(f^{\prime}:V(M-\{v\})\to\{1,2,3,4\}\) that keeps the same colors on \(v_{1},v_{2},v_{3}\) as \(f\), but \(f^{\prime}(v_{4})=2\). Finally, by assigning \(f^{\prime}(v)=4\), we make \(f^{\prime}\) a 4-coloring of \(M\), a contradiction. Now we guarantee that \(\deg(v)\geq 5\) for every \(v\in M\). _Remark 3.2_.: Later we will show that "at least one" in this proof is actually "exactly one" if we deal with triangulated planar graphs, which is the next topic. ## 4.
Maximal planar graphs and \(e\mathcal{MPGN}4\) We introduce triangulated graphs or MPG's, the abbreviation standing for maximal planar graphs, by offering two equivalent definitions: * A planar graph \(G\) is said to be triangulated (also called maximal planar) if the addition of any edge between two non-adjacent vertices of \(G\) results in a non-planar graph. * (A much simpler definition) As a planar graph, all facets of \(G\) are triangles, including the outer one. Then the following property is a good reason for us to narrow down our target graphs. **Theorem 4.1**.: _All planar graphs are 4-colorable if and only if all MPG's are 4-colorable._ Proof.: [\(\Rightarrow\)]: This direction is trivial. [\(\Leftarrow\)]: Given a planar graph \(G\), we add some edges to make it an MPG, say \(H\). As an MPG, \(H\) is 4-colorable; then by removing these added edges we turn back to \(G\), which adopts the 4-coloring of \(H\). The purpose is to prove the Four Color Theorem; but assuming \(e{\mathcal{N}}4\), \(m{\mathcal{N}}4\) and \({\mathcal{N}}4\) non-empty (especially the first one) is a regular way of proving by contradiction. Let us denote a new set: \[e{\mathcal{MPG}}{\mathcal{N}}4 = \{E\in{\mathcal{N}}4\mid\text{ $E$ is an MPG and a minimum w.r.t.}\] \[\text{the number of vertices.}\}\] Compared with \(e{\mathcal{N}}4\), the new set \(e{\mathcal{MPG}}{\mathcal{N}}4\) is easier to deal with. The fundamental relation between \(e{\mathcal{MPG}}{\mathcal{N}}4\) and \(e{\mathcal{N}}4\) is a well-known result. To keep this paper self-contained, we will redo the proof in the rest of this section. To prove the Four Color Theorem by contradiction, in the whole paper and our study in the future, we always assume \(e{\mathcal{MPG}}{\mathcal{N}}4\) non-empty and discuss many different kinds of situations with much more detail set up for an \(EP\in e{\mathcal{MPG}}{\mathcal{N}}4\).
For instance, in Part III of this paper, we examine the situation where two degree-5 vertices are neighbors in \(EP\). A well-known fact: any simple planar graph can be triangulated and still be simple. It can be proved by induction. Given any planar graph \(G\), let \(\hat{G}_{i}\) be one of the triangulated planar graphs of \(G\), obtained by linking edges for some non-adjacent pairs of vertices along \(n\)-gons (\(n>3\)). Let \(e\hat{\mathcal{N}}4:=\{\hat{G}_{i}\mid G\in e\mathcal{N}4,i=1,2,\ldots\}\). Because \(e\mathcal{N}4\) consists of minimum graphs \(G\in m\mathcal{N}4\) w.r.t. the number of vertices, the following lemma is trivial. **Lemma 4.2**.: _We have \(e\hat{\mathcal{N}}4\subseteq e\mathcal{N}4\), and \(e\hat{\mathcal{N}}4\) is non-empty if and only if \(e\mathcal{N}4\) is non-empty._ **Theorem 4.3**.: (a)_\(e\mathcal{MPGN}4=e\hat{\mathcal{N}}4=e\mathcal{N}4\)._ (b) _If \(EP\in e\mathcal{MPGN}4\) then any non-trivial subgraph of \(EP\) is 4-colorable. Also any MPG graph \(G\) with \(|G|<|EP|\) is 4-colorable._ Proof.: If \(e\mathcal{N}4\) is empty then all three sets are empty; therefore (a) holds and (b) has nothing to check. Suppose \(e\mathcal{N}4\) is non-empty. Let \(G\in e\mathcal{N}4\) and then choose one \(\hat{G}\in e\hat{\mathcal{N}}4\). Notice that \(e\mathcal{MPGN}4\) is now non-empty because of \(\hat{G}\). Let \(EP\in e\mathcal{MPGN}4\). By definition, we have the relation on the orders (the numbers of vertices) of these three graphs: \(|G|=|\hat{G}|\geq|EP|\). On the other hand, we know the fact \(e\mathcal{MPGN}4\subseteq\mathcal{N}4\), and let us consider the minimum orders of both sides. So we have \(|EP|\geq|G^{\prime}|\) for any \(G^{\prime}\in e\mathcal{N}4\). Finally we obtain \(|EP|=|G|\) for any \(G\in e\mathcal{N}4\), and we conclude that \(e\mathcal{MPGN}4=e\hat{\mathcal{N}}4\subseteq e\mathcal{N}4\).
Furthermore, any \(EP\in e\mathcal{MPGN}4\subseteq e\mathcal{N}4\subseteq m\mathcal{N}4\) obeys a property of \(m\mathcal{N}4\): any non-trivial subgraph of \(EP\), particularly one obtained by removing some edges from \(EP\), must be 4-colorable. Therefore, we conclude (b) and \(e\mathcal{MPGN}4=e\mathcal{N}4\). **Corollary 4.4**.: _Let \(EP\in e\mathcal{MPGN}4\). (a) Any supergraph of \(EP\) is non-4-colorable and any non-trivial subgraph of \(EP\) is 4-colorable. (b) Any vertex \(v\in EP\) must have \(\deg(v)\geq 5\)._ We would like to show another proof for part (b) of Theorem 4.3. We need a trivial lemma as follows about non-trivial 3-cycles. Also, we will offer another interesting theorem about non-trivial 4-cycles in Part II of this paper. **Lemma 4.5**.: _There is no non-trivial 3-cycle in \(EP\), i.e., every 3-cycle in \(EP\) forms a 3-facet._ Proof.: Suppose there is a non-trivial 3-cycle \(\Omega:=a\)-\(b\)-\(c\)-\(a\) in \(EP\). By \(\Omega\), our \(EP\) is separated into two non-empty regions: \(\Sigma\) (inside) and \(\Sigma^{\prime}\) (outside) with \(\Sigma\cap\Sigma^{\prime}=\Omega\). Both \(\Sigma\) and \(\Sigma^{\prime}\) are still MPG's with \(|\Sigma|,|\Sigma^{\prime}|\geq 4\). Also \(|\Sigma|\) and \(|\Sigma^{\prime}|\) are less than \(|EP|\); thus \(\Sigma\) and \(\Sigma^{\prime}\) are 4-colorable with coloring maps \(f\) and \(f^{\prime}\). Because the intersection \(\Sigma\cap\Sigma^{\prime}\) consists of only three vertices, we can apply vertex-color-switching on \(f^{\prime}\) to get \(f(x)=f^{\prime}(x)\) for \(x=a,b,c\). Now \(f\) and \(f^{\prime}\) together offer a 4-coloring map of \(EP\), and this is a contradiction. Proof.: (Another proof of Theorem 4.3(b)) A maximal non-trivial subgraph of \(EP\) is obtained by removing any edge, say \(uv\), in \(EP\). Once we show every maximal non-trivial subgraph of \(EP\) is 4-colorable, then all other subgraphs of \(EP\) are 4-colorable.
Let \(\deg(u)=k\geq 5\) (by Theorem 3.1) with the neighbors \(v=v_{1},v_{2},\ldots,v_{k}\) clockwise around \(u\). Now we form a new MPG, say \(G\), from \(EP\) by removing vertex \(u\) and connecting new edges \(vv_{3},vv_{4},\ldots,vv_{k-1}\). Notice that the edge \(vv_{i}\) for \(i=3,4,\ldots,k-1\) is valid only if \(v\) and \(v_{i}\) are not adjacent in \(EP-\{u\}\). What happens if \(v\) and \(v_{i}\) are adjacent? It is impossible; otherwise we would see a non-trivial 3-cycle \(v\)-\(v_{i}\)-\(u\)-\(v\) in \(EP\), and this violates Lemma 4.5. With these new edges, we obtain a new MPG \(G\) whose order is less than that of \(EP\). So \(G\) has a 4-coloring map \(f\) due to the definition of \(e\mathcal{MPGN}4\). For the maximal non-trivial \(EP-\{uv\}\), we need to keep the same coloring for the vertices in \(G\) and assign \(f(u)=f(v)\). This \(f\) is definitely a 4-coloring map for \(EP-\{uv\}\). _Remark 4.6_ (Very important).: Before linking a new edge between \(v\) and \(v_{i}\), we must make sure that \(v\) and \(v_{i}\) are not adjacent in the current graph. ## 5. Tilings and 4-colorings A _semi-MPG_ is nearly an MPG but it has some facets, called _outer facets_, which form the border of this structure as a planar graph. Usually an outer facet is an \(n\)-sided polygon (\(n\)-gon) with \(n\geq 4\). However, in some rare cases we allow \(3\)-gons to be outer facets if we really need them to play as the border of this structure. (See Sections 6 and 7.) If there is a single outer facet of an \(n\)-sided polygon, then we call it an \(n\)-semi-MPG. Generally an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG has \(k\) outer facets with sizes \(n_{1},n_{2},\ldots,n_{k}\) respectively. Most of the time, we forbid any two outer facets to share a same edge, because we need every edge to belong to one or two triangles. We do have exceptions in Lemma 6.2(a') and in some other places.
Particularly we call an MPG or an \(n\)-semi-MPG _One Piece_, because all loops in these structures are free loops which can be shrunk into one point in topology. Of course, an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG is not One Piece, unless we allow two outer facets to share a same edge. We draw a brief sketch: [diagram relating MPG's, \(n\)-semi-MPG's and \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG's] A _red tile_ is a diamond whose diagonal edge, say \(v_{2}v_{4}\), is red, while the four surrounding edges are black: \(v_{1}v_{2}\), \(v_{2}v_{3}\), \(v_{3}v_{4}\) and \(v_{4}v_{1}\). If a triangle has only one red edge, then we call it a _red half-tile_ or a _red triangle_. Of course, there are also green or blue tiles and half-tiles. For a fixed edge-color, there is exactly one edge of such color in every single triangle. Let \(G\) be an MPG or a semi-MPG. We try to edge-color exactly one of the three edges of every triangle by red. We call the process and a possible result an _R-tiling_ on \(G\).
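The defining constraint of an R-tiling — exactly one red edge in every triangle — can be checked mechanically. A small sketch (a hypothetical Python helper, not part of the paper's toolkit; it inspects all 3-cycles, which coincide with the facets in an \(EP\) by Lemma 4.5):

```python
from itertools import combinations

def is_r_tiling(vertices, edges, red):
    """edges and red are sets of frozenset vertex pairs; return True iff
    every triangle (3-cycle) of the graph carries exactly one red edge."""
    for trio in combinations(vertices, 3):
        tri = [frozenset(p) for p in combinations(trio, 2)]
        if all(e in edges for e in tri) and sum(e in red for e in tri) != 1:
            return False
    return True
```

On \(K_{4}\), painting the edges \(1\)-\(3\) and \(2\)-\(4\) red (the R-edges induced by the coloring Co[\(v_{i}\):i]) passes this check, while painting only one of them red does not.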
Figure 4 below shows R-tiling I and R-tiling II on a same graph \(G\). **Definition 5.1**.: An R-tiling is actually a function \(T_{r}:E(G)\rightarrow\{\text{red, black}\}\), where \(E(G)\) is the edge set of \(G\), such that every triangle of \(G\) has exactly one red edge. Figure 3. A red tile (\(v_{2}v_{4}\)-diamond) and a red half-tile. Figure 4. Two R-tilings; the right one induces a 4-coloring. Now we have the formal reason why we forbid any two facets of \(G\) sharing a same edge: to make a tiling, every edge must belong to one or two triangles. For short, we use r or R to stand for red, g or G for green, b or B for blue, and bl for black. Black color also means uncertain for R/G/B. Not just R-tilings, we also work with G-tilings and B-tilings. If we only consider a single-color tiling, then R, G and B are independent. However, most of the time we wish them to coexist. That means each triangle in an MPG or a semi-MPG shall have three different edge-colors: R, G and B. If three different color-tilings coexist, then we call this representation an _RGB-tiling_, denoted by \(T_{rgb}:E(G)\rightarrow\{\text{red, green, blue}\}\), and then no more black color for any edge. Notice that there is a red 5-cycle, namely \(v_{5}\)-\(v_{6}\)-\(v_{7}\)-\(v_{c}\)-\(v_{a}\)-\(v_{5}\), in R-tiling I. That means no coexisting G-tiling as well as B-tiling inside this 5-cycle. Also notice that there is no red odd-cycle in R-tiling II; thus a 4-coloring can be induced by R-tiling II. The rightmost graph below is an RGB-tiling, which is an example fulfilling coexisting G-tiling and B-tiling w.r.t. R-tiling II. As for R-tiling I, it tells another story. Even though there is no full 4-coloring of \(G\) using R-tiling I, we may sacrifice some edges to reach a minor result. The leftmost graph is such a minor result. It is an RGB-tiling on \(G-\{v_{a}v_{c}\}\). Because \(v_{a}v_{c}\) lies on that red 5-cycle, removing it makes the new \((4,7)\)-semi-MPG generated by R-tiling I 4-colorable.
This story has an interesting implication: of course, if we return \(v_{a}v_{c}\) as a red edge, then the red 5-cycle is back. Amazingly, if we return \(v_{a}v_{c}\) as a green edge, then we create a green odd-cycle. If we return \(v_{a}v_{c}\) as a blue edge, then we create a blue odd-cycle. This fact is very important.

Figure 5. Two RGB-tilings; the right one induces a 4-coloring.

Notice that tilings make sense only in an MPG or a semi-MPG. For a normal 4-colorable graph, we introduced RGB-edge-colorings in Section 2. The concept of RGB-tilings is more advanced than that of RGB-edge-colorings when we deal with (nearly) triangulated graphs. So far we have not guaranteed that an MPG or a semi-MPG does have an R-tiling. We have just shown some examples of R-tilings and the RGB-tilings they may induce. The existence of R-tilings is a topic in the next section when we focus on \(e\mathcal{MPGN}4\).

## 6. RGB-tilings and 4-coloring on MPG's or \(n\)-semi-MPG's

Let \(M\) be an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG. It has \(k\) _outer facets_ with sizes \(n_{i}\), i.e., each outer facet, as a piece of the border of this planar graph \(M\), is an \(n_{i}\)-sided polygon (\(n_{i}\)-gon). Normally we have \(n_{i}\geq 4\) because there are triangles nearly everywhere in \(M\); however, 3-gon outer facets are allowed in this and the next sections if we point them out precisely. As a 3-gon outer facet, it does not need to follow Definition 5.1 for any provided R-tiling \(T_{r}\). Also notice that Lemma 4.5 guarantees no non-trivial 3-cycle in \(EP\). After Lemma 6.2, we allow two outer facets to share edges in \(M\) in this and the next sections. As for edge counting, those shared edges shall be counted by multiplicity 2.

_Remark 6.1_.: In this paper, a non-trivial triangle in an MPG or in a semi-MPG is either a 3-gon outer facet or a triangle with vertices and edges both inside and outside.
A trivial 4-cycle is one whose four vertices are the four surrounding vertices of a diamond. Even though we have not yet proved that an R-tiling must exist on \(M\), we can show some properties under the assumption that an R-tiling exists.

**Lemma 6.2**.: _Let \(M\) be an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG for \(n,n_{i}\geq 3\)._

* (a) _If_ \(M\) _has an R-tiling, then the number of black edges along_ \(\Omega(M)\) _must be even. Of course, the remaining edges along_ \(\Omega(M)\) _are red and each associates with a red half-tile._
* (a') _This is a supplemental item w.r.t._ (a) _and here we allow two outer facets to share edges in_ \(M\)_. The index_ \(\sum_{i=1}^{k}n_{i}\) _counts such shared edges by multiplicity 2. Also, a single edge that is shared by two outer facets associates with no triangle; so its color is free to be chosen from red and black. The result in item_ (a) _still holds here._
* (b) _If_ \(M\) _has an RGB-tiling, then the three groups of edges along_ \(\Omega(M)\) _sorted by red, green and blue must be either all even or all odd in cardinality. Particularly, when_ \(|\Omega(M)|\) _is even they are all even, and when_ \(|\Omega(M)|\) _is odd they are all odd._

Proof.: (a): First we shall deal with those red edges along \(\Omega(M)\). Such an edge \(e\) belongs to a red half-tile. Let us add up to two extra adjacent black edges (with "V" shape) to complete a red \(e\)-diamond. Now we see a new semi-MPG, say \(\hat{M}\), and a new R-tiling that is made of red diamonds, where all edges along \(\Omega(\hat{M})\) are black. A simple equation \[|\Omega(\hat{M})|=4\#(\text{red diamonds})-2\#(\text{black edges inside }\hat{M})\] verifies that \(|\Omega(\hat{M})|\) is even. Now removing all "V" shapes from \(\Omega(\hat{M})\), we prove that the number of black edges along \(\Omega(M)\) must be even.

(a'): For this supplemental item, we simply remove all edges that are shared by two outer facets and get a new semi-MPG \(\bar{M}\).
Apply (a) on \(\bar{M}\); then the number of black edges along \(\Omega(\bar{M})\) must be even. How about the removed edges originally along the outer facets of \(M\)? The counting index \(\sum_{i=1}^{k}n_{i}\) counts those edges shared by two outer facets with multiplicity 2, and so does the counting of those black edges shared by two outer facets of \(M\). The proof of this supplemental item is complete.

(b): The concept is easy. An RGB-tiling is made of a coexisting R-tiling, G-tiling and B-tiling. In view of the R-tiling, green and blue edges are treated as black. So, this property is a corollary of (a).

_Remark 6.3_.: So far we have no supplemental item (b') for Lemma 6.2. Those edges shared by two outer facets have no rule for coloring R, G and B in order to reach a "good" RGB-tiling. We could modify the definition of an RGB-tiling by additionally requiring that the numbers of red, green and blue must be either all even or all odd along \(\underline{\text{each}}\) outer facet. With this new definition, removing some shared edges to modify outer facets or changing colors on some shared edges will still keep the all-even/all-odd property, because any shared edge is counted by multiplicity 2. Wait! Do we really need this stronger definition? Or is the all-even/all-odd property on \(\Omega(M)\) always true, as in Lemma 6.2(a')? For this "stronger definition", please refer to Theorem 7.12; as for a new supplemental item (b'), please see the coming Corollary 6.4. Also Remark 6.5 offers a more advanced point of view.

**Corollary 6.4**.: _Let \(M\) be an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG for \(n,n_{i}\geq 3\) with an RGB-tiling \(T_{rgb}\). Here we allow two outer facets to share edges in \(M\). Along \(\Omega(M)\), the three numbers of red, green and blue edges are all even if \(|\Omega(M)|\) is even; all odd if \(|\Omega(M)|\) is odd.
Notice that any shared edge is counted by multiplicity 2._

Proof.: We simply remove those shared edges and obtain a new semi-MPG without any shared edge. Then apply Lemma 6.2(b). It does not matter what colors are on those shared edges, because they are counted by multiplicity 2.

_Remark 6.5_.: Let \(M\) be an MPG or an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG for \(n,n_{i}\geq 3\). Here we allow two outer facets to share edges in \(M\). Let \(C\) be any cycle in \(M\) that might pass through some edges along \(\Omega(M)\) except those shared edges. Let \(\Sigma^{+}\) (\(\Sigma^{-}\)) denote the subgraph of \(M\) inside (outside) of \(C\). Both \(\Sigma^{+}\) and \(\Sigma^{-}\) are semi-MPG's with an outer facet \(C\). Once we have a \(T_{r}\) or \(T_{rgb}\) on \(M\), we can apply Lemma 6.2 and Corollary 6.4 on \(\Sigma^{+}\) and \(\Sigma^{-}\).

**Example 6.6**.: There are two red-connected components in the following graph and these red edges form an R-tiling. However, is this R-tiling good? What criteria make us call it "good"? Of course, a good R-tiling either makes it easy to paint with 4 colors or easy to tell that we are forced to use a color 5. So far we just want to experience some examples. In this graph, we see several thick black edges linking two vertices in a single red-connected component; so there are many different cycles that have nearly all red edges but only one thick black edge. There are also some cycles that have an odd number of black edges; for instance, the cycle \(C:v_{4}\)-\(v_{5}\)-\(v_{b}\)-\(v_{7}\)-\(v_{8}\)-\(v_{c}\)-\(v_{a}\)-\(v_{4}\) is one of them. We also see a 5-gon and a 7-gon outer facet; the former has one black edge and the latter has three black edges. By Remark 6.5, any cycle \(C^{\prime}\) that separates the 5-gon and the 7-gon on its two sides will have an odd number of black edges along \(C^{\prime}\).
Any cycle \(\bar{C}\) that leaves the 5-gon and the 7-gon on the same side of itself will have an even number of black edges along \(\bar{C}\). From Lemma 6.2, Corollary 6.4 and Example 6.6, we find that an MPG and an \(n\)-semi-MPG are easy to deal with. That is why we call them _One Piece_ on purpose.

Figure 6. Along the 5-gon and the 7-gon facets, the numbers of black edges are both odd.

**Theorem 6.7** (The First Fundamental Theorem v1: for R-/RGB-tilings and 4-colorability).: _Let \(M\) be an MPG or an \(n\)-semi-MPG (\(n\geq 4\)). Then the following are equivalent:_

1. \(M\) _is 4-colorable._
2. \(M\) _has an RGB-tiling._
3. \(M\) _has an R-tiling without red odd-cycle._

Proof.: The diagram of the whole proof is (a) \(\Rightarrow\) (b) \(\Rightarrow\) (c) \(\Rightarrow\) (a). However, we do not yet have enough tools to prove [(c) \(\Rightarrow\) (a)] clearly. So, we just leave a rain check for a while; this part will be proved in the next section right after Theorem 7.9.

[(a) \(\Rightarrow\) (b)]: A 4-coloring function on any graph induces an RGB-edge-coloring. Since \(M\) is an MPG or an \(n\)-semi-MPG, an RGB-edge-coloring of \(M\) is actually an RGB-tiling on \(M\).

[(b) \(\Rightarrow\) (c)]: Suppose \(C\) is any red cycle in \(M\). Since \(M\) is an MPG or an \(n\)-semi-MPG, at least one of the two sides of \(C\) forms a \(|C|\)-semi-MPG with outer facet \(C\). Red is the only color along this outer facet; thus by Lemma 6.2(b) (or Corollary 6.4) the length \(|C|\) must be even, since there are no green or blue edges along \(C\).

**Corollary 6.8**.: _Let \(M\) be an MPG or an \(n\)-semi-MPG (\(n\geq 4\)). The graph \(M\) is non-4-colorable if either there is no R-tiling on \(M\) or every R-tiling on \(M\) has at least a red odd-cycle._

Theorem 7.12 in the next section is a generalization of this theorem. Although the proof of [(c) \(\Rightarrow\) (a)] is still a rain check, we apply it as the following corollary.
**Corollary 6.9**.: _In an MPG or an \(n\)-semi-MPG (\(n\geq 4\)), if we have any two coexisting R/G/B-tilings, then the third coexisting single-color tiling is immediately ready. Also there are no red/green/blue odd-cycles._

Since \(e\mathcal{MPGN}4=e\mathcal{N}4\) and by definition, all MPG's in this set have the same number of vertices (the same order). Let \(\omega\) be this order. For convenience we usually use \(EP\) to denote one of the extremum planar graphs in \(e\mathcal{MPGN}4\). The order of \(EP\) is \(\omega\). If \(G\) is a planar graph (not necessarily an MPG or a semi-MPG) such that \(|G|<\omega\), or \(G\) is a subgraph of \(EP\) with fewer edges, then \(G\) must be 4-colorable. And then this 4-coloring function of \(G\) induces an R-tiling on \(G\) without red odd-cycles.

_Remark 6.10_.: In set theory, ordinal numbers and cardinal numbers are so fundamental and fancy in mathematics. We use \(|EP|=\omega\) on purpose, because \(\omega\) is the first ordinal number corresponding to infinity.

**Example 6.11** (Counterexample).: Here a counterexample is provided to show that "(c) \(\Rightarrow\) (a)" of Theorem 6.7 might not work for an \((n_{1},n_{2})\)-semi-MPG. We show a \((5,7)\)-semi-MPG and an R-tiling without red odd-cycle on it in Figure 7. According to the rule, red edges are used to connect vertices with colors 1 and 3, or colors 2 and 4. Without loss of generality, a 4-coloring on the red-connected components \(rC_{v_{0}}:=\{v_{0},v_{1},v_{2},v_{c}\}\) and \(rC_{v_{8}}:=\{v_{4},v_{5},\ldots,v_{9}\}\) has been done. Now we find that there is no proper way to color vertices \(v_{3},v_{a},v_{b}\), particularly \(v_{a}\), unless we are allowed to set the coloring function \(f(v_{a})=5\). This example shows that on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG, providing an R-tiling without red odd-cycle is not enough to achieve a 4-coloring function. The example also shows that three (or an odd number of) red-connected components \(rC_{v_{0}}\), \(rC_{v_{8}}\) and \(rC_{v_{3}}:=\{v_{3},v_{a},v_{b}\}\) "in a loop" either along the 5-gon or the 7-gon will cause problems.

Figure 7. Odd number of black edges along the 5-gon and the 7-gon facets

**Example 6.12** (Counterexample).: Here we give another counterexample to show that "(b) or (c) \(\Rightarrow\) (a)" of Theorem 6.7 might not work for an \((n_{1},n_{2})\)-semi-MPG. We show a \((5,5)\)-semi-MPG and assign a fixed R-tiling \(T_{r}\) without red odd-cycle as the left graph in Figure 8. Let us try to color green or blue for the remaining black edges. Even if we can extend \(T_{r}\) to all kinds of different coexisting \(T_{rgb}\), none of these \(T_{rgb}\) can induce a 4-coloring function. In particular, we focus on the elements in \(X:=\{v_{1}v_{5},v_{2}v_{6},v_{3}v_{7},v_{4}v_{8},v_{0}v_{9}\}\). Without loss of generality, we assume that among \(X\) the number of green edges is greater than the number of blue edges.

(A) If all five edges in \(X\) are green, then \(v_{0}\)-\(v_{1}\)-\(v_{2}\)-\(v_{3}\)-\(v_{4}\)-\(v_{0}\) and \(v_{5}\)-\(v_{6}\)-\(v_{7}\)-\(v_{8}\)-\(v_{9}\)-\(v_{5}\) must be two blue 5-cycles; thus we cannot finish (A) with a 4-coloring function from this RGB-tiling.

(B) Suppose edges in \(X\) use both green and blue. Without loss of generality, we set \(v_{1}v_{5}\) green and \(v_{2}v_{6}\) blue; then \(v_{0}v_{1}\) and \(v_{5}v_{6}\) must be blue, and \(v_{1}v_{2}\) and \(v_{6}v_{7}\) must be green. This temporary status is shown as the middle graph in Figure 8. According to the possible colors of the remaining three edges in \(X\), we need only consider the following three subcases:

(B1) Let all three of \(v_{3}v_{7}\), \(v_{4}v_{8}\), \(v_{0}v_{9}\) be green. Even though this is an RGB-tiling, the blue path from \(v_{1}\) to \(v_{7}\) cannot induce a 4-coloring. It is quite interesting that a single blue-connected component "in a loop" along either of the two 5-gons will cause problems.
(B2) Let \(v_{3}v_{7}\) be blue and \(v_{4}v_{8}\), \(v_{0}v_{9}\) green, and we get the left graph in Figure 9. The bug occurs because three green-connected components are "in a loop"; so, there is no 4-coloring.

(B3) Let \(v_{4}v_{8}\) be blue and \(v_{3}v_{7}\), \(v_{0}v_{9}\) green, and then we get the right graph in Figure 9. Again, the bug occurs because three green-connected components are "in a loop"; so, there is no 4-coloring.

Figure 8. Given \(T_{r}\), let us finish it by (A) and (B). Here is (B1).

Finally, we make a conclusion: once we assign a fixed R-tiling \(T_{r}\) without red odd-cycle as the left graph in Figure 8, there are already five red-connected components "in a loop" along both two 5-gons. By the way, the red and green dashed-lines in (B2) and (B3) are drawn for Example 7.3 in advance, not for now.

Figure 9. (B2) and (B3)

## 7. Grand canal lines over One Piece

In this and the last sections, 3-gon outer facets are allowed. Sometimes we also allow two outer facets to share edges in a semi-MPG. As a 3-gon outer facet, it does not need to follow Definition 5.1 for any provided R-tiling \(T_{r}\). As for edge counting, a shared edge shall be counted by multiplicity 2 w.r.t. the two outer facets that it belongs to. Our interest is focused on MPG's and semi-MPG's. In the last section, we saw several examples where a \((5,7)\)-semi-MPG might cause problems, and then we realized that an MPG and an \(n\)-semi-MPG are good to deal with. For fun we call an MPG as well as an \(n\)-semi-MPG _One Piece_ (a Japanese manga series), because they really are a solid planar piece, where "solid" means no hole. The three (counter)examples in the last section all provided a non-One-Piece which looks like the eye of a volcanic island or a belt of a ring. Even though a belt of a ring or an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG is planar and definitely 4-colorable, its topological structure creates obstacles for our renewal approach, i.e., sometimes we fail to translate a single-color tiling on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG into a 4-coloring function. To fix this problem, we shall study some extra requirements for a "good" single-color tiling on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG. One Piece is definitely the main topic of further discussion. However, this is the last section in which we care about an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG \(M\).

Once an R-tiling \(T_{r}\) exists on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG \(M\), there are many red-connected components \(rC_{i}\) for \(i=1,2,\ldots\) as induced subgraphs of \(M\), even if some \(rC_{i}\) is just a single point. We can also define \(T_{r}:=\bigcup_{i}E(rC_{i})\) as a set of red edges. If we just consider a short line of red edges in any \(rC_{i}\), they form a wall of red _canal bank_ (like a river bank). For instance, \(v_{6}\)-\(v_{0}\)-\(v_{1}\)-\(v_{2}\)-\(v_{e}\)-\(v_{2}\)-\(v_{f}\)-\(v_{2}\) in Figure 10 is (part of) a canal bank. We shall define _canal lines_ first and then explain canal banks precisely.

**Definition 7.1**.: Let \(M\) be an MPG or a semi-MPG with an R-tiling \(T_{r}=\bigcup_{i}E(rC_{i})\). We use Figure 10 to demonstrate examples. Among and inside these \(rC_{i}\) are red _canal lines_, denoted by \(rCL_{j}\) for \(j=1,2,\ldots\). For example, there are two red canal lines \(rCL_{1}\) and \(rCL_{2}\) represented by dashed lines. The two sides of any \(rCL\) are its coexisting right/left _canal banks_, denoted by \(rCL^{r}\)/\(rCL^{l}\) (or \(rCB_{j}\) as a general notation). For example, if we consider \(rCL_{1}\) flowing clockwise, then \(rCL_{1}^{r}=v_{e}\) and \(rCL_{1}^{l}=v_{0}\)-\(v_{1}\)-\(\ldots\)-\(v_{6}\)-\(v_{0}\).

Figure 10. Canal lines and canal banks; four deja-vu edges for \(rCL_{2}\)
Following the current along a red canal line \(rCL\), we might see a red edge twice from its two different sides. We call it a _deja-vu_ edge for \(rCL\). In Figure 10, there is no deja-vu edge for \(rCL_{1}\) and there are four deja-vu edges for \(rCL_{2}\), namely \(v_{3}v_{a}\), \(v_{8}v_{9}\), \(v_{2}v_{e}\) and \(v_{2}v_{f}\). In manga and comic style, the ships of cargo and pirates follow the one-way _current_ of a red canal line. The basic or elemental segment of a red canal line is a single red half-tile (triangle) which has one red edge and two black edges. A complete red canal line is made of a sequence of red half-tiles, while the current flows into and out of a triangle by crossing its two black edges. The collection of all these canal lines \(rCL_{j}\) created by a fixed R-tiling on \(M\) is called a red _canal system_, denoted and defined by \(rCLS=\bigcup_{j}rCL_{j}\). We shall treat the canal system first as a normal graph and then as a directed graph. While treated as a normal graph, every red half-tile (a triangle) plays as a node of degree two, where the two links\({}^{1}\) are translated from the two black edges of this triangle. A red canal line \(rCL_{j}\) is then a collection of connected links. Because every node has degree two, each canal line \(rCL_{j}\) is either a cycle or a path, where the latter goes from one outer facet to another outer facet (maybe the same outer facet). A canal system must have the following properties.

Footnote 1: We avoid using the names edge/vertex for the canal system \(rCLS\), which is actually a subgraph of the dual graph of \(M\).
**Lemma 7.2**.: _Let \(M\) be an MPG or an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG with an R-tiling \(T_{r}\)._

* (a) _A red tiling_ \(T_{r}:=\bigcup_{i}E(rC_{i})\) _on_ \(M\) _and a red canal system_ \(rCLS:=\bigcup_{j}rCL_{j}\) _are different perspectives of looking at the same thing._
* (b) _By linking nodes of triangles, a red canal line_ \(rCL_{j}\) _of_ \(T_{r}\) _is either (_b1_) a closed cycle, called a_ canal ring_, or (_b2_) a path starting from one outer facet and ending at another outer facet (maybe the same outer facet), while the pair of entrance and exit at the two ends of this path are both black edges along the outer facets._
* (c) _If_ \(M\) _is an MPG, then every red canal line_ \(rCL_{j}\) _is a ring. If_ \(M\) _is an_ \(n\)_-semi-MPG, then the connection of entrances and exits of this red canal system_ \(rCLS\) _creates a non-crossing match among all black edges along the unique outer facet._

Proof.: The proofs of (a), (b) and the first part of (c) have already been stated in the previous paragraphs. We need only provide a proof for the second part of (c). The reason is very simple: an \(n\)-semi-MPG is planar.

Now we need to indicate the _direction of current_ for each red canal line, i.e., we view the canal system as a directed graph. While treated as a directed graph, a single triangle as a node is incident with two directed links, one in and one out. Here is the rule for choosing directions: if a single-color R-tiling on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG associates with a 4-coloring, then along the current of any red canal line we shall always see color-1 vertices lying on the canal bank on our right-hand side. So the perfect currents for a red canal system \(rCLS\) have an important requirement: for every red diamond, the directions of the two currents on the two sides of its red edge must be opposite.
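The undirected construction in Lemma 7.2 can be sketched as a small program: treat each triangle (red half-tile) as a node, link two nodes whenever they share a black edge, and classify each connected component as a ring or a path. A minimal sketch; the data layout (triangles as vertex triples, edges as frozensets) is our own assumption:

```python
from collections import defaultdict

def canal_system(triangles, red_edges):
    """Lemma 7.2 sketch: nodes are triangles; two nodes are linked iff they
    share a black (non-red) edge.  Each triangle has one red and two black
    edges, so every node has degree <= 2 and each component is either a
    ring (cycle) or a path between outer-facet black edges."""
    def tri_edges(t):
        a, b, c = t
        return [frozenset({a, b}), frozenset({b, c}), frozenset({a, c})]

    edge_owners = defaultdict(list)          # black edge -> triangles on it
    for i, t in enumerate(triangles):
        for e in tri_edges(t):
            if e not in red_edges:
                edge_owners[e].append(i)

    links = defaultdict(set)                 # the canal-system graph
    for owners in edge_owners.values():
        if len(owners) == 2:                 # interior black edge: one link
            i, j = owners
            links[i].add(j); links[j].add(i)

    seen, components = set(), []
    for start in range(len(triangles)):
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:
            t = stack.pop(); comp.append(t)
            for u in links[t]:
                if u not in seen:
                    seen.add(u); stack.append(u)
        kind = "ring" if all(len(links[t]) == 2 for t in comp) else "path"
        components.append((kind, sorted(comp)))
    return components

# The tetrahedron K4 is an MPG; red edges {0,1},{2,3} form an R-tiling, and
# Lemma 7.2(c) predicts a single canal ring through all four triangles.
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
red = {frozenset({0, 1}), frozenset({2, 3})}
print(canal_system(triangles, red))   # [('ring', [0, 1, 2, 3])]
```

On a semi-MPG, boundary black edges belong to only one triangle, so they produce no link and the affected components come out as paths, matching Lemma 7.2(b2).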
As for a G-tiling or a B-tiling on an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG, we still obey the rule for choosing directions: "color-1 vertices lie on the canal bank on our right-hand side" for \(GCS\) or \(BCS\).

**Example 7.3** (Conquering Counterexamples 6.11 and 6.12).: According to the hypothesis of Theorem 6.7, we had better focus on an MPG or an \(n\)-semi-MPG. However, here we still try to conquer the last two counterexamples without involving Theorem 6.7. For Example 6.11, we keep most red edges, but let \(v_{6}v_{7}\) be black and let \(v_{0}v_{6}\), \(v_{7}v_{b}\) be red. This is edge-color-switching along a diamond route. The idea of diamond routes will be formally introduced in Part III of our paper. The new R-tiling is shown by the left graph below and it is a "good" R-tiling. As for Example 6.12, we shall go back to Figure 9 and consider the right two graphs. In these two graphs, we left some dashed-lines in advance. These dashed-lines are canal lines according to their own colors. Along these red and green dashed-lines, we perform edge-color-switching and then obtain the right graph in Figure 11 from both (B2) and (B3). This new RGB-tiling is "good" enough to induce a 4-coloring function. The way to perform edge-color-switching along R/G/B canal lines will be formally introduced in Part III. One more thing to do is indicating the directions of the current system by the four red dashed-lines. We made these four directions perfect by using two ins and two outs along the 5-gon alternately. In this way, the two directions on the two sides of every red edge in a diamond are opposite.

With the experience from Examples 6.6, 6.11, 6.12 and 7.3, we claim the following definition to make clear what "good" R-tilings are. Here we define a _grand_ R-tiling in view of red canal banks and a _grand_ R-canal system in view of the orientation of currents. Actually they are the same concept from two points of view.
**Definition 7.4**.: Let \(M\) be an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG with an R-tiling \(T_{r}\). Here we allow two outer facets to share edges. Let \(M_{r}\) denote the subgraph of \(M\) obtained by deleting all black edges and \(M_{bl}\) denote the subgraph of \(M\) obtained by deleting all red edges. We say this R-tiling \(T_{r}\) is a _grand_ tiling if \(V(M)\) can be partitioned into two disjoint parts \(V_{13}\) and \(V_{24}\), i.e., \(V(M)=V_{13}\uplus V_{24}\), such that \(M_{bl}\) is a bipartite graph with bipartite vertex sets \(V_{13}\) and \(V_{24}\), and no red edges link between \(V_{13}\) and \(V_{24}\).

Figure 11. Countering the errors in Examples 6.11 and 6.12

**Lemma 7.5**.: _Let \(M\) be an \((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG. If \(M\) has a grand R-tiling \(T_{r}\), then the two subgraphs induced in \(M\) by \(V_{13}\) and \(V_{24}\) consist of all red edges of \(T_{r}\) and no black edges. Furthermore, for any vertices \(w,x\in V_{13}\) and \(y,z\in V_{24}\), any walk through black edges from \(x\) to \(y\) must have odd length, and any walk through black edges from \(w\) to \(x\) or from \(y\) to \(z\) must have even length._

Why do we use the notation \(V_{13}\) and \(V_{24}\)? Please refer to Figure 1.

**Definition 7.6**.: We say the red canal system \(rCLS\) induced by an R-tiling on \(M\) is a _grand_ canal system if we can arrange orientations for all canal lines such that the flow directions are opposite on the two sides of each red edge in a diamond. Notice that here we do not allow two outer facets to share edges.
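Definition 7.4 is mechanical to test: 2-label the vertices by breadth-first search, flipping the label across black edges and keeping it across red edges (the latter encodes Lemma 7.5). A minimal sketch under our own graph representation:

```python
from collections import deque

def grand_partition(vertices, edges, red_edges):
    """Definition 7.4 sketch: try to split V(M) into V13/V24 so that the
    black subgraph M_bl is bipartite between the parts and every red edge
    stays inside one part.  Returns (V13, V24), or None if not grand."""
    side = {}
    for s in vertices:
        if s in side:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for e in edges:
                if v not in e:
                    continue
                (w,) = e - {v}                       # the other endpoint
                # red edge: same side (Lemma 7.5); black edge: opposite side
                want = side[v] if e in red_edges else 1 - side[v]
                if w not in side:
                    side[w] = want
                    queue.append(w)
                elif side[w] != want:
                    return None
    v13 = {v for v in vertices if side[v] == 0}
    return v13, set(vertices) - v13

# K4 (the tetrahedron, an MPG) with the red perfect matching {0,1},{2,3};
# Theorem 7.9 below says every R-tiling on One Piece is grand.
k4_edges = [frozenset(p) for p in [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]]
red = {frozenset({0, 1}), frozenset({2, 3})}
print(grand_partition([0, 1, 2, 3], k4_edges, red))   # ({0, 1}, {2, 3})
```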
For \(M\) without any edge shared by two outer facets, these two definitions are equivalent by our general rule: when we follow the direction of a red current \(rCL\) (a red canal line with a fixed direction), the red canal bank on the right-hand side, denoted by \(rCL^{r}\) (or \(rCB_{i}\) as a general notation), always belongs to \(V_{13}\), which contains vertex-color 1; and the red canal bank on the left-hand side, denoted by \(rCL^{l}\), always belongs to \(V_{24}\), which has no vertex-color 1. Sometimes we will draw red edges in \(V_{13}\) thicker than those in \(V_{24}\), just like the right two graphs in Figure 1 and the right graph in Figure 5. Given a grand R-tiling \(T_{r}\) on \(M\), the red-connected components \(rC_{i}\) are sorted into two classes: some belong to the subgraph induced in \(M\) by \(V_{13}\), and the others belong to the one induced by \(V_{24}\).

**Lemma 7.7**.: _Given an R-tiling \(T_{r}\) on an MPG or an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG, it is grand if and only if the red canal system \(rCLS\) induced by this R-tiling is grand._

Once an R-tiling is grand, we can set \(V_{13}\) ready for colors 1 and 3, and \(V_{24}\) ready for colors 2 and 4. However, "ready" does not mean a real 4-coloring function; we still need the crucial requirement: each red-connected component \(rC_{i}\) has no red odd-cycle. Once we switch to focus on a G-tiling/B-tiling, we shall follow the general rule: vertices of color 1 are always on the right-hand side if we follow the green/blue current. Therefore, we see \(V_{14}\)/\(V_{12}\) as the canal banks on the right-hand side of all green/blue currents.

**Lemma 7.8**.: _Given an R-tiling on an MPG or an \(n\)-/\((n_{1},n_{2},\ldots,n_{k})\)-semi-MPG, say \(M\), if it is grand and has no red odd-cycle, then it can induce a 4-coloring function on \(M\)._

The next theorem tells us why One Piece is so important.
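Lemma 7.8 is constructive. Assuming the grand partition \(V_{13}\)/\(V_{24}\) of Definition 7.4 is already known, the remaining step is to 2-color every red-connected component (possible exactly when there is no red odd-cycle) and translate the labels into vertex colors. A hypothetical sketch; the representation is our own:

```python
from collections import deque

def finish_four_coloring(vertices, v13, red_edges):
    """Lemma 7.8 sketch, final step: given the grand part V13 (Definition
    7.4) and a red edge set with no red odd-cycle, 2-color each red
    component, then map labels to colors 1/3 inside V13 and 2/4 outside."""
    parity = {}
    for s in vertices:
        if s in parity:
            continue
        parity[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for e in (e for e in red_edges if v in e):
                (w,) = e - {v}
                if w not in parity:
                    parity[w] = parity[v] ^ 1
                    queue.append(w)
                elif parity[w] == parity[v]:
                    return None          # red odd-cycle: a 5th color is forced
    return {v: [[1, 3], [2, 4]][0 if v in v13 else 1][parity[v]]
            for v in vertices}

# K4 with red matching {0,1},{2,3}; its grand partition is V13 = {0,1}.
red = {frozenset({0, 1}), frozenset({2, 3})}
print(finish_four_coloring([0, 1, 2, 3], {0, 1}, red))
# {0: 1, 1: 3, 2: 2, 3: 4} -- a proper 4-coloring of the tetrahedron
```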
**Theorem 7.9** (Theorem for One Piece).: _Every R-tiling on One Piece (which is either an MPG or an \(n\)-semi-MPG, \(n\geq 3\)) must be a grand one._

Proof.: Denote this One Piece by \(M\) and let it have an R-tiling \(T_{r}\). All we need to show is that

[CLAIM] \(V(M)\) can be partitioned into two disjoint parts \(V_{13}\) and \(V_{24}\) such that the subgraph \(M_{bl}\) induced by the black edges of \(T_{r}\) is a bipartite graph with bipartite vertex sets \(V_{13}\) and \(V_{24}\), and no red edges link between \(V_{13}\) and \(V_{24}\).

We will prove it by induction.

[A: \(T_{r}\) on an \(n\)-semi-MPG]: **Case A1.** Suppose all \(n\) edges along the outer facet \(n\)-gon are red. Notice that when we have a border of a 3-gon outer facet, its three edges do not need to follow Definition 5.1. Without loss of generality, we set the vertices of these \(n\) red edges along the unique outer facet to be a part of \(V_{13}\). Let Figure 12 be a demonstration, where \(M\) has an 8-gon outer facet \(v_{0}\)-\(v_{1}\)-\(\ldots\)-\(v_{7}\)-\(v_{0}\). We also denote by \(rC_{1}\) the red-connected component that contains these \(n\) red edges. The red edges of \(rC_{1}\) are exactly the thick ones in Figure 12. Let us choose any red edge \(e\) (say \(v_{0}v_{7}\)) along this outer facet, and then we follow the red canal line \(rCL\) generated by the \(e\)-triangle (there is no \(e\)-diamond) and indicate the direction of the current to make \(e\) be on the right-hand side, because \(v_{0},v_{7}\in V_{13}\). This \(rCL\), the red dashed line with direction in Figure 12, must be a ring because all edges along the outer facet are red. Obviously all red edges of the canal bank on the right of \(rCL\), whose edge set is denoted by \(rCL^{r}\), belong to \(rC_{1}\). And then we can name \(rC_{2}\) the red-connected component that contains those red edges of the canal bank on the left of \(rCL\), whose edge set is denoted by \(rCL^{l}\).
In Figure 12, \(rCL^{r}\) and \(rCL^{l}\), separated by \(rCL\), are definitely red-disconnected; they are kind of parallel and linked by black edges in a zigzag drawing. (PS. This is not always true if the surface is not planar. The surface of a torus can provide a counterexample.) How do we guarantee [CLAIM]? At this stage (and every new stage), we only consider those vertices and edges involved by \(rCL\) (respectively \(rCL_{i}\)). We see the right and the left canal banks of \(rCL\) as two groups of vertices, say \(V(rCL^{r})\) and \(V(rCL^{l})\), which are subsets of the final \(V_{13}\) and \(V_{24}\) respectively. If we deal with \(rCL_{i}\) at some following stage, exactly one of \(V(rCL_{i}^{r})\) and \(V(rCL_{i}^{l})\) inherits the mark obtained from either \(V_{13}\) or \(V_{24}\); it is at least one because the graph is connected; it is only one because \(M\) is a planar graph and One Piece.\({}^{2}\) The two groups of vertices \(V(rCL^{r})\) and \(V(rCL^{l})\), separated by \(rCL\), are linked only by black edges. Also those edges linking among vertices in \(V(rCL^{r})\) (or in \(V(rCL^{l})\)) are red edges forming the wall of the canal bank of \(rCL\) at this stage. Therefore, the [CLAIM] is good for \(\bar{V}:=V(rCL^{r})\cup V(rCL^{l})\) and those red edges and black edges involved by \(rCL\).

Figure 12. Proving process of Case A1

Not only do \(V(rCL^{r})\) and \(V(rCL^{l})\) take care of themselves, they also inherit their marks obtained from \(V_{13}\) and \(V_{24}\) and bring the marks to the following stages of our induction proof, because on the other side of every non-deja-vu and non-outer-facet red edge we will deal with another \(rCL_{i}\). We shall keep adding new vertices into \(\bar{V}\) in a bipartite way and check the behavior of those red edges and black edges involved by the new \(rCL_{i}\). Once we obtain \(V(M)=\bar{V}\), the proof of [CLAIM] is done. Now we need to do some preparation for the coming stages.
We shall

* (Blk) delete all black edges along \(rCL\), and
* (Red) delete all deja-vu edges along \(rCL\), as well as the red edges that are both on the right of \(rCL\) and along the \(n\)-gon.

For our example, step (Red) deletes the four deja-vu edges, namely \(bw\), \(wx\), \(yz\) and \(st\), and also deletes \(v_{4}v_{5}\), \(v_{5}v_{6}\), \(v_{6}v_{7}\) and \(v_{7}v_{0}\). The remaining subgraph consists of several parts as red/black-connected components. Actually every part must be red-2-connected along its border. Parts that are only red-1-connected shall be treated as different parts rather than one part.

(i) The easiest kind of part is a single vertex incident to no edges, for example, \(v_{5}\), \(v_{6}\), \(v_{7}\), \(x\), \(y\) and \(t\). At this new stage, a single vertex is the base case in our induction proof. Whether this single vertex belongs to \(V(rCL^{r})\) or \(V(rCL^{l})\) is easy to judge by \(rCL\).

(ii) Some parts are on the right of \(rCL\), each of which is again an \(n_{i}\)-semi-MPG with all \(n_{i}\) edges along its outer facet \(n_{i}\)-gon being red. Let us denote these parts by \(M_{i}^{r}\). Some \(M_{i}^{r}\) and \(M_{j}^{r}\) might be red-1-connected; so we shall treat them as two distinct \(n_{i}\)-gon and \(n_{j}\)-gon outer facets as well as \(n_{i}\)- and \(n_{j}\)-semi-MPG's. For our example, we only have \(M_{1}^{r}\), which has two big red cycles that are red-2-connected. The outer facet of each \(M_{i}^{r}\) contains some vertices of the right canal bank of \(rCL\) (or \(V(rCL^{r})\)); therefore, all \(n_{i}\) vertices of this outer facet inherit the mark from \(V_{13}\) made by the previous stage of our induction. Now let us redo the process of Case A1 on these \(M_{i}^{r}\).

(iii) Some parts are on the left of \(rCL\), and let us mark them by \(M_{j}^{l}\). In Figure 12, we have \(M_{1}^{l}\) and \(M_{2}^{l}\).
(Let us further think about a new situation: if we merge vertices \(b\) and \(w\), then \(M_{1}^{l}\) and \(M_{2}^{l}\) are red-1-connected; so they shall still be treated as two independent semi-MPG's.) The argument is nearly the same as in (ii), except that all vertices of these outer facets shall inherit the mark from \(V_{24}\) made by the previous stage of our induction. Keep working on (ii) or (iii) by redoing the process of Case A1 until all threads are terminated by (i). The process and [CLAIM] will finally be done.

**Case A2.** Suppose there are \(2k\) black edges (\(n\geq 2k\geq 0\)) along this single outer facet. This number must be even due to Lemma 6.2(a). We are going to apply an induction proof for Case A2 by reducing this index \(k\). Notice that \(k=0\) is exactly Case A1 and it plays the role of the base case of our induction proof in this portion. The argument of the proof of Case A2 is similar to the proof of Case A1; so, we just sketch the main ideas of this proof. Especially, how do we guarantee [CLAIM]? Just follow the argument given in Case A1. Referring to Lemma 7.2(c), all black edges along the unique outer facet form a non-crossing match by the connection of entrances and exits of the red canal system of \(T_{r}\). Let us just choose a pair of black edges of this match, namely \(e_{1}\) and \(e_{2}\) in Figure 13 as a demonstration. A directed current (red canal line) \(rCL\) is immediately provided. Again, we shall perform the deletions (Blk) and (Red); in addition we shall delete \(e_{1}\) and \(e_{2}\).

Figure 13. Proving process of Case A2

Like Case A1, we now have three kinds of parts as red/black-connected components: (i) a single vertex incident to no edges, (ii) \(M_{i}^{r}\), (iii) \(M_{j}^{l}\). The argument of induction to show [CLAIM] true is nearly the same. The key of the induction is that \(e_{1}\) and \(e_{2}\) have been deleted, and then each \(M_{i}^{r}\) and \(M_{j}^{l}\) has at most \(2k-2\) black edges along its unique outer facet.
We can perform this induction proof with this important key. This finishes the sketch of the main ideas of the proof. [B: \(T_{r}\) on an MPG]: Every \(rCL_{i}\) formed by this \(T_{r}\) is a ring. We simply choose one \(rCL\) and perform the same process given in Case A1; then we obtain three kinds of parts as red/black-connected components: (i) a single vertex incident to no edges, (ii) \(M_{i}^{r}\), (iii) \(M_{j}^{l}\). The key of the induction is that each \(M_{i}^{r}\) and \(M_{j}^{l}\) has all edges of its outer facet red, so we shall directly apply Case A1 to each \(M_{i}^{r}\) and \(M_{j}^{l}\). This finishes the sketch of the main ideas of the proof of this portion. It is the right time to cash in the previous rain check.

Proof.: **[(c) \(\Rightarrow\) (a) for Theorem 6.7]:** An R-tiling on an \(M\), an MPG or an \(n\)-semi-MPG (One Piece), must be a grand one by Theorem 7.9. So we have \(V(M)=V_{13}\uplus V_{24}\), where \(V_{13}\) is ready for colors 1 and 3 and \(V_{24}\) is ready for colors 2 and 4. The absence of red odd-cycles in \(T_{r}\) guarantees that a 4-coloring can be completed.

Now we give a generalization of Theorem 7.9.

**Lemma 7.10**.: _Let \(M\) be an MPG, a \(k\)-semi-MPG, or a \((k_{1},k_{2},\ldots,k_{t})\)-semi-MPG with an R-tiling \(T_{r}\). Here we allow two outer facets to share edges. The following three are equivalent:_

* _(a) The number of black edges along any cycle in_ \(M\) _is always even._
* _(b) The number of black edges along any outer facet in_ \(M\) _is always even._
* _(c) The red tiling_ \(T_{r}\) _is grand (Definition_ 7.4_, not Definition_ 7.6_)._

Proof.: [(a) \(\Leftrightarrow\) (b)]: Item (b) is weaker than (a), thus "(a) \(\Rightarrow\) (b)" is trivial. Let us prove "(a) \(\Leftarrow\) (b)." 
Given any cycle \(C\) that is not an outer facet in \(M\), inside \(C\) there are some outer facets or possibly none, i.e., the structure inside \(C\) is another MPG or another \((m_{1},m_{2},\ldots,m_{h})\)-semi-MPG, denoted by \(M_{C}\), where the cycle \(C\) itself forms an outer facet in \(M_{C}\). By Lemma 6.2(a'), the total number of black edges in \(\Omega(M_{C})\) is even. As (b) provides, the total number of black edges along the original outer facets of \(M\) inside \(C\) is also even, and then we conclude that \(C\) must have an even number of black edges. [(a) \(\Leftarrow\) (c)]: Given (c), the subgraph \(M_{bl}\), which is induced by the black edges of \(T_{r}\), is bipartite with partite sets \(V_{13}\) and \(V_{24}\); also, there is no red edge linking between \(V_{13}\) and \(V_{24}\). Consider any cycle with red and black edges. If this cycle is made of purely black edges, then it definitely has even length. Suppose there are some red edges along this cycle. We simply remove these red edges, each of which only links vertices within \(V_{13}\) or within \(V_{24}\). So the total number of black edges zigzagging between \(V_{13}\) and \(V_{24}\) is even. [(a) \(\Rightarrow\) (c)]: Given (a), we need to show that \(M_{bl}\), the subgraph induced by the black edges of \(T_{r}\), is bipartite. There is an elementary equivalent condition for a bipartite graph: a graph is bipartite if and only if every cycle of the graph has even length, including the situation that there is no cycle at all. Statement (a) provides that all possible black cycles have even length; hence \(M_{bl}\) is a bipartite graph. To finish this proof, we still need to show that any red edge \(ab\) of \(T_{r}\) only links vertices in the same partite set. 
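The elementary bipartite criterion just quoted can be checked mechanically: a BFS 2-coloring either produces the bipartition or meets a monochromatic conflict, certifying an odd cycle. A minimal sketch, with a hypothetical adjacency-list input:

```python
from collections import deque

def bipartition(adj):
    """Try to 2-color the graph given as {vertex: set of neighbors}.
    Returns (True, coloring) if bipartite, else (False, None); a failure
    certifies an odd cycle, matching the criterion quoted above."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0                    # new component: arbitrary side
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:  # both ends on one side: odd cycle
                    return False, None
    return True, color

# An even cycle (square) is bipartite; a triangle is not.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(bipartition(square)[0], bipartition(triangle)[0])  # True False
```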
(A): If we do not allow two outer facets to share edges, then \(M_{bl}\) must be black-connected, because after removing any red edge \(ab\) from \(M\), the vertices \(a\) and \(b\) are still connected by at least two black edges belonging to the red triangle or red half-tile of \(ab\). By black-connectivity, the bipartition of \(M_{bl}\) into \(V_{13}\) and \(V_{24}\) is unique, up to the symmetric switch of 13 and 24 over the whole \(V(M)\). In this case, the end vertices \(a\) and \(b\) of any red edge \(ab\) must both belong to \(V_{13}\) or both to \(V_{24}\); the reason again comes from the two black edges of the half-tile of \(ab\). (B): However, if we allow two outer facets to share edges, then \(M_{bl}\) might be black-disconnected. Here we offer two different ways, (B1) and (B2), to conquer this situation. (B1) For any shared edge \(ab\) with red color, we simply draw two extra black edges to make a new red half-tile of \(ab\). The hypothesis of (a) still holds for this new \((k^{\prime}_{1},k^{\prime}_{2},\dots,k^{\prime}_{t})\)-semi-MPG \(M^{\prime}\) with a new R-tiling \(T^{\prime}_{r}\), where \(M^{\prime}\) has no shared edges between two outer facets. Therefore, (A) guarantees that \(M^{\prime}\) is correct for (c), and then \(M^{\prime}\) guarantees that \(M\) is correct for (c) after removing all extra black edges. Then the proof is complete. (B2) Removing the shared red edges of two outer facets, whose edge set we denote by \(SRE\), we might get a disconnected subgraph, say \(\bar{M}_{0}\), with components \(T_{1},T_{2},\dots,T_{p}\). In particular, each \(T_{i}\) is black-connected and is correct for (c) with two partite sets of its vertices. Since \(M\) is connected, there is a subset \(SRE^{\prime}\) of \(SRE\) such that restoring all red edges in \(SRE^{\prime}\) back to \(\bar{M}_{0}\) links all of \(T_{1},T_{2},\dots,T_{p}\) to be 1-connected, i.e., if we treat each \(T_{i}\) as a single node then \(SRE^{\prime}\) induces a tree; denote the resulting graph by \(\bar{M}_{1}\). 
Now we can unify all pairs of partite sets of the \(T_{i}\) to associate with their own family, \(V_{13}\) or \(V_{24}\), through this connection by \(SRE^{\prime}\); also, all red edges so far definitely link vertices within \(V_{13}\) or within \(V_{24}\). What about the remaining red edges in \(SRE-SRE^{\prime}\)? We shall start with \(\bar{M}_{1}\) and restore these red edges one by one. Once we return a red edge \(ab\in SRE-SRE^{\prime}\) back to \(\bar{M}_{i}\) to reach the current stage \(\bar{M}_{i+1}\), all cycles passing through \(ab\) at this current stage have an even number of black edges by (a); obviously, both \(a\) and \(b\) belong to either \(V_{13}\) or \(V_{24}\) at the previous stage \(\bar{M}_{i}\); so this red edge \(ab\) satisfies (c). Notice that this returning method has to be one-by-one; for instance, we cannot deal with \(ab,a^{\prime}b^{\prime}\in\bar{M}_{i+1}-\bar{M}_{i}\) simultaneously, because a cycle passing through both \(ab\) and \(a^{\prime}b^{\prime}\) would cause a nightmare. Once \(\bar{M}_{j}=M\), the proof of (B2) is complete.

_Remark 7.11_.: Lemma 6.2(a) and (a') check the parity of the total number of black edges along all outer facets of a semi-MPG with an R-tiling, while Lemma 7.10(b) also examines all outer facets, but checks them one by one.

**Theorem 7.12** (The First Fundamental Theorem v2: a generalized version of Theorem 6.7).: _Let \(M\) be an MPG or an \(n\)-/\((n_{1},n_{2},\dots,n_{k})\)-semi-MPG. Here we allow two outer facets to share edges in \(M\). The following are equivalent:_

* _(a)_ \(M\) _is 4-colorable._
* _(b)_ \(M\) _has an RGB-tiling such that along every_ \(m\)_-cycle in_ \(M\) _the numbers of red, green and blue edges are all even if_ \(m\) _is even, and all odd if_ \(m\) _is odd._
* _(c)_ \(M\) _has an RGB-tiling such that along every_ \(n_{i}\)_-gon outer facet the numbers of red, green and blue edges are all even if_ \(n_{i}\) _is even, and all odd if_ \(n_{i}\) _is odd._
* _(d)_ 
\(M\) _has a grand R-tiling without red odd-cycles._

_Proof._ [(b) \(\Leftrightarrow\) (c)]: An RGB-tiling is also three coexisting R-, G- and B-tilings. We just apply Lemma 7.10(a) and (c) w.r.t. the R-, G- and B-tilings, and then we obtain "(b) \(\Leftrightarrow\) (c)" of this theorem. [(a) \(\Rightarrow\) (b)]: Let \(f:V(M)\rightarrow\{1,2,3,4\}\) be a 4-coloring function and \(C:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{m}\) be any \(m\)-cycle in \(M\). By symmetry, we only show a general claim: the total number of red edges and blue edges along \(C\) is even. The function \(f\) restricted to \(C\) can be traced as a corresponding closed walk around the middle graph in Figure 1, i.e., we just follow the numbers \(f(v_{i})|_{v_{i}\in C}\). We only consider red edges and blue edges, which are the two vertical lines and the two diagonal lines of the middle graph in Figure 1; the two horizontal lines are ignored. Let us consider an invisible horizontal line \(\ell\) which lies in the middle of "the middle graph in Figure 1." Since the trace is a closed walk, there are as many upward crossings of \(\ell\) (including "up" vertically and "up" diagonally) as downward crossings of \(\ell\). Therefore, the total number of red edges and blue edges along the closed walk, as well as along \(C\), must be even. Thus the claim is proved. [(b) \(\Rightarrow\) (d)]: Let \(T_{r}\) be the R-tiling induced by the RGB-tiling on \(M\) claimed in (b). By the equivalence given in Lemma 7.10, \(T_{r}\) is a grand R-tiling when we treat green edges and blue edges as black. If there is a red cycle in \(M\), the cycle must be even, because in this red cycle the numbers of green edges and blue edges are both zero. [(d) \(\Rightarrow\) (a)]: We already have Lemma 7.8 to prove this direction. \(\square\) Let us think differently. 
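The crossing-count argument in [(a) \(\Rightarrow\) (b)] can be replayed numerically. As an encoding assumption on our part (equivalent to reading the middle graph of Figure 1), write the four colors as 0-3 and label each edge by the XOR of its endpoint colors; the three nonzero labels play the roles of red, green and blue, and along any properly 4-colored cycle each label occurs with the parity of the cycle length:

```python
import itertools

def edge_label_counts(coloring):
    """coloring: colors 0..3 around a cycle, adjacent entries distinct.
    Edge label = XOR of endpoint colors, one of {1, 2, 3}."""
    m = len(coloring)
    counts = {1: 0, 2: 0, 3: 0}
    for i in range(m):
        counts[coloring[i] ^ coloring[(i + 1) % m]] += 1
    return counts

# Exhaustively check the parity claim on all proper 4-colorings of short cycles.
for m in (3, 4, 5, 6):
    for col in itertools.product(range(4), repeat=m):
        if any(col[i] == col[(i + 1) % m] for i in range(m)):
            continue                       # not a proper coloring of the cycle
        counts = edge_label_counts(col)
        assert all(c % 2 == m % 2 for c in counts.values())
print("parity claim verified for cycles of length 3..6")
```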
Suppose that finding a 4-coloring function is not our primary purpose, but we do need to know whether this R-tiling can induce a coexisting G- or B-tiling or not. The coexistence property for the whole \(M\) can be divided into small pieces to check, i.e., we shall check each red canal line \(rCL_{i}\). The black edges zigzagging along \(rCL_{i}\) are supposed to be colored green and blue alternately. There is no problem for an \(rCL_{i}\) that is a line; however, for an \(rCL_{i}\) that is a ring, we must require the number of triangles along \(rCL_{i}\) to be even. Furthermore, let us study the parity property that connects the triangles along \(rCL_{i}\) and the cycles of the red canal bank along \(rCL_{i}\). **Lemma 7.13**.: _Let \(M\) be an MPG or a \(k\)-/\((k_{1},k_{2},\ldots,k_{t})\)-semi-MPG with an R-tiling \(T_{r}\). Let us choose any red canal line \(rCL\) that is a ring, if there is one. Recall the notation \(M_{i}^{r}\) and \(M_{j}^{l}\) defined in the proof of Theorem 7.9; but this time (Red): only delete deja-vu edges along \(rCL\)._ * _All_ \(M_{i}^{r}\)_/_\(M_{j}^{l}\) _are semi-MPG's._ * _We are only interested in those_ \(n_{i}\)_-/_\(m_{j}\)_-gon outer facets of_ \(M_{i}^{r}\)_/_\(M_{j}^{l}\) _along_ \(rCL\)_. The number of triangles along_ \(rCL\) _is even if and only if the sum_ \(\sum_{i}n_{i}+\sum_{j}m_{j}\) _is even._ Proof.: We would like to use Figure 14 as an example. After deletion by (Blk) and (Red), we obtain \(M_{1}^{r}\), \(M_{1}^{l}\) and \(M_{2}^{l}\), respectively with 11-, 3- and 6-gons, which are marked by black dots; so \(\sum_{i}n_{i}+\sum_{j}m_{j}=20\). Notice that \(M_{1}^{r}\) is an \((11,9)\)-semi-MPG and we are only interested in its 11-gon. There are 28 triangles along \(rCL\). The difference, \(28-20=8\), is caused by the 4 deja-vu edges, each counted twice. 
Here we need a general identity, good for both a ring and a path: \[\#(\text{triangles along }rCL)=e(rCL^{r})+e(rCL^{l}),\] where \(e(rCL^{*})\) is the number of red edges along the canal bank \(rCL^{*}\), with each deja-vu edge counted with multiplicity 2. Because each deja-vu edge is counted with multiplicity 2, the parity of \(e(rCL^{r})+e(rCL^{l})\) equals the parity of \(\sum_{i}n_{i}+\sum_{j}m_{j}\). In a general graph, given a circuit or closed walk (of odd length) made by an edge sequence \(W\), there exists a subsequence \(C\subseteq W\) such that \(C\) is a minimal cycle (of odd length, respectively). Let us go back to Figure 14: we are interested in the 11-, 3- and 6-gon red cycles with vertices marked by black dots. Precisely, we consider them the _minimal_ red cycles w.r.t. this \(rCL\). Literally, a minimal red cycle means there is no way to shrink it smaller. However, what is "smaller"? An MPG is a kind of sphere, and there are two sides on which to make a cycle smaller. That is why we need to add "w.r.t. this \(rCL\)".

**Example 7.14**.: In Figure 14 for instance, the 3-gon/6-gon red cycle is minimal not only w.r.t. this \(rCL\) but also w.r.t. another canal line inside itself. Let us look at \(M_{1}^{r}\) in Figure 14. There are seven different red cycles (eight if we include the empty cycle). Obviously, 3 of them are minimal ones, namely the 11-gon, 7-gon and 6-gon, each minimal w.r.t. its own \(rCL_{i}\). The remaining four cycles are made by combinations of these three minimal ones. Having all red cycles of \(T_{r}\) of even length is the best. In this way we can achieve a coexisting RGB-tiling. However, this RGB-tiling might not be grand for its R-, G- and B-tilings; such examples have been shown in Example 6.12. Anyway, once an RGB-tiling exists, there are \(2^{N}\) different coexisting RGB-tilings induced by this R-tiling, where \(N\) is the total number of red canal lines \(rCL_{i}\), including both rings and paths.

Figure 14. Process of Lemma 7.13

_Remark 7.15_.: In order to get an RGB-tiling \(T_{rgb}\) on an MPG or an \(n\)-\(/(n_{1},\dots,n_{k})\)-semi-MPG \(M\), we shall first try to pave an R-tiling \(T_{r}\). Suppose that we happen to obtain a red odd-cycle \(C_{r}\) in the middle of our work. Let \(\Sigma^{+}\) (\(\Sigma^{-}\)) denote the subgraph of \(M\) inside (outside) of \(C_{r}\). If \(\Sigma^{+}\) is a \(|C_{r}|\)-semi-MPG with the only outer facet \(C_{r}\), then we might achieve an R-tiling \(T_{r}\) on \(\Sigma^{+}\), but never an RGB-tiling. Why? Let us consider the last step to achieve a possible \(T_{r}\) on \(\Sigma^{+}\). This last step can go two possible ways: (1) to pave a red diamond inside \(C_{r}\), or (2) to pave a red triangle along \(C_{r}\). Just before this last step, we see a \((4,|C_{r}|)\)-semi-MPG \(M_{1}\) for (1), or a \((3,|C_{r}|)\)-semi-MPG \(M_{2}\) for (2), where the 3-gon and \(|C_{r}|\)-gon outer facets share the same red edge. Now we shall refer to Corollary 6.4 and Remark 6.5: the all-even/all-odd property cannot hold for \(M_{1}\) and \(M_{2}\). We will see \(M=EP\) with both \(\Sigma^{+}\) and \(\Sigma^{-}\) in the form of \(M_{2}\) in Part II of this paper.

## 8. Degree 5 vertices in any \(EP\)

Referring to Theorem 3.1 and \(EP\in e\mathcal{MPG}\mathcal{N}\subseteq m\mathcal{N}4\), we know any \(v\in V(EP)\) must have \(\deg(v)\geq 5\). An important piece of preliminary knowledge concerns the vertices of degree exactly 5 in \(EP\in e\mathcal{MPG}\mathcal{N}4\). Given a graph \(G\), let \(V_{k}(G)\) (or simply \(V_{k}\)) denote the set of vertices of degree \(k\) in \(G\), and let \(\#V_{k}(G)\) (or simply \(\#V_{k}\)) denote the cardinality of \(V_{k}(G)\). If \(G\) is a planar graph, the general notation \(\#V,\#E,\#F\) denotes the numbers of vertices, edges and facets in \(G\) respectively. 
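For an MPG, these three numbers are not independent: every facet being a triangle gives \(3\#F=2\#E\), which combined with Euler's formula forces \(\#E=3\#V-6\) and \(\#F=2\#V-4\). A quick check of this standard consequence:

```python
def mpg_counts(num_vertices):
    """Edge and facet counts forced on any MPG with this many vertices,
    from Euler's formula together with 3#F = 2#E."""
    assert num_vertices >= 3
    num_edges = 3 * num_vertices - 6
    num_facets = 2 * num_vertices - 4
    # Euler's formula and the triangulation relation both hold:
    assert num_vertices - num_edges + num_facets == 2
    assert 3 * num_facets == 2 * num_edges
    return num_edges, num_facets

print(mpg_counts(4))   # tetrahedron: (6, 4)
print(mpg_counts(12))  # icosahedron: (30, 20)
```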
**Theorem 8.1**.: _Let \(G\) be an MPG with the degrees of all vertices at least 5. We have the identity_

\[\#V_{5}=12+\#V_{7}+2\#V_{8}+3\#V_{9}+\cdots \tag{8.1}\]

_In particular, there are at least 12 vertices of degree exactly 5._

Proof.: Let us first ignore the requirement that all degrees are at least \(5\), and use only the minimum requirement that all vertices have degree at least \(2\), which keeps the triangular facets of \(G\) well-formed. We consider four equations: \(\#V-\#E+\#F=2\) (Euler's formula), \(3\#F=2\#E\) (every facet is a triangle), \(\Sigma_{k\geq 2}k\#V_{k}=2\#E\), and \(\#V=\Sigma_{k\geq 2}\#V_{k}\). \[6\#V-(2+4)\#E+6\#F = 12;\] \[6\Sigma_{k\geq 2}\#V_{k}-(\Sigma_{k\geq 2}k\#V_{k}+6\#F)+6\#F = 12; \tag{8.2}\] \[4\#V_{2}+3\#V_{3}+2\#V_{4}+\#V_{5} = 12+\Sigma_{k\geq 7}(k-6)\#V_{k}.\] By hypothesis, \(\#V_{2}=\#V_{3}=\#V_{4}=0\). The proof is done.

_Remark 8.2_.: Since every \(EP\in e\mathcal{MPGN}4\) has all its vertices of degree at least \(5\), \(EP\) has at least \(12\) vertices of degree exactly \(5\). What an interesting minimum number \(12\) is! This minimum situation exactly corresponds to an icosahedron, which is a convex polyhedron with \(20\) faces (triangular facets), \(30\) edges and \(12\) vertices. If only \(\#V_{3}\) is non-zero, then \(\#V_{3}=4\) exactly corresponds to a tetrahedron. If only \(\#V_{4}\) is non-zero, then \(\#V_{4}=6\) exactly corresponds to an octahedron. For a general situation, without requiring some \(V_{i}\) to be \(0\), what is a combinatorial description or explanation for Equation 8.2?

### Subgraphs created by an RGB-tiling

In this subsection, we consider a fixed MPG \(M\) with a fixed RGB-tiling \(T_{rgb}\). Let \(T_{r}\), \(T_{g}\) and \(T_{b}\) be the R-tiling, G-tiling and B-tiling induced by \(T_{rgb}\) respectively. We also use \(T_{r}\), \(T_{g}\) and \(T_{b}\) to represent the sets of red, green and blue edges respectively. 
For any vertex \(v\in V(M)\), it is natural to define \(\deg_{r}(v)\) to be the number of red edges incident to \(v\), and similarly \(\deg_{g}(v)\) and \(\deg_{b}(v)\). First, we focus on \(M-T_{r}\), which is also a bipartite graph made by the black edges \({T_{r}}^{-1}\)(black). The planar graph \(M-T_{r}\) is made of squares, which come from diamonds with the red middle edges removed. Let us define \(\deg^{\bar{r}}(v):=\deg_{g}(v)+\deg_{b}(v)\) for every \(v\in V(M-T_{r})\). The corresponding numbers \(\#V^{\bar{r}}\), \(\#E^{\bar{r}}\), \(\#F^{\bar{r}}\) and \(\#V_{j}^{\bar{r}}\) for \(M-T_{r}\) are nearly duplicated from the previous discussion. Clearly \(\#V^{\bar{r}}=\#V\), \(3\#E^{\bar{r}}=2\#E\) and \(2\#F^{\bar{r}}=\#F\). The following derivation steps are the same as in the previous discussion. We have \(\#V^{\bar{r}}-\#E^{\bar{r}}+\#F^{\bar{r}}=2\) (Euler's formula), \(4\#F^{\bar{r}}=2\#E^{\bar{r}}\) (four black edges form a square), \(\Sigma_{j\geq 2}j\#V_{j}^{\bar{r}}=2\#E^{\bar{r}}\), and \(\#V^{\bar{r}}=\Sigma_{j\geq 2}\#V_{j}^{\bar{r}}\). \[4\#V^{\bar{r}}-(2+2)\#E^{\bar{r}}+4\#F^{\bar{r}} = 8;\] \[4\Sigma_{j\geq 2}\#V_{j}^{\bar{r}}-\left(\Sigma_{j\geq 2}j\#V_{j}^{ \bar{r}}+4\#F^{\bar{r}}\right)+4\#F^{\bar{r}} = 8; \tag{8.3}\] \[2\#V_{2}^{\bar{r}}+\#V_{3}^{\bar{r}} = 8+\Sigma_{j\geq 5}(j-4)\#V_{j}^{\bar{r}}.\] The basic requirement of an RGB-tiling is three different edge colors in every triangle. This nature tells us: \[\delta(\deg(v)\mbox{ is odd})\leq \deg_{r}(v) \leq\lfloor\deg(v)/2\rfloor; \tag{8.4}\] \[\lceil\deg(v)/2\rceil\leq \deg^{\bar{r}}(v) \leq\deg(v)-\delta(\deg(v)\mbox{ is odd}),\] where \(\delta(*)\) is the Kronecker delta function. Let us denote by \(\#V_{k,j}^{\bar{r}}\) the number of vertices \(v\) in \(M\) with \(\deg(v)=k\) and \(\deg^{\bar{r}}(v)=j\). 
By Inequality 8.4, we have \[\#V_{k}=\sum_{j=\lceil k/2\rceil}^{k-\delta(k\mbox{ is odd})}\#V_{k,j}^{\bar{r}}.\] Especially, we have \[\#V_{4} = \#V_{4,2}^{\bar{r}}\;+\;\#V_{4,3}^{\bar{r}}\;+\;\#V_{4,4}^{\bar{ r}};\] \[\#V_{5} = \#V_{5,3}^{\bar{r}}\;+\;\#V_{5,4}^{\bar{r}};\] \[\#V_{6} = \#V_{6,3}^{\bar{r}}\;+\;\#V_{6,4}^{\bar{r}}\;+\;\#V_{6,5}^{\bar{ r}}\;+\;\#V_{6,6}^{\bar{r}};\] \[\#V_{7} = \#V_{7,4}^{\bar{r}}\;+\;\#V_{7,5}^{\bar{r}}\;+\;\#V_{7,6}^{\bar{ r}}.\] We show these identities of \(\#V_{k}\) for the first few \(k\). The index \(k\) starts with \(4\), because each MPG that we will deal with has at most one vertex of degree \(4\), with the rest of its vertices of degree at least \(5\). We also use \(\#V_{k,i}^{R}\) to denote the number of vertices in \(M\) which have degree \(k\) and red-degree \(i\); the superscripts \(R\) and \(\bar{r}\) in \(\#V_{k,i}^{R}\) and \(\#V_{k,j}^{\bar{r}}\) are easy to distinguish. Clearly \(\#V_{k,i}^{R}=\#V_{k,k-i}^{\bar{r}}\). Using Equation 8.2 and Inequality 8.4, we plan to develop a skill to estimate how many degree 6 vertices are neighbors of a degree 5 vertex in any \(EP\in e\mathcal{MPGN}\).
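As a numerical sanity check of the counting identity in Equation 8.2, we can evaluate both sides on degree sequences of MPGs, including the three regular triangulations named in Remark 8.2. (The non-regular sequence below is the one obtained by a single diagonal flip in the icosahedron, stated here as an illustrative assumption rather than a claim from the text.)

```python
from collections import Counter

def check_identity_8_2(degrees):
    """Check 4#V_2 + 3#V_3 + 2#V_4 + #V_5 = 12 + sum_{k>=7} (k-6)#V_k
    (Equation 8.2) for a given degree sequence of an MPG."""
    V = Counter(degrees)
    lhs = 4 * V[2] + 3 * V[3] + 2 * V[4] + V[5]
    rhs = 12 + sum((k - 6) * n for k, n in V.items() if k >= 7)
    return lhs == rhs

# Remark 8.2: tetrahedron, octahedron, icosahedron.
assert check_identity_8_2([3] * 4)    # 3 * 4 = 12
assert check_identity_8_2([4] * 6)    # 2 * 6 = 12
assert check_identity_8_2([5] * 12)   # #V_5 = 12
# Icosahedron after one diagonal flip: two degrees drop 5 -> 4, two rise 5 -> 6.
assert check_identity_8_2([4] * 2 + [5] * 8 + [6] * 2)
# The identity fails on a sequence that cannot come from an MPG:
assert not check_identity_8_2([5] * 11)
print("Equation 8.2 verified on the sample degree sequences")
```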
2309.17342
Towards Free Data Selection with General-Purpose Models
A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets. However, current approaches, represented by active learning methods, typically follow a cumbersome pipeline that iterates the time-consuming model training and batch data selection repeatedly. In this paper, we challenge this status quo by designing a distinct data selection pipeline that utilizes existing general-purpose models to select data from various datasets with a single-pass inference without the need for additional training or supervision. A novel free data selection (FreeSel) method is proposed following this new pipeline. Specifically, we define semantic patterns extracted from intermediate features of the general-purpose model to capture subtle local information in each image. We then enable the selection of all data samples in a single pass through distance-based sampling at the fine-grained semantic pattern level. FreeSel bypasses the heavy batch selection process, achieving a significant improvement in efficiency and being 530x faster than existing active learning methods. Extensive experiments verify the effectiveness of FreeSel on various computer vision tasks. Our code is available at https://github.com/yichen928/FreeSel.
Yichen Xie, Mingyu Ding, Masayoshi Tomizuka, Wei Zhan
2023-09-29T15:50:14Z
http://arxiv.org/abs/2309.17342v2
# Towards Free Data Selection with General-Purpose Models

###### Abstract

A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets. However, current approaches, represented by active learning methods, typically follow a cumbersome pipeline that iterates the time-consuming model training and batch data selection repeatedly. In this paper, we challenge this status quo by designing a distinct data selection pipeline that utilizes existing general-purpose models to select data from various datasets with a single-pass inference without the need for additional training or supervision. A novel free data selection (FreeSel) method is proposed following this new pipeline. Specifically, we define semantic patterns extracted from intermediate features of the general-purpose model to capture subtle local information in each image. We then enable the selection of all data samples in a single pass through distance-based sampling at the fine-grained semantic pattern level. FreeSel bypasses the heavy batch selection process, achieving a significant improvement in efficiency and being 530\(\times\) faster than existing active learning methods. Extensive experiments verify the effectiveness of FreeSel on various computer vision tasks. Our code is available at [https://github.com/yichen928/FreeSel](https://github.com/yichen928/FreeSel).

## 1 Introduction

Deep Neural Network (DNN) models have achieved remarkable progress in various tasks, benefiting from abundant training samples and labels. Unfortunately, data labeling tends to be time-consuming and costly, especially for dense prediction tasks such as object detection and semantic segmentation, where experts may spend up to 90 minutes per image [33]. As such, effectively exploiting the limited annotation budget has become a long-standing problem in the advancement of computer vision. 
Many methods have been proposed to identify the most suitable samples for annotation, where the mainstream follows the active learning [43; 45] or subset selection [42] pipelines. However, both kinds of methods rely on task-specific models. As the most popular data selection strategy, active learning algorithms employ a time-consuming and computationally expensive batch selection strategy [44], as shown in Fig. 1a. Specifically, a task-specific model is first trained using a small initial set of labeled samples. Then, the model is utilized to select images within a specified batch budget size. These selected images are annotated and added to the labeled pool, after which the model is retrained or fine-tuned using all the labeled samples. This iterative process is repeated multiple times for a large unlabeled data pool. Since the selection of data is tightly coupled with the task-specific model, the entire pipeline needs to be restarted from scratch and repeated when working on different tasks or datasets. In many cases, it even requires up to _several days_ to select sufficient samples from a medium-sized data pool (_e.g._, Core-Set [44] in Tab. 1). In this paper, we challenge this _status quo_ by introducing an efficient data selection pipeline that enables the selection of data within a single pass (as illustrated in Fig. 1b), therefore achieving comparable efficiency to random selection. We identify the efficiency bottleneck of data selection methods as the training of the task-specific model. Building upon insights from recent research on unsupervised learning [2, 64], we recognize that pretrained models [5, 63] possess the ability to encode the semantic information of images at a fine-grained level. This observation inspires us to integrate pretrained models into the data selection process, thereby decoupling data selection from task-specific models and leveraging the inherent diversity captured by pretrained models. 
By leveraging publicly available pretrained models, our pipeline incurs no additional training costs. To provide a concrete foundation for our design, we consider the following three guiding principles.

* **Generality:** We strive for decoupling data selection from task-specific models. It is desired that a _general_ model works on the data selection of multiple tasks or datasets.
* **Efficiency:** The batch selection setting of active learning (Fig. 1a) is known to be time-consuming due to its iterative nature. It is expected to be replaced with a _single-pass_ model inference on unlabeled data pools.
* **Non-supervision:** Annotators may not always respond in time, and the entire data selection progress may be delayed by frequent requests for labels. It is preferred that annotations are not required until the end of data selection.

In view of the above principles, we propose the _first_ free data selection (FreeSel) method, to the best of our knowledge, satisfying all the above principles simultaneously. FreeSel selects data samples based on the diversity of local features. The features are extracted by a publicly available pretrained vision transformer [12], which is generic enough to facilitate data selection for different networks, datasets, and tasks after pretraining on large-scale datasets [11] in an unsupervised manner, _e.g._, DINO [5]. We extract our newly defined semantic patterns by clustering the intermediate local features after an attention filter. The images are selected following a distance-based sampling strategy at the level of semantic patterns. In pursuit of efficiency, this whole process is finished within a single-pass model inference without any extra training. The data selection process is indeed unsupervised, which relieves the trouble of assigning responsive annotators. As a result, our method pursues a _free_ data selection using public pretrained models with a time efficiency close to random selection. 
We conduct extensive experiments on different tasks, datasets, and networks. When compared with existing active learning methods, our algorithm can achieve comparable performance with significantly advantageous efficiency. Our contributions are three-fold. **1)** We, for the first time, introduce a new free data selection pipeline that adheres to three important principles of _generality_, _efficiency_, and _non-supervision_ with negligible time costs. **2)** We propose FreeSel, a novel method following our proposed pipeline. It can fill in the annotation budget in a single pass based on the inherent diversity of semantic patterns captured by pretrained models. **3)** Extensive experiments on image classification, object detection, and semantic segmentation demonstrate the effectiveness of our pipeline.

Figure 1: Comparisons between the active learning pipeline and our proposed free selection pipeline.

## 2 Related Work

**Active Learning.** Active learning aims to choose the most suitable samples for annotation so that model performance can be optimized with a limited annotation budget. Most existing work in this field [47, 57, 44, 18, 59, 60] follows a pool-based protocol, selecting samples based on the ranking of the whole dataset. There exist two mainstream sampling strategies for pool-based methods, _i.e._, uncertainty and diversity. Uncertainty inside the model prediction reflects the difficulty of data samples, estimated by different heuristics such as probabilistic models [16, 13], entropy [26, 36], ensembles [3, 32], and loss functions [57, 24]. Some other algorithms try to find the diverse subset which well represents the entire data pool. They measure the diversity with the Euclidean distance between global features [44], adversarial loss [47], or KL-divergence between local representations [1]. However, all these methods couple the data selection with a task model and require repetitive model training in the batch selection pipeline, resulting in inefficiency. 
Differently, our proposed pipeline selects samples through _a single-pass model inference_ on each unlabeled pool. **Subset Selection.** As another category of data selection algorithms, subset selection methods often select all the required samples in a single pass with the model trained on a labeled seed set. The subset is usually selected based on some criterion of uncertainty [27], diversity [6, 4], or their combination [42]. In contrast, our proposed pipeline needs neither extra training on the target dataset nor knowledge about the label space. **Unsupervised Learning.** Both contrastive methods [17, 20, 61, 53, 25, 48, 39] and generative models [52, 19, 49] have achieved great success in unsupervised representation learning. Contrastive methods discriminate different images without using any explicit categories. In contrast, generative methods directly predict masked visual information inside images. We exploit a general pretrained model [5] to represent input images for task-agnostic data selection. As a result, we do not train models specific to each task like the traditional active learning pipeline. **Data Selection with Pretrained Models.** There are some attempts to combine unsupervised pretraining and data selection. [56] selects data samples by the loss of pretext tasks, but requires different pretext tasks for different downstream tasks. [37] formulates active learning as an integer programming problem in the feature space, handling low-budget cases. [51] and [54] select samples based on the diversity of global features, targeted for semi-supervised learning and model finetuning settings respectively. Active labeling proposed in [23] is the most similar to our paper, but their method considers selective partial labeling in each sample instead of sample selection and is limited to 3D tasks with the same networks for pretraining and downstream tasks. 
## 3 Preliminary Study: Off-the-Shelf Features for Data Selection

Active learning work [44, 1] often selects representative samples based on the features extracted by task-specific models trained separately for each task. A straightforward alternative is to use off-the-shelf features instead, which are extracted by general-purpose models pretrained on a large-scale dataset. If this performs well, we can trivially improve the efficiency by eliminating the training step on each dataset. We conduct this preliminary study on the object detection task over the PASCAL VOC dataset [14]. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & **Task** & **Batch** & **Multi-time** & \multirow{2}{*}{**Time**} \\ & **Model** & **Selection** & **Labeling** & \\ \hline Core-Set [44] & ✓ & ✓ & ✓ & \(\sim 42\ hours\) \\ Learn-Loss [57] & ✓ & ✓ & ✓ & + \\ CDAL [1] & ✓ & ✓ & ✓ & _label query_ \\ \hline FreeSel (ours) & ✗ & ✗ & ✗ & \(285\ s\) (\(\sim\)530\(\times\) faster) \\ \hline \hline \end{tabular} \end{table} Table 1: **Principles of Data Selection Methods: _Task Model_ refers to the coupling between data selection and a task-specific model. _Batch Selection_ shows whether the method repeats the data selection in batch multiple times. _Multi-time Labeling_ denotes whether it requests ground-truth labels in the data selection process. _Time_ estimates the approximate time to select \(8000\) images from PASCAL VOC datasets (Sec. 5.5).**

Figure 2: **Core-Set over Off-the-Shelf Features**

Consistent with our following experiments, we apply DeiT-S\({}^{2}\) [50] for feature extraction in data selection. The model is pretrained in either a supervised or an unsupervised (with the DINO framework [5]) manner on ImageNet [11]. For data selection, we implement the classical Core-Set algorithm [44] over the extracted global features, _i.e._, the [CLS] token feature in the last layer. 
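For context, the Core-Set strategy referenced here is usually realized as greedy k-center selection over the feature vectors. The sketch below is our own minimal re-implementation (not the authors' code); it also makes visible why K-Center gravitates toward corner cases, a point discussed next.

```python
import numpy as np

def k_center_greedy(features, budget, seed=0):
    """Greedy 2-approximation of k-center: repeatedly take the point
    whose distance to the already-selected set is largest."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]
    # distance of every point to its nearest selected point
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    while len(selected) < budget:
        nxt = int(np.argmax(min_dist))      # farthest point becomes next center
        selected.append(nxt)
        d = np.linalg.norm(features - features[nxt], axis=1)
        min_dist = np.minimum(min_dist, d)
    return selected

# Toy demo: a tight cluster plus one far outlier; the outlier is chosen early.
feats = np.vstack([np.random.default_rng(1).normal(0, 0.1, (99, 8)),
                   np.full((1, 8), 10.0)])
picks = k_center_greedy(feats, budget=5)
print(99 in picks)  # the outlier (index 99) is among the picks -> True
```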
We use Core-Set with these features to select various numbers of training samples, and train object detection models (SSD-300 [34]) over the selected subsets.

Footnote 2: We follow the name of networks in [50] in our paper. DeiT-S is also called ViT-small in [5].

Fig. 2 shows results in comparison with random selection. Unfortunately, we find that this naive combination of off-the-shelf features and Core-Set algorithms degrades the object detection performance, especially with relatively low sampling ratios. We consider two potential reasons for this failure: **1) Complex scenes are hard to represent globally.** Images may contain multiple objects including some very small ones. It is difficult for a global feature to represent all useful details in the image. **2) K-Center selects corner cases.** In the feature space, in order to cover all the data samples with a small radius, the K-Center algorithm of Core-Set tends to select all the corner cases. The above two concerns motivate our design in Sec. 4. We represent each image with dense semantic patterns to maintain useful local information. Images are sampled based on some probability related to the distance between local semantic patterns to relieve the harmful preference for corner cases.

## 4 Methodology

We detail our new data selection method FreeSel, formulated in Sec. 4.1. We define a concept called _semantic pattern_ in Sec. 4.2. Afterward, the sample selection strategy is explained in Sec. 4.3. An overview of our approach is illustrated in Fig. 3.

### Formulation

We aim to select a diverse subset from the unlabeled data pool for annotation, which covers as much discriminative regional information in the original pool as possible. The regional information inside an image \(I\) is reflected by the local features \(\mathbf{f}^{I}=\{f_{r}^{I}|r=1,2,\ldots,HW\}\) of a pretrained DNN. \(H,W\) are the height and width of the feature map.
The \(r\)-th region feature \(f_{r}^{I}\in\mathbb{R}^{K}\) in the feature map mainly describes the \(r\)-th region of the image [62; 41]. The discriminative power of all regional features \(\mathbf{f}^{I}\) can be represented by countable knowledge points [31]. \(f_{r}^{I}\) is considered as a knowledge point _w.r.t._ a pseudo-category \(c\) if it is similar enough to the corresponding direction vector \(\mu_{c}\). \[p(c|f_{r}^{I})=\frac{\pi_{c}\cdot p_{vMF}(f_{r}^{I}|c)}{\sum_{c^{\prime}}\pi_{c^{\prime}}\cdot p_{vMF}(f_{r}^{I}|c^{\prime})}>\tau,\quad p_{vMF}(f_{r}^{I}|c)=C_{d}(\kappa_{c})\cdot\exp(\kappa_{c}\cdot\cos(f_{r}^{I},\mu_{c})) \tag{1}\] \(c\) is a pseudo-category describing some specific visual patterns, _e.g._ an object part, which is represented by a vector \(\mu_{c}\) in the feature space. \(\pi_{c}\) is the prior probability of pseudo-category \(c\), \(\kappa_{c}\) is a concentration parameter, and \(C_{d}(\kappa_{c})\) is a normalizing constant. Inversely, given knowledge points inside an image \(I\), they can be clustered to estimate the \(K\) pseudo-categories inside the image as \(\hat{\mu}_{j}^{I},j=1,2,\ldots,K\). We define the estimation as semantic patterns in Sec. 4.2. To ensure the diversity of our selection, our algorithm desires to find a subset of images \(S_{\mathcal{I}}\) in Sec. 4.3, whose semantic patterns \(\bigcup_{I\in S_{\mathcal{I}}}\{\hat{\mu}^{I}_{j}\}_{j=1}^{K}\) can be representative in the unlabeled pool.

Figure 3: **Overview of Our Proposed FreeSel: Our method uses a general pretrained vision transformer to extract features from images. Semantic patterns are derived from the intermediate features. Afterwards, we perform a distance-based sampling algorithm to select semantic patterns as well as the associated images. These selected images are labeled for downstream task model training.**
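The knowledge-point test of Eq. 1 can be sketched numerically. The sketch below assumes a shared concentration \(\kappa\) and uniform priors \(\pi_{c}\), so the normalizer \(C_{d}(\kappa)\) cancels and the posterior reduces to a softmax over \(\kappa\cdot\cos(f_{r}^{I},\mu_{c})\); the direction vectors \(\mu_{c}\) here are random stand-ins, not learned pseudo-categories:

```python
import numpy as np

def knowledge_point_posterior(f, mus, kappa=10.0):
    """p(c | f) from Eq. 1 under a shared concentration kappa and uniform
    priors pi_c, so the vMF normalizer C_d(kappa) cancels out."""
    f = f / np.linalg.norm(f)
    mus = mus / np.linalg.norm(mus, axis=1, keepdims=True)
    cos = mus @ f                 # cos(f, mu_c) for every pseudo-category c
    logits = kappa * cos          # log p_vMF up to the shared constant
    p = np.exp(logits - logits.max())
    return p / p.sum()

rng = np.random.default_rng(0)
mus = rng.normal(size=(5, 384))            # 5 hypothetical pseudo-category directions
f_r = mus[2] + 0.1 * rng.normal(size=384)  # a regional feature near category 2
post = knowledge_point_posterior(f_r, mus)
print(post.argmax())  # recovers pseudo-category 2; post.max() exceeds a typical tau
```

If `post.max()` falls below the threshold \(\tau\), the regional feature is treated as noise rather than a knowledge point.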
### Per-Image Semantic Patterns Extraction

To estimate the pseudo-categories, we define a novel notion called _semantic patterns_, which are extracted **from each image separately**. Given a pretrained vision transformer [12], we consider its last layer features for image \(I\) as \(\mathbf{f}^{I}=\{f^{I}_{r}\}_{r=1}^{HW}\), where each patch corresponds to a region \(r\). According to Eq. 1, only a few regional features may be considered as meaningful knowledge points, while other regions are useless or even distracting. However, it is non-trivial to distill these knowledge points without any information about the pseudo-categories. To this end, we resort to the [CLS] token self-attention map of the transformer, which serves as a natural filter for regional importance even without the supervision of category information [5].

**Attention Filter.** For image \(I\), the last layer [CLS] token attention map (average of multi-heads) is denoted as \(\mathbf{ca}^{I}=\{ca^{I}_{r}\in\mathbb{R}^{+}|r=1,2,\ldots,HW\},\sum_{r=1}^{HW}ca^{I}_{r}=1\). We can filter the important regional features that jointly represent the most useful information in the entire image with Eq. 2. \[F(\mathbf{f}^{I})=\{f^{I}_{r}|r=1,2,\ldots,t,\sum_{j=1}^{t}ca^{I}_{j}\leq\tau<\sum_{j=1}^{t+1}ca^{I}_{j}\} \tag{2}\] where regions \(r=1,2,\ldots,HW\) are sorted in the **decreasing order** of \(ca^{I}_{r}\), and \(\tau\in(0,1)\) is a hyper-parameter, meaning the **maintenance ratio** of information represented by the filtered important features. The filtered features \(F(\mathbf{f}^{I})\) are considered as the knowledge points inside the images.

**Feature Clustering.** To estimate the feature vectors for pseudo-categories, we perform clustering over the filtered \(t\) knowledge points **inside each image separately**. Since K-Means is unreliable in the high-dimensional feature space (details in supplementary materials), we adopt spectral clustering instead.

Figure 4: **Visualization of Semantic Patterns: Every two images are considered as a group. _Left:_ The filtered local features (dots) of each image are grouped into semantic patterns (arrows). Gray features are eliminated in Eq. 2. Dimensions are reduced by PCA for visualization. _Right:_ Regions inside images can be associated with corresponding local features and then semantic patterns.**

The self-attention map provides strong cues about the region-wise similarity inside each image. We denote the last layer attention map between patch tokens for image \(I\) as \(\mathbf{pa}^{I}=\left[pa^{I}_{ij}\in\mathbb{R}\right]_{i,j=1,2,\ldots,HW},\sum_{j=1}^{HW}pa^{I}_{ij}=1,\forall i\). It is more likely for nearby regions to interact with each other, so we only consider the attention between nearby patches [22]. \[\widehat{pa}^{I}_{ij}=\begin{cases}pa^{I}_{ij}&d(i,j)\leq d_{0}\\ 0&d(i,j)>d_{0}\end{cases} \tag{3}\] where \(d(i,j)\) is the Chebyshev distance between regions \(i,j\) in the feature map. We empirically set the threshold \(d_{0}=2\) in our experiments. Besides, we only consider the \(t\) regions after the filter in Eq. 2. In this case, we denote the new similarity matrix between patches as \(\widehat{\mathbf{pa}}^{I}=\left[\widehat{pa}^{I}_{ij}\right]_{i,j=1,2,\ldots,t}\). With this above \(t\times t\) similarity matrix, we utilize spectral clustering algorithms [38; 55] to divide the remaining \(t\) regions after filtering (Eq. 2) into \(K\) clusters \(C_{j},j=1,2,\ldots,K\), each corresponding to a pseudo-category, where \(K\) is a hyper-parameter. The details of the spectral clustering algorithm are in our supplementary materials. We average the corresponding feature \(f_{r},r=1,2,\ldots,t\) of each region \(r\) belonging to each cluster \(C_{j}\) as follows.
\[\hat{\mu}^{I}_{j}=\frac{1}{|C_{j}|}\sum_{r\in C_{j}}f^{I}_{r},\qquad j=1,2,\ldots,K \tag{4}\] where \(f^{I}_{r}\in F(\mathbf{f}^{I}),r\in C_{j}\) are local features of image \(I\) grouped into cluster \(j\) through spectral clustering. \(\hat{\mathbf{\mu}}^{I}=\{\hat{\mu}^{I}_{j}\}\) represents **semantic patterns** inside the image \(I\). Fig. 4 visualizes some examples of \(\hat{\mu}^{I}_{j}\). The whole process of semantic pattern extraction is shown in Alg. 1.

### Sample Selection with Semantic Patterns

Our main target of data selection is to make the distributions of selected samples diverse and representative at the level of local _semantic patterns_ instead of the global feature level. This fine-grained strategy guarantees that our selected subset can cover rich local visual patterns represented by different pseudo-categories, which are crucial for detection and segmentation tasks. To this end, we adopt a distance-based sampling strategy at the semantic pattern level. The detailed algorithm is shown in Alg. 2. Given an unlabeled image pool \(\mathcal{I}\), this process starts from randomly selecting an initial image \(I_{0}\), _i.e._ selecting all semantic patterns \(\hat{\mathbf{\mu}}^{I_{0}}\) inside it. Then, we choose the next semantic pattern \(\hat{\mu}^{I}_{j}\) inside image \(I\) with probability in proportion to its squared distance from the nearest already selected semantic pattern (Eq. 5). \[p(\hat{\mu}^{I}_{j})\propto\min_{\hat{\mu}\in S_{\mathcal{K}}}\left[D(\hat{\mu}^{I}_{j},\hat{\mu})\right]^{2},I\in\mathcal{I},j=1,2,\ldots,K \tag{5}\] where \(S_{\mathcal{K}}\) is the pool of all the already selected semantic patterns. When we choose a semantic pattern \(\hat{\mu}^{I}_{j}\), all the semantic patterns \(\hat{\mathbf{\mu}}^{I}\) inside the image \(I\) that contains \(\hat{\mu}^{I}_{j}\) are put into the selected pool \(S_{\mathcal{K}}\). We use cosine distance for \(D(\cdot,\cdot)\) as analyzed in the supplementary materials.
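The sampling rule of Eq. 5 is a k-means++-style seeding applied at the semantic-pattern level. A minimal numpy sketch (toy random patterns stand in for \(\hat{\mu}^{I}_{j}\); this is our illustration of the strategy, not the exact Alg. 2 implementation):

```python
import numpy as np

def select_images(patterns, n_images, seed=0):
    """patterns: (N, K, D) array with K semantic patterns per image.
    Sample patterns with probability proportional to the squared cosine
    distance to the nearest already-selected pattern (Eq. 5); selecting
    a pattern pulls in all K patterns of its image."""
    rng = np.random.default_rng(seed)
    N, K, D = patterns.shape
    flat = patterns.reshape(N * K, D)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    img_of = np.repeat(np.arange(N), K)        # image index of each pattern
    chosen = {int(rng.integers(N))}            # random initial image I_0
    sel = flat[img_of == next(iter(chosen))]
    d = 1.0 - (flat @ sel.T).max(axis=1)       # cosine distance to nearest selected
    while len(chosen) < n_images:
        p = d ** 2
        p[np.isin(img_of, list(chosen))] = 0.0 # never resample selected images
        idx = rng.choice(N * K, p=p / p.sum()) # draw one semantic pattern
        img = int(img_of[idx])
        chosen.add(img)                        # take all patterns of its image
        new = flat[img_of == img]
        d = np.minimum(d, 1.0 - (flat @ new.T).max(axis=1))
    return sorted(chosen)

pats = np.random.default_rng(1).normal(size=(200, 5, 64))
sel = select_images(pats, n_images=20)
print(len(sel))  # 20 distinct images
```

Compared with the deterministic farthest-first rule, the probabilistic draw still favors under-covered regions of the pattern space but no longer concentrates on extreme outliers.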
This process continues until enough images have been selected. The selection only requires semantic patterns constructed from intermediate features offline beforehand. Consequently, only a _single-pass_ model inference _without_ any training or supervision is required in the entire data selection pipeline.

## 5 Experiments

We evaluate FreeSel on object detection (Sec. 5.2), semantic segmentation (Sec. 5.3), and image classification (Sec. 5.4). The results of FreeSel are _averaged over three independent selections with different random seeds_. Features are extracted by the same general pretrained model for all the tasks (Sec. 5.1). We make some analysis of our proposed pipeline and method in Sec. 5.5. Finally, we examine the roles of different modules inside FreeSel in Sec. 5.6. We refer readers to supplementary materials for more implementation details, results, and ablation studies.

### General Model for Feature Extraction

We adopt DeiT-S [50] (patch size 16\(\times\)16) pretrained with the unsupervised framework DINO [5] on ImageNet [11] to extract features for data selection. The same pretrained model is used for all tasks. FreeSel can fit other frameworks as well, as shown in supplementary materials. We emphasize that this pretrained DeiT-S model is only applied to the data selection process. For the downstream tasks, we still train the convolutional task models from scratch in accordance with prior work.

### Object Detection

**Dataset and Task Model.** We carry out experiments on PASCAL VOC [14]. In line with prior work [1; 57], we combine the training and validation sets of PASCAL VOC 2007 and 2012 as the training data pool with \(16,551\) images. The performance of the task model is evaluated on the PASCAL VOC 2007 test set using the _mAP_ metric. We follow previous work [57; 1] to train an SSD-300 model [34] with VGG-16 backbone [46] on the selected samples. It reaches \(77.43\) mAP with \(100\%\) training data.
**Results and Comparison.** We compare our performance with existing active learning methods (Fig. 5) for multiple sampling ratios. For fairness, we only include task-agnostic methods instead of those designed specifically for object detection [59; 8], which should naturally perform better. Results show that FreeSel outperforms most traditional pipeline methods and remains competitive with the best ones. Besides, all these previous methods require repetitive model training and batch selection on each target dataset separately, while FreeSel can efficiently select all samples in a single pass. Sec. 5.6 also shows that FreeSel can outperform other alternative general-purpose model baselines.

### Semantic Segmentation

**Dataset and Task Model.** We use the Cityscapes [9] dataset for semantic segmentation. This dataset is composed of 3,475 frames with pixel-level annotation of 19 object classes. We report the result using the _mIoU_ metric. We follow previous active learning research to apply the DRN [58] model for this task. It reaches \(62.95\) mIoU with \(100\%\) training data.

### Image Classification

**Dataset and Task Model.** We use the CIFAR-10 [29] dataset and the ResNet-18 [21] model in line with prior work [35; 57]. CIFAR-10 contains 60,000 images with size 32\(\times\)32 (50,000 for training and 10,000 for test) belonging to 10 categories. We report the results using the _Top-1 Accuracy_ metric. The model reaches \(93.02\%\) Top-1 Accuracy with \(100\%\) training data on CIFAR-10.

**Results.** We demonstrate the results of data selection methods in Fig. 8. Our performance is compared with traditional active learning methods as well. Since image classification focuses on global information, the advantage of semantic patterns cannot be fully demonstrated. However, with most sampling ratios, FreeSel still beats all its counterparts.

### Analysis

**Time Efficiency Analysis.** Time efficiency of data selection is crucial for its practical use. Tab.
1 shows the comparison between FreeSel and other existing counterparts. The estimation is conducted on PASCAL VOC to choose \(8,000\) samples. We follow prior papers [1; 44; 57] to use SSD [34] as the task model (same as Sec. 5.2). The time is estimated on a single NVIDIA TITAN RTX GPU. Since FreeSel directly utilizes the publicly available pretrained model _instead of_ training models separately for each dataset, only the feature extraction, semantic pattern construction, and data selection time should be considered, _i.e._ 285 seconds in total. In contrast, for other active learning methods, networks are trained repetitively on each dataset. We follow [57; 1] to set their initial set size and batch selection budget both as \(1k\), so their model should be trained for 7 times over subsets of size \(1k\sim 7k\) to select 8,000 samples. These previous methods have similar time efficiency, requiring about 42 hours in total. They also need to wait for the oracle for ground-truth labels after selecting each batch of data. Based on the above information, our method can be **530x faster** than prior work. **Single-Pass Data Selection.** Unlike prior active learning methods, FreeSel follows the new pipeline to select all the data samples in a single pass. This allows for great practical use. Firstly, it makes our method free of a random initial set. For one thing, FreeSel can bring performance gain in the lowest sampling ratio. This is beneficial in practice when the annotation budget is extremely low. For another thing, FreeSel would not suffer from the imbalanced initial set. As discussed in [47], low-quality initial sets would hurt the performance of prior active learning work significantly. 
Secondly, FreeSel simplifies the active learning pipeline from the iterative _model training\(\rightarrow\)batch data selection\(\rightarrow\)batch annotation\(\rightarrow\)\(\cdots\)\(\cdots\)_ to a single-pass _data selection\(\rightarrow\)data annotation_, which saves notable efforts in the management, communication, and coordination of traditional sequential steps. **Introduction of Pretrained Model.** Our proposed pipeline introduces a pretrained model (Fig. 0(b)) to satisfy the three principles of our new pipeline. Since the pretraining is not designed specifically for the data selection, directly using a publicly available model would not lead to extra time cost or expense. According to Sec. 3, it is non-trivial to improve the efficiency of active learning with a pretrained model. We further show that our great performance does not come from the pretrained model in Sec. 5.6. **Effect of Different Pretraining Algorithms.** In this part, we pay attention to the effect of pretraining on the final performance of FreeSel. In addition to DeiT-S [50] pretrained with DINO framework [5] in Sec. 5.1, we also adopt two alternative pretraining frameworks MoCoV3 [7] and iBOT [63] as well as a larger DeiT-B model [50]. Those different pretrained models are applied to the data selection on PASCAL VOC dataset [14]. Same as Sec. 5.2, we train an SSD-300 model [34] on the selected samples for the object detection task. Fig. 8 demonstrates that FreeSel with different pretrained models for data selection only has marginal differences in the performance of the downstream object detection task. This result verifies that FreeSel can widely fit different pretraining algorithms. The great performance of data selection comes from our carefully designed modules in FreeSel instead of the strong representative ability of some specific pretrained models.

### Ablation Study

We delve into different parts of our method.
Firstly, we analyze the contribution of each module inside FreeSel to the final performance. Then, the role of the pretrained DeiT-S model is also discussed. **Each Module Contribution.** Starting from the failure case in Fig. 2, modules of FreeSel are added to it one by one. Tab. 2 demonstrates the step-by-step performance improvement. This experiment is conducted on PASCAL VOC in the same setting as Sec. 5.2. The following three modules are analyzed. More quantitative analysis of hyper-parameters is available in the supplementary materials.

* **Feature Extraction Manner:** In Sec. 3, the global feature of [CLS] token is directly used. We replace it with the proposed semantic patterns defined in Eq. 4.
* **Attention Filter:** We apply the attention filter in Eq. 2 to filter local features.
* **Selection Strategy:** Apart from the distance-based sampling in Eq. 5, we consider the alternative farthest-distance-sampling (FDS) _w.r.t._ semantic patterns, which is theoretically justified in [44] as an approximation of K-Centers. It chooses the next semantic pattern farthest from the nearest selected one as \(\hat{\mu}_{j}^{I}=\arg\max_{\hat{\mu}_{j}^{I}}\min_{\hat{\mu}\in S_{\mathcal{K}}}D(\hat{\mu}_{j}^{I},\hat{\mu})\).

The failure case of Core-Set on off-the-shelf features is shown in the first line of Tab. 2. Then, we extract features by constructing semantic patterns (\(K=5\)) without applying the attention filter in the second line. It improves notably compared with the first line because the semantic patterns can represent useful local information important for object detection. However, it is only slightly better than random selection since the semantic patterns are dominated by local noisy information in this stage. We apply attention ratio \(\tau=0.5\) (Eq. 2) in the third line of the table, and the performance gets improved again. Finally, the FDS selection strategy is replaced by the distance-based probability sampling in Eq. 5.
It provides extra performance gain because it would select more representative samples with fewer corner cases. **Role of Pretrained Model.** There is a concern that the performance gain of FreeSel comes from the great representation ability of the pretrained vision transformer for data selection instead of our designed method. To examine this, we conduct an ablation study on CIFAR-10 in the same setting as Sec. 5.4. We equip Core-Set [44] and Learn-Loss [57] with the same pretrained network for data selection, _i.e._ DeiT-S [50] pretrained with DINO [5]. During the data selection period, pretrained DeiT-S is finetuned with supervision to select data samples with Core-Set and Learn-Loss algorithms. After selection, we still train ResNet-18 [21] over the selected samples from scratch. In Fig. 9, this pretrained DeiT-S damages the performance of Core-Set and Learn-Loss. A potential explanation comes from the differences in the representation spaces of pretrained DeiT and from-scratch ResNet. The samples selected by DeiT-S with Core-Set and Learn-Loss algorithms may not be suitable for the from-scratch training of the ResNet-18 model. This reflects that our performance gain does not come from the use of pretrained DeiT-S. Instead, the proposed FreeSel method plays an important role.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{**Feature**} & **Filter** & **Select** & \multicolumn{3}{c}{**Image Number**} \\ & & & 3k & 5k & 7k \\ \hline global & ✗ & FDS & 60.59 & 66.65 & 70.30 \\ SP & ✗ & FDS & 64.15 & 68.22 & 70.42 \\ SP & \(\tau=0.5\) & FDS & 64.45 & 68.49 & 71.35 \\ SP & \(\tau=0.5\) & Prob. & **65.66** & **69.24** & **71.79** \\ \hline \multicolumn{3}{c}{_random sampling_} & & 64.21 & 67.53 & 69.32 \\ \hline \hline \end{tabular} \end{table} Table 2: **Module Contribution: We discuss the contribution of each module inside FreeSel. SP means semantic pattern. Experiments are conducted on PASCAL VOC.**
**General-Purpose Model Baselines.** To further disentangle the roles of the general-purpose model and our designed FreeSel framework, we compare FreeSel with the following baselines which can also select a subset from the data pool using the general-purpose models. **1) K-Means:** We perform the K-Means algorithm on the global features extracted by the pretrained DeiT-S model [50, 5], choosing the sample closest to each cluster center. **2) Inconsistency:** We select the most difficult samples based on the inconsistency of multiple-time model predictions. To measure the inconsistency, we perform data augmentations (RandAugment [10]) to generate 10 different augmented copies for each image and calculate the average pairwise distances of global features between these copies extracted by the pretrained DeiT-S model [50, 5]. We select data samples by the order of inconsistency. **3) Entropy:** We select the most ambiguous samples based on the classification uncertainty of the pretrained model. Since the classification score is required, we adopt the DeiT-S model [50] pretrained on ImageNet in a supervised manner and measure the uncertainty with the entropy of classification scores. We select data samples by the order of entropy. Experiments are conducted on the object detection task in the same settings as Sec. 5.2. Tab. 3 shows that all the above baselines perform notably worse than FreeSel, especially with low sampling ratios. This reflects the importance of our proposed FreeSel algorithm. Trivial utilization of a general-purpose model would not lead to great performance of data selection.

## 6 Conclusion and Limitations

The main goal of this paper is to enable free data selection by proposing a novel pipeline with three key principles: generality, efficiency, and non-supervision. We verify its feasibility by designing the first method FreeSel following this new pipeline.
Through a single-pass model inference, the semantic patterns are constructed based on the intermediate features of a general pretrained model, over which a distance-based selection strategy is performed to find the most diverse and informative data samples. Our method outperforms most existing counterparts with remarkably superior efficiency on different tasks including detection, segmentation, and classification. We realize that FreeSel cannot beat all the other data selection methods at the current stage due to the absence of training on the target datasets. Nevertheless, this direction matters in boosting the training of downstream models without any extra time and cost, building on the shoulders of existing general pretrained models. It gains more significance given the current landscape dominated by large foundation models pretrained on multi-modality data [40, 15], which we believe can help to extend our method to a wide range of domains and modalities.

**Acknowledgement.** This work is partially supported by Berkeley DeepDrive.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Pretrained Model**} & \multicolumn{3}{c}{**Image Number**} \\ & & \(3k\) & \(5k\) & \(7k\) \\ \hline K-Means & DeiT-S (DINO) & 64.85 & 68.05 & 71.50 \\ Inconsistency & DeiT-S (DINO) & 63.29 & 67.65 & 71.35 \\ Entropy & DeiT-S (supervised) & 56.33 & 66.03 & 69.72 \\ \hline FreeSel & DeiT-S (DINO) & **65.66** & **69.24** & **71.79** \\ \hline \hline \end{tabular} \end{table} Table 3: **Baselines Using General-Purpose Model:** We compare FreeSel with other baselines using the general-purpose model. Experiments are conducted on PASCAL VOC object detection task.

Figure 9: **Effect of Pretraining Methods: Experiments are conducted on PASCAL VOC.**
2301.13706
Non-convex sampling for a mixture of locally smooth potentials
The purpose of this paper is to examine the sampling problem through Euler discretization, where the potential function is assumed to be a mixture of locally smooth distributions and weakly dissipative. We introduce $\alpha_{G}$-mixture locally smooth and $\alpha_{H}$-mixture locally Hessian smooth, which are novel and typically satisfied with a mixture of distributions. Under our conditions, we prove the convergence in Kullback-Leibler (KL) divergence with the number of iterations to reach $\epsilon$-neighborhood of a target distribution in only polynomial dependence on the dimension. The convergence rate is improved when the potential is $1$-smooth and $\alpha_{H}$-mixture locally Hessian smooth. Our result for the non-strongly convex outside the ball of radius $R$ is obtained by convexifying the non-convex domains. In addition, we provide some nice theoretical properties of $p$-generalized Gaussian smoothing and prove the convergence in the $L_{\beta}$-Wasserstein distance for stochastic gradients in a general setting.
Dao Nguyen
2023-01-31T15:30:39Z
http://arxiv.org/abs/2301.13706v1
# Non-convex sampling for a mixture of locally smooth potentials

###### Abstract

The purpose of this paper is to examine the sampling problem through Euler discretization, where the potential function is assumed to be a mixture of locally smooth distributions and weakly dissipative. We introduce \(\alpha_{G}\)-mixture locally smooth and \(\alpha_{H}\)-mixture locally Hessian smooth, which are novel and typically satisfied with a mixture of distributions. Under our conditions, we prove the convergence in Kullback-Leibler (KL) divergence with the number of iterations to reach \(\epsilon\)-neighborhood of a target distribution in only polynomial dependence on the dimension. The convergence rate is improved when the potential is 1-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth. Our result for the non-strongly convex outside the ball of radius \(R\) is obtained by convexifying the non-convex domains. In addition, we provide some nice theoretical properties of \(p\)-generalized Gaussian smoothing and prove the convergence in the \(L_{\beta}\)-Wasserstein distance for stochastic gradients in a general setting.

## 1 Introduction

The task of sampling is crucial to a large number of fields, including computational statistics and statistical learning (Cesa-Bianchi and Lugosi, 2006; Chen et al., 2018; Kaipio and Somersalo, 2006; Rademacher and Vempala, 2008; Robert and Casella, 2013). Sampling problems often take the form of: \[\pi(\mathrm{x})=\mathrm{e}^{-U(x)}/\int_{\mathbb{R}^{d}}\mathrm{e}^{-U(y)}\mathrm{d}y,\] where \(U(\mathrm{x})\) is known as the potential function. There has been an increased interest in sampling from discretized dynamics, which leaves the objective distribution invariant.
Here we study the over-damped Langevin diffusion (Parisi, 1981) associated with \(U\), assumed to be continuously differentiable: \[\mathrm{d}Y_{t}=-\nabla U(Y_{t})dt+\sqrt{2}\mathrm{d}B_{t}, \tag{1}\] where \((B_{t})_{t\geq 0}\) is a \(d\)-dimensional Brownian motion; the Euler discretization of Eq. (1) is defined by the following update equation: \[\mathrm{x}_{k+1}=\mathrm{x}_{k}-\eta_{k}\nabla U(\mathrm{x}_{k})+\sqrt{2\eta_{k}}\xi_{k}, \tag{2}\] where \((\eta_{k})_{k\geq 1}\) is a sequence of step sizes that can remain constant or decrease to 0, and \(\xi_{k}\sim\mathcal{N}(0,\;I_{d\times d})\) are independent Gaussian random vectors. The Euler discretization is sometimes referred to as the Langevin Monte Carlo (LMC) or the unadjusted Langevin algorithm (ULA). Historically, much of the theory of convergence of sampling has focused on asymptotic convergence without examining dimension dependence in detail. Non-asymptotic convergence rates have recently gained attention, especially those involving polynomial dependence on target distribution dimensions. Under the condition that \(U\) is strongly convex and gradient Lipschitz, Dalalyan (2017); Durmus and Moulines (2017); Durmus et al. (2019) established ULA convergence in Wasserstein distance and in total variation. Since then, non-asymptotic convergence rates of unadjusted Langevin algorithms for log-concave distributions have been extensively studied in (Dalalyan and Karagulyan, 2019; Durmus et al., 2019; Durmus and Moulines, 2017; Cheng and Bartlett, 2018; Brosse et al., 2019). The requirement of strong convexity for the potential \(U\) can be relaxed either by assuming convexity at infinity or dissipativity. When the former condition is satisfied, convergence results in the Wasserstein-1 distance have been shown by Cheng et al. (2018) and Majka et al. (2020) through the contraction property described in Eberle (2016). For certain conditions, Erdogdu et al.
(2018) expanded the non-asymptotic analysis of the Langevin diffusion to a wider range of diffusions. Under the latter assumption, Xu et al. (2018) improved the convergence rate by directly analyzing the ergodicity of the overdamped Langevin Monte Carlo, while Raginsky et al. (2017) established a non-asymptotic estimate in the Wasserstein-2 distance. Both methods, however, depend on the number of iterations. Using auxiliary continuous processes and contraction results from Eberle et al. (2019) and Chau et al. (2021), a convergence rate of 1/2 in the Wasserstein-1 distance was obtained. Nevertheless, the Euler discretization of an underlying Langevin dynamics typically requires \(U(\mathrm{x})\) to have Lipschitz-continuous gradients (global smoothness). Frequently, this requirement is too strict and prevents many common applications (Durmus et al., 2018; Kaipio and Somersalo, 2006; Marie-Caroline et al., 2019). Generally speaking, non-globally smooth potentials arise from two sources: super-linear growth at infinity of the gradient, which makes the smoothness constant grow with the radius; and a weakly smooth gradient, which makes the convexity non-uniform and the Hessian unbounded. It has been shown that Euler's discretization with super-linearly growing coefficients is unstable due to the fact that the moments of the discretization could diverge to infinity in finite time. This problem is usually addressed by incorporating a taming technique (e.g. see (Hutzenthaler et al., 2012; Sabanis, 2013, 2016; Sabanis and Zhang, 2019; Brosse et al., 2019; Lovas et al., 2020; Lim et al., 2021)). The latter weakly smooth conditions are less well known, with only a few works to the best of our knowledge. Firstly, Chatterji et al. (2019) established an original approach to dealing with weakly smooth (possibly non-smooth) potential problems through smoothing.
This technique relies on results obtained from the optimization community, in which a Gaussian is used to perturb the gradient evaluating point. They do not demand strong assumptions, such as the existence of proximal maps, composite structure (Atchade, 2015; Durmus et al., 2018), or strong convexity (Hsieh et al., 2018). However, Chatterji et al. (2019) analyzes over-damped Langevin diffusion in the context of convex potential functions, while many applications involving sampling in high-dimensional spaces have non-convex settings. Secondly, Erdogdu and Hosseinzadeh (2020) proposed a very elegant result using tail growth for weakly smooth and weakly dissipative potentials. By using degenerated convexity and a modified log-Sobolev inequality, they prove that LMC reaches an \(\epsilon\)-neighborhood of a target distribution in KL divergence with a convergence rate of \(\tilde{O}(d^{\frac{1}{\alpha}+\frac{1+\alpha}{\alpha}(\frac{2}{\beta}-1(\beta+1))}\epsilon^{\frac{-1}{\alpha}})\), where \(\alpha\) and \(\beta\) are degrees of weak smoothness and dissipativity defined in the next section. In the same vein, (Nguyen, 2022) relaxed the degenerated convexity at infinity to the Poincare inequality and derived similar results. (Chewi et al., 2021) provided results for chi-squared or Renyi divergences using the Latala-Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincare and log-Sobolev inequalities. (Balasubramanian et al., 2022) proved that averaged Langevin Monte Carlo achieves \(\epsilon\)-relative Fisher information after \(O(L^{2}d^{2}/\epsilon^{2})\) iterations using only a gradient Lipschitz condition. It is noteworthy, however, that most previous research has not covered mixtures of distributions with different tail growth behaviors, which may limit the range of real-life applications.
It is therefore the purpose of this work to introduce generalized conditions for mixtures of distributions with different tail growth characteristics. In particular, we introduce \(\alpha_{G}\)-mixture local smoothness and \(\alpha_{H}\)-mixture local Hessian smoothness (defined in the next section), which are typically satisfied by a mixture of distributions. Using our novel conditions, we show that we can work with either super-linear growth of the gradient at infinity or a weakly smooth gradient within a mixture, which broadens applicability while preserving the convergence property. Additionally, we improve the convergence rate when the potential is \(\alpha_{G}\)-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth. In all of our results, weak dissipativity conditions are used, which are less restrictive than strong convexity and log-Sobolev conditions. A weak dissipativity condition implies a Poincare inequality when \(\beta\geq 1\), but the results can be weakened to the case \(\beta>0\). We develop the results based on the convexification of a non-convex domain, as isoperimetric inequalities remain unchanged under bounded perturbations. To the best of our knowledge, the convexification results we obtain under \(\alpha_{G}\)-mixture local smoothness are new, because we do not require the commonly used condition of strong convexity outside a ball. The KL convergence above is then extended to potentials that are non-strongly convex outside a ball. A new smoothing scheme based on the \(p\)-generalized Gaussian distribution is also presented. Since this smoothing covers heavy-tailed as well as lighter-tailed distributions, it is typically more flexible than Gaussian smoothing. Changing the smoothing scheme to a distribution other than the Gaussian has been recognized as potentially improving the convergence rate.
Here we provide some nice theoretical properties of \(p\)-generalized Gaussian smoothing and prove the result for stochastic gradients in a very general setting. In addition, we also provide convergence in the \(L_{\beta}\)-Wasserstein distance for the smoothing potential. Our contributions can be outlined as follows. Assume that the potential function \(U\) is \(\beta\)-dissipative. Note that the \(\beta\)-dissipative condition implies a Poincare inequality; however, to give the convergence rate explicitly, we will assume the Poincare constant is \(\gamma\). First, we prove that ULA achieves a convergence rate in KL-divergence of \[O\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\rceil(\ell_{G}+\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)}\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{(\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right) \tag{3}\] if the potential is \(\alpha_{G}\)-mixture locally smooth, and \[O\left(\frac{\gamma^{2}d^{2\lceil\frac{\ell_{H}+\alpha_{HN}+3}{\beta}\rceil(\ell_{H}+\alpha_{HN}+3)+\frac{\lceil\frac{4(\ell_{H}+\alpha_{HN})}{\beta}\rceil}{2}+\frac{\lceil\frac{(\ell_{H}+\alpha_{HN}+1)(4\alpha_{HN}+4)}{\beta}\rceil}{2}}\ln^{2}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{2(\ell_{H}+\alpha_{HN}+2)+1}}\right)\] if the potential is \(\alpha_{H}\)-mixture locally Hessian smooth. Second, our convergence results are improved when the potential has a higher order of smoothness.
Specifically, when a potential is \(\alpha_{G}\)-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth, it converges in \[O\left(\frac{d^{\frac{\lceil\frac{4\ell_{H}}{\beta}\rceil+\lceil\frac{4\alpha_{HN}+4}{\beta}\rceil}{2(\alpha_{H}+1)}+\lceil\frac{4}{\beta}\rceil\left(1+\frac{1}{\alpha_{H}+1}\right)}\ln^{\left(1+\frac{1}{\alpha_{H}+1}\right)}\left(\frac{H(p_{0}|\nu)}{\varepsilon}\right)}{\varepsilon^{\left(1+\frac{2}{\alpha_{H}+1}\right)}}\right)\] steps. Third, we apply the result to the case of potentials that are non-strongly convex outside the ball of radius \(R\) and obtain a convergence rate in KL divergence of \[\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}\right)^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil 2\alpha_{GN}\rceil}{2}}}{\varepsilon^{\frac{\alpha_{GN}^{2}+2\alpha_{GN}+2}{\alpha_{G}}}}\right)\] when the potential is \(\alpha_{G}\)-mixture locally smooth. Fourth, we extend the result to stochastic gradients via \(p\)-generalized Gaussian smoothing and obtain \(\tilde{O}\left(\frac{d^{\lceil\frac{2\alpha_{GN}^{2}}{\beta}\rceil\frac{1}{\alpha_{G}}+\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)}}{\gamma^{1+\frac{1}{\alpha_{G}}}\varepsilon^{(\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right)\) for \(\alpha_{G}\)-mixture locally smooth potentials with \(\ell_{G}=0\).
Note that we also cover results for smoothing potentials satisfying a \(\gamma\)-Poincare inequality, \(\alpha_{G}\)-mixture local smoothness with \(\ell_{G}=0\), and \(\beta\)-dissipativity, with a convergence rate in the \(L_{\beta}\)-Wasserstein distance of \[\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(\lceil\frac{2\alpha_{GN}^{2}}{\beta}\rceil\frac{1}{\alpha_{G}}+\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)\right)+2+\frac{4}{\alpha_{G}}}}{\gamma_{1}^{\left(1+\frac{1}{\alpha_{G}}\right)}\varepsilon^{(\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right). \tag{4}\] Finally, our convergence results remain valid under finite perturbations, indicating that they apply to an even larger class of potentials. Last but not least, convergence in KL divergence implies convergence in total variation and in the \(L_{2}\)-Wasserstein metric, which in turn gives convergence rates of \(O(\cdot\varepsilon^{-\left(6+\frac{8}{\alpha}\right)})\) for total variation and \(O(\cdot\varepsilon^{-\left(6+\frac{8}{\alpha}\right)\beta}d^{6+\frac{8}{\alpha}})\) for the \(L_{2}\)-Wasserstein metric, in place of \(O(\cdot\varepsilon^{-\left(3+\frac{4}{\alpha}\right)})\) in the first case above. The rest of the paper is organized as follows. Section 2 sets out the notation and smoothing properties necessary to give our main results in Section 3. Section 4 applies the result of (Nguyen et al., 2021) to potentials that are non-strongly convex outside a ball, while Section 5 gives some simple applications. Section 6 presents our conclusions and possible directions for extensions.

## 2 Preliminaries

We furnish the space \(\mathbb{R}^{d}\) with the usual \(p\)-norm and, throughout the paper, we drop the subscript and just write \(\|x\|\stackrel{{\triangle}}{{=}}\|x\|_{2}\) whenever \(p=2\). We use \(\langle\cdot,\cdot\rangle\) to denote inner products and let \(|s|\), for a real number \(s\in\mathbb{R}\), denote its absolute value.
For a function \(U:\mathbb{R}^{d}\rightarrow\mathbb{R}\) which is twice differentiable, we use \(\nabla U(x)\) and \(\nabla^{2}U(x)\) to denote the gradient and the Hessian of \(U\) with respect to \(x\), respectively. We write \(A\succeq B\) if \(A-B\) is a positive semi-definite matrix. We use big-oh notation in the sense that \(f(x)=O(g(x))\) means \(\lim_{x\rightarrow\infty}\sup\frac{f(x)}{g(x)}<\infty\), and \(\tilde{O}\) suppresses logarithmic factors. While sampling from the exact distribution \(\pi(\mathrm{x})\) is generally computationally demanding, it is largely adequate to sample from an approximate distribution \(\tilde{\pi}(\mathrm{x})\) which is in the vicinity of \(\pi(\mathrm{x})\) in some distance. In this paper, we use KL-divergence and Wasserstein distance and briefly define them in Appendix A. We suppose some of the following conditions hold:

**Assumption 1**: _(\(\alpha_{G}\)-mixture locally smooth) There exist \(\ell_{G}\geq 0\), \(0<\alpha_{G}=\alpha_{G1}\leq...\leq\alpha_{GN}\leq 1\) and, for \(i=1,..,N\), \(0<L_{Gi}\leq L_{G}<\infty\), such that \(\forall x,\ y\in\mathbb{R}^{d}\), \(\|\nabla U(x)-\nabla U(y)\|\leq\left(1+\|x\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\sum_{i=1}^{N}L_{Gi}\left\|x-y\right\|^{\alpha_{Gi}}\), where \(\nabla U(x)\) denotes the gradient of \(U\) at \(x\)._

**Assumption 2**: _(\(\alpha_{H}\)-mixture Hessian locally smooth) There exist \(\ell_{H}\geq 0\), \(0\leq\alpha_{H}=\alpha_{H1}\leq...\leq\alpha_{HN}\leq 1\) and, for \(i=1,..,N\), \(0<L_{Hi}\leq L_{H}<\infty\), such that \(\forall x,\ y\in\mathbb{R}^{d}\), \(\left\|\nabla^{2}U(x)-\nabla^{2}U(y)\right\|_{op}\leq\left(1+\|x\|^{\ell_{H}}+\|y\|^{\ell_{H}}\right)\sum_{i=1}^{N}L_{Hi}\left\|x-y\right\|^{\alpha_{Hi}}\), where \(\nabla^{2}U(x)\) denotes the Hessian of \(U\) at \(x\)._

**Assumption 3**: _(\(\beta\)-dissipativity).
There exist \(\beta>0\) and \(a,\ b>0\) such that \(\forall x\in\mathbb{R}^{d}\), \(\langle\nabla U(x),x\rangle\geq a\left\|x\right\|^{\beta}-b\)._

**Assumption 4**: _(\(LSI\left(\gamma\right)\)) There exists some \(\gamma>0\) such that for every probability distribution \(p\left(x\right)\) absolutely continuous w.r.t. \(\pi(x)\), \(H(p|\pi)\leq\frac{1}{2\gamma}I(p|\pi)\), where \(H\) and \(I\) are the Kullback-Leibler (KL) divergence and the relative Fisher information, respectively, defined in Appendix A below._

**Assumption 5**: _(\(PI\left(\gamma\right)\)) There exists some \(\gamma>0\) such that for every smooth function \(g\colon\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(\mathrm{Var}_{\pi}(g)\leq\frac{1}{\gamma}E_{\pi}\left[\left\|\nabla g\right\|^{2}\right]\), where \(\mathrm{Var}_{\pi}(g)=E_{\pi}[g^{2}]-E_{\pi}[g]^{2}\) is the variance of \(g\) under \(\pi\)._

**Assumption 6**: _(non-strongly convex outside the ball) For every \(\left\|x\right\|\geq R\), the Hessian of the twice differentiable potential function \(U(x)\) is positive semi-definite, that is, for every \(y\in\mathbb{R}^{d}\), \(\left\langle y,\nabla^{2}U(x)\ y\right\rangle\geq 0\)._

**Assumption 7**: _The function \(U(x)\) has a stationary point at zero: \(\nabla U(0)=0\)._

Remark 1: Assumption 7 is imposed without loss of generality. Assumption 1 often holds for a mixture of distributions with different tail growth behaviors. Assumption 1 is an extension of \(\alpha_{G}\)-mixture weak smoothness (Nguyen, 2022): when \(\ell_{G}=0\), we recover the usual \(\alpha_{G}\)-mixture weak smoothness. When \(N=1\) we have \(\alpha_{G}\)-Holder continuity of the gradient of \(U\), while \(\alpha_{G}=1\) gives us a Lipschitz-continuous gradient. Note that when \(\ell_{G}>0\), the potential only behaves smoothly locally. Similarly, Assumption 2 is an extension of \(\alpha_{H}\)-mixture Hessian smoothness, which is recovered when \(\ell_{H}=0\). When \(N=1\) and \(\alpha_{H}=1\), we get back the Hessian smoothness condition.
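As a quick numerical illustration of Assumption 1 (a sketch, not part of the formal development): the toy potential \(U(x)=\|x\|^{3/2}+\|x\|^{2}\) mixes Holder exponents \(1/2\) and \(1\); the constants \(L_{1}=3\), \(L_{2}=2\) and \(\ell_{G}=0\) below are our own generous choices, not constants from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Gradient of U(x) = ||x||^{3/2} + ||x||^2: a mixture of Holder
    # exponents alpha = 1/2 (from the 3/2-power term) and alpha = 1.
    r = np.linalg.norm(x)
    return 1.5 * r ** (-0.5) * x + 2.0 * x

# Assumption 1 with ell_G = 0, alphas = (1/2, 1) and illustrative
# constants L_1 = 3, L_2 = 2 (our choice, not the paper's).
alphas = (0.5, 1.0)
Ls = (3.0, 2.0)

worst_ratio = 0.0
for _ in range(2000):
    x = rng.standard_normal(3)
    y = rng.standard_normal(3)
    lhs = np.linalg.norm(grad_U(x) - grad_U(y))
    dist = np.linalg.norm(x - y)
    rhs = sum(L * dist ** a for L, a in zip(Ls, alphas))
    worst_ratio = max(worst_ratio, lhs / rhs)

print(worst_ratio)  # stays below 1, so the mixture bound holds here
```

The check passes because the gradient of \(\|x\|^{3/2}\) is \(1/2\)-Holder with constant at most \(1.5\cdot 2^{1/2}<3\), while the quadratic part contributes exactly \(2\|x-y\|\).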
A property that follows straightforwardly from Assumption 1 is the following, valid for all \(x,\ y\in\mathbb{R}^{d}\).

Lemma 1: _If the potential \(U:\mathbb{R}^{d}\to\mathbb{R}\) is \(\alpha_{G}\)-mixture locally smooth for some \(\ell_{G}\geq 0\), \(0<\alpha_{G}=\alpha_{G1}\leq...\leq\alpha_{GN}\leq 1\) and \(0<L_{Gi}\leq L_{G}<\infty\), \(i=1,..,N\), then:_

\[U(y)\leq U(x)+\langle\nabla U(x),\ y-x\rangle+\sum_{i}\left(1+L_{G}\right)\left(\|x\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\|x-y\|^{1+\alpha_{Gi}}. \tag{5}\]

_In addition, from Assumption 7, for any \(x\in\mathbb{R}^{d}\),_

\[\|\nabla U(x)\|\leq L_{G}\left(1+\|x\|^{\ell_{G}}\right)\sum_{i=1}^{N}\|x\|^{\alpha_{Gi}}\leq 2NL_{G}\left(1+\|x\|^{\ell_{G}+\alpha_{GN}}\right).\]

Proof: See Appendix A2. 

A similar property follows from Assumption 2 for all \(x,\ y\in\mathbb{R}^{d}\).

Lemma 2: _If the potential \(U:\mathbb{R}^{d}\to\mathbb{R}\) is \(\alpha_{H}\)-mixture locally Hessian smooth for some \(\ell_{H}\geq 0\), \(0\leq\alpha_{H}=\alpha_{H1}\leq...\leq\alpha_{HN}\leq 1\) and \(0<L_{Hi}\leq L_{H}<\infty\), \(i=1,..,N\), then:_

\[\|\nabla U(y)-\nabla U(x)\|\leq\left\|\nabla^{2}U(x)\right\|_{\text{op}}\|x-y\|+\sum_{i}\left(1+L_{H}\right)\left(\|x\|^{\ell_{H}}+\|y\|^{\ell_{H}}\right)\|x-y\|^{1+\alpha_{Hi}}. \tag{6}\]

_In addition, from Assumption 7, for any \(x\in\mathbb{R}^{d}\), with \(C_{H}=\left\|\nabla^{2}U(0)\right\|_{\text{op}}\lor 2\sum_{i=1}^{N}L_{Hi}\):_

\[\left\|\nabla^{2}U(x)\right\|_{\text{op}}\leq\left\|\nabla^{2}U(0)\right\|_{\text{op}}+\left(1+\|x\|^{\ell_{H}}\right)\sum_{i=1}^{N}L_{Hi}\left\|x\right\|^{\alpha_{Hi}}\leq C_{H}\left(1+\|x\|^{\ell_{H}+\alpha_{HN}}\right).\]

Proof: See Appendix A2.
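Both conclusions of Lemma 1 can be checked numerically for the same toy potential \(U(x)=\|x\|^{3/2}+\|x\|^{2}\) (an illustrative sketch; the constants \(L_{G}=3\), \(N=2\), \(\ell_{G}=0\) are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def U(x):
    r = np.linalg.norm(x)
    return r ** 1.5 + r ** 2

def grad_U(x):
    r = np.linalg.norm(x)
    return 1.5 * r ** (-0.5) * x + 2.0 * x

# Illustrative constants for the toy U with ell_G = 0, N = 2,
# alphas = (1/2, 1), L_G = 3 (our choice, not the paper's).
L_G, N, alphas = 3.0, 2, (0.5, 1.0)

ok_taylor, ok_growth = True, True
for _ in range(2000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    dist = np.linalg.norm(x - y)
    remainder = U(y) - U(x) - grad_U(x) @ (y - x)
    bound = sum((1 + L_G) * 2 * dist ** (1 + a) for a in alphas)  # (5), ell_G = 0
    ok_taylor &= remainder <= bound + 1e-9
    g = np.linalg.norm(grad_U(x))
    ok_growth &= g <= 2 * N * L_G * (1 + np.linalg.norm(x))       # gradient growth
print(ok_taylor, ok_growth)
```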
## 3 Convergence under Poincare inequality

### Main result: convergence under a Poincare inequality, \(\beta\)-dissipativity and \(\alpha\)-mixture local smoothness

We first review the Langevin dynamics in continuous time under the Poincare inequality before examining the KL divergence in discrete time along the Unadjusted Langevin Algorithm (ULA). The Langevin dynamics for the target distribution \(\nu\propto e^{-U}\) is a continuous-time stochastic process \((X_{t})_{t\geq 0}\) in \(\mathbb{R}^{d}\) that proceeds as follows: \[dX_{t}=-\nabla U(X_{t})\,dt+\sqrt{2}\,dW_{t} \tag{7}\] where \((W_{t})_{t\geq 0}\) is the standard Brownian motion in \(\mathbb{R}^{d}\). If \((X_{t})_{t\geq 0}\) is updated by the Langevin dynamics (7), then its probability density function \((p_{t})_{t\geq 0}\) fulfills the Fokker-Planck equation: \[\frac{\partial p_{t}}{\partial t}\,=\,\nabla\cdot\left(p_{t}\nabla U\right)+\Delta p_{t}\,=\,\nabla\cdot\left(p_{t}\nabla\log\frac{p_{t}}{\nu}\right). \tag{8}\] As a distribution evolves along the Langevin dynamics, it gets nearer to the target distribution \(\nu\). Along the Langevin dynamics (7) (or, correspondingly, the Fokker-Planck equation (8)), we have \[\frac{d}{dt}(\chi^{2}(p_{t}|\nu))=-E_{\nu}\left\|\nabla\frac{p_{t}}{\nu}\right\|^{2}, \tag{9}\] where \(\chi^{2}(p|\nu)\stackrel{{\triangle}}{{=}}\int_{\mathbb{R}^{d}}\left(\frac{p(x)}{\nu(x)}\right)^{2}\nu(x)dx-1\). The \(\chi^{2}\) divergence with respect to \(\nu\) is decreasing along the Langevin dynamics since \(E_{\nu}\left\|\nabla\frac{p_{t}}{\nu}\right\|^{2}\geq 0\). In fact, when \(\nu\) satisfies a Poincare inequality (PI), the \(\chi^{2}\) divergence converges exponentially fast along the Langevin dynamics. PI is retained under bounded perturbation (Holley and Stroock, 1986), Lipschitz mappings and tensorization, among others, and we will consider potentials satisfying PI in this section.
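As a concrete illustration of the dynamics (7) and its Euler discretization, the sketch below runs ULA on the quadratic potential \(U(x)=\frac{1}{2}\|x\|^{2}\) (standard Gaussian target); the potential, step size and chain counts are illustrative choices, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Illustrative potential U(x) = ||x||^2 / 2, i.e. a standard
    # Gaussian target nu ∝ exp(-U); not the paper's general setting.
    return x

d, eta, n_chains, n_steps = 2, 0.1, 4000, 400
x = np.zeros((n_chains, d))                  # x_0 ~ p_0 = delta_0
for _ in range(n_steps):
    # One ULA step: Euler discretization of dX_t = -grad U dt + sqrt(2) dW_t.
    x = x - eta * grad_U(x) + np.sqrt(2 * eta) * rng.standard_normal((n_chains, d))

# For this quadratic U the biased limit nu_eta is N(0, I/(1 - eta/2)),
# illustrating that ULA with a fixed eta stops short of nu = N(0, I).
emp_var = x.var(axis=0).mean()
print(emp_var)
```

Running many independent chains makes the asymptotic bias visible: the empirical variance settles near \(1/(1-\eta/2)\approx 1.053\) rather than 1, matching the discussion of the biased limit \(\nu_{\eta}\) that follows.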
For any fixed step size \(\eta>0\), ULA converges to a biased limiting distribution \(\nu_{\eta}\neq\nu\), which implies that \(H(p_{k}|\nu)\) does not converge to 0 along ULA, as it has an asymptotic bias \(H(\nu_{\eta}|\nu)>0\). Here, we adapt the proof technique of (Vempala and Wibisono, 2019) to analyze the convergence rate of ULA when the true target distribution \(\nu\) is \(\alpha_{G}\)-mixture locally smooth. This discretization technique has been used in many papers, including (Erdogdu and Hosseinzadeh, 2020) and (Nguyen et al., 2021), but it is non-trivial to apply it to our setting. Our proofs rest on it and on the following key observations. The first observation is a bound on the norm of the gradient raised to a power \(r\), which is rather general in the sense that \(r\) could be any non-negative real number. Let \(x_{k}\) be the interpolation of the discretized process (2) and let \(p_{k}\) denote its distribution; \(\mathbb{E}_{p_{k}}\left[\|\nabla U(x_{k})\|^{r}\right]\) can be upper bounded by the following lemma.

Lemma 3: _Suppose \(\nu\) is \(\beta\)-dissipative with \(\beta\geq 1\) and \(\alpha_{G}\)-mixture locally smooth. Starting the ULA algorithm from \(x_{0}\) with step size \(\eta>0\), we have for any \(r\in\mathbb{R}\), \(r\geq 0\):_

\[E_{p_{k}}\left[\|\nabla U(x_{k})\|^{r}\right]\leq O\left(d^{\lceil\frac{(\ell_{G}+\alpha_{GN})r}{\beta}\rceil}\right).\]

Proof: See Appendix B.2. 

A result that follows directly from the first observation is:

Lemma 4: _Suppose \(\nu\) is \(\beta\)-dissipative and \(\alpha_{G}\)-mixture locally smooth.
If \(0<\eta\leq\min\left\{1,\left(\frac{\varepsilon}{2TD}\right)^{\frac{1}{\alpha_{G}}}\right\}\), then along each step of ULA (2),_

\[\frac{d}{dt}H(p_{k,t}|\nu)\leq-\frac{3}{4}I(p_{k,t}|\nu)+\eta^{\alpha_{G}}D, \tag{10}\]

_where_

\[D=O\left(d^{\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)}\right).\]

Proof: See Appendix B.3. 

The second observation is a bound on the KL divergence along the dynamics.

**Lemma 5**: _Suppose that \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\alpha_{G}\)-mixture locally smooth. Then for any distribution \(\mu\),_

\[H(\mu|\nu)\leq C\gamma^{-\frac{1}{2}}M_{\ell_{G}+\alpha_{GN}+2}^{\frac{1}{2}}\left(\mu+\nu\right)I^{\frac{1}{2}}\left(\mu|\nu\right),\]

_where \(M_{s}(g)=\int g(x)\left(1+\left\|x\right\|^{2}\right)^{\frac{s}{2}}dx\) for any function \(g\)._

Proof: See Appendix B.4. 

Based on both observations, we are now ready to state the main result of this section.

**Theorem 1**: _Suppose \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\beta\)-dissipative with \(\beta\geq 1\) and \(\alpha_{G}\)-mixture locally smooth.
For any \(x_{0}\sim p_{0}\) with \(H(p_{0}|\nu)=C_{0}<\infty\), consider the iterates \(x_{k}\sim p_{k}\) of ULA with a sufficiently small step size \(\eta\) satisfying_

\[\eta=\min\left\{1,\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{G}}}\right\}.\]

_The ULA iterates reach \(\epsilon\)-accuracy of the target \(\nu\) in KL divergence after_

\[K=O\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\rceil(\ell_{G}+\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)}\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{(\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right)\]

_steps. If we choose \(\beta\geq 2\alpha_{GN}\) and \(\ell_{G}=0\), then \(K\approx\tilde{O}\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil 2\alpha_{GN}\rceil}{2}}}{\epsilon^{\frac{\alpha_{GN}^{2}+2\alpha_{GN}+2}{\alpha_{G}}}}\right).\)_

Proof: See Appendix B.6. 

If we initialize with a Gaussian distribution \(p_{0}=N(0,\frac{1}{L}I)\), we have the following lemma.

**Lemma 6**: _Suppose \(\nu=e^{-U}\) is \(\alpha_{G}\)-mixture locally smooth. Let \(p_{0}=N(0,\frac{1}{L}I)\). Then \(H(p_{0}|\nu)=O\left(d^{\frac{\ell_{G}+\alpha_{GN}+1}{2}}\right).\)_

Proof: See Appendix F.1.
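The step-size rule of Theorem 1 is simple to evaluate in code; the sketch below (the numeric values of \(\epsilon\), \(T\) and \(D\) are placeholders, not quantities computed from the paper's proofs) illustrates how \(\eta\), and hence the induced horizon \(K=\lceil T/\eta\rceil\), degrades as \(\alpha_{G}\) decreases.

```python
import math

def ula_step_size(eps, T, D, alpha_G):
    # Step-size rule of Theorem 1: eta = min{1, (eps/(2 T D))^(1/alpha_G)};
    # K = ceil(T / eta) steps are then needed to cover the time horizon T.
    eta = min(1.0, (eps / (2 * T * D)) ** (1.0 / alpha_G))
    return eta, math.ceil(T / eta)

# Placeholder constants (illustrative only).
eta, K = ula_step_size(0.01, 50.0, 10.0, 1.0)
eta_half, K_half = ula_step_size(0.01, 50.0, 10.0, 0.5)
print(eta, K, eta_half, K_half)
```

Halving \(\alpha_{G}\) squares the (already small) step size, which is the mechanism behind the \(1/\alpha_{G}\) factors in the exponents of \(K\).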
Therefore, Theorem 1 states that to achieve \(H(p_{k}|\nu)\leq\epsilon\), ULA has computational complexity

\[\tilde{O}\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\rceil(\ell_{G}+\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)}}{\epsilon^{(\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right).\]

By Pinsker's inequality, we have \(TV\left(p_{k},\nu\right)\leq\sqrt{\frac{H(p_{k}|\nu)}{2}}\), which implies that to get \(TV\left(p_{k},\nu\right)\leq\epsilon\) it is enough to obtain \(H(p_{k}|\nu)\leq 2\epsilon^{2}\). This bound indicates that the number of iterations to reach \(\epsilon\)-accuracy in total variation is

\[\tilde{O}\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\rceil(\ell_{G}+\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)}}{\epsilon^{2\left((\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}\right)}}\right).\]

On the other hand, from Lemma F.4 we know that \(\int e^{\frac{a}{4\beta}\|x\|^{\beta}}\nu(x)dx\leq e^{d+\tilde{\varepsilon}}<\infty\).
By (Bolley and Villani, 2005)'s Corollary 2.3, we can bound the Wasserstein distance by \[W_{\beta}(p_{k},\ \nu)\leq 2\left[\frac{a}{4\beta}\left(1.5+\tilde{d}+\tilde{\varepsilon}\right)\right]^{\frac{1}{\beta}}\left(H(p_{k}|\nu)^{\frac{1}{\beta}}+H(p_{k}|\nu)^{\frac{1}{2\beta}}\right).\] To have \(W_{\beta}(p_{K},\ \nu)\leq\varepsilon\), it is sufficient to choose \(H(p_{k}|\nu)^{\frac{1}{2\beta}}=\tilde{O}\left(\varepsilon d^{\frac{-1}{\beta}}\right)\), which in turn implies \(H(p_{k}|\nu)=\tilde{O}\left(\varepsilon^{2\beta}d^{-2}\right).\) By replacing this in the bound above, we obtain that the number of iterations for the \(L_{\beta}\)-Wasserstein distance is

\[\tilde{O}\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\rceil(\ell_{G}+\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil\frac{4\ell_{G}}{\beta}\rceil}{2}+\left(\frac{\lceil\frac{4(\ell_{G}+\alpha_{GN})\alpha_{GN}}{\beta}\rceil}{2}\vee\frac{\lceil 2\alpha_{GN}\rceil}{2}\right)+2\left((\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}\right)}}{\varepsilon^{2\beta\left((\ell_{G}+\alpha_{GN}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}\right)}}\right).\]

**Theorem 2**: _Suppose \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\beta\)-dissipative and \(\alpha_{H}\)-mixture locally Hessian smooth. For any \(x_{0}\sim p_{0}\) with \(H(p_{0}|\nu)=C_{0}<\infty\), consider the iterates \(x_{k}\sim p_{k}\) of ULA with a sufficiently small step size \(\eta=\min\left\{1,\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{H}+1}}\right\}\), where \(D=O\left(d^{\frac{\lceil\frac{4(\ell_{H}+\alpha_{HN}+3)}{\beta}\rceil}{2}+\frac{\lceil\frac{(\ell_{H}+2\alpha_{HN}+1)4(1+\alpha_{HN})}{\beta}\rceil}{2}}\right)\).
The ULA iterates reach \(\epsilon\)-accuracy of the target \(\nu\) in KL divergence after_

\[K=O\left(\frac{\gamma^{2}d^{2\lceil\frac{\ell_{H}+\alpha_{HN}+3}{\beta}\rceil(\ell_{H}+\alpha_{HN}+3)+\frac{\lceil\frac{4(\ell_{H}+\alpha_{HN})}{\beta}\rceil}{2}+\frac{\lceil\frac{(\ell_{H}+\alpha_{HN}+1)(4\alpha_{HN}+4)}{\beta}\rceil}{2}}\ln^{2}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{2(\ell_{H}+\alpha_{HN}+2)+1}}\right)\]

_steps. If \(\ell_{H}=0\), then \(K\approx\tilde{O}\left(\frac{\gamma^{2}d^{2\lceil\frac{\alpha_{HN}+3}{\beta}\rceil(\alpha_{HN}+3)+\frac{\lceil\frac{4\alpha_{HN}}{\beta}\rceil}{2}+\frac{\lceil\frac{(\alpha_{HN}+1)(4\alpha_{HN}+4)}{\beta}\rceil}{2}}}{\epsilon^{2\alpha_{HN}+5}}\right).\)_

Proof: See Appendix B.6. 

### Convergence under \(\beta\)-dissipativity, Lipschitz gradients and \(\alpha_{H}\)-mixture local Hessian smoothness

Although our main results were obtained under the smoothness assumption of Lipschitz gradients of the potential, prior analyses of Langevin algorithms (Mou et al., 2022; Balasubramanian et al., 2022) suggest that the convergence rate improves with additional assumptions on Hessian smoothness.

Lemma 7: _Suppose \(\nu\) is \(\alpha_{G}\)-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth. Then the following bound holds for the discretization error:_

\[\mathbb{E}\left[\left\|\nabla U(x_{k,t})-\mathbb{E}\left[\nabla U(x_{k})|x_{k,t}\right]\right\|^{2}\right]\leq 24L_{G}^{2}\eta^{2}I\left(p_{k,t}|\nu\right)+12d\eta^{2}L_{G}^{3}+O\left(d^{\frac{\lceil\frac{4\ell_{H}}{\beta}\rceil+\lceil\frac{4\alpha_{HN}+4}{\beta}\rceil}{2}}\right)\eta^{\alpha_{H}+1}.\]

Proof: See Appendix B.2. 

Lemma 8: _Suppose that \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\alpha_{G}\)-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth. Then for any distribution \(\mu\),_

\[H(\mu|\nu)\leq\left(\sqrt{2}+2L_{G}\sqrt{\frac{1}{\gamma}}\right)M_{4}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{I\left(\mu|\nu\right)}.\]

Proof: See Appendix B.4.
A result that follows directly from the second observation is:

**Lemma 9**: _Suppose \(\nu\) is \(\beta\)-dissipative and \(\alpha_{H}\)-mixture locally Hessian smooth. If \(0<\eta\leq\min\left\{1,\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{H}+1}}\right\}\), then along each step of ULA (2),_

\[\frac{d}{dt}H(p_{k,t}|\nu)\leq-\frac{1}{2}I(p_{k,t}|\nu)+\eta^{\alpha_{H}+1}D, \tag{12}\]

_where_

\[D=O\left(d^{\frac{\lceil\frac{4\ell_{H}}{\beta}\rceil+\lceil\frac{4\alpha_{HN}+4}{\beta}\rceil}{2}}\right).\]

Proof: See Appendix B.3. 

Based on both observations, we are now ready to state the main result of this section.

**Theorem 3**: _Suppose \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\beta\)-dissipative and \(\alpha_{H}\)-mixture locally Hessian smooth. For any \(x_{0}\sim p_{0}\) with \(H(p_{0}|\nu)=C_{0}<\infty\), consider the iterates \(x_{k}\sim p_{k}\) of ULA with a sufficiently small step size \(\eta\) satisfying_

\[\eta=\min\left\{1,\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{H}+1}}\right\},\]

_where \(D=O\left(d^{\frac{\lceil\frac{4\ell_{H}}{\beta}\rceil+\lceil\frac{4\alpha_{HN}+4}{\beta}\rceil}{2}}\right)\)._

_The ULA iterates reach \(\epsilon\)-accuracy of the target \(\nu\) in KL divergence after_

\[K=O\left(\frac{d^{\frac{\lceil\frac{4\ell_{H}}{\beta}\rceil+\lceil\frac{4\alpha_{HN}+4}{\beta}\rceil}{2(\alpha_{H}+1)}+\lceil\frac{4}{\beta}\rceil\left(1+\frac{1}{\alpha_{H}+1}\right)}\ln^{\left(1+\frac{1}{\alpha_{H}+1}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(1+\frac{2}{\alpha_{H}+1}\right)}}\right)\]

_steps. If \(\ell_{H}=0\) and \(\alpha_{H}=1\), then \(K\approx\tilde{O}\left(\frac{d^{\frac{\lceil\frac{8}{\beta}\rceil}{4}+\frac{3}{2}\lceil\frac{4}{\beta}\rceil}}{\epsilon^{2}}\right).\)_

Proof: See Appendix B.6. 

### Sampling via smoothing potential

In this case, \(p\)-generalized Gaussian smoothing is used to compensate for the weakly smooth behavior of some distributions in the mixture (Nguyen et al., 2021).
Specifically, for some \(\mu\geq 0\), they consider \[U_{\mu}(y):=\mathrm{E}_{\xi}[U(y+\mu\xi)]=\frac{1}{\kappa}\int_{\mathbb{R}^{d}}U(y+\mu\xi)e^{-\|\xi\|_{p}^{p}/p}\mathrm{d}\xi,\] where \(\kappa\stackrel{{ def}}{{=}}\int_{\mathbb{R}^{d}}e^{-\|\xi\|_{p}^{p}/p}\mathrm{d}\xi=\frac{2^{d}\Gamma^{d}(\frac{1}{p})}{p^{d-\frac{d}{p}}}\) and \(\xi\sim N_{p}(0,I_{d\times d})\) (the \(p\)-generalized Gaussian distribution). The \(p\)-generalized Gaussian smoothing is used instead of the original potential because \(U_{\mu}\) is smooth whereas \(U\) is not. Due to its ability to provide normal distributions when \(p=2\), Laplace distributions when \(p=1\), tails heavier or lighter than normal, and even continuous uniform distributions in the limit, this distribution family is preferred over Gaussian smoothing. More significantly, it can be proved that the smoothing potential \(U_{\mu}(x)\) is actually smooth of any order. This nice property is novel and useful in the sampling process, especially when the potential exhibits some sort of weakly smooth behavior and we want to improve the order of smoothness. Here, we extend (Nguyen et al., 2021)'s \(p\)-generalized Gaussian smoothing by considering \(p\in\mathbb{R}\), \(p>1\), and derive some primary features of \(U_{\mu}\) by adapting results of (Nesterov and Spokoiny, 2017).
Lemma 10: _If the potential \(U:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is \(\alpha\)-mixture weakly smooth for some \(0<\alpha=\alpha_{1}\leq...\leq\alpha_{N}\leq 1\) and \(0<L_{i}<\infty\), \(i=1,..,N\), and \(L=1\vee\max\left\{L_{i}\right\}\), then:_

_(i) \(\forall x\in\mathbb{R}^{d}\): \(\left|U_{\mu}(x)-U(x)\right|\leq\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}}\),_

_(ii) \(\forall x\in\mathbb{R}^{d}\): \(\left\|\nabla U_{\mu}(x)-\nabla U(x)\right\|\leq\begin{cases}\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}}&1\leq p\leq 2,\\ \frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{5}{p}}&p>2,\end{cases}\)_

_(iii) \(\forall x,\ y\in\mathbb{R}^{d}\): \(\left\|\nabla U_{\mu}(y)-\nabla U_{\mu}(x)\right\|\leq\begin{cases}\frac{NL}{\mu^{1-\alpha}}d^{\frac{2}{p}}\left\|y-x\right\|&1\leq p\leq 2,\\ \frac{NL}{\mu^{1-\alpha}}d^{2}\left\|y-x\right\|&p>2,\end{cases}\)_

_(iv) \(\forall x,\ y\in\mathbb{R}^{d}\): for \(p>2\), \(\left\|\nabla^{2}U_{\mu}(y)-\nabla^{2}U_{\mu}(x)\right\|_{\text{op}}\leq\frac{NL}{\mu^{2-\alpha}}d^{4-\frac{2}{p}}\left\|y-x\right\|\); if \(p=2\), \(\left\|\nabla^{2}U_{\mu}(y)-\nabla^{2}U_{\mu}(x)\right\|_{\text{op}}\leq\frac{2NL}{\mu^{2-\alpha}}d^{2}\left\|y-x\right\|\)._

Proof: Due to space limitations, we provide the proof in the Supplement. 

Based on a result of (Nguyen et al., 2021), we study the convergence of the discrete-time process for the smoothing potential, which has the following form: \[U_{\mu}(x):=\mathbb{E}_{\xi}[U(x+\mu\xi)]. \tag{13}\] Keep in mind that \(U(\cdot)\) is \(\alpha_{G}\)-mixture locally smooth with \(\ell_{G}=0\) but \(U_{\mu}(x)\) is smooth. In terms of the smoothing potential \(U_{\mu}\), ULA can be specified as: \[x_{k+1}=x_{k}-\eta_{k}\nabla U_{\mu}(x_{k})+\sqrt{2\eta_{k}}\varsigma_{k}, \tag{14}\] where \(\varsigma_{k}\sim N(0,\ I_{d\times d})\) are independent Gaussian random vectors.
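The smoothing (13) can be sketched in code: a coordinate of \(\xi\sim N_{p}(0,I_{d\times d})\), with density proportional to \(e^{-|t|^{p}/p}\), can be drawn as \(s\,(pG)^{1/p}\) with \(s\) a random sign and \(G\sim\mathrm{Gamma}(1/p,1)\), a standard change-of-variables fact. The quadratic test potential \(U(x)=\frac{1}{2}\|x\|^{2}\), for which \(\nabla U_{\mu}(x)=x\) and \(E[\xi_{i}^{2}]=p^{2/p}\Gamma(3/p)/\Gamma(1/p)\), is an illustrative choice, not the paper's general setting.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def sample_Np(p, size):
    # xi_i = sign * (p * G)^(1/p) with G ~ Gamma(1/p, 1) has density
    # proportional to exp(-|xi_i|^p / p), the p-generalized Gaussian.
    g = rng.gamma(1.0 / p, 1.0, size)
    s = rng.choice([-1.0, 1.0], size)
    return s * (p * g) ** (1.0 / p)

def grad_U(x):
    return x  # illustrative U(x) = ||x||^2 / 2

p, mu, d, n = 1.5, 0.3, 4, 200_000
x = np.ones(d)
xi = sample_Np(p, (n, d))
g_mu = grad_U(x + mu * xi)          # stochastic gradient estimate of grad U_mu

# Unbiasedness: E[g_mu(x, xi)] = grad U_mu(x) = x for this quadratic U.
err = np.abs(g_mu.mean(axis=0) - x).max()

# Per-coordinate variance of xi matches p^(2/p) * Gamma(3/p) / Gamma(1/p)
# (equal to 1 when p = 2, recovering the standard Gaussian).
var_theory = p ** (2 / p) * math.gamma(3 / p) / math.gamma(1 / p)
print(err, xi.var(), var_theory)
```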
From (Nguyen et al., 2021)'s Lemma 3.4, \(W_{2}^{2}(\nu,\ \nu_{\mu})\leq 8.24NL\mu^{1+\alpha}d^{\frac{2}{p}}E_{2}\) for any \(\mu\leq\left(\frac{0.05}{NLd^{\frac{2}{p}}}\right)^{\frac{1}{1+\alpha}}\), where \(E_{2}=\int\left\|x\right\|^{2}\nu(x)dx<\infty\) and \(L=1\vee\max\left\{L_{i}\right\}\). Moreover, since the Poincare inequality is preserved under bounded perturbations, \(U_{\mu}\) also satisfies a Poincare inequality. As a result, we obtain the following lemma.

Lemma 11: _Suppose that \(\nu\) satisfies a \(\gamma\)-Poincare inequality and is \(\alpha_{G}\)-mixture weakly smooth. Then for any distribution \(p\),_

\[H\left(p|\pi_{\mu}\right)\leq\left(\sqrt{2}+2\frac{NL\mu^{1+\alpha_{G}}}{(1+\alpha)}d^{\frac{2}{p}}\sqrt{2}\sqrt{\frac{1}{\gamma_{1}}}\right)M_{4}^{\frac{1}{2}}\left(p+\nu_{\mu}\right)\sqrt{I},\]

_where \(\gamma_{1}=\gamma e^{-\frac{4NL\mu^{1+\alpha}}{1+\alpha}d^{\frac{2}{p}}}\)._

Proof: See Appendix D.1. In addition, we observe that \(U_{\mu}\) is \(\beta\)-dissipative with constants \(\left(\frac{a}{2},\ b+\frac{L}{2}\mu^{\alpha_{G}}d^{\frac{5}{2}\vee\frac{3}{\beta}}\left(\frac{L\mu^{\alpha_{G}}d^{\frac{5}{2}\vee\frac{3}{\beta}}}{a}\right)^{\frac{1}{\beta-1}}\right)\). With all of these properties, Theorem 1 is applicable to sampling from \(U_{\mu}\). However, in general, we do not have access to \(\nabla U_{\mu}(x)\), but only to an unbiased estimate of it:

\[g_{\mu}(x,\xi)=\nabla U(x+\mu\xi), \tag{15}\]

where \(\xi\sim N_{p}(0,I_{d})\). (Nguyen et al., 2021)'s Lemma 3.3 states that the variance of this estimate can be bounded.
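The estimator (15) only requires gradient evaluations of the original potential at randomized points; a sketch (names ours) is below. For a quadratic potential the estimator is exactly \(x+\mu\xi\), so its mean is \(x\) and its variance scales like \(\mu^{2}\), which makes both unbiasedness and boundedness of the variance easy to check by simulation.

```python
import random

def sample_pgauss_coord(p, rng):
    # Coordinate with density proportional to exp(-|t|^p / p):
    # |T|^p / p is Gamma(1/p, 1) distributed, with a random sign attached.
    g = rng.gammavariate(1.0 / p, 1.0)
    t = (p * g) ** (1.0 / p)
    return t if rng.random() < 0.5 else -t

def g_mu(grad_U, x, mu, p, rng):
    """One-sample unbiased estimate of grad U_mu(x), eq. (15):
    evaluate grad U at the randomized point x + mu * xi."""
    xi = [sample_pgauss_coord(p, rng) for _ in x]
    return grad_U([xj + mu * xij for xj, xij in zip(x, xi)])
```

Averaging many draws of `g_mu` recovers \(\nabla U_{\mu}(x)\); for \(U(x)=\|x\|^{2}/2\) and \(p=2\) the empirical variance is \(\mu^{2}d\), well within a \(4\mu^{2}d\)-type bound (here \(N=L=\alpha_{G}=1\), an assumption of this sketch).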
Lemma 12: _For any \(x_{k}\in\mathbb{R}^{d}\), \(g_{\mu}\left(x_{k},\xi\right)\) is an unbiased estimator of \(\nabla U_{\mu}\) such that_

\[\mathrm{Var}\left[g_{\mu}\left(x_{k},\xi\right)\right]\leq 4N^{2}L^{2}\mu^{2\alpha_{G}}d^{\frac{2\alpha_{G}}{p}}.\]

Let \(x_{\mu,k}\) be the interpolation of the discretized process (14) and let \(p_{\mu,k}\) denote its distribution; \(\mathbb{E}_{p_{k}}\left[\|\nabla U(x_{k})\|^{2\alpha_{N}}\right]\) can similarly be upper bounded. With a stochastic approximation of the gradient of the smoothed potential, we have the following bound.

Lemma 13: _Suppose \(\pi\) is \(\beta\)-dissipative and \(\alpha_{G}\)-mixture weakly smooth. If \(0<\eta\leq\min\left\{1,\left(\frac{\epsilon}{2TD_{\mu}}\right)^{\frac{1}{\alpha_{G}}}\right\}\), then along each step of ULA (14),_

\[\frac{d}{dt}H(p_{\mu,k,t}|\nu_{\mu})\leq-\frac{3}{4}I(p_{\mu,k,t}|\nu_{\mu})+\eta^{\alpha_{G}}D_{\mu}, \tag{16}\]

_where \(D_{\mu}=O\left(d^{\lceil\frac{2\alpha_{GN}^{2}}{\beta}\rceil}\right).\)_

Proof: See Appendix D.3. Another result is stated in the subsequent theorem.

Theorem 4: _Suppose \(\pi\) satisfies a \(\gamma\)-Poincare inequality and is \(\beta\)-dissipative and \(\alpha_{G}\)-mixture weakly smooth. For any \(x_{0}\sim p_{0}\) with \(H(p_{0}|\pi)=C_{0}<\infty\), consider the iterates \(x_{k}\sim p_{k}\) of ULA with step size \(\eta\) sufficiently small, satisfying_

\[\eta=\min\left\{1,\left(\frac{\epsilon}{2TD_{\mu}}\right)^{\frac{1}{\alpha_{G}}}\right\},\]

_where \(D_{\mu}\) is defined as above.
For any even integer \(k>4\), the ULA iterates reach \(\epsilon\)-accuracy of the target \(\nu\) in_

\[K=O\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{2\alpha_{GN}^{2}}{\beta}\rceil\frac{1}{\alpha_{G}}+\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)}\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(\alpha_{GN}+1\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right)\]

_steps. If we choose \(\eta\) small enough, then for any \(\epsilon>0\), to achieve \(W_{\beta}(p_{K},\nu)<\epsilon\), it suffices to run ULA with step size_

\[\eta=\min\left\{1,\left(\frac{\epsilon}{2TD_{\mu}}\right)^{\frac{1}{\alpha_{G}}},\left(\frac{\epsilon}{9\sqrt{NLE_{2}}d^{\frac{1}{p}}}\right)^{\frac{2}{\alpha_{G}}}\right\},\]

_for_

\[K\approx\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(\lceil\frac{2\alpha_{GN}^{2}}{\beta}\rceil\frac{1}{\alpha_{G}}+\lceil\frac{\alpha_{GN}+2}{\beta}\rceil(\alpha_{GN}+2)\left(1+\frac{1}{\alpha_{G}}\right)\right)+2+\frac{4}{\alpha_{G}}}}{\gamma_{1}^{\left(1+\frac{1}{\alpha_{G}}\right)}\epsilon^{\left(\alpha_{GN}+1\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right)\]

_iterations._

Proof: See Appendix D.4.

## 4 Extended result: ULA convergence for potentials that are non-strongly convex outside a ball, \(\alpha\)-mixture weakly smooth and \(\beta\)-dissipative

Since Poincare inequalities are preserved under bounded perturbations by (Holley and Stroock, 1986)'s theorem, we obtain our extended results through convexification of the non-convex domain, an approach originally proposed in (Yan, 2012) and later developed and applied to potentials that are strongly convex outside a compact set by (Ma et al., 2019).
Adapting techniques from (Ma et al., 2019) to non-strongly convex and \(\alpha_{G}\)-mixture weakly smooth potentials, (Nguyen et al., 2021) derived a tighter bound for the difference between the constructed convex potential and the original one. Using this result, we obtain the following lemma.

Lemma 14: _Suppose \(\nu\) is non-strongly convex outside the ball of radius \(R\), \(\alpha_{G}\)-mixture weakly smooth and \(\beta\)-dissipative. Then there exists a convex \(\tilde{U}\in C^{1}(\mathbb{R}^{d})\) whose Hessian exists everywhere on \(\mathbb{R}^{d}\), such that_

\[\sup_{x}\left(\tilde{U}(x)-U(x)\right)-\inf_{x}\left(\tilde{U}(x)-U(x)\right)\leq\sum_{i}L_{i}R^{1+\alpha_{Gi}}. \tag{17}\]

Proof: This follows directly from (Nguyen et al., 2021), Lemma 4.2. Based on it, we get the following result.

Theorem 5: _Suppose \(\nu\) is non-strongly convex outside the ball \(\mathbb{B}(0,R)\), \(\beta\)-dissipative and \(\alpha_{G}\)-mixture weakly smooth. For any \(x_{0}\sim p_{0}\) with \(H(p_{0}|\nu)=C_{0}<\infty\), consider the iterates \(x_{k}\sim p_{k}\) of ULA with step size \(\eta\) sufficiently small, satisfying_

\[\eta=\min\left\{1,\left(\frac{\epsilon}{2TD_{\mu}}\right)^{\frac{1}{\alpha_{G}}}\right\},\]

_where \(D_{\mu}\) is defined as above. The ULA iterates reach \(\varepsilon\)-accuracy of the target \(\nu\) after_

\[K\approx\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{Gi}}\right)}\right)^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\alpha_{GN}+2}{\beta}\rceil\left(\alpha_{GN}+2\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil 2\alpha_{GN}\rceil}{2}}}{\varepsilon^{\frac{\alpha_{GN}^{2}+2\alpha_{GN}+2}{\alpha_{G}}}}\right)\]

_steps, where \(C_{K}\) is a universal constant.
If we choose \(\eta\) small enough, then for any \(\varepsilon>0\), to achieve \(H(p_{k}|\nu)<\varepsilon\), it suffices to run ULA with step size_

\[\eta=\min\left\{1,\left(\frac{\varepsilon}{2TD_{\mu}}\right)^{\frac{1}{\alpha_{G}}}\right\},\]

_for_

\[K\approx\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{Gi}}\right)}\right)^{1+\frac{1}{\alpha_{G}}}d^{\lceil\frac{\alpha_{GN}+2}{\beta}\rceil\left(\alpha_{GN}+2\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\lceil 2\alpha_{GN}\rceil}{2}}}{\varepsilon^{\frac{\alpha_{GN}^{2}+2\alpha_{GN}+2}{\alpha_{G}}}}\right)\]

_iterations._

Proof: See Appendix E.1.

## 5 Applications

In this section we apply the results of Sections 3 and 4 to a few illustrative potential functions. To the best of our knowledge, these results cannot be obtained from any of the previous works.

Example 1: (\(\alpha_{G}\)-mixture locally smooth potential with lighter tails). Let us analyze the potential function \(U(x)=\sum_{i}^{N}L_{i}\left\|x\right\|^{\alpha_{i}}\) for \(2<\alpha\leq\alpha_{i}\leq 3\) and \(L_{i}>0\). Since \(\nabla U(x)=\sum_{i}L_{i}\alpha_{i}x\left\|x\right\|^{\alpha_{i}-2}\), by the triangle inequality

\[\begin{split}\left\|\nabla U(x)-\nabla U(y)\right\|&\leq\sum_{i}L_{i}\alpha_{i}\left\|x\left\|x\right\|^{\alpha_{i}-2}-y\left\|y\right\|^{\alpha_{i}-2}\right\|\\&\leq 8\sum_{i}L_{i}\alpha_{i}\left\|x-y\right\|^{\frac{\alpha_{i}-1}{3}}\left(1+\left\|x\right\|^{\frac{2\left(\alpha_{N}-1\right)}{3}}+\left\|y\right\|^{\frac{2\left(\alpha_{N}-1\right)}{3}}\right),\end{split}\]

where the second inequality is the result of Lemma 27 below. This indicates that the potential \(U(x)\) is \(\left(\frac{\alpha_{i}-1}{3}\right)\)-mixture locally smooth. In addition, we have

\[\left\langle\nabla U(x),\;x\right\rangle=\left\langle\sum_{i}L_{i}\alpha_{i}x\left\|x\right\|^{\alpha_{i}-2},x\right\rangle\geq a\left\|x\right\|^{\alpha_{N}},\]

which implies that \(U(x)\) is \(\alpha_{N}\)-dissipative (with \(b=0\)).
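The claims in Example 1 are easy to probe numerically: the sketch below (names ours) implements \(U(x)=\sum_{i}L_{i}\|x\|^{\alpha_{i}}\) and its gradient, verifies the gradient by finite differences, and checks the dissipativity inequality \(\langle\nabla U(x),x\rangle\geq a\|x\|^{\alpha_{N}}\) with \(a=L_{N}\alpha_{N}\) (our choice of constants is illustrative).

```python
import math
import random

LS = [0.5, 1.0]        # L_i > 0 (illustrative values)
ALPHAS = [2.5, 3.0]    # alpha_i in (2, 3]

def U(x):
    r = math.sqrt(sum(t * t for t in x))
    return sum(L * r ** a for L, a in zip(LS, ALPHAS))

def grad_U(x):
    # grad U(x) = sum_i L_i * alpha_i * x * ||x||^(alpha_i - 2)
    r = math.sqrt(sum(t * t for t in x))
    coef = sum(L * a * r ** (a - 2) for L, a in zip(LS, ALPHAS))
    return [coef * t for t in x]
```

Since \(\langle\nabla U(x),x\rangle=\sum_{i}L_{i}\alpha_{i}\|x\|^{\alpha_{i}}\), the dissipativity inequality holds for every \(x\), not just outside a ball.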
In order to apply the mixture-of-tails condition, we need a specific Poincare constant \(\gamma\) from our assumption. As a result, we can use Theorem 1 to get \(\varepsilon\)-precision in KL-divergence in \(K\approx\tilde{O}\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\frac{2\left(10\alpha_{N}+8\right)}{3}\left(1+\frac{1}{\alpha_{G}}\right)+2+4\alpha_{N}}}{\varepsilon^{\left(5\alpha_{N}+1\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right)\) steps. In general, this bound is weaker than previous single-tail-growth results, but it is applicable to a larger range of mixture distributions. If we apply Theorem 4, we can obtain \(\epsilon\)-precision in \(L_{\alpha_{N}}\)-Wasserstein distance after taking

\[K\approx\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(1+\frac{1}{\alpha}\right)+2\vee\frac{3\alpha_{N}}{p}+2+\frac{4}{\alpha}}}{\epsilon^{2\alpha_{N}\left(1+\frac{2}{\alpha}\right)}}\right).\]

Example 2: (\(\alpha_{H}\)-mixture locally Hessian smooth potential with lighter tails). Let us analyze the potential function \(U(x)=\sum_{i}^{N}L_{i}\left\|x\right\|^{\alpha_{i}}\) for \(2<\alpha\leq\alpha_{i}\leq 3\) and \(L_{i}>0\). Since \(\nabla U(x)=\sum_{i}L_{i}\alpha_{i}x\left\|x\right\|^{\alpha_{i}-2}\), we have

\[\left\|\nabla^{2}U(x)-\nabla^{2}U(y)\right\|\leq\sum_{i}L_{i}\left(\alpha_{i}-1+\left(\alpha_{i}-2\right)2^{6-\alpha_{i}}\right)\left\|x-y\right\|^{\alpha_{i}-2},\]

where the inequality is the result of Lemma 27 below. This indicates that the potential \(U(x)\) is \(\left(\frac{\alpha_{i}-2}{3}\right)\)-mixture locally Hessian smooth. In addition, we have

\[\left\langle\nabla U(x),\ x\right\rangle=\left\langle\sum_{i}L_{i}\alpha_{i}x\left\|x\right\|^{\alpha_{i}-2},x\right\rangle\geq a\left\|x\right\|^{\alpha_{N}},\]

which implies that \(U(x)\) is \(\alpha_{N}\)-dissipative (with \(b=0\)).
In order to apply the mixture-of-tails condition, we need a specific Poincare constant \(\gamma\) from our assumption. As a result, we can use Theorem 1 to get \(\epsilon\)-precision in KL-divergence in \(K\approx\tilde{O}\left(\frac{\gamma^{2}d^{\left(6\alpha_{N}+19\right)}}{\epsilon^{2\alpha_{N}+5}}\right)\) steps. In general, this bound is weaker than previous single-tail-growth results, but it is applicable to a larger range of mixture distributions. If we apply Theorem 4, we can obtain \(\epsilon\)-precision in \(L_{\alpha_{N}}\)-Wasserstein distance after taking

\[K\approx\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(1+\frac{1}{\alpha}\right)+2\vee\frac{3\alpha_{N}}{p}+2+\frac{4}{\alpha}}}{\epsilon^{2\alpha_{N}\left(1+\frac{2}{\alpha}\right)}}\right).\]

Example 3: (Mixture of smooth potentials with linear tails). We consider \(U(x)=\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{1}{1+\alpha_{i}}}\), where \(1\leq\alpha\leq\alpha_{1}\leq\ldots\leq\alpha_{N}\). Calculating its gradient, we have

\[\nabla U(x)=\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-1}x.\]

Therefore,

\[\begin{split}\left\langle\nabla U(x),\ x\right\rangle&=\left\langle\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-1}x,x\right\rangle\\&\geq\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{1+\alpha_{i}}\\&\geq\sum_{i}\left[(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{1}{1+\alpha_{i}}}-(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\right]\\&\geq\left\|x\right\|-1,\end{split}\]

which suggests that \(U(x)\) is 1-dissipative.
On the other hand, the Hessian of this potential can be calculated as

\[\begin{split}\nabla^{2}U(x)&=\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-1}I_{d}-\alpha_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}-1}\left\|x\right\|^{2\alpha_{i}-2}xx^{\mathrm{T}}\\&\quad+(\alpha_{i}-1)(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-3}xx^{\mathrm{T}}\\&=\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-3}(\left\|x\right\|^{2}I_{d}-xx^{T})+\alpha_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}-1}\left\|x\right\|^{\alpha_{i}-3}xx^{\mathrm{T}}\\&=\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-1}{1+\alpha_{i}}}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-(\alpha_{i}-1)}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-1}\left(\left\|x\right\|^{-2}(\left\|x\right\|^{2}I_{d}-xx^{T})\right)\\&\quad+\alpha_{i}\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}-1}\left\|x\right\|^{\alpha_{i}-1}\left(\left\|x\right\|^{-2}xx^{\mathrm{T}}\right).\end{split}\]

Since \(1\leq\alpha_{i}\), each component is bounded, which implies that the Hessian is bounded; therefore, the gradient is 1-Holder continuous. Additionally, the norm of the gradient is bounded:

\[\begin{split}\left\|\nabla U(x)\right\|&=\left\|\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}-1}x\right\|\\&\leq\sum_{i}(1+\left\|x\right\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\left\|x\right\|^{\alpha_{i}}\\&\leq N,\end{split}\]

which implies that the potential is 1-smooth and 1-mixture locally Hessian smooth with \(\ell_{H}=0\). Applying our Theorem 1, we achieve a convergence rate of \(K\approx\tilde{O}\left(\frac{d^{10}}{\epsilon^{2}}\right)\) in KL-divergence.
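Both claims of Example 3, the 1-dissipativity \(\langle\nabla U(x),x\rangle\geq\|x\|-1\) and the gradient bound \(\|\nabla U(x)\|\leq N\), can be checked numerically. A sketch (names and the particular exponents are our illustrative choices):

```python
import math
import random

ALPHAS = [1.0, 2.0]    # alpha_i >= 1, so here N = 2 summands

def grad_U(x):
    # grad U(x) = sum_i (1+||x||^(1+a))^(-a/(1+a)) * ||x||^(a-1) * x
    r = math.sqrt(sum(t * t for t in x))
    coef = sum((1 + r ** (1 + a)) ** (-a / (1 + a)) * r ** (a - 1) for a in ALPHAS)
    return [coef * t for t in x]
```

Each summand of \(\langle\nabla U(x),x\rangle\) equals \((1+\|x\|^{1+\alpha_{i}})^{\frac{1}{1+\alpha_{i}}}-(1+\|x\|^{1+\alpha_{i}})^{\frac{-\alpha_{i}}{1+\alpha_{i}}}\geq\|x\|-1\), which is what the assertions below exercise on random points.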
## 6 Conclusion

In this paper, we develop polynomial-in-dimension theoretical guarantees for the unadjusted Langevin Monte Carlo algorithm for a class of \(\alpha_{G}\)-mixture locally smooth potentials that satisfy a weak dissipativity inequality. In addition, we study the class of potentials that are \(\alpha_{H}\)-mixture locally Hessian smooth; the convergence results improve when a potential is both \(\alpha_{G}\)-smooth and \(\alpha_{H}\)-mixture locally Hessian smooth. By convexifying non-convex domains, we obtain results for potentials that are non-strongly convex outside a ball of radius \(R\). We establish several useful theoretical properties of \(p\)-generalized Gaussian smoothing and prove results for stochastic gradients in a very general setting. For the smoothed potential, we also provide convergence in the \(L_{\beta}\)-Wasserstein distance. The Poincare inequality can easily be weakened while the computational complexity remains polynomial in the dimension \(d\); it is rather straightforward to generalize our condition to \(\beta>0\), which is typically satisfied under a weak Poincare inequality. An interesting direction for future work would be to apply this approach to higher-order LMC or to integrate it into a derivative-free LMC algorithm.

## Appendix A Measure definitions and isoperimetry

Let \(p,\pi\) be probability distributions on \(\mathbb{R}^{d}\) with full support and smooth densities, and define the Kullback-Leibler (KL) divergence of \(p\) with respect to \(\pi\) as \[H(p|\pi)\stackrel{{\triangle}}{{=}}\int_{\mathbb{R}^{d}}p(x)\log\frac{p(x)}{\pi(x)}\,dx.
\tag{18}\]

Likewise, we denote the Renyi divergence of order \(q>1\) of a distribution \(p\) with respect to \(\pi\) as

\[R_{q}(p|\pi)=\frac{1}{q-1}\log\int_{\mathbb{R}^{d}}\frac{p(x)^{q}}{\pi(x)^{q-1}}\,dx,\]

and, letting \(\mathcal{B}(\mathbb{R}^{d})\) denote the Borel \(\sigma\)-field of \(\mathbb{R}^{d}\), define the relative Fisher information and total variation metrics respectively as

\[I(p|\pi)\stackrel{{\triangle}}{{=}}\int_{\mathbb{R}^{d}}p(x)\|\nabla\log\frac{p(x)}{\pi(x)}\|^{2}dx, \tag{19}\]

\[TV(p,\ \pi)\stackrel{{\triangle}}{{=}}\sup_{A\in\mathcal{B}(\mathbb{R}^{d})}|\int_{A}p(x)dx-\int_{A}\pi(x)dx|. \tag{20}\]

Furthermore, we define a transference plan \(\zeta\) as a distribution on \((\mathbb{R}^{d}\times\mathbb{R}^{d},\ \mathcal{B}(\mathbb{R}^{d}\times\mathbb{R}^{d}))\) (where \(\mathcal{B}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) is the Borel \(\sigma\)-field of \(\mathbb{R}^{d}\times\mathbb{R}^{d}\)) such that \(\zeta(A\times\mathbb{R}^{d})=p(A)\) and \(\zeta(\mathbb{R}^{d}\times A)=\pi(A)\) for any \(A\in\mathcal{B}(\mathbb{R}^{d})\). Let \(\Gamma(P,\,Q)\) designate the set of all such transference plans. Then for \(\beta>0\), the \(L_{\beta}\)-Wasserstein distance is formulated as:

\[W_{\beta}(p,\pi)\stackrel{{\triangle}}{{=}}\left(\inf_{\zeta\in\Gamma(P,Q)}\int_{x,y\in\mathbb{R}^{d}}\|x-y\|^{\beta}\mathrm{d}\zeta(x,\ y)\right)^{1/\beta}.
\tag{21}\]

## Appendix B Proofs under Poincare inequality

### Proof of \(\alpha_{G}\)-mixture locally-smooth property

**Lemma 15**: _If the potential \(U:\mathbb{R}^{d}\to\mathbb{R}\) satisfies \(\alpha_{G}\)-mixture local smoothness, then:_

\[U(y)\leq U(x)+\langle\nabla U(x),\ y-x\rangle+\frac{2L_{G}}{1+\alpha_{G}}\left(1+\|x\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\sum_{i}\|x-y\|^{1+\alpha_{Gi}}.\]

Proof: We have

\[\begin{split}&\left|U(x)-U(y)-\langle\nabla U(y),x-y\rangle\right|\\=&\Big{|}\int_{0}^{1}\langle\nabla U(y+t(x-y)),x-y\rangle dt-\langle\nabla U(y),x-y\rangle\Big{|}\\=&\Big{|}\int_{0}^{1}\langle\nabla U(y+t(x-y))-\nabla U(y),x-y\rangle dt\Big{|}\\\leq&\int_{0}^{1}\|\nabla U(y+t(x-y))-\nabla U(y)\|\,\|x-y\|\,dt\\\leq&\int_{0}^{1}\left(1+\|tx+(1-t)y\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\sum_{i=1}^{N}L_{Gi}t^{\alpha_{Gi}}\,\|x-y\|^{\alpha_{Gi}}\,\|x-y\|\,dt\\\leq&\sum_{i}\left(\frac{2L_{Gi}}{1+\alpha_{Gi}}\right)\left(1+\|x\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\|x-y\|^{1+\alpha_{Gi}}\\\leq&\frac{2L_{G}}{1+\alpha_{G}}\left(1+\|x\|^{\ell_{G}}+\|y\|^{\ell_{G}}\right)\sum_{i}\|x-y\|^{1+\alpha_{Gi}},\end{split}\]

where the first line comes from the Taylor expansion, the third line follows from the Cauchy-Schwarz inequality, and the fourth line is due to Assumption 1. This gives us the desired result.

**Lemma 16**: _Suppose \(\pi=e^{-U}\) is \(\alpha\)-mixture weakly smooth. Let \(p_{0}=N(0,\frac{1}{L}I)\). Then \(H(p_{0}|\pi)\leq U(0)-\frac{d}{2}\log\frac{2\pi e}{L}+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i}}{2}}=O(d)\)._

Proof: Since \(U\) is \(\alpha_{G}\)-mixture locally smooth, for all \(x\in\mathbb{R}^{d}\) we have

\[U(x)\leq U(0)+\langle\nabla U(0),x\rangle+\frac{2L_{G}}{1+\alpha_{G}}\left(1+\|x\|^{\ell_{G}}\right)\sum_{i}\|x\|^{1+\alpha_{Gi}}=U(0)+\frac{2L_{G}}{1+\alpha_{G}}\left(1+\|x\|^{\ell_{G}}\right)\sum_{i}\|x\|^{1+\alpha_{Gi}}.\]

Let \(X\sim p_{0}=N(0,\frac{1}{L}I)\).
Then

\[\begin{split}\mathbb{E}_{p_{0}}\left[U(X)\right]&\leq U(0)+\mathbb{E}_{p_{0}}\left(\frac{2L_{G}}{1+\alpha_{G}}\left(1+\|x\|^{\ell_{G}}\right)\sum_{i}\|x\|^{1+\alpha_{Gi}}\right)\\&\leq U(0)+\sum_{i}\frac{2L_{G}}{1+\alpha_{G}}\mathbb{E}_{p_{0}}\left(\|x\|^{2}\right)^{\frac{1+\alpha_{Gi}}{2}}+\sum_{i}\frac{2L_{G}}{1+\alpha_{G}}\mathbb{E}_{p_{0}}\left(\|x\|^{\ell_{G}+1+\alpha_{Gi}}\right)\\&\leq U(0)+O(d)+O\left(\left(d+\ell_{G}+1+\alpha_{GN}\right)^{\frac{\ell_{G}+1+\alpha_{GN}}{2}}\right).\end{split}\]

Recall that the entropy of \(p_{0}\) is \(H(p_{0})=-\mathbb{E}_{p_{0}}[\log p_{0}(X)]=\frac{d}{2}\log\frac{2\pi e}{L}\). Therefore, when \(\ell_{G}\) is relatively small compared to \(d\), the KL divergence is

\[\begin{split}H(p_{0}|\nu)&=\int p_{0}\left(\log p_{0}+U\right)dx\\&=-H(p_{0})+\mathbb{E}_{p_{0}}[U]\\&\leq U(0)-\frac{d}{2}\log\frac{2\pi e}{L}+O(d)+O\left(\left(d+\ell_{G}+1+\alpha_{GN}\right)^{\frac{\ell_{G}+1+\alpha_{GN}}{2}}\right)\\&=O\left(d^{\frac{\ell_{G}+1+\alpha_{GN}}{2}}\right).\end{split}\]

This is the desired result.

### Proof of Lemma 3

First, we preface the proof with a lemma.

**Lemma 17**: _Let \(M_{e,\beta}\left(p_{t}\right)=E_{p_{t}}\left[e^{c\left(1+\|x\|^{2}\right)^{\frac{\beta}{2}}}\right]\). Then_

\[M_{e,\beta}\left(p_{t}\right)\leq e^{-\frac{a^{2}}{4j}t}M_{e,\beta}\left(p_{0}\right)+\frac{4\left(d+2a+b\right)}{a}e^{\frac{4(d+2a+b)}{\beta}},\]

_and if we initialize \(X_{0}\) with a Gaussian distribution \(p_{0}=N(0,\frac{1}{L}I)\), then for any \(n>0\),_

\[E_{p_{t}}\left[\|x\|^{n\beta}\right]=\tilde{O}\left(n^{n}d^{n}\right).\]

Proof.: We first consider the case \(n=1\).
For any \(x\in\mathbb{R}^{d}\), let \(g_{\beta}\left(x\right)=e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\) with \(c=\frac{a}{j\beta}\) for some \(j\in N\), \(j\geq 2\). We have

\[\nabla g_{\beta}\left(x\right)=\frac{a}{j}e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}x,\]

\[\begin{split}\triangle g_{\beta}\left(x\right)&=\frac{a}{j}e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left(\frac{a}{j}\left(1+\left\|x\right\|^{2}\right)^{\beta-2}\left\|x\right\|^{2}+\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}d+\left(\beta-2\right)\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-2}\left\|x\right\|^{2}\right)\\&\leq\frac{a}{j}e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left(\frac{a}{j}\left(1+\left\|x\right\|^{2}\right)^{\beta-1}+d\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}\right).\end{split}\]

From these equations, we deduce:

\[\begin{split}\frac{d}{dt}M_{e,\beta}\left(p_{t}\right)&=\int p_{t}\left(x\right)\left(\triangle g_{\beta}\left(x\right)-\left\langle\nabla U\left(x\right),\nabla g_{\beta}\left(x\right)\right\rangle\right)dx\\&\leq\frac{a}{j}\int p_{t}(x)e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left[\left(\frac{a}{j}\left(1+\left\|x\right\|^{2}\right)^{\beta-1}+d\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}\right)-\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}\left\langle\nabla U\left(x\right),x\right\rangle\right]dx\\&\leq\frac{a}{j}\int p_{t}(x)e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}\left[\left(\frac{a}{j}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}+d\right)-a\left\|x\right\|^{\beta}+b\right]dx.\end{split}\]

Since \(1\geq\frac{\beta}{2}\geq 0\geq\frac{\beta}{2}-1\), for \(\left\|x\right\|\geq R=\left(4\frac{d+a+b}{a}\right)^{\frac{1}{\beta}}\), we have:

\[\frac{d}{dt}M_{e,\beta}\left(p_{t}\right) \leq\frac{a}{j}\int
p_{t}(x)e^{c\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}-1}\left(-\frac{a}{2}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}+d+a+b\right)dx\leq-\frac{a^{2}}{4j}M_{e,\beta}\left(p_{t}\right).\]

If \(\left\|x\right\|<\left(4\frac{d+a+b}{a}\right)^{\frac{1}{\beta}}\), we have \(g_{\beta}(x)\leq e^{c\left(1+4\frac{d+a+b}{a}\right)}\leq e^{\frac{4(d+2a+b)}{\beta}}\) and

\[\begin{split}\frac{d}{dt}M_{e,\beta}\left(p_{t}\right)&\leq\frac{a\left(d+a+b\right)}{j}e^{\frac{4(d+2a+b)}{\beta}}\\&\leq-\frac{a^{2}}{4j}M_{e,\beta}\left(p_{t}\right)+\frac{a^{2}}{4j}e^{\frac{4(d+2a+b)}{\beta}}+\frac{a\left(d+a+b\right)}{j}e^{\frac{4(d+2a+b)}{\beta}}\\&\leq-\frac{a^{2}}{4j}M_{e,\beta}\left(p_{t}\right)+\frac{a\left(d+2a+b\right)}{j}e^{\frac{4(d+2a+b)}{\beta}}.\end{split}\]

Combining these inequalities and using the Gronwall inequality, for any \(k\in N\) we have

\[M_{e,\beta}\left(p_{k}\right)\leq e^{-\frac{a^{2}}{4j}\eta}M_{e,\beta}\left(p_{k-1}\right)+\frac{4\left(d+2a+b\right)}{a}e^{\frac{4(d+2a+b)}{\beta}},\]

so

\[M_{e,\beta}\left(p_{k}\right)\leq e^{-\frac{a^{2}}{4j}k\eta}M_{e,\beta}\left(p_{0}\right)+\left(\frac{1}{1-e^{-\frac{a^{2}}{4j}\eta}}\right)\frac{4\left(d+2a+b\right)}{a}e^{\frac{4(d+2a+b)}{\beta}},\]

or

\[E_{p_{k}}\left[e^{\frac{a}{j\beta}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\right]\leq E_{p_{0}}\left[e^{\frac{a}{j\beta}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\right]+O\left(de^{\frac{4d}{\beta}}\right).\]

If we initialize with a Gaussian distribution \(p_{0}=N(0,\frac{1}{L}I)\), we have

\[E_{p_{0}}\left[e^{\frac{1}{\beta}\left\|x\right\|^{2}}\right]=O\left(de^{2d}\right),\]

from which, if we choose \(j\geq 2\) so that \(a\leq 4j\beta\), then

\[E_{p_{k}}\left[e^{\frac{1}{\beta}\left(1+\left\|x\right\|^{2}\right)^{\frac{\beta}{2}}}\right]=O\left(de^{2d}\right).\]

By Jensen
inequality,

\[e^{E_{p_{k}}\left[\frac{1}{\beta}\left\|x\right\|^{\beta}\right]}\leq O\left(de^{2d}\right),\]

which implies

\[E_{p_{k}}\left[\left\|x\right\|^{\beta}\right]\leq\tilde{O}(d).\]

Let \(E_{p_{k}}\left[\left(\frac{1}{\beta}\right)^{n}\left\|x\right\|^{n\beta}\right]=m\geq 0\) for some \(n\in N\), \(n>1\). By Jensen's inequality, for any \(i\geq n\), \(E_{p_{k}}\left[\left(\frac{1}{\beta}\right)^{i}\left\|x\right\|^{i\beta}\right]\geq m^{\frac{i}{n}}\geq 0\). Let \(f(m)=e^{m^{\frac{1}{n}}}-1-m^{\frac{1}{n}}-\frac{1}{2!}m^{\frac{2}{n}}-\ldots-\frac{1}{(n-1)!}m^{\frac{n-1}{n}}\); we have

\[f(m)=\sum_{i\geq n}\frac{m^{\frac{i}{n}}}{i!}\leq\sum_{i\geq n}\frac{E_{p_{k}}\left[\left(\frac{1}{\beta}\right)^{i}\left\|x\right\|^{i\beta}\right]}{i!}\leq E_{p_{k}}\left[e^{\frac{1}{\beta}\left\|x\right\|^{\beta}}\right]\leq Kde^{2d},\]

for some fixed \(K\). Differentiating \(f\) with respect to \(m\), we get \(f^{\prime}(m)=\frac{1}{n}m^{\frac{1}{n}-1}\left(e^{m^{\frac{1}{n}}}-1-m^{\frac{1}{n}}-\ldots-\frac{1}{(n-2)!}m^{\frac{n-2}{n}}\right)\geq 0\), so the function is increasing in \(m\). Since for \(d\) large enough \(f\left((2nd)^{n}\right)=e^{2nd}-1-(2nd)-\ldots-\frac{1}{(n-1)!}\left(2nd\right)^{n-1}\geq Kde^{2d}\geq f(m)\), we obtain \(m^{\frac{1}{n}}\leq 2nd\), i.e., \(m\leq 2^{n}n^{n}d^{n}\), which is the desired result.

### Proof of Lemma 4

Proof: First, recall that the discretization of the ULA is

\[x_{k,t}=x_{k}-t\nabla U(x_{k})+\sqrt{2t}\,z_{k},\]

where \(z_{k}\sim N(0,I)\) is independent of \(x_{k}\).
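For this discretization, the independence of \(z_{k}\) gives \(\mathbb{E}\left\|x_{k,t}-x_{k}\right\|^{2}=t^{2}\left\|\nabla U(x_{k})\right\|^{2}+2td\), and a quick simulation (a sketch; names ours) confirms the step-size scaling used in the moment bounds that follow:

```python
import math
import random

def one_step_displacement_sq(x, grad_U, t, n, seed=0):
    """Average ||x_{k,t} - x_k||^2 over n draws of z_k ~ N(0, I),
    where x_{k,t} = x_k - t * grad_U(x_k) + sqrt(2t) * z_k."""
    rng = random.Random(seed)
    g = grad_U(x)
    total = 0.0
    for _ in range(n):
        total += sum((-t * gi + math.sqrt(2.0 * t) * rng.gauss(0.0, 1.0)) ** 2
                     for gi in g)
    return total / n
```

For small \(t\) the diffusion term \(2td\) dominates, which is why one-step moments of order \(r\) scale like \(\eta^{r/2}\) rather than \(\eta^{r}\).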
Since \(U\) satisfies \(\alpha_{G}\)-mixture local smoothness, \(\left\|\nabla U(x_{k})\right\|\leq 2NL\left(1+\left\|x_{k}\right\|^{\ell_{G}+\alpha_{GN}}\right)\), which in turn implies \(\mathbb{E}_{p_{k}}\left[\left\|\nabla U(x_{k})\right\|^{r}\right]\leq C\left(1+\mathbb{E}\left[\left\|x_{k}\right\|^{r\left(\ell_{G}+\alpha_{GN}\right)}\right]\right).\) We have

\[\begin{split}\mathbb{E}_{p_{k}}\left\|x_{k,t}-x_{k}\right\|^{r}&=\mathbb{E}_{p_{k}}\left\|-t\nabla U(x_{k})+\sqrt{2t}\,z_{k}\right\|^{r}\\&\leq 2^{r-1}\eta^{r}\mathbb{E}_{p_{k}}\left\|\nabla U(x_{k})\right\|^{r}+2^{r-1}2^{\frac{r}{2}}\eta^{\frac{r}{2}}\left(d+\lceil\tfrac{r}{2}\rceil\right)^{\lceil\frac{r}{2}\rceil}\\&\leq C\eta^{r}\mathbb{E}_{p_{k}}\left[1+\left\|x_{k}\right\|^{\left(\ell_{G}+\alpha_{GN}\right)r}\right]+2^{r-1}2^{\frac{r}{2}}\eta^{\frac{r}{2}}\left(d+\lceil\tfrac{r}{2}\rceil\right)^{\lceil\frac{r}{2}\rceil}\\&=O\left(d^{\lceil\frac{\left(\ell_{G}+\alpha_{GN}\right)r}{\beta}\rceil}\right)\eta^{r}+O\left(d^{\lceil\frac{r}{2}\rceil}\right)\eta^{\frac{r}{2}}\\&=O\left(d^{\lceil\frac{\left(\ell_{G}+\alpha_{GN}\right)r}{\beta}\rceil\vee\lceil\frac{r}{2}\rceil}\right)\eta^{\frac{r}{2}}.\end{split}\]

Similarly,

\[\mathbb{E}_{p_{k}}\left(1+\left\|x_{k}\right\|^{r}+\left\|x_{k,t}\right\|^{r}\right)\leq C_{r}\mathbb{E}_{p_{k}}\left[1+\left\|x_{k}\right\|^{r}+\left\|x_{k,t}-x_{k}\right\|^{r}\right]=O\left(d^{\lceil\frac{r}{\beta}\rceil}\right)+O\left(d^{\lceil\frac{\left(\ell_{G}+\alpha_{GN}\right)r}{\beta}\rceil\vee\lceil\frac{r}{2}\rceil}\right)\eta^{\frac{r}{2}}.\]

For \(\eta\) small enough, this gives

\[\mathbb{E}_{p_{k}}\left(1+\left\|x_{k}\right\|^{r}+\left\|x_{k,t}\right\|^{r}\right)\leq O\left(d^{\lceil\frac{r}{\beta}\rceil}\right).\]

Consequently,

\[\begin{split}&\mathbb{E}_{p_{k}}\left[\left(1+\left\|x_{k}\right\|^{\ell_{G}}+\left\|x_{k,t}\right\|^{\ell_{G}}\right)^{2}\left(\sum_{i=1}^{N}L_{i}\left\|x_{k}-x_{k,t}\right\|^{\alpha_{Gi}}\right)^{2}\right]\\&\overset{1}{\leq}\sqrt{\mathbb{E}_{p_{k}}\left[\left(1+\left\|x_{k}\right\|^{\ell_{G}}+\left\|x_{k,t}\right\|^{\ell_{G}}\right)^{4}\right]}\sqrt{\mathbb{E}_{p_{k}}\left(\sum_{i=1}^{N}L_{i}\left\|x_{k}-x_{k,t}\right\|^{\alpha_{Gi}}\right)^{4}}\\&\overset{2}{\leq}C\sqrt{\mathbb{E}_{p_{k}}\left[1+\left\|x_{k}\right\|^{4\ell_{G}}+\left\|x_{k,t}\right\|^{4\ell_{G}}\right]}\sqrt{\sum_{i}\mathbb{E}_{p_{k}}\left\|x_{k,t}-x_{k}\right\|^{4\alpha_{Gi}}}\\&\leq\sqrt{O\left(d^{\lceil\frac{4\ell_{G}}{\beta}\rceil}\right)}\sqrt{\sum_{i}O\left(d^{\lceil\frac{4\left(\ell_{G}+\alpha_{GN}\right)\alpha_{Gi}}{\beta}\rceil\vee\lceil 2\alpha_{Gi}\rceil}\right)\eta^{2\alpha_{Gi}}}\\&=O\left(d^{\frac{1}{2}\lceil\frac{4\ell_{G}}{\beta}\rceil+\frac{1}{2}\left(\lceil\frac{4\left(\ell_{G}+\alpha_{GN}\right)\alpha_{GN}}{\beta}\rceil\vee\lceil 2\alpha_{GN}\rceil\right)}\right)\eta^{\alpha_{G}},\end{split}\]

where step 1 follows from the Cauchy-Schwarz inequality and step 2 uses the moment bounds above.

Since \(\nu\) satisfies the \(\gamma\)-Poincare inequality, we obtain
\[\begin{split}E_{\nu}(f)-E_{\nu}^{2}(\sqrt{f})&\leq\frac{1}{\gamma}E_{\nu}\left\|\nabla\left(\sqrt{f}\right)\right\|^{2}\\&\leq\frac{1}{4\gamma}E_{\nu}\frac{1}{f}\left\|\nabla f\right\|^{2}\\&=\frac{1}{4\gamma}E_{\mu}\left\|\nabla\log f\right\|^{2}\\&=\frac{1}{4\gamma}I(\mu|\nu)\,.\end{split}\]

As a result, we have

\[\int\left(\sqrt{\mu}-\sqrt{\nu}\right)^{2}dx\leq\frac{1}{2\gamma}I(\mu|\nu)\,. \tag{26}\]

By using an inequality from (Villani, 2008), we get

\[\begin{split}W_{q}^{q}(\mu,\nu)&\leq 2^{q-1}\int_{\mathbb{R}^{d}}\left\|x\right\|^{q}\left|\mu(x)-\nu(x)\right|dx\\&\overset{1}{\leq}2^{q-1}\left(\int_{\mathbb{R}^{d}}\left\|x\right\|^{2q}\left(\sqrt{\mu}+\sqrt{\nu}\right)^{2}dx\right)^{\frac{1}{2}}\left(\int\left(\sqrt{\mu}-\sqrt{\nu}\right)^{2}dx\right)^{\frac{1}{2}}\\&\overset{2}{\leq}2^{q-1}\sqrt{2}\left(\int_{\mathbb{R}^{d}}\left\|x\right\|^{2q}\left(d\mu+d\nu\right)\right)^{\frac{1}{2}}\left(\int\left(\sqrt{\mu}-\sqrt{\nu}\right)^{2}dx\right)^{\frac{1}{2}}\\&\leq 2^{q-1}M_{2q}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{\frac{1}{\gamma}}\sqrt{I(\mu|\nu)}\,,\end{split}\]

where step 1 follows from the Holder inequality, step 2 holds because of the Young inequality and because both \(\mu\) and \(\nu\) are non-negative, and in the last step we have used the definition of \(M_{2q}\) and inequality (26) above.
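The Poincare step above can be illustrated numerically: for the standard Gaussian (\(\gamma=1\)) and the test function \(f(x)=\sin x\), the inequality \(\operatorname{Var}_{\nu}(f)\leq\frac{1}{\gamma}E_{\nu}\|f^{\prime}\|^{2}\) reads \(\frac{1-e^{-2}}{2}\leq\frac{1+e^{-2}}{2}\). A Monte Carlo sketch (names ours):

```python
import math
import random

def poincare_gap(n=200000, seed=2):
    """Estimate Var(sin X) and E[cos(X)^2] for X ~ N(0, 1); the Poincare
    inequality with gamma = 1 says the first is at most the second."""
    rng = random.Random(seed)
    s = s2 = c2 = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        s += math.sin(x)
        s2 += math.sin(x) ** 2
        c2 += math.cos(x) ** 2
    var_f = s2 / n - (s / n) ** 2
    grad_sq = c2 / n
    return var_f, grad_sq
```

Both estimates match the closed forms \((1-e^{-2})/2\) and \((1+e^{-2})/2\), with the variance strictly below the gradient energy, as the inequality requires.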
As a result, we have

\[W_{q}(\mu,\nu)\leq 2M_{2q}^{\frac{1}{2q}}\left(\mu+\nu\right)\gamma^{-\frac{1}{2q}}I^{\frac{1}{2q}}\left(\mu|\nu\right).\]

Adapting the technique of Polyanskiy and Wu (2016), Proposition 1, we have

\[\begin{split}\left|\log\nu(x)-\log\nu(x^{*})\right|&=\left|\int_{0}^{1}\left\langle\nabla\log\nu(tx+(1-t)x^{*}),x-x^{*}\right\rangle dt\right|\\&\leq\int_{0}^{1}\left\|\nabla\log\nu(tx+(1-t)x^{*})\right\|\left\|x-x^{*}\right\|dt\\&\leq 2NL\int_{0}^{1}\left(1+\left\|tx+(1-t)x^{*}\right\|^{\ell_{G}+\alpha_{GN}}\right)\left\|x-x^{*}\right\|dt\\&\leq 2NL\int_{0}^{1}\left(1+\left\|x\right\|^{\ell_{G}+\alpha_{GN}}+\left\|x^{*}\right\|^{\ell_{G}+\alpha_{GN}}\right)\left\|x-x^{*}\right\|dt\\&\leq 2NL\left(1+\left\|x\right\|^{\ell_{G}+\alpha_{GN}}+\left\|x^{*}\right\|^{\ell_{G}+\alpha_{GN}}\right)\left\|x-x^{*}\right\|.\end{split}\]

Let \((x,x^{*})\) be a \(W_{q}\)-optimal coupling of \(\mu\) and \(\nu\), with \(\frac{1}{q^{\prime}}+\frac{1}{q}=1\), \(q=\frac{\ell_{G}+\alpha_{GN}}{2}+1\) and \(q^{\prime}=\frac{\ell_{G}+\alpha_{GN}+2}{\ell_{G}+\alpha_{GN}}\). Taking the expectation with respect to this optimal coupling, we obtain

\[H(\mu|\nu)\leq\mathbb{E}\left[2NL\left(1+\|x\|^{\ell_{G}+\alpha_{GN}}+\|x^{\star}\|^{\ell_{G}+\alpha_{GN}}\right)\|x-x^{\star}\|\right]\overset{1}{\leq}2NL\,\mathbb{E}\left[\left(1+\|x\|^{\ell_{G}+\alpha_{GN}}+\|x^{\star}\|^{\ell_{G}+\alpha_{GN}}\right)^{q^{\prime}}\right]^{\frac{1}{q^{\prime}}}\left(\mathbb{E}\left\|x-x^{\star}\right\|^{q}\right)^{\frac{1}{q}}\overset{2}{\leq}2NL\,\mathbb{E}\left[3^{q^{\prime}-1}\left(1+\|x\|^{q^{\prime}(\ell_{G}+\alpha_{GN})}+\|x^{\star}\|^{q^{\prime}(\ell_{G}+\alpha_{GN})}\right)\right]^{\frac{1}{q^{\prime}}}W_{q}(\mu,\nu)\leq CM_{q^{\prime}(\ell_{G}+\alpha_{GN})}^{\frac{1}{q^{\prime}}}\left(\mu+\nu\right)W_{q}(\mu,\nu)\leq CM_{q^{\prime}(\ell_{G}+\alpha_{GN})}^{\frac{1}{q^{\prime}}}\left(\mu+\nu\right)M_{2q}^{\frac{1}{2q}}\left(\mu+\nu\right)\gamma^{
-\frac{1}{2q}}I^{\frac{1}{2q}}\left(\mu|\nu\right)\] \[\overset{3}{\leq}C\gamma^{-\frac{1}{2q}}M_{2q}\left(\mu+\nu\right)I^{\frac{1}{2q}}\left(\mu|\nu\right)\] \[=C\gamma^{-\frac{1}{2q}}M_{\ell_{G}+\alpha_{GN}+2}\left(\mu+\nu\right)I^{\frac{1}{\ell_{G}+\alpha_{GN}+2}}\left(\mu|\nu\right),\] where step 1 follows from Hölder's inequality, step 2 is due to \(\alpha_{GN}\leq 1\) and the Cauchy-Schwarz inequality, and step 3 is because \(\frac{1}{q^{\prime}}+\frac{1}{q}=1\) and we have used \(q=\frac{\ell_{G}+\alpha_{GN}}{2}+1\), \(q^{\prime}=\frac{\ell_{G}+\alpha_{GN}+2}{\ell_{G}+\alpha_{GN}}\). ### Proof of Lemma 18 **Lemma 18**: _((Nguyen et al., 2021) Lemma 1). Suppose \(\nu\) is \(\beta\)-dissipative and \(\alpha_{G}\)-mixture locally smooth. If \(H(\tilde{p}_{k,t}|\nu)\leq C\gamma^{-\frac{1}{\ell_{G}+\alpha_{GN}+2}}I^{\frac{1}{\ell_{G}+\alpha_{GN}+2}}\left(\tilde{p}_{k,t}|\nu\right)M_{\ell_{G}+\alpha_{GN}+2}\left(\tilde{p}_{k,t}+\nu\right)\), then for step size \(\eta\) small enough_ \[H(p_{k+1}|\nu)\leq H(p_{k}|\nu)\left(1-\frac{3CH(p_{k}|\nu)^{\ell_{G}+\alpha_{GN}+1}}{8\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)}\eta\right)+D_{3}\eta^{\alpha_{G}+1}.\] ### Proof of Theorem 1 Proof: Integrating both sides of the differential inequality above from \(t=0\) to \(t=\eta\), we obtain \[H(p_{k+1}|\nu)-H(p_{k}|\nu)\leq D\eta^{1+\alpha_{G}},\] where the inequality holds since the first term is negative. Using the discrete Gronwall inequality, we have, for any \(K\in\mathbb{N}\), \[H(p_{K}|\nu) \leq H(p_{k_{0}}|\nu)+KD\eta^{1+\alpha_{G}}\] \[\leq H(p_{k_{0}}|\nu)+TD\eta^{\alpha_{G}}\] \[\leq H(p_{k_{0}}|\nu)+\frac{\epsilon}{2}.\] If there exists some \(k<K\) such that \(H(p_{k}|\nu)\leq\frac{\epsilon}{2}\), then we can choose \(\eta\leq\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{G}}}\) so that \(H(p_{K}|\nu)\leq\epsilon\). If there is no such \(k\), we will prove that for \(K\) sufficiently large, \(H(p_{K}|\nu)\leq\epsilon\). 
Let \(A=\frac{3C}{8\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)^{\ell_{G}+\alpha_{GN}+1}\). The above expression leads to \[H(p_{k+1}|\nu)\leq H(p_{k}|\nu)\left(1-A\eta\right)+D\eta^{\alpha_{G}+1}.\] By iterating the process we get \[H(p_{k}|\nu)\leq H(p_{0}|\nu)\left(1-A\eta\right)^{k}+\frac{D}{A}\eta^{\alpha_{G}}.\] To get \(H(p_{K}|\nu)\leq\epsilon\), for \(\eta\) small enough so that \(\eta\leq\left(\frac{A\epsilon}{2D}\right)^{\frac{1}{\alpha_{G}}}\), it suffices to run \(K\) iterations such that \[\left(1-A\eta\right)^{K}\leq\frac{\epsilon}{2H(p_{0}|\nu)}.\] As a result, we obtain \[K =\log_{(1-A\eta)}\left(\frac{\epsilon}{2H(p_{0}|\nu)}\right)\] \[=\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\ln\left(\frac{1}{1-A\eta}\right)}\] \[\leq\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\frac{3C}{8\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)^{\ell_{G}+\alpha_{GN}+1}\eta}.\] By plugging in \(T=K\eta\) and assuming without loss of generality that \(T>1\) (since we can choose \(T\)), we obtain \[T\leq\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\frac{3C}{8\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)^{\ell_{G}+\alpha_{GN}+1}},\] which is satisfied if we choose \[T=O\left(\frac{\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\ell_{G}+\alpha_{GN}+1}}\right).\] Without loss of generality, since \(H(p_{0}|\nu)=O\left(d^{\frac{\ell_{G}+1+\alpha_{GN}}{2}}\right)\), we can assume that \(H(p_{0}|\nu)\geq 1>\epsilon\). We then have \(\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)>1\). 
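The iterated recursion above can be checked numerically: iterating \(H_{k+1}\leq H_{k}(1-A\eta)+D\eta^{\alpha+1}\) in the worst case (recursion tight) stays below the closed form \(H_{0}(1-A\eta)^{k}+\frac{D}{A}\eta^{\alpha}\) and settles at the additive floor \(\frac{D}{A}\eta^{\alpha}\). The constants below are arbitrary illustrative values, not the paper's.

```python
# Worst-case iteration of H_{k+1} = H_k(1 - A*eta) + D*eta^(alpha+1),
# compared against the iterated bound H_0(1 - A*eta)^k + (D/A)*eta^alpha.
A, D, alpha, eta = 0.8, 2.0, 1.0, 0.05   # illustrative constants only
h = h0 = 5.0
for k in range(1, 501):
    h = h * (1 - A * eta) + D * eta ** (alpha + 1)
    bound = h0 * (1 - A * eta) ** k + (D / A) * eta ** alpha
    assert h <= bound + 1e-12            # closed form dominates every iterate
# after many steps the geometric term vanishes and the iterate sits at the
# fixed point D*eta^(alpha+1)/(A*eta) = (D/A)*eta^alpha = 0.125 here
assert abs(h - (D / A) * eta ** alpha) < 1e-6
```

This mirrors why the step size must satisfy \(\eta\leq\left(\frac{A\epsilon}{2D}\right)^{1/\alpha}\): the floor \(\frac{D}{A}\eta^{\alpha}\) must itself be at most \(\frac{\epsilon}{2}\).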
Therefore, \[\eta=\min\left\{1,\left(\frac{A\epsilon}{2D}\right)^{\frac{1}{\alpha_{G}}},\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{G}}}\right\}=\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{G}}}.\] Using \(K=\frac{T}{\eta}\), we have \[K \leq O\left(\left(\frac{2TD}{\epsilon}\right)^{\frac{1}{\alpha_{G}}}\frac{\gamma\left(M^{\prime}_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)\right)\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\ell_{G}+\alpha_{GN}+1}}\right)\] \[\leq O\left(\frac{D^{\frac{1}{\alpha_{G}}}\gamma^{1+\frac{1}{\alpha_{G}}}d^{\left\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\right\rceil\left(\ell_{G}+\alpha_{GN}+2\right)\left(1+\frac{1}{\alpha_{G}}\right)}\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(\ell_{G}+\alpha_{GN}+1\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right).\] Combining these results, for \(\eta\) small enough, \(M_{\ell_{G}+\alpha_{GN}+2}(\tilde{p}_{k,t}+\nu)=O\left(d^{\left\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\right\rceil}\right)\) and \(D=O\left(d^{\frac{\ell_{G}}{2}+\alpha_{GN}+2}\right)\), we obtain \[K=O\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}}d^{\left\lceil\frac{\ell_{G}+\alpha_{GN}+2}{\beta}\right\rceil\left(\ell_{G}+\alpha_{GN}+2\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}\left(\frac{\ell_{G}}{2}+\alpha_{GN}+2\right)}\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(\ell_{G}+\alpha_{GN}+1\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right),\] which is our desired result. _Remark 3_: We can get a tighter result in each specific case. For example, by choosing \(\ell_{G}=0\), \(\alpha_{GN}=\alpha_{G}=1\), \(\beta\geq 1.5\), we obtain \[K\approx\tilde{O}\left(\frac{\gamma^{2}d^{13}}{\varepsilon^{5}}\right),\] which is weaker than but broadly comparable to the result of (Erdogdu and Hosseinzadeh, 2020). 
### Proof of the \(\alpha_{H}\)-mixture Hessian locally-smooth property **Lemma 19**: _If the potential \(U:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is \(\alpha_{H}\)-mixture Hessian locally smooth, then:_ \[\left\|\nabla U(x)-\nabla U(y)\right\|\leq\left(\frac{4L_{H}}{1+\alpha_{H}}\lor C_{H}\right)\left(1+\left\|x\right\|^{\ell_{H}+\alpha_{HN}}+\left\|y\right\|^{\ell_{H}+\alpha_{HN}}\right)\sum_{i=0}^{N}\left\|x-y\right\|^{1+\alpha_{Hi}}.\] Proof: We have \[\left\|\nabla U(x)-\nabla U(y)-\nabla^{2}U(y)(x-y)\right\|\] \[= \left\|\int_{0}^{1}\left(\nabla^{2}U(y+t(x-y))(x-y)-\nabla^{2}U(y)(x-y)\right)dt\right\|\] \[\leq \int_{0}^{1}\left\|\nabla^{2}U(y+t(x-y))-\nabla^{2}U(y)\right\|_{op}\left\|x-y\right\|dt\] \[\leq \int_{0}^{1}\left(1+\left\|tx+(1-t)y\right\|^{\ell_{H}}+\left\|y\right\|^{\ell_{H}}\right)\sum_{i=0}^{N}L_{Hi}t^{\alpha_{Hi}}\left\|x-y\right\|^{\alpha_{Hi}}\left\|x-y\right\|dt\] \[\leq \sum_{i=0}^{N}\frac{2L_{Hi}}{1+\alpha_{Hi}}\left(1+\left\|x\right\|^{\ell_{H}}+\left\|y\right\|^{\ell_{H}}\right)\left\|x-y\right\|^{1+\alpha_{Hi}}\] \[\leq \left(\frac{4L_{H}}{1+\alpha_{H}}\lor C_{H}\right)\left(1+\left\|x\right\|^{\ell_{H}+\alpha_{HN}}+\left\|y\right\|^{\ell_{H}+\alpha_{HN}}\right)\sum_{i=0}^{N}\left\|x-y\right\|^{1+\alpha_{Hi}},\] where the first line comes from a Taylor expansion, the third line follows from the Cauchy-Schwarz inequality, and the fourth line is due to Assumption 1. 
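To make the implication of Lemma 19 concrete, here is a minimal one-dimensional sanity check. The potential \(U(x)=x^{4}/4\) and the constant \(C=2\) are illustrative choices, not the paper's: \(U''(x)=3x^{2}\) satisfies \(|U''(x)-U''(y)|\leq 3(|x|+|y|)|x-y|\), i.e. it is Hessian locally smooth with \(\ell_{H}=1\), \(\alpha=1\), and the implied gradient bound then holds with a polynomial-weight factor.

```python
import random

# Toy check: for U(x) = x^4/4 (so grad U(x) = x^3), verify the implied
# locally-smooth gradient bound |U'(x) - U'(y)| <= 2(1 + x^2 + y^2)|x - y|,
# which follows from |x^3 - y^3| = |x^2 + xy + y^2||x - y|.
random.seed(0)
grad = lambda x: x ** 3
for _ in range(10_000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert abs(grad(x) - grad(y)) <= 2 * (1 + x * x + y * y) * abs(x - y)
```

The weight \((1+x^{2}+y^{2})\) plays the role of \((1+\|x\|^{\ell_{H}+\alpha_{HN}}+\|y\|^{\ell_{H}+\alpha_{HN}})\) in the lemma.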
From the bound for \(\left\|\nabla^{2}U(x)\right\|\) we obtain: \[\left\|\nabla U(x)-\nabla U(y)\right\| =\left\|\int_{0}^{1}\nabla^{2}U\left((1-t)y+tx\right)(x-y)dt\right\|\] \[\leq\int_{0}^{1}\left\|\nabla^{2}U\left((1-t)y+tx\right)(x-y) \right\|dt\] \[\leq\int_{0}^{1}\left\|\nabla^{2}U\left((1-t)y+tx\right)\right\|_ {op}dt\left\|x-y\right\|\] \[\leq\int_{0}^{1}C_{H}\left(1+\left\|(1-t)y+tx\right\|^{\ell_{H}+ \alpha_{HN}}\right)dt\left\|x-y\right\|\] \[\leq C_{H}\left(1+\left\|x\right\|^{\ell_{H}+\alpha_{HN}}+\left\| y\right\|^{\ell_{H}+\alpha_{HN}}\right)\left\|x-y\right\|.\] From that, let \(y=0\),we get \[\left\|\nabla U(x)\right\| \leq C_{H}\left(1+\left\|x\right\|^{\ell_{H}+\alpha_{HN}}\right) \left\|x\right\|\] \[\leq 2C_{H}\left(1+\left\|x\right\|^{\left\lfloor\ell_{H}+\alpha_{ HN}+1\right\rfloor}\right).\] This gives us the desired result. ### Proof of \(\alpha_{H}\)-mixture Hessian locally-smooth property \[\mathbb{E}_{p_{\mu}}\left\|\nabla U(x_{k})-\nabla U(x_{k,t})\right\|^ {2}\] \[\overset{1}{\leq}\mathbb{E}_{p_{\mu}}\left(C_{H}\left(1+\left\|x_{ k}\right\|^{\ell_{H}+\alpha_{W}}\right)\left\|x_{k}-x_{k,t}\right\|+\frac{2L_{H}}{1+ \alpha_{H}}\left(1+\left\|x_{k}\right\|^{\ell_{H}}+\left\|x_{k,t}\right\|^{ \ell_{H}}\right)\sum_{i}\left\|x_{k}-x_{k,t}\right\|^{1+\alpha_{H}}\right)^{2}\] \[\overset{2}{\leq}2C_{H}^{2}\mathbb{E}_{p_{\mu}}\left(1+\left\|x_{ k}\right\|^{\ell_{H}+\alpha_{W}}\right)^{2}\left\|x_{k}-x_{k,t}\right\|^{2}+2N \left(\frac{2L_{H}}{1+\alpha_{H}}\right)^{2}\mathbb{E}_{p_{\mu}}\left[\left(1+ \left\|x_{k}\right\|^{\ell_{H}}+\left\|x_{k,t}\right\|^{\ell_{H}}\right)^{2} \sum_{i}\left\|x_{k}-x_{k,t}\right\|^{2+2\alpha_{H}}\right] \tag{27}\] \[\overset{3}{\leq}4C_{H}^{2}\mathbb{E}_{p_{\mu}}\left(1+\left\|x_ {k}\right\|^{2\ell_{H}+2\alpha_{W}}\right)\left\|x_{k}-x_{k,t}\right\|^{2}+6N \left(\frac{2L_{H}}{1+\alpha_{H}}\right)^{2}\mathbb{E}_{p_{\mu}}\left[\left(1+ \left\|x_{k}\right\|^{2\ell_{H}}+\left\|x_{k,t}\right\|^{2\ell_{H}}\right) 
\sum_{i}\left\|x_{k}-x_{k,t}\right\|^{2+2\alpha_{H}}\right]\] \[\overset{4}{\leq}4C_{H}^{2}\sqrt{\mathbb{E}_{p_{\mu}}\left(1+ \left\|x_{k}\right\|^{2\ell_{H}+2\alpha_{W}}\right)^{2}}\sqrt{\mathbb{E}_{p_{ \mu}}\left\|x_{k}-x_{k,t}\right\|^{4}}\] (28) \[+6N\left(\frac{2L_{H}}{1+\alpha_{H}}\right)^{2}\sqrt{\mathbb{E}_ {p_{\mu}}\left[\left(1+\left\|x_{k}\right\|^{2\ell_{H}}+\left\|x_{k,t}\right\|^ {2\ell_{H}}\right)^{2}\right]}\sqrt{\mathbb{E}_{p_{\mu}}\left[\left(\sum_{i} \left\|x_{k}-x_{k,t}\right\|^{2+2\alpha_{H}}\right)^{2}\right]}\] (29) \[\overset{5}{\leq}4\sqrt{2}C_{H}^{2}\mathbb{E}_{p_{\mu}}\left(1+ \left\|x_{k}\right\|^{2\ell_{H}+2\alpha_{W}}\right)\sqrt{O\left(d\big{[}\frac{ \left(\ell_{H}+\alpha_{W}\right)+1}{B}+1\big{]}\right)\eta^{2}}\] (30) \[+6\sqrt{3}N\left(\frac{2L_{H}}{1+\alpha_{H}}\right)^{2}\mathbb{E }_{p_{\mu}}\left[\left(1+\left\|x_{k}\right\|^{2\ell_{H}}+\left\|x_{k,t}\right\| ^{2\ell_{H}}\right)\right]\sqrt{\mathbb{E}_{p_{\mu}}\left[\left(\sum_{i} \left\|x_{k}-x_{k,t}\right\|^{2+2\alpha_{H}}\right)^{2}\right]}\] (31) \[\overset{5}{\leq}O\left(d\big{[}\frac{2\left(\ell_{H}+\alpha_{W} \right)\left(\ell_{H}+\alpha_{W}+1\right)}{B}+1\big{]}\right)\sqrt{O\left(d \big{[}\frac{\left(\ell_{H}+\alpha_{W}\right)+1}{B}+1\big{]}\right)\eta^{2}}\] (32) \[+\left(\left.\left(O\left(d\big{[}\frac{2\left(\ell_{H}+\alpha_{W} \right)\left(\ell_{H}+\alpha_{W}+1\right)}{B}+1\big{]}\right)+O\left(d\big{[} \frac{\left(\ell_{H}+\alpha_{W}\right)\left(1+\left|x_{H}\right\|+1\right)}{B }+1\big{]}\right)\eta^{\ell_{H}}\right)\left(\sum_{i}O\left(d\big{[}\frac{2 \left(\ell_{H}+\alpha_{W}\right)\left(1+\left|x_{H}\right\|+1\right)}{B}+1 \big{]}\right)\eta^{1+\alpha_{H}}\right)\] (33) \[=O\left(d\Big{[}\frac{2\left(\ell_{H}+\alpha_{W}\right)\left(\ell _{H}+\alpha_{W}+1\right)}{B}+1\big{]}+0.5\left[\frac{\left(\ell_{H}+\alpha_{W} \right)\left(1+\left|x_{H}\right\|+1\right)}{B}+1\big{]}\right)\eta,\] where step 1 follows from Assumption 1, step 2 comes from Young inequality 
and the Gaussian law of the injected noise, step 3 is because of Lemma 3 and \(\eta\leq 1\), and step 4 comes from choosing \(\eta\) small enough. ### Proof of Theorem 1 Proof: Following similar steps as in the proof above, let \(A=\frac{3C}{8\gamma\left(M^{\prime}_{\ell_{H}+\alpha_{HN}+3}(\tilde{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)^{\ell_{H}+\alpha_{HN}+2}\). Combining these results, for \(\eta\) small enough, \(M_{\ell_{H}+\alpha_{HN}+3}(\tilde{p}_{k,t}+\nu)=O\left(d^{\left\lceil\frac{\ell_{H}+\alpha_{HN}+3}{\beta}\right\rceil}\right)\) and \(D=O\left(d^{\frac{\left\lceil\frac{4\ell_{H}}{\beta}\right\rceil+\left\lceil\frac{4\alpha_{HN}+4}{\beta}\right\rceil}{2}}\right)\), we obtain \[K=O\left(\frac{\gamma^{1+\frac{1}{\alpha_{H}+1}}d^{\left\lceil\frac{\ell_{H}+\alpha_{HN}+3}{\beta}\right\rceil\left(\ell_{H}+\alpha_{HN}+3\right)\left(1+\frac{1}{\alpha_{H}+1}\right)+\frac{1}{\alpha_{H}+1}\cdot\frac{\left\lceil\frac{4\ell_{H}}{\beta}\right\rceil+\left\lceil\frac{4\alpha_{HN}+4}{\beta}\right\rceil}{2}}\ln^{\left(1+\frac{1}{\alpha_{H}+1}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(\ell_{H}+\alpha_{HN}+2\right)\left(1+\frac{1}{\alpha_{H}+1}\right)+\frac{1}{\alpha_{H}+1}}}\right),\] which is our desired result. ## Appendix C Proof under gradient Lipschitz and \(\alpha_{H}\)-mixture Hessian locally-smooth ### Proof under gradient Lipschitz and \(\alpha_{H}\)-mixture Hessian locally-smooth \[\mathbb{E}_{p_{k}}\left\|\nabla U(x_{k})-\nabla U(x_{k,t})\right\|^{2}\] \[\overset{1}{\leq}L_{G}^{2}\mathbb{E}_{p_{k}}\left[\left\|x_{k}-x_{k,t}\right\|^{2}\right]\] \[\overset{2}{\leq}O\left(d^{\lceil\frac{2}{\beta}\rceil}\right)\eta,\] where step 1 follows from Assumption 1 and step 2 is because of Lemma 3 and \(\eta\leq 1\). ### Proof of gradient Lipschitz and \(\alpha_{H}\)-mixture Hessian locally-smooth Proof: We provide the proof here for completeness, since the Hessian smoothness condition is not used in (Mou et al., 2022). We follow closely the corresponding lemma of (Mou et al., 2022). 
We decompose the difference \(y-x\) into \[a_{1}(x,y) :=\left(I+(t-k\eta)\nabla^{2}U(y)\right)\left(y-x+(t-k\eta)\nabla U(y)\right),\] \[a_{2}(x,y) :=(t-k\eta)\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right),\] \[a_{3}(x,y) :=(t-k\eta)\nabla U(y).\] We define the conditional expectations \(I_{l}(x):=\mathbb{E}\left[a_{l}(x_{k},x_{k,t})|x_{k,t}=x\right]\) for \(l=1,2,3\), one for each of the three terms. From (Mou et al., 2022) Lemma 4, \[\mathbb{E}\left\|I_{1}(x_{k,t})\right\|^{2}\leq(t-k\eta)^{2}\int p_{k}(x)\left\|\nabla\log p_{k}(x)\right\|^{2}dx.\] In addition, \[\frac{\left\|I_{2}(x)\right\|}{t-k\eta} =\left\|\int\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right)(2\pi(t-k\eta))^{-\frac{d}{2}}\exp\left(-\frac{\left\|x-y-(t-k\eta)\nabla U(y)\right\|^{2}}{2(t-k\eta)}\right)\frac{\hat{\pi}_{k\eta}(y)}{\hat{\pi}_{k}(x)}dy\right\|\] \[=\left\|\int\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right)\frac{\hat{\pi}_{k\eta}(y)}{\hat{\pi}_{k}(x)}p\left(x_{k,t}=x|x_{k}=y\right)dy\right\|\] \[=\left\|\mathbb{E}\left[\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right)|x_{k,t}=x\right]\right\|.\] Plugging into the squared integral yields \[\mathbb{E}\left\|I_{2}(x_{k,t})\right\|^{2} =(t-k\eta)^{2}\mathbb{E}\left\|\mathbb{E}\left[\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right)|x_{k,t}=x\right]\right\|^{2}\] \[\leq(t-k\eta)^{2}\mathbb{E}\,\mathbb{E}\left[\left\|\nabla^{2}U(y)\left(y-x+(t-k\eta)\nabla U(y)\right)\right\|^{2}\right]\] \[\leq(t-k\eta)^{2}\mathbb{E}\left[\left\|\nabla^{2}U(y)\right\|^{2}\left\|y-x+(t-k\eta)\nabla U(y)\right\|^{2}\right]\] \[\leq(t-k\eta)^{2}L_{G}^{2}\mathbb{E}\left[\left\|x_{k}+(t-k\eta)\nabla U(x_{k})-x_{k,t}\right\|^{2}\right]\] \[\leq(t-k\eta)^{2}L_{G}^{2}\left(\mathbb{E}\left[\left\|\int_{k\eta}^{t}dB_{s}\right\|^{2}\right]\right)\] \[\leq L_{G}^{2}d\eta^{3}.\] The norm of \(I_{3}\) can be controlled using \[\mathbb{E}\left\|I_{3}(x_{k,t})\right\|^{2} =(t-k\eta)^{2}\mathbb{E}\left\|\mathbb{E}\left(\nabla U(x_{k})|x_{k,t
}\right)\right\|^{2}\] \[\leq\eta^{2}\mathbb{E}\left\|\nabla U(x_{k})\right\|^{2}\] \[\leq c_{1}\eta^{2}\mathbb{E}\left[\left(1+\left\|x_{k}\right\|^{2} \right)\right]\] \[\leq O\left(d^{\frac{1}{2}\frac{2}{\beta}}\right)\eta^{2}.\] We also bound the remainder term \(\hat{r}_{t}\) \[\hat{r}_{t}(x)=\mathbb{E}\left[\int_{0}^{1}\left(\nabla^{2}U\left((1-s)x_{k,t }+sx_{k}\right)-\nabla^{2}U(x)\right)\left(x_{k}-x_{k,t}\right)ds|x_{k,t}=x \right].\] Taking the global expectation leads to \[\mathbb{E}\left\|\hat{r}_{t}(x)\right\|^{2} \leq\mathbb{E}\left\|\mathbb{E}\int_{0}^{1}\left(\nabla^{2}U \left((1-s)x_{k,t}+sx_{k}\right)-\nabla^{2}U(x)\right)\left(x_{k}-x_{k,t} \right)ds|x_{k,t}=x\right\|^{2}\] \[\leq\mathbb{E}\int_{0}^{1}\mathbb{E}\left[\left\|\left(\nabla^{2 }U\left((1-s)x_{k,t}+sx_{k}\right)-\nabla^{2}U(x)\right)\left(x_{k}-x_{k,t} \right)\right\|^{2}|\hat{X}_{t}=x\right]ds\] \[\leq\mathbb{E}\int_{0}^{1}\mathbb{E}\left[\left(\left(1+\left\|x _{k}\right\|^{2t_{H}}+\left\|x_{k,t}\right\|^{2t_{H}}\right)\left(\sum_{i=1}^{ N}L_{Hi_{3}}a_{ii}\left\|x_{k}-x_{k,t}\right\|^{2i}\left\|x_{i}-x_{k,t}\right\|^{2i} \left\|\hat{X}_{t}=x\right\right]ds\right.\] \[\leq 12N\mathbb{E}\left[\left(1+\left\|x_{k}\right\|^{2t_{H}}+ \left\|x_{k,t}\right\|^{2t_{H}}\right)\sum_{i=1}^{N}\frac{L_{Hi_{3}}^{2}}{2a_{ Hi}}\left\|x_{k}-x_{k,t}\right\|^{2i}\left\|\hat{X}_{t}=x\right]\] \[\leq 12N\mathbb{E}\mathbb{E}\left[\left(1+\left\|x_{k}\right\|^{2 t_{H}}+\left\|x_{k,t}\right\|^{2t_{H}}\right)\sum_{i=1}^{N}\frac{L_{Hi_{3}}^{2}}{2a_{ Hi}+1}\left\|x_{k}-x_{k,t}\right\|^{2i+2a_{Hi}}\left\|\hat{X}_{t}=x\right]\] \[\leq 12N\frac{L_{H}^{2}}{2a_{H}+1}\mathbb{E}\left[\left(1+\left\| x_{k}\right\|^{2t_{H}}+\left\|x_{k,t}\right\|^{2t_{H}}\right)\sum_{i=1}^{N}\left\|x_{k}-x _{k,t}\right\|^{2i}2a_{ii}+2\right]\] \[\leq 12N\frac{L_{H}^{2}}{2a_{H}+1}\mathbb{E}^{\frac{1}{2}}\left[ \left(1+\left\|x_{k}\right\|^{2t_{H}}+\left\|x_{k,t}\right\|^{2t_{H}}\right)^{ 
2}\right]\mathbb{E}^{\frac{1}{2}}\left[\left(\sum_{i=1}^{N}\left\|x_{k}-x_{k,t }\right\|^{2a_{ii}+2}\right)^{2}\right]\] \[\leq C_{H1}\mathbb{E}^{\frac{1}{2}}\left[1+\left\|x_{k}\right\|^{4 t_{H}}+\left\|x_{k,t}\right\|^{4t_{H}}\right\|\mathbb{E}^{\frac{1}{2}}\left[\sum_{i=1}^{ N}\left\|x_{k}-x_{k,t}\right\|^{4a_{ii}+4}\right]\] \[\leq O\left(d^{\frac{\left\lceil\frac{4t_{H}}{2}\right\rceil}{2}} \right)O\left(d^{\frac{\left\lceil\frac{4t_{H}}{2}\right\rceil}{2}}\right) \eta^{a_{ii}+1}\] \[\leq O\left(d^{\frac{\left\lceil\frac{4t_{H}}{2}\right\rceil+ \left\lceil\frac{4t_{H}+4}{2}\right\rceil}{2}}\right)\eta^{a_{ii}+1}.\] \[\mathbb{E}\left[\left\|\nabla U(x_{k,f})-\nabla U(x_{k})\right\|^{2}\right] =\mathbb{E}\left[\left\|\nabla U(x_{k,f})-\mathbb{E}\left[\nabla U (x_{k})|x_{k,f}\right]\right\|^{2}\right]\] \[=\mathbb{E}\left[\left\|\nabla^{2}U(x)\mathbb{E}\left[x_{k}-x_{k, f}|x_{k,f}=x\right]+\hat{r}_{t}(x)\right\|^{2}\right]\] \[\leq 2\mathbb{E}\left[\left\|\nabla^{2}U(x)\left(I_{1}+I_{2}+I_{3} \right)\right\|^{2}\right]+2\mathbb{E}\left\|\hat{r}_{t}(x)\right\|^{2}\] \[\leq 6I_{G}^{2}\mathbb{E}\left[\left\|I_{1}\right\|^{2}+\left\|I_{2} \right\|^{2}+\left\|I_{3}\right\|^{2}\right]+2\mathbb{E}\left\|\hat{r}_{t}(x) \right\|^{2}\] \[\leq 6I_{G}^{2}\eta^{2}\int p_{k}(x_{k})\left\|\nabla\log p_{k}(x_{k })\right\|^{2}dx+3I_{G}^{2}d\eta^{3}\] \[+O\left(d^{\left(\frac{\left(\frac{3}{2}\right)}\left[+1\right) \right)}\eta^{2}+O\left(d^{\frac{\left(\frac{4}{2}\right)}{B}\left[+\frac{4}{ 2}\right]}\right)\eta^{\alpha_{H}+1},\] From (Mou et al., 2022) Lemma 7, we have \[\int p_{k}(x_{k})\left\|\nabla\log p_{k}(x_{k})\right\|^{2}dx\leq 8\int p_{t}(x _{k,f})\left\|\nabla\log p_{t}(x_{k,f})\right\|^{2}dx+32\eta^{2}d^{2}L_{2}^{2}\] and from (Chewi et al., 2021) Lemma 16, it holds that \[\int p_{t}(x_{k,f})\left\|\nabla U(x_{k,f})\right\|^{2}\leq I\left(p_{k,f}| \nu\right)+2dL_{G}.\] Combining the above inequalities and Young inequality \[\int p_{t}(x_{k,f})\left\|\nabla\log 
p_{t}(x_{k,f})\right\|^{2}dx \leq 2\int p_{t}(x_{k,f})\left\|\nabla\log\frac{p_{t}(x_{k,f})}{\nu}\right\|^{2}dx+2\int p_{t}(x_{k,f})\left\|\nabla U(x_{k,f})\right\|^{2}\] \[\leq 4I\left(p_{k,f}|\nu\right)+4dL_{G},\] which implies \[\mathbb{E}\left[\left\|\nabla U(x_{k,f})-\nabla U(x_{k})\right\|^{2}\right] \leq 24L_{G}^{2}\eta^{2}I\left(p_{k,f}|\nu\right)+12d\eta^{2}L_{G}^{3}+O\left(d^{\frac{\left\lceil\frac{4\ell_{H}}{\beta}\right\rceil+\left\lceil\frac{4\alpha_{HN}+4}{\beta}\right\rceil}{2}}\right)\eta^{\alpha_{H}+1}.\] Therefore, from (Vempala and Wibisono, 2019) Lemma 3, the time derivative of the KL divergence along ULA is bounded by \[\frac{d}{dt}H\left(p_{k,f}|\nu\right)\leq-\frac{1}{2}I\left(p_{k,f}|\nu\right)+D\eta^{\alpha_{H}+1},\] where we have set \(D=O\left(d^{\frac{\left\lceil\frac{4\ell_{H}}{\beta}\right\rceil+\left\lceil\frac{4\alpha_{HN}+4}{\beta}\right\rceil}{2}}\right)\). ### Proof of Lemma 11 Proof: From Theorem 1 we have \[W_{2}^{2}(\mu,\nu)\leq 2M_{4}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{\frac{1}{\gamma}}\sqrt{I(\mu|\nu)}.\] On the other hand, \(W_{2}\) can be bounded directly, again from (Villani, 2008), as \[W_{2}(\mu,\nu) \leq \left(2\int_{\mathbb{R}^{d}}\left\|x\right\|^{2}\left|\mu(x)-\nu(x)\right|dx\right)^{\frac{1}{2}}\] \[\leq \left(2\int_{\mathbb{R}^{d}}\left\|x\right\|^{2}\left(\mu(x)+\nu(x)\right)dx\right)^{\frac{1}{2}}\] \[\leq \sqrt{2}\sqrt{M_{2}\left(\mu+\nu\right)}.\] Since \(\nu\) has a Lipschitz gradient, from the HWI inequality, for any \(s\geq 4\) we have \[H\left(\mu|\nu\right) \leq W_{2}(\mu,\nu)\sqrt{I\left(\mu|\nu\right)}+L_{G}W_{2}^{2}(\mu,\nu)\] \[\leq\sqrt{2}M_{2}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{I\left(\mu|\nu\right)}+2L_{G}M_{4}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{\frac{1}{\gamma}}\sqrt{I\left(\mu|\nu\right)}\] \[\leq\left(\sqrt{2}M_{2}^{\frac{1}{2}}\left(\mu+\nu\right)+2L_{G}M
_{4}^{\frac{1}{2}}\left(\mu+\nu\right)\sqrt{\frac{1}{\gamma}}\right)\sqrt{I\left(\mu|\nu\right)}\] \[\leq\left(\sqrt{2}+2L_{G}\sqrt{\frac{1}{\gamma}}\right)M_{s}^{\frac{2}{s}}\left(\mu+\nu\right)\sqrt{I\left(\mu|\nu\right)},\] where in the last step, we have used Jensen's inequality for \(s\geq 4\). This gives us the desired result. ### Proof of Lemma 14 Proof: Integrating both sides of the differential inequality above from \(t=0\) to \(t=\eta\), we obtain \[H(p_{k+1}|\nu)-H(p_{k}|\nu)\leq D\eta^{2+\alpha_{H}},\] where the inequality holds since the first term is negative. Using the discrete Gronwall inequality, we have, for any \(K\in\mathbb{N}\), \[H(p_{K}|\nu) \leq H(p_{k_{0}}|\nu)+KD\eta^{2+\alpha_{H}}\] \[\leq H(p_{k_{0}}|\nu)+TD\eta^{1+\alpha_{H}}\] \[\leq H(p_{k_{0}}|\nu)+\frac{\epsilon}{2}.\] If there exists some \(k<K\) such that \(H(p_{k}|\nu)\leq\frac{\epsilon}{2}\), then we can choose \(\eta\leq\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{1+\alpha_{H}}}\) so that \(H(p_{K}|\nu)\leq\epsilon\). If there is no such \(k\), we will prove that for \(K\) sufficiently large, \(H(p_{K}|\nu)\leq\epsilon\). 
Let \(A=\frac{3C}{8\left(M_{4}(\bar{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)\). The above expression leads to \[H(p_{k+1}|\nu)\leq H(p_{k}|\nu)\left(1-A\eta\right)+D\eta^{\alpha_{H}+2}.\] By iterating the process we get \[H(p_{k}|\nu)\leq H(p_{0}|\nu)\left(1-A\eta\right)^{k}+\frac{D}{A}\eta^{\alpha_{H}+1}.\] To get \(H(p_{K}|\nu)\leq\epsilon\), for \(\eta\) small enough so that \(\eta\leq\left(\frac{A\epsilon}{2D}\right)^{\frac{1}{\alpha_{H}+1}}\), it suffices to run \(K\) iterations such that \[\left(1-A\eta\right)^{K}\leq\frac{\epsilon}{2H(p_{0}|\nu)}.\] As a result, we obtain \[K =\log_{(1-A\eta)}\left(\frac{\epsilon}{2H(p_{0}|\nu)}\right)\] \[=\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\ln\left(\frac{1}{1-A\eta}\right)}\] \[\leq\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\frac{3C}{8\left(M_{4}(\bar{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)\eta}.\] By plugging in \(T=K\eta\) and assuming without loss of generality that \(T>1\) (since we can choose \(T\)), we obtain \[T\leq\frac{\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\frac{3C}{8\left(M_{4}(\bar{p}_{k,t}+\nu)\right)}\left(\frac{\epsilon}{2}\right)},\] which is satisfied if we choose \[T=O\left(\frac{\left(M_{4}(\bar{p}_{k,t}+\nu)\right)\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon}\right).\] Without loss of generality, since \(H(p_{0}|\nu)=O\left(d\right)\), we can assume that \(H(p_{0}|\nu)\geq 1>\epsilon\). We then have \(\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)>1\). Therefore, \[\eta=\min\left\{1,\left(\frac{A\epsilon}{2D}\right)^{\frac{1}{\alpha_{H}+1}},\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{H}+1}}\right\}=\left(\frac{\epsilon}{2TD}\right)^{\frac{1}{\alpha_{H}+1}}.\] Using \(K=\frac{T}{\eta}\), we have \[K \leq O\left(\left(\frac{2TD}{\epsilon}\right)^{\frac{1}{\alpha_{H}+1}}
\frac{M_{4}(\bar{p}_{k,t}+\nu)\ln\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon}\right)\] \[\leq O\left(\frac{D^{\frac{1}{\alpha_{H}+1}}d^{\left\lceil\frac{4}{\beta}\right\rceil\left(1+\frac{1}{\alpha_{H}+1}\right)}\ln^{\left(1+\frac{1}{\alpha_{H}+1}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{\left(1+\frac{1}{\alpha_{H}+1}\right)+\frac{1}{\alpha_{H}+1}}}\right).\] Combining these results, for \(\eta\) small enough, \(M_{4}(\bar{p}_{k,t}+\nu)=O\left(d^{\left\lceil\frac{4}{\beta}\right\rceil}\right)\) and \(D=O\left(d^{\frac{\left\lceil\frac{4\alpha_{H}+4}{\beta}\right\rceil}{2}}\right)\), we obtain \[K=O\left(\frac{d^{\left\lceil\frac{4}{\beta}\right\rceil\left(1+\frac{1}{\alpha_{H}+1}\right)+\frac{1}{\alpha_{H}+1}\cdot\frac{\left\lceil\frac{4\alpha_{H}+4}{\beta}\right\rceil}{2}}\ln^{\left(1+\frac{1}{\alpha_{H}+1}\right)}\left(\frac{H(p_{0}|\nu)}{\epsilon}\right)}{\epsilon^{1+\frac{2}{\alpha_{H}+1}}}\right),\] which is our desired result. _Remark 4_: We can get a tighter result in each specific case. For example, by choosing \(\ell_{H}=0\), \(\alpha_{HN}=\alpha_{H}=1\), \(\beta\simeq 2\), we obtain \[K\approx\tilde{O}\left(\frac{d^{5}}{\epsilon^{2}}\right),\] which is weaker than but broadly comparable to the result of (Erdogdu and Hosseinzadeh, 2020). 
## Appendix D Proof of the ULA algorithm via potential smoothing ### Proof of Lemma 11 Proof: Since \(\left|U_{\mu}-U\right|\leq L\mu^{1+\alpha}d^{\frac{1+\alpha}{2\gamma_{\mu}}}\) and \(U\) satisfies the Poincaré inequality with constant \(\gamma\), by Lemma 1.2 of (Ledoux, 2001), \(U_{\mu}\) satisfies the Poincaré inequality with constant \(\gamma_{1}=\gamma e^{-4\mu^{1+\alpha}d^{\frac{1+\alpha}{2\gamma_{\mu}}}}\). From Theorem 1 we have \[W_{2}^{2}(p,\pi_{\mu})\leq 2M_{4}^{\frac{1}{2}}\left(p+\pi_{\mu}\right)\sqrt{\frac{1}{\gamma_{1}}}\sqrt{I\left(p|\pi_{\mu}\right)}.\] On the other hand, \(W_{2}\) can be bounded directly, again from (Villani, 2008), as \[W_{2}(p,\pi_{\mu}) \leq\left(2\int_{\mathbb{R}^{d}}\left\|x\right\|^{2}\left|p(x)-\pi_{\mu}(x)\right|dx\right)^{\frac{1}{2}}\] \[\leq\left(2\int_{\mathbb{R}^{d}}\left\|x\right\|^{2}\left(p(x)+\pi_{\mu}(x)\right)dx\right)^{\frac{1}{2}}\] \[\leq\sqrt{2}\sqrt{M_{2}\left(p+\pi_{\mu}\right)}.\] Since \(\pi_{\mu}\) has a \(\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}\lor 2}\)-Lipschitz gradient, from the HWI inequality, for any \(s\geq 4\) we have \[H\left(p|\pi_{\mu}\right) \leq W_{2}(p,\pi_{\mu})\sqrt{I\left(p|\pi_{\mu}\right)}+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}\lor 2}W_{2}^{2}(p,\pi_{\mu})\] \[\leq\sqrt{2}M_{2}^{\frac{1}{2}}\left(p+\pi_{\mu}\right)\sqrt{I\left(p|\pi_{\mu}\right)}+2\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}\lor 2}M_{4}^{\frac{1}{2}}\left(p+\pi_{\mu}\right)\sqrt{\frac{1}{\gamma_{1}}}\sqrt{I\left(p|\pi_{\mu}\right)}\] \[\leq\left(\sqrt{2}+2\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}\lor 2}\sqrt{\frac{1}{\gamma_{1}}}\right)M_{4}^{\frac{1}{2}}\left(p+\pi_{\mu}\right)\sqrt{I\left(p|\pi_{\mu}\right)}\] 
\[\leq\left(\sqrt{2}+2\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{p}\lor 2}\sqrt{\frac{1}{\gamma_{1}}}\right)M_{s}^{\frac{2}{s}}\left(p+\pi_{\mu}\right)\sqrt{I\left(p|\pi_{\mu}\right)},\] where in the last step, we have used Jensen's inequality for \(s\geq 4\). This gives us the desired result. ### Proof of Lemma 12 Proof: We provide the proof for completeness. Recall that by the definition of \(U_{\mu}\), we have \(U_{\mu}(x)=\mathbb{E}_{\zeta}[U(x+\mu\zeta)]\), where \(\zeta\sim N_{p}(0,I_{d})\). For \(\zeta_{1}\sim N_{p}(0,I_{d})\) independent of \(\zeta\), we clearly have \(\mathbb{E}_{\zeta_{1}}[g_{\mu}(x,\zeta_{1})]=\mathbb{E}_{\zeta_{1}}\nabla U(x+\mu\zeta_{1})=\nabla\mathbb{E}_{\zeta_{1}}U(x+\mu\zeta_{1})=\nabla U_{\mu}(x)\), by exchanging gradient and expectation and using the definition of \(U_{\mu}(x)\). We now proceed to bound the variance of \(g_{\mu}(x,\zeta_{1})\). We have: \[\mathbb{E}_{\zeta_{1}}[\|\nabla U_{\mu}(x)-g_{\mu}(x,\zeta_{1})\|_{2}^{2}]\] \[\leq\mathbb{E}_{\zeta_{1},\zeta}[\|\nabla U(x+\mu\zeta)-\nabla U(x+\mu\zeta_{1})\|^{2}]\] \[\leq N\sum_{i}L_{i}^{2}\mathbb{E}_{\zeta_{1},\zeta}[\|\mu(\zeta-\zeta_{1})\|^{2\alpha_{i}}]\] \[\leq N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\mathbb{E}_{\zeta_{1},\zeta}[\|\zeta-\zeta_{1}\|^{2\alpha_{i}}]\] \[\leq 2N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\left(\mathbb{E}\left[\|\zeta\|^{2\alpha_{i}}\right]+\mathbb{E}\left[\|\zeta_{1}\|^{2\alpha_{i}}\right]\right)\] \[\leq 2N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}\left(\left(\mathbb{E}\left[\|\zeta\|^{2}\right]\right)^{\alpha_{i}}+\left(\mathbb{E}\left[\|\zeta_{1}\|^{2}\right]\right)^{\alpha_{i}}\right)\] \[\leq 4N\sum_{i}L_{i}^{2}\mu^{2\alpha_{i}}d^{\frac{2\alpha_{i}}{p}}\] \[\leq 4N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}},\] as claimed. 
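Lemma 12 can be checked by Monte Carlo in one dimension. The toy potential \(U(x)=x^{2}/2\) (so \(N=1\), \(L=1\), \(\alpha=1\)) is an illustrative choice only: then \(U_{\mu}(x)=x^{2}/2+\mu^{2}/2\), \(\nabla U_{\mu}(x)=x\), and the estimator \(g_{\mu}(x,\zeta)=\nabla U(x+\mu\zeta)=x+\mu\zeta\) is unbiased with variance exactly \(\mu^{2}\), within the lemma's bound \(4N^{2}L^{2}\mu^{2\alpha}\).

```python
import random

# Monte Carlo check of the smoothed-gradient estimator g_mu(x, z) = x + mu*z
# for U(x) = x^2/2: unbiased for grad U_mu(x) = x, variance mu^2 <= 4 mu^2.
random.seed(1)
x, mu, M = 1.7, 0.3, 200_000
samples = [x + mu * random.gauss(0.0, 1.0) for _ in range(M)]
mean = sum(samples) / M
var = sum((s - mean) ** 2 for s in samples) / M
assert abs(mean - x) < 0.01       # estimator is unbiased for grad U_mu(x) = x
assert abs(var - mu ** 2) < 0.01  # its variance is mu^2 exactly in this case
assert var <= 4 * mu ** 2         # consistent with the bound 4 N^2 L^2 mu^(2 alpha)
```

Here \(d=1\) and \(p=2\), so the dimension factor \(d^{2\alpha/p}\) in the lemma equals 1.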
### Proof of Lemma 13 Proof: First of all, we have \[\mathbb{E}_{p_{\mu,\lambda,\zeta}}\left\|\nabla U_{\mu}(x_{\mu,k})- \nabla U_{\mu}(x_{\mu,k,\lambda})\right\|^{2}\] \[\overset{!}{\leq}3\mathbb{E}_{p_{\mu,\lambda,\zeta}}\left[\left\| \nabla U_{\mu}(x_{\mu,k})-\nabla U(x_{\mu,k})\right\|^{2}+\left\|\nabla U(x_{ \mu,k})-\nabla U(x_{\mu,k,\lambda})\right\|^{2}+\left\|\nabla U(x_{\mu,k, \lambda})-\nabla U_{\mu}(x_{\mu,k,\lambda})\right\|^{2}\right]\] \[\overset{!}{\leq}3NL^{2}\sum_{i}\mathbb{E}_{p_{\mu,\lambda,\zeta }}\left\|x_{\mu,k,\lambda}-x_{\mu,k}\right\|^{2\alpha}+6\left(\frac{NL\mu^{1+ \alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\leq 3NL^{2}\sum_{i}\mathbb{E}_{p_{\mu,\lambda,\zeta}}\left\|-tg(x _{\mu,k},\zeta)+\sqrt{2t}z_{\mu,k}\right\|^{2\alpha}+6\left(\frac{NL\mu^{1+ \alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\overset{!}{\leq}3NL^{2}\sum_{i}\left(2\eta^{2\alpha_{i}} \mathbb{E}_{p_{\mu,\lambda,\zeta}}\left[\left\|\nabla U(x_{\mu,k})\right\|+ \left\|\nabla U_{\mu}(x_{\mu,k})-\nabla U(x_{\mu,k})\right\|+\left\|\nabla U_{ \mu}(x_{\mu,k})-g(x_{\mu,k},\zeta)\right\|\right]^{2\alpha}\right)\] \[+3NL^{2}\sum_{i}4\eta^{\alpha_{i}}d^{\alpha_{i}}+6\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\overset{!}{\leq}3NL^{2}\sum_{i}\left(6\eta^{2\alpha_{i}} \mathbb{E}_{p_{\mu,\lambda,\zeta}}\left[\left\|\nabla U(x_{\mu,k})\right\|^{2 \alpha_{i}}+\left\|\nabla U_{\mu}(x_{\mu,k})-\nabla U(x_{\mu,k})\right\|^{2 \alpha_{i}}+\left\|\nabla U_{\mu}(x_{\mu,k})-g(x_{\mu,k},\zeta)\right\|^{2 \alpha}\right]\right)\] \[+3NL^{2}\sum_{i}4\eta^{\alpha_{i}}d^{\alpha_{i}}+6\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\leq 3NL^{2}\sum_{i}\left(6\eta^{2\alpha_{i}}\left[\mathbb{E}_{p_{ \mu,\lambda}}\left\|\nabla U(x_{\mu,k})\right\|^{2\alpha_{i}}+\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2\alpha_{i} 
}+\left(\mathbb{E}_{p_{\mu,\lambda,\zeta}}\left\|\nabla U_{\mu}(x_{\mu,k})-g( x_{\mu,k},\zeta)\right\|^{2}\right)^{\alpha_{i}}\right]\right)\] \[+3NL^{2}\sum_{i}4\eta^{\alpha_{i}}d^{\alpha_{i}}+6\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\overset{!}{\leq}3NL^{2}\sum_{i}\left(6\eta^{2\alpha_{i}}\left[ \left(d^{\frac{((i_{G}+\alpha_{G}))^{2\alpha_{i}}}{p}}\right)+\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2\alpha_{i} }+\left(8N^{2}L^{2}\mu^{2\alpha}d^{\frac{2\alpha}{p}}\right)^{\alpha_{i}} \right]\right)\] \[+3NL^{2}\sum_{i}4\eta^{\alpha_{i}}d^{\alpha_{i}}+6\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2}\] \[\leq\left(O\left(d^{\frac{(\alpha_{G})^{2\alpha_{i}}}{p}}\right) \right)+6\left(\frac{NL\mu}{(1+\alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{ 2}\right)\eta^{\alpha}\] \[+3NL^{2}\sum_{i}\left(6\left[\left(\frac{NL\mu^{1+\alpha}}{(1+ \alpha)}d^{\frac{3}{p}\vee\frac{5}{2}}\right)^{2\alpha_{i}}+\left(8N^{2}L^{2} d^{\frac{2\alpha}{p}}\right)^{\alpha_{i}}\right]+4d^{\alpha_{i}}\right)\eta^{\alpha},\] where step 1 follows from Assumption 1, step 2 comes from Young inequality and triangle inequality, step 3 comes from triangle inequality and normal distribution, step 4 is due to Young inequality, step 5 comes from Lemma 12, and in the last step, we have used Lemma 14. 
On the other hand, by choosing \(\mu=\sqrt{\eta}\), we also have \[\mathbb{E}_{p_{k}\xi}\left\|\nabla U_{\mu}(x_{\mu,k,t})-g(x_{\mu,k}, \zeta)\right\|^{2}\] \[\overset{1}{\leq}2\left[\mathbb{E}_{p_{k}\xi}\left\|\nabla U_{\mu} (x_{\mu,k,t})-\nabla U_{\mu}(x_{\mu,k})\right\|^{2}+\left\|\nabla U_{\mu}(x_{ \mu,k})-g(x_{k},\xi)\right\|^{2}\right]\] \[\overset{2}{\leq}2\mathbb{E}_{p_{k}\xi}\left\|\nabla U_{\mu}(x_{ \mu,k,t})-\nabla U_{\mu}(x_{\mu,k})\right\|^{2}+8N^{2}L^{2}\mu^{2\alpha}d^{ \frac{2\alpha}{p}}\] \[\leq\left(O\left(d^{\left(\frac{\alpha_{1}(\alpha+\alpha_{2}) \beta\alpha}{\beta}\right)}\right)+12\left(\frac{NL\mu}{(1+\alpha)}d^{\frac{3} {p},\frac{\zeta}{2}}\right)^{2}\right)\eta^{\alpha}\] \[+6NL^{2}\sum_{i}\left(6\left[\left(\frac{NL\mu^{1+\alpha}}{(1+ \alpha)}d^{\frac{3}{p},\frac{\zeta}{2}}\right)^{2\alpha_{i}}+\left(8N^{2}L^{2 }d^{\frac{2\alpha}{p}}\right)^{\alpha_{i}}\right]+4d^{\alpha_{i}}\right)\eta^ {\alpha}+8N^{2}L^{2}d^{\frac{2\alpha}{p}}\eta^{\alpha}\] \[\leq O\left(d^{\left\lceil\frac{2\alpha_{ON}^{2}}{\beta}\right\rceil }\right)\eta^{\alpha},\] where step 1 follows from Young inequality, step 2 is because of Lemma 12 and \(\eta\leq 1\), and the last step comes from \(\eta\) small enough. Therefore, from Lemma 4, the time derivative of KL divergence along ULA is bounded by \[\frac{d}{dt}H\left(p_{\mu,k,t}|\pi_{\mu}\right) \leq-\frac{3}{4}I\left(p_{\mu,k,t}|\pi_{\mu}\right)+\mathbb{E}_{p _{k}\xi}\left\|\nabla U(x_{\mu,k,t})-g(x_{\mu,k},\zeta)\right\|^{2}\] \[\leq-\frac{3}{4}I\left(p_{\mu,k,t}|\pi_{\mu}\right)+D_{\mu}\eta^ {\alpha_{G}},\] where \(D_{\mu}=O\left(d^{\left\lceil\frac{2\alpha_{ON}^{2}}{\beta}\right\rceil}\right)\), as desired. 
### Proof of Theorem 4 Proof: Following the same steps as in Theorem 1, we get \(H(p_{K}|\pi)\leq\epsilon\) after \[K=O\left(\frac{\gamma^{1+\frac{1}{\alpha_{G}}d}d^{\frac{2\alpha_{ON}^{2}}{\beta}}|\frac{1}{\alpha_{G}}+\frac{\alpha_{G}\alpha+2}{\beta}|(\alpha_{G}+2)\left(1+\frac{1}{\alpha_{G}}\right)\ln^{\left(1+\frac{1}{\alpha_{G}}\right)}\left(\frac{\left(H(p_{0}|\pi)\right)}{\epsilon}\right)}{\epsilon^{(\alpha_{G}\alpha_{N}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right).\] By replacing \(\delta_{1}=\frac{1}{2}\) and \(\delta_{2}=\frac{2}{\gamma}\) for \(s>4\), we have \[K\approx\tilde{O}\left(\frac{d^{\left\lceil\frac{2\alpha_{ON}^{2}}{\beta}\right\rceil\frac{1}{\alpha_{G}}+\lceil\frac{\alpha_{G}\alpha+2}{\beta}\rceil(\alpha_{G}\alpha+2)\left(1+\frac{1}{\alpha_{G}}\right)}{\epsilon^{(\alpha_{G}\alpha_{N}+1)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{1}{\alpha_{G}}}}\right).\] From (Nguyen et al., 2021)'s Lemma 3.4, choosing \(\mu=\sqrt{\eta}\) small enough ensures \(W_{\beta}(\pi,\ \pi_{\mu})\leq 3\sqrt{NLE_{2}}\eta^{\frac{\alpha}{2}}d^{\frac{1}{p}}\leq\frac{\epsilon}{2}\). 
Since \(\pi\) satisfies Poincare inequality, by triangle inequality we also get \[W_{\beta}(p_{k},\ \pi) \leq W_{\beta}(p_{k},\ \pi_{\mu})+W_{\beta}(\pi,\ \pi_{\mu})\] \[\leq 2\inf_{\tau}\left[\tau\left(1.5+\log\int e^{\pi|\xi|^{2}} \pi(x)dx\right)\right]^{\frac{1}{\beta}}\left(H(p_{k}|\pi)^{\frac{1}{p}}+H(p_{ k}|\pi)^{\frac{1}{p\beta}}\right)+W_{\beta}(\pi,\ \pi_{\mu})\] \[\leq 2\left[\frac{a}{4\beta}\left(1.5+\bar{d}+\bar{\mu}\right) \right]^{\frac{1}{\beta}}\left(H(p_{k}|\pi)^{\frac{1}{p}}+H(p_{k}|\pi)^{\frac{ 1}{2\beta}}\right)+3\sqrt{NLE_{2}}\eta^{\frac{\alpha}{2}}d^{\frac{1}{p}}\] To have \(W_{\beta}(p_{K},\ \pi)\leq\epsilon\), it is sufficient to choose \(H(p_{k}|\pi)^{\frac{1}{2\beta}}=\tilde{O}\left(\epsilon d^{\frac{-1}{\beta}}\right)\), which in turn implies \(H(p_{k}|\pi)=\tilde{O}\left(\epsilon^{2\beta}d^{-2}\right).\) By replacing this in the bound above, we obtain the number of iteration for \(L_{\beta}\)-Wasserstein distance is \(\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(\left\lceil\frac{2\alpha_{MN}^{2}}{\beta }\right\rceil\frac{1}{\delta_{G}}+\left\lceil\frac{\alpha_{GN+2}}{\beta}\right \rceil(\alpha_{GN+2})\left(1+\frac{1}{\delta_{G}}\right)\right)+2+\frac{4}{ \beta}}{\gamma_{1}^{\left(1+\frac{1}{\delta_{G}}\right)}\epsilon^{\left(\alpha _{GN+1}\right)\left(1+\frac{1}{\delta_{G}}\right)+\frac{1}{\delta_{G}}}}{ \varepsilon^{\left(\alpha_{GN+1}\right)\left(1+\frac{1}{\delta_{G}}\right)+ \frac{1}{\delta_{G}}}}\right)\) where \(\gamma_{1}=\gamma e^{-4\lambda t\mu^{1+\alpha}d^{\frac{1+\alpha}{2\gamma p}}}\). 
Given \(\varepsilon>0\), if we further assume \[\eta=\min\left\{1,\left(\frac{\varepsilon}{2TD_{\mu}}\right)^{\frac{1}{\delta_{G}}},\left(\frac{\varepsilon}{9\sqrt{NLE_{2}}d^{\frac{1}{\gamma}}}\right)^{\frac{2}{\delta_{G}}}\right\}\] and \(\mu\) is small enough, then the above inequality implies that for \[K\geq\tilde{O}\left(\frac{d^{\frac{2}{\beta}\left(\left\lceil\frac{2\alpha_{MN}^{2}}{\beta}\right\rceil\frac{1}{\delta_{G}}+\left\lceil\frac{\alpha_{GN+2}}{\beta}\right\rceil(\alpha_{GN+2})\left(1+\frac{1}{\delta_{G}}\right)\right)+2+\frac{4}{\beta}}}{\gamma_{1}^{\left(1+\frac{1}{\delta_{G}}\right)}\epsilon^{\left(\alpha_{GN+1}\right)\left(1+\frac{1}{\delta_{G}}\right)+\frac{1}{\delta_{G}}}}\right),\] we have \(W_{\beta}(p_{K},\ \pi)\leq\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon\), as desired. ## Appendix E Extended result ### Proof of Theorem 5 Proof: Using Lemma 2, there exists \(\tilde{U}\left(x\right)\in C^{1}(R^{d})\) whose Hessian exists everywhere on \(R^{d}\), and \(\tilde{U}\) is convex on \(R^{d}\), such that \[\sup\left(\tilde{U}\left(x\right)-U\left(x\right)\right)-\inf\left(\tilde{U}\left(x\right)-U\left(x\right)\right)\leq 2\sum_{i}L_{i}R^{1+\alpha_{i}}. \tag{34}\] We now prove that \(U\) satisfies a Poincare inequality with constant \(\frac{1}{32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)}e^{-4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}\). Since \(\tilde{U}\) is convex, by Theorem 1.2 of (Bobkov, 1999), \(\tilde{U}\) satisfies a Poincare inequality with constant \[\gamma\geq\frac{1}{4C_{K}^{2}\int\left\|x-E_{\pi}(x)\right\|^{2}\pi\left(x\right)dx}\geq\frac{1}{8C_{K}^{2}\left(E_{\pi}\left(\left\|x\right\|^{2}\right)+\left\|E_{\pi}(x)\right\|^{2}\right)}\geq\frac{1}{16C_{K}^{2}E_{\pi}\left(\left\|x\right\|^{2}\right)},\] where \(C_{K}\) is a universal constant, step 1 follows from Young inequality, and the last line is due to Jensen inequality. 
In addition, for \(\left\|x\right\|>R+2\varepsilon+\delta\), from the \(\beta-\)dissipative assumption we have, for some \(a\), \(b>0\), \(\left\langle\nabla\tilde{U}(x),x\right\rangle=\left\langle\nabla U(x),x\right\rangle\geq a\left\|x\right\|^{\beta}-b\), while for \(\left\|x\right\|\leq R+2\varepsilon+\delta\), by convexity of \(\tilde{U}\), \[\left\langle\nabla\tilde{U}(x),x\right\rangle\geq 0\geq a\left\|x\right\|^{\beta}-a\left(R+2\varepsilon+\delta\right)^{2}\geq a\left\|x\right\|^{\beta}-2aR^{2},\] so for every \(x\in\mathbb{R}^{d}\), \[\left\langle\nabla\tilde{U}(x),x\right\rangle\geq a\left\|x\right\|^{\beta}-\left(b+2aR^{2}\right).\] Therefore, \(\tilde{U}(x)\) is also \(\beta-\)dissipative, which implies \[E_{\pi}\left(\left\|x\right\|^{2}\right)\leq 2d\left(\frac{a+b+2aR^{2}+3}{a}\right),\] so the Poincare constant satisfies \[\gamma\geq\frac{1}{32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)}.\] From (Ledoux, 2001)'s Lemma 1.2, we have that \(U\) satisfies a Poincare inequality with constant \[\gamma\geq\frac{1}{32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)}e^{-4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}.\] Now, applying Theorem 1, we derive that for \(\alpha_{G}\)-mixture locally smooth potentials with \(\ell_{G}=0\), ULA converges in \[K\approx\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}\right)^{1+\frac{1}{\alpha_{G}}}d^{\frac{aR(2+2)}{B}\left(\alpha_{G}+2\right)\left(1+\frac{1}{\alpha_{G}}\right)+\frac{\left(2\left\lvert\alpha_{G}\right\rvert\right)}{2}}}{\varepsilon^{\frac{a_{22}^{2}+22\alpha_{G}+2}{\alpha_{G}}}}\right)\] which is the desired result. 
Similarly, the convergence rates are \[\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}\right)^{2}d^{2\lceil\frac{aR(2+3)}{B}\rceil(\alpha_{G}+3)+\frac{\lceil\frac{4aR(2+1)}{B}\rceil}{2}+\frac{\lceil\frac{aR(2+1)(4aR(2+4))}{B}\rceil}{2}}}{\varepsilon^{2aR(2R+5)}}\right)\] and \[K=\tilde{O}\left(\frac{\left(32C_{K}^{2}d\left(\frac{a+b+2aR^{2}+3}{a}\right)e^{4\left(2\sum_{i}L_{i}R^{1+\alpha_{i}}\right)}\right)^{2}d^{\lceil\frac{\frac{b}{2}}{2}\rceil+2\lceil\frac{4}{B}\rceil}}{\varepsilon^{2}}\right)\] respectively, if the potential is \(\alpha_{H}\)-mixture locally Hessian smooth with \(\ell_{H}=0\), or if the potential is \(1\)-smooth and \(1\)-Hessian smooth. ## Appendix F Useful lemmas ### Proof of Lemma 24 **Lemma 20**: _[(Erdogdu and Hosseinzadeh, 2020)'s Lemma 34]_ _The function \(\|x\|^{\alpha-2}\,x\) is \((\alpha-1)\)-Hölder for \(1<\alpha<2\)._ **Lemma 21**: _The function \(\|x\|^{\alpha}\) is \(\frac{\alpha-1}{n}\)-locally smooth for \(1\leq n-1<\alpha-1\leq n\)._ Proof.: Without loss of generality, assume \(\|y\|\leq\|x\|\), which implies \(\|x\|-\|y\|\leq\|x-y\|\leq\|x\|+\|y\|\). 
Therefore, \[\|\nabla f(x)-\nabla f(y)\|\] \[\leq\left\|\left\|x\right\|^{\alpha-2}x-\|y\|^{\alpha-2}y\right\|\] \[\leq\left\|\left\|x\right\|^{\alpha-2}x-\|x\|^{\alpha-1}\frac{y}{ \|y\|}+\|x\|^{\alpha-1}\frac{y}{\|y\|}-\|y\|^{\alpha-2}y\right\|\] \[\leq\left\|x\right\|^{\alpha-1}\left\|\frac{x}{\|x\|}-\frac{y}{ \|y\|}\right\|+\left\|x\right\|^{\alpha-1}-\|y\|^{\alpha-1}\] \[\leq\left\|x\right\|^{\alpha-2}\|x-y\|+\|x\|^{\alpha-1}\left\| \frac{y}{\|y\|}\left(\frac{\|y\|}{\|x\|}-1\right)\right\|+\|x-y\|^{\alpha-1}\] \[\leq\left\|x\right\|^{\alpha-1}\left(2\left\|x\right\|^{\alpha- 2}\left\|x-y\right\|^{1-\frac{\alpha-1}{\alpha}}+\left\|x\right\|^{\frac{( \alpha-1)(\alpha-1)}{\alpha}}+\ldots+\left\|y\right\|^{\frac{(\alpha-1)( \alpha-1)}{\alpha}}\right)\] \[\leq n\|x-y\|^{\frac{\alpha-1}{\alpha}}\left(2\left\|x\right\|^{ \alpha-2}\left\|x\right\|^{1-\frac{\alpha-1}{\alpha}}+2\left\|x\right\|^{ \alpha-2}\left\|y\right\|^{1-\frac{\alpha-1}{\alpha}}+\left\|x\right\|^{ \frac{(\alpha-1)(\alpha-1)}{\alpha}}+\ldots+\left\|y\right\|^{\frac{(\alpha-1 )(\alpha-1)}{\alpha}}\right)\] \[\leq(n+5)\left\|x-y\right\|^{\frac{\alpha-1}{\alpha}}\left(1+\|x \|^{\frac{(\alpha-1)(\alpha-1)}{\alpha}}+\|y\|^{\frac{(\alpha-1)(\alpha-1)}{ \alpha}}\right).\] This is the desired result. **Lemma 22**: _The function \(\left\|x\right\|^{\alpha}\) is \(\alpha-2\)-locally Hessian smooth for \(2<\alpha\leq 3\)._ _Proof_ Without loss of generality, assume \(\left\|y\right\|\leq\left\|x\right\|\) which implies \(\left\|y\right\|-\left\|x\right\|\leq\left\|x-y\right\|\leq\left\|x\right\|+ \left\|y\right\|\leq 2\left\|x\right\|\), which in turn implies \(\left\|x\right\|^{\alpha-3}\leq 2^{3-\alpha}\left\|x-y\right\|^{\alpha-3}\). 
Therefore, \[\left\|\nabla^{2}f(x)-\nabla^{2}f(y)\right\|_{\mathrm{op}}\] \[\leq\alpha\left\|\left\|x\right\|^{\alpha-2}I+(\alpha-2)\left\|x \right\|^{\alpha-3}\frac{xx^{T}}{\left\|x\right\|}-\left\|y\right\|^{\alpha-2 }I-(\alpha-2)\frac{yy^{T}}{\left\|y\right\|}\left\|y\right\|^{\alpha-3} \right\|_{\mathrm{op}}\] \[\leq\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{\alpha-2}+( \alpha-2)\left\|\left\|x\right\|^{\alpha-4}x^{T}-\left\|x\right\|^{\alpha-4} yx^{T}+\left\|x\right\|^{\alpha-4}yx^{T}-yy^{T}\left\|y\right\|^{\alpha-4} \right\|_{\mathrm{op}}\] \[\leq\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{\alpha-2}+( \alpha-2)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+(\alpha-2)\left\|x \right\|^{\alpha-4}yx^{T}-\left\|x\right\|^{\alpha-4}yy^{T}+\left\|x\right\|^{ \alpha-4}yy^{T}-yy^{T}\left\|y\right\|^{\alpha-4}\right\|_{\mathrm{op}}\] \[\leq\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{\alpha-2}+( \alpha-2)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+(\alpha-2)\left\|x \right\|^{\alpha-4}\left\|y\right\|\left\|x-y\right\|+(\alpha-2)\left(\left\| x\right\|^{\alpha-4}-\left\|y\right\|^{\alpha-4}\right)\left\|y\right\|^{2}\] \[+(\alpha-2)\left(\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{ \alpha-2}\right)+(\alpha-2)\left\|x\right\|^{\alpha-4}\left(\left\|y\right\| ^{2}-\left\|x\right\|^{2}\right)\] \[+(\alpha-2)\left(\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{ \alpha-2}\right)+2(\alpha-2)\left\|x\right\|^{\alpha-4}\left(\left\|y\right\| ^{2}-\left\|x\right\|^{2}\right)\] \[+(\alpha-2)\left(\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{ \alpha-2}\right)+2(\alpha-2)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+( \alpha-2)\left\|x\right\|^{\alpha-4}\left\|x-y\right\|^{2}\] \[+(\alpha-2)\left\|x\right\|^{\alpha-4}(2\left\|x\right\|+\left\| x-y\right\|)\left(\left\|x-y\right\|\right)\] \[\leq(\alpha-1)\left(\left\|y\right\|^{\alpha-2}-\left\|x\right\| ^{\alpha-2}\right)+4(\alpha-2)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+2 
(\alpha-2)\left\|x\right\|^{\alpha-4}\left\|x-y\right\|^{2}\] \[\leq\left(\alpha-1+(\alpha-2)\,2^{6-\alpha}\right)\left\|x-y\right\|^{\alpha-2},\] where the last inequality follows from power expansion and triangle inequality. This is the desired result. **Lemma 23**: _The function \(\left\|x\right\|^{\alpha}\) is \(\frac{\alpha-2}{n}\)-locally Hessian smooth for \(1\leq n-1<\alpha-2\leq n\)._ _Proof_ Without loss of generality, assume \(\left\|y\right\|\leq\left\|x\right\|\), which implies \(\left\|x\right\|-\left\|y\right\|\leq\left\|x-y\right\|\leq\left\|x\right\|+\left\|y\right\|\leq 2\left\|x\right\|\). Therefore, \[\left\|\nabla^{2}f(x)-\nabla^{2}f(y)\right\|_{\mathrm{op}}\leq(\alpha-1)\left|\left\|y\right\|^{\alpha-2}-\left\|x\right\|^{\alpha-2}\right|+4\left(\alpha-2\right)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+2\left(\alpha-2\right)\left\|x\right\|^{\alpha-4}\left\|x-y\right\|^{2}\] \[\leq(\alpha-1)\left\|x-y\right\|^{\frac{\alpha-2}{n}}\left(\left\|x\right\|^{\frac{(n-1)(\alpha-2)}{n}}+\ldots+\left\|y\right\|^{\frac{(n-1)(\alpha-2)}{n}}\right)+4\left(\alpha-2\right)\left\|x\right\|^{\alpha-3}\left\|x-y\right\|+2\left(\alpha-2\right)\left\|x\right\|^{\alpha-4}\left\|x-y\right\|^{2}\] \[\leq(n\left(\alpha-1\right)+6\left(\alpha-2\right))\left\|x-y\right\|^{\frac{\alpha-2}{n}}\left(1+\left\|x\right\|^{\frac{(n-1)(\alpha-2)}{n}}+\left\|y\right\|^{\frac{(n-1)(\alpha-2)}{n}}\right).\] This is the desired result. **Lemma 24**: _Suppose \(\pi=e^{-U}\) satisfies \(\alpha\)-mixture weakly smooth. Let \(p_{\mu,0}=N(0,\frac{1}{L}I)\). 
Then \(H(p_{\mu,0}|\pi_{\mu})\leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{ 2\nu\rho}}-\frac{d}{2}\log\frac{2\Pi e}{L}+\frac{Nd}{1+\alpha}=O(d)\)._ _Proof_ Since \(U\) is mixture weakly smooth, for all \(x\in\mathbb{R}^{d}\) we have \[U_{\mu}(x) \leq U(0)+\left\langle\nabla U(0),x\right\rangle+\frac{L}{1+ \alpha}\sum_{i}\left\|x\right\|^{1+\alpha}+\frac{NL\mu^{1+\alpha}}{(1+\alpha)} d^{\frac{2}{2\nu\rho}}\] \[\leq U(0)+\frac{L}{1+\alpha}\sum_{i}\left\|x\right\|^{1+\alpha} +\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\nu\rho}}.\] Let \(X\sim\rho=N(0,\frac{1}{L}I)\). Then \[\mathbb{E}_{\rho}\left[U(X)\right] \leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_{ \rho}}}+\frac{L}{1+\alpha}\sum_{i}\mathbb{E}_{\rho}\left(\left\|\mathbb{x} \right\|^{1+\alpha}\right)\] \[\leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_ {\rho}}}+\frac{L}{1+\alpha}\sum_{i}\mathbb{E}_{\rho}\left(\left\|\mathbb{x} \right\|^{2}\right)^{\frac{1+\alpha_{i}}{2}}\] \[\leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_ {\rho}}}+\frac{L}{1+\alpha}\sum_{i}\left(\frac{d}{L}\right)^{\frac{1+\alpha_{i }}{2}}\] \[\leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_ {\rho}}}+\frac{Nd}{1+\alpha}.\] Recall the entropy of \(\rho\) is \(H(\rho)=-\mathbb{E}_{\rho}[\log\rho(X)]=\frac{d}{2}\log\frac{2\Pi e}{L}\). Therefore, the KL divergence is \[\mathbb{E}(\rho|\pi) =\int\rho\left(\log\rho+U\right)dx\] \[=-H(\rho)+\mathbb{E}_{\rho}[U]\] \[\leq U(0)+\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_ {\rho}}}-\frac{d}{2}\log\frac{2\Pi e}{L}+\frac{Nd}{1+\alpha}\] \[=O(d).\] This is the desired result. 
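The Hölder-type bounds of Lemmas 21-23 are easy to spot-check numerically. Below is a minimal sketch for the gradient bound of Lemma 21, with illustrative choices \(\alpha=2.8\), \(n=2\) (so that \(1\leq n-1<\alpha-1\leq n\)) and a deliberately generous constant \(20\) in place of the lemma's \(n+5\) prefactor; it is a sanity check on random samples, not a proof.

```python
import math
import random

def grad(x, alpha):
    # gradient of f(x) = ||x||^alpha, i.e. alpha * ||x||^(alpha-2) * x
    r = math.sqrt(sum(c * c for c in x))
    return [alpha * r ** (alpha - 2) * c for c in x] if r > 0 else [0.0] * len(x)

def check_local_smoothness(alpha=2.8, n=2, trials=2000, const=20.0):
    # Lemma 21 regime: 1 <= n - 1 < alpha - 1 <= n; "const" is a generous
    # slack in place of the lemma's (n + 5) constant.
    e = (n - 1) * (alpha - 1) / n          # exponent on ||x||, ||y||
    rng = random.Random(0)
    for _ in range(trials):
        x = [rng.uniform(-5, 5) for _ in range(3)]
        y = [rng.uniform(-5, 5) for _ in range(3)]
        gx, gy = grad(x, alpha), grad(y, alpha)
        lhs = math.sqrt(sum((a - b) ** 2 for a, b in zip(gx, gy)))
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        rx = math.sqrt(sum(c * c for c in x))
        ry = math.sqrt(sum(c * c for c in y))
        rhs = const * dist ** ((alpha - 1) / n) * (1 + rx ** e + ry ** e)
        if lhs > rhs:
            return False
    return True

print(check_local_smoothness())   # True on this sample
```

The same scheme, with the Hessian in place of the gradient and exponent \((\alpha-2)/n\), checks Lemmas 22 and 23.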
### Proof of Lemma 25 **Lemma 25**: _Assume \(\pi_{\mu}=e^{-U_{\mu}(x)}\) then_ \[\mathbb{E}_{\pi_{\mu}}\left[\left\|\nabla U(x)\right\|^{2}\right] \leq\frac{2NL}{\mu^{1-\alpha}}d^{\frac{2}{2}}d^{\frac{2}{2}}+2\left(\frac{NL \mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_{\rho}}}\right)^{2},\] _for d sufficiently large._ Proof: Since \(\pi_{\mu}\) is stationary distribution, we have \[\mathbb{E}_{\pi_{\mu}}\left[\left\|\nabla U_{\mu}(x)\right\|^{2}\right] =\mathbb{E}_{\pi_{\mu}}\left(\triangle U_{\mu}\left(x\right)\right)\] \[\leq\frac{NL}{\mu^{1-\alpha}}d^{\frac{2}{2}},\] where the last step comes from Lemma 10that \(\nabla U_{\mu}\left(x\right)\) is \(\frac{NL}{\mu^{1-\alpha}}d^{\frac{2}{2}}\)-Lipschitz, \(\nabla^{2}U_{\mu}\left(x\right)\leq\frac{NL}{\mu^{1-\alpha}}d^{\frac{2}{2}}I\). In addition, \[\mathbb{E}_{\pi_{\mu}}\left[\left\|\nabla U(x)\right\|^{2}\right] \leq 2\mathbb{E}_{\pi_{\mu}}\left[\left\|\nabla U_{\mu}(x)\right\|^ {2}+\left\|\nabla U_{\mu}(x)-\nabla U(x)\right\|^{2}\right]\] \[\leq\frac{2NL}{\mu^{1-\alpha}}d^{\frac{2}{2}}d^{\frac{2}{2}}+2 \left(\frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\gamma_{\rho}}}\right) ^{2},\] where the last step follows from Lemma 10. This gives the desired result. ### Proof of lemma 26 **Lemma 26**: _If \(U\) satisfies Assumptions 1and 3, then_ \[U(x)\geq\frac{a}{2\beta}\|x\|^{\beta}+U(0)-\frac{L}{\alpha+1}\sum_{i}R^{\alpha _{i}+1}-\frac{b}{\beta}. 
\tag{35}\] ### Proof of Lemma 27 **Lemma 27**: _Assume that \(U\) satisfies Assumptions 1 and 3, then for \(\pi=e^{-U}\) and any distribution \(p\), we have for \(\beta>0\),_ \[W_{\beta}^{\beta}(p,\ \pi)\leq\frac{4a}{\beta}\left(1.5+\tilde{d}+\tilde{c}_{ \mu}\right)H(p_{\mu,k}|\pi_{\mu})+\frac{4a}{\beta}\left(1.5+\tilde{d}+\tilde{c }_{\mu}\right),\] _where_ \[\tilde{c}_{\mu} =\frac{1}{2}\log(\frac{2}{\beta})+\frac{L}{\alpha+1}\sum_{i} \left(\frac{2b}{a}\right)^{\frac{\alpha+1}{\beta}}+\frac{b}{\beta}+|U(0)|+ \frac{NL\mu^{1+\alpha}}{(1+\alpha)}d^{\frac{2}{2\alpha\beta}}, \tag{36}\] \[\tilde{d} =\frac{d}{\beta}\left[\frac{\beta}{2}log\left(\Pi\right)+\log \left(\frac{4\beta}{a}\right)+(1-\frac{\beta}{2})\log(\frac{d}{2e})\right]. \tag{37}\] ### Proof of Lemma 28 **Lemma 28**: _If the potential \(U\) satisfies \(\beta\)-dissipative and \(\alpha\)-mixture weakly smooth with \(2\alpha_{N}\leq\beta\), then let \(p_{k}\) be the distribution of \(x_{k}\) of ULA with a step size satisfying \(\eta\leq\frac{1}{2}\left(1\wedge\frac{a}{2N^{2}L^{2}}\right)\), we have for any even integer \(s\geq 2\),_ \[\mathrm{M}_{s}(p_{k}+\pi)\leq\mathrm{M}_{s}(p_{0}+\pi)+C_{s}k\eta,\] _where_ \[C_{s}\stackrel{{\triangle}}{{=}}\left(\frac{3a+2b+3}{1\wedge a} \right)^{\frac{s-2}{B}+1}s^{d}d^{\frac{s-2}{B}+1},\] \[\mathrm{M}_{s}(p_{0}+\pi)\leq 2(\frac{3a+b+3}{a})^{s/B}s^{s/B}d^{\kappa/B}.\] Proof: Since \(U\) satisfies \(\alpha\)-mixture weakly smooth, we have \[\|\nabla U(x)\| \leq\sum_{i}L_{i}\left\|x\right\|^{\alpha_{i}}\] \[\leq\sum_{i}L\left\|x\right\|^{\alpha_{i}}\] \[\leq\sum_{i}L\left(\left\|x\right\|^{\alpha_{N}}+1\right)\] \[\leq NL\left(\left\|x\right\|^{\alpha_{N}}+1\right)\] where \(2\alpha_{N}\leq\beta\) by our assumption. Moreover, \(U\) also satisfies \(\beta\)-dissipative. From (Erdogdu and Hosseinzadeh, 2020) Proposition 2 and Lemma 22, we obtain the desired result. 
**Lemma 29**: _[(Nguyen et al., 2021) Lemma F.16] If \(\xi\sim N_{p}\left(0,I_{d}\right)\) then \(d^{\left\lfloor\frac{n}{p}\right\rfloor}\leq E(\|\xi\|_{p}^{n})\leq\left[d+\frac{n}{2}\right]^{\frac{n}{p}}\), where \(\lfloor x\rfloor\) denotes the largest integer less than or equal to \(x\). If \(n=kp\), then \(E(\|\xi\|_{p}^{n})=d\cdots(d+k-1)\)._ **Lemma 30**: _[(Nguyen et al., 2021) Lemma C.2] Assume \(\pi=e^{-U(x)}\) is \(\alpha\)-mixture weakly smooth. Then_ \[\mathbb{E}_{\pi}\left[\left\|\nabla U(x)\right\|^{2}\right]\leq 2NL^{2}d^{\frac{3}{p}},\] _for \(d\) sufficiently large._ **Lemma 31**: _[(Nguyen et al., 2021) Lemma 2.1] If the potential \(U:\mathbb{R}^{d}\rightarrow\mathbb{R}\) satisfies \(\alpha\)-mixture weak smoothness for some \(0<\alpha=\alpha_{1}<\cdots<\alpha_{N}\leq 1\) and \(0<L_{i}<\infty\), \(i=1,\ldots,N\), then:_ \[U(y)\leq U(x)+\langle\nabla U(x),\ y-x\rangle+\sum_{i}\frac{L_{i}}{1+\alpha_{i}}\|y-x\|^{1+\alpha_{i}}. \tag{38}\] ## Acknowledgements This research was funded in part by the University of Mississippi summer grant.
arXiv:2309.08262v3: Generalizing odd elasticity theory to odd thermoelasticity for planar materials
Martin Ostoja-Starzewski, Piotr Surówka
Published 2023-09-15, http://arxiv.org/abs/2309.08262v3
# Odd thermoelasticity

###### Abstract

We generalize the odd elasticity of planar materials to thermoelasticity, admitting spatially inhomogeneous properties. First, we show that for active systems breaking Onsager relations thermal evolution is given by an odd generalization of the Maxwell-Cattaneo relation. Next three different heat conduction models of odd solids are considered leading, respectively, to a classical coupled thermoelasticity with Fourier law, thermoelasticity with relaxation times of Maxwell-Cattaneo type, and thermoelasticity with two relaxation times. Governing equations are established in terms of either displacement-temperature pair, stress-heat flux pair, or stress-temperature pair. Next, we establish a form of the stiffness tensor, ensuring its inversion to a compatibility tensor, and write equations of elasticity in the presence of eigenstrains, such as thermal strains. Finally, we find that the stress field remains unchanged for a specific additive change of the compliance tensor field. This so-called stress invariance gives an equivalence class of a wide range of odd materials with different values of material properties. Effectively, within each class, the elastic compliances may be modified by a field linear in the plane without affecting the stress field. Finally, we study hydrodynamic modes in an odd thermoelastic solid with Fourier heat conduction and argue that contrary to even elastic solids, the temperature can affect both dilatational and shear waves. We present odd corrections to sound attenuation and diffusion coefficients.

_Introduction_--Thermoelasticity, the interdisciplinary study that converges the principles of thermodynamics with those of elasticity, is a cornerstone in the realm of continuum mechanics. As a field of inquiry, it bridges the gap between thermal and mechanical behavior in materials. 
Understanding thermoelastic phenomena is crucial for a host of applications ranging from industrial processes, aerospace engineering, and material science to cutting-edge research in nanotechnology and biological systems. As the demands for advanced materials with specific properties grow, be it in extreme temperatures, pressures, or other challenging conditions, the predictive power of thermoelastic models becomes increasingly significant. Traditional elasticity theories may fall short when subjected to varying temperature fields, often resulting in inaccurate predictions and undesirable outcomes in applications such as turbine blade design, thermal barrier coatings, or nuclear reactor construction. Herein lies the indubitable importance of thermoelasticity--it provides a more comprehensive model, capturing the intricate interplay between thermal and elastic effects, to achieve better accuracy and reliability in predictive analysis and design. Moreover, the theory of thermoelasticity finds applications in the real-time analysis of stress and strain in nanostructures, high-speed machinery, and other systems where both mechanical and thermal effects cannot be ignored. With the advent of high computational capabilities, solving complex thermoelastic problems is becoming increasingly feasible, thereby opening new avenues for innovation and application. Traditionally, thermoelasticity has been employed to understand how conventional materials respond to thermal and mechanical stimuli. In active matter, this traditional framework needs to be extended to account for the internal energy sources that drive system behavior. Unlike passive systems, active matter is characterized by a constant input of energy at the microscopic level, leading to macroscopic patterns of motion and deformation. This inherently makes the thermoelastic description of active matter far more complex but also far more intriguing. 
The recent few years have witnessed the development of odd elasticity, a theoretical framework for elastic materials that do not store energy in the same way as hyperelastic materials do [1; 2; 3; 4] (for a review see [5]). Various physical systems display such responses, typically due to the breakdown of Maxwell-Betti reciprocity. At this point, new challenges arise. The first is to extend the framework of odd elasticity to non-isothermal and/or non-adiabatic behaviors. This is considered here with three thermoelastic models: classical coupled thermoelasticity with Fourier law, thermoelasticity with relaxation times of Maxwell-Cattaneo type, and thermoelasticity with two relaxation times. While the latter two models are fundamentally different from one another, they can be simplified to the classical coupled one when the relaxation times are set to zero. The goal is to write down the governing equations admitting, in general, spatially inhomogeneous properties. The next question concerns the possibility of invariance of the stress field in an odd thermoelastic two-dimensional (planar) material. This problem of a so-called "CLM shift in a compliance tensor field" [6] is tackled in a more general setting of eigenstrains. We determine an equivalence class of a wide range of odd materials in which the elastic compliances may be modified without affecting the stress field. _Irreversible thermodynamics_--We construct the first odd extension of thermoelasticity by using the language of irreversible thermodynamics. The activity is introduced by breaking Onsager reciprocity. We start by writing the differential for the entropy. We assume that it is a function of the energy, strain, and heat current [7] \[s=s(\varepsilon,u_{ij},q_{i}). \tag{1}\] This is analogous to viscoelasticity, where the stress or momentum current also contributes to entropy. 
Taking the differential one gets \[ds=\frac{d\varepsilon}{T}+\frac{\partial s}{\partial u_{ij}}du_{ij}+\frac{\partial s}{\partial q_{k}}dq_{k}. \tag{2}\] We impose the second law of thermodynamics \[\Delta_{s}=\dot{s}+\nabla_{i}J_{i}^{s}\geq 0, \tag{3}\] and supplement the system with conservation laws \[\rho\ddot{u}_{i}+\partial_{j}t_{ij}=0\;\;, \tag{4}\] \[\dot{\varepsilon}+\partial_{j}q_{j}+\dot{u}_{ij}t_{ij}=0\;\;, \tag{5}\] \[\dot{\rho}=0\;\;. \tag{6}\] Here \(\rho\) is the mass density, which we set \(\rho=1\). Using the conservation laws, after some algebra, we find \[\partial_{i}(J_{i}^{s}-q^{i}/T)+q_{i}\nabla_{i}(1/T)+\left(\frac{\partial s}{\partial u_{ij}}+t_{ij}\right)\dot{u}_{ij}+\lambda_{i}\dot{q}_{i}\geq 0, \tag{7}\] where we have defined \(\lambda_{i}\equiv\partial s/\partial q_{i}\). The positivity of the entropy generation leads to \(J_{i}^{s}=q_{i}/T\), \(\partial s/\partial u_{ij}=-t_{ij}\). We are left with \[\Delta_{s}=q_{i}\nabla_{i}(1/T)+\lambda_{i}\dot{q}_{i}\geq 0. \tag{8}\] We now assume, in the linear regime, that \[\lambda_{i}=-\alpha_{ij}q_{j}, \tag{9}\] for some phenomenological tensor \(\alpha_{ij}\). When \(\alpha_{ij}\) is positive definite, the system is bounded from below with a well-defined equilibrium state. Although we have active matter in mind, we still assume that our system is bounded from below. Similarly, we impose \[\nabla_{i}(1/T)+\alpha_{ij}\dot{q}_{j}=\gamma_{ij}q_{j}, \tag{10}\] from which relation we obtain \[\tau_{ij}\dot{q}_{j}=-k_{ij}\nabla_{j}T-q_{i}, \tag{11}\] where \(\tau_{ij}=\alpha_{ik}\gamma_{kj}^{-1}\), \(k_{ij}=\gamma_{ij}^{-1}/T^{2}\). In passive systems, the above equation follows from the positivity of the entropy production. Note that in the universe of active matter, this relation is not the most general. However, it is not our goal to introduce every possible mechanism for activity, but rather to have a minimal set-up that leads to non-trivial physical phenomena. 
In this case, we assume that activity is introduced by breaking Onsager relations. For even passive materials (\(\alpha_{ij}=\alpha\delta_{ij}\), \(\gamma_{ij}=\gamma_{ji}=\gamma\delta_{ij}\)) this reduces to the isotropic, even Maxwell-Cattaneo relation \[\tau\dot{q}_{i}=-\lambda\nabla_{i}T-q_{i}, \tag{12}\] where \(\tau=\alpha\gamma\) and \(\lambda=\gamma/T^{2}\). However, for odd active thermoelastic materials, we can have a generalized relation, with odd components \(\tau_{\text{odd}}\) and \(k_{\text{odd}}\). \(k_{\text{odd}}\) is an active component of heat conductivity in odd materials. \(\tau_{\text{odd}}\) is a relaxation time of the odd heat flux. Its value is independent of the even component. Odd and even heat propagation have a different physical origin, therefore they do not need to relax in the same way. This is analogous to odd relaxations in odd viscoelasticity [2]. The necessity to modify the Maxwell-Cattaneo law for odd active materials that do not obey Onsager symmetry is one of the central results of this paper. _Thermoelasticity with parabolic or hyperbolic heat conduction_--In the subsequent analysis, we focus on the thermal properties of odd elastic solids. This activity stems from breaking Maxwell-Betti reciprocity relations. Incorporation of thermal strains and stresses in elasticity leads to three basic models of thermoelasticity: (i) Heat conduction based on the Fourier law \[q_{i}=-k_{ij}T_{,j}\,, \tag{13}\] where \(k_{ij}\) is the thermal conductivity tensor (such that \(k_{ij}T_{,i}\,T_{,j}\!>0\)). Depending on the particular constitution of the odd material, the Onsager symmetry of \(k_{ij}=k_{ji}\) may be broken. (ii) Heat conduction involving one relaxation time \(\tau\) of the Maxwell-Cattaneo law \[q_{i}+\tau\dot{q}_{i}=-k_{ij}T_{,j}\,, \tag{14}\] where the overdot stands for the material time derivative. 
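The relaxation dynamics encoded in the generalized Maxwell-Cattaneo relation can be illustrated with a minimal numerical sketch. Assuming, purely for illustration, a scalar relaxation time \(\tau\) and a conductivity \(k_{ij}=k_{e}\delta_{ij}+k_{o}\epsilon_{ij}\) with order-one values (not taken from any specific material), the flux relaxes to the odd Fourier steady state \(q=-k\nabla T\), whose odd part is perpendicular to the gradient:

```python
# Explicit Euler for the odd Maxwell-Cattaneo relation
#   tau * dq/dt = -k . gradT - q        (scalar tau for simplicity)
# with k = [[k_e, k_o], [-k_o, k_e]]; illustrative values only.
tau, dt, steps = 0.1, 1e-3, 20000
k_e, k_o = 1.0, 0.5
gradT = (1.0, 0.0)                      # fixed temperature gradient
q = [0.0, 0.0]                          # heat flux, starts at rest
for _ in range(steps):
    kg0 = k_e * gradT[0] + k_o * gradT[1]
    kg1 = -k_o * gradT[0] + k_e * gradT[1]
    q[0] += dt / tau * (-kg0 - q[0])
    q[1] += dt / tau * (-kg1 - q[1])
# relaxes to q = -k . gradT = (-k_e, k_o): the odd part k_o drives a flux
# component perpendicular to the temperature gradient
print(q)                                 # approximately [-1.0, 0.5]
```

With a time-dependent gradient and a distinct odd relaxation time, the same loop exhibits the independent even and odd relaxations mentioned above.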
This model is required in polymers, including the living or dead soft bio-tissues, where \(\tau\) may be on the order of seconds or tens of seconds (e.g. [8] for the even case). The resulting wave-type heat propagation is referred to as the _second sound_ as opposed to the usual elastic waves (_first sound_). Also in this model, the Onsager symmetry may be broken. (iii) Heat conduction involving two relaxation times \(\tau_{0}\) and \(\tau_{1}\)[9]. This model provides an alternative formulation of wave-type heat conduction. Again one can further consider generalizations of this model with broken Onsager symmetry. Since elastodynamics itself is a hyperbolic theory, the first of these models leads to a coupled hyperbolic-parabolic thermoelastic system, while (ii) and (iii) are purely hyperbolic, albeit each different in character. In the cases (i) and (ii), the thermoelastic constitutive law reads \(\varepsilon_{ij}=S_{ijkl}\sigma_{kl}+A_{ij}\Delta T\), where \(\varepsilon_{kl}=u_{(k,l)}\) is the strain with \((k,l)\) denoting symmetrization on the indices \(k\) and \(l\), \(u_{i}\) is the displacement; \(S_{ijkl}\) is the compliance tensor (to be discussed below), and \(\sigma_{kl}\) is the Cauchy stress tensor. Also, \(A_{ij}\) (\(=-S_{ijkl}M_{kl}\)) is the thermal expansion tensor and \(\Delta T\) is the temperature change from \(T_{0}\). 
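One consequence of the Fourier law (13) worth making explicit: an antisymmetric (odd) part of \(k_{ij}\) drives a flux component perpendicular to \(\nabla T\) but drops out of the dissipation \(k_{ij}T_{,i}T_{,j}\), so the positivity requirement constrains only the symmetric part. A minimal numerical sketch with illustrative values:

```python
# Fourier law q_i = -k_ij T_,j with k = [[k_e, k_o], [-k_o, k_e]]
# (illustrative even/odd conductivities, not material data)
k_e, k_o = 2.0, 0.7
Tx, Ty = 0.3, -1.2                       # temperature gradient components
q0 = -(k_e * Tx + k_o * Ty)
q1 = -(-k_o * Tx + k_e * Ty)
dissipation = -(q0 * Tx + q1 * Ty)       # = k_ij T_,i T_,j
# the antisymmetric (odd) terms cancel pairwise: only k_e produces entropy
assert abs(dissipation - k_e * (Tx * Tx + Ty * Ty)) < 1e-12
print(dissipation)
```

The cancellation is exact for any \(k_{o}\), which is why the odd conductivity is compatible with the entropy inequality regardless of its sign.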
The thermoelasticity field equations can be compactly combined into one system of two coupled equations for the displacement-temperature pair: \[\left(C_{ijkl}u_{k,l}\right)_{,j}+\left(M_{ij}T\right)_{,j}+b_{i}= \rho\ddot{u}_{i}\] \[\left(k_{ij}T_{,j}\right)_{,i}-c_{e}\left(\dot{T}+\tau\ddot{T} \right)+T_{0}\left[M_{ij}(\dot{u}_{i}+\tau\ddot{u}_{i})\right]_{,j}=-r-\tau \dot{r}. \tag{15}\] Here \(C_{ijkl}=S_{ijkl}^{-1}\) is the elasticity (or stiffness) tensor which has minor symmetries (\(C_{ijkl}=C_{jikl}=C_{ijlk}\)) but not the major one; \(M_{ij}\) is the stress-temperature tensor without symmetry (\(M_{ij}\neq M_{ji}\)); \(T\) is the absolute temperature, \(T_{0}\) is the reference temperature, \(b_{i}\) is the body force per unit volume, \(\rho\) is the mass density, \(r\) is the heat produced per unit time and unit mass, and \(c_{e}>0\) is the specific heat at zero strain. When \(\tau>0\), we have the so-called Lord-Shulman (L-S) thermoelasticity [10] and, when \(\tau=0\), the classical coupled (C-C) thermoelasticity is recovered. On the other hand, case (iii) is the basis of the Green-Lindsay (G-L) thermoelasticity [11], which relies on (13) and two thermoelastic constitutive laws (\(\varepsilon_{ij}=S_{ijkl}\sigma_{kl}+A_{ij}\left(\Delta T+\tau_{1}\dot{T}\right)\) and \(s=-T_{0}M_{ij}\varepsilon_{ij}+M_{ij}\left(\Delta T+\tau_{0}\dot{T}\right)\)), where \(s\) is the entropy density. The thermoelasticity field equations of the first and third models are combined into one system of two coupled equations for the displacement-temperature pair: \[\begin{array}{l}\left(C_{ijkl}u_{k,l}\right)_{,j}+[M_{ij}(T+\tau_{1}\dot{T}) ]_{,j}+b_{i}=\rho\ddot{u}_{i}\\ \left(k_{ij}T_{,j}\right)_{,i}-c_{e}(\dot{T}+\tau_{0}\ddot{T})+\left(T_{0}M_{ij }\dot{u}_{i}\right)_{,j}=-r.\end{array} \tag{16}\] The formulation of the G-L theory implies \(\tau_{1}>\tau_{0}\geq 0\) and, when both relaxation times are set to zero, the C-C thermoelasticity is recovered. 
In various situations (e.g., when the boundary conditions are given in terms of stress tractions), it is advantageous to work with the field equations expressed in terms of stresses. Then, an alternative formulation of the L-S theory is obtained in terms of the stress-heat flux pair \(\left(\sigma_{ij},q_{i}\right)\): \[\begin{array}{l}\left[\rho^{-1}\sigma_{(ik,k}\right]_{,j)}+c_{\sigma}^{-1}\left(A_{ij}\dot{q}_{k,k}-A_{ij}\dot{r}\right)+\left(\rho^{-1}b_{(i}\right)_{,j)}\\ \hskip 28.452756pt=S_{ijkl}^{\prime}\ddot{\sigma}_{kl},\\ \left[c_{\sigma}^{-1}\left(q_{k,k}+r\right)\right]_{,i}+T_{0}\left(c_{\sigma}^ {-1}A_{pq}\dot{\sigma}_{pq}\right)_{,i}=-\lambda_{ij}\left(\dot{q}_{j}+\tau \ddot{q}_{j}\right).\end{array} \tag{17}\] Here \(c_{\sigma}\) (\(=c_{e}-T_{0}M_{ij}A_{ij}\)) is the specific heat at constant stress. Also, \(\lambda_{ij}=k_{ij}^{-1}\) is the thermal resistivity tensor such that \(\lambda_{ij}q_{i}q_{j}>0\), and \[S_{ijkl}^{\prime}=S_{ijkl}-T_{0}c_{\sigma}^{-1}A_{ij}A_{kl}. \tag{18}\] The field equations of the G-L thermoelasticity in terms of the stress-temperature pair \(\left(\sigma_{ij},T\right)\) read \[\begin{array}{l}\left[\rho^{-1}\sigma_{(ik,k}\right]_{,j)}-A_{ij}\tau_{(0)}^{-1}\big[\tau_{1}c_{\sigma}^{-1}\left(k_{pq}\dot{T}_{,q}\right)_{,p}-\left(\tau_{1}-\tau_{(0)}\right)\ddot{T}\big]\\ +\tilde{b}_{i,j}=\tilde{S}_{ijkl}\ddot{\sigma}_{kl},\\ c_{\sigma}^{-1}\left[\left(k_{pq}T_{,q}\right)_{,p}+r\right]-T_{0}c_{\sigma}^ {-1}A_{pq}\dot{\sigma}_{pq}=\dot{T}+\tau_{(0)}\ddot{T},\end{array} \tag{19}\] where we have \(\tilde{S}_{ijkl}=S_{ijkl}-\frac{\tau_{1}}{\tau_{(0)}}\frac{T_{0}}{c_{\sigma}}A_{ij}A_{kl}\), \(\tilde{b}_{(i,j)}=\left(\rho^{-1}b_{(i}\right)_{,j)}-\frac{\tau_{1}}{\tau_{(0)}}\frac{\dot{r}}{c_{\sigma}}A_{ij}\), and \(\tau_{(0)}=\left(1-\frac{c_{e}}{c_{\sigma}}\right)\tau_{1}+\frac{c_{e}}{c_{\sigma}}\tau_{0}\). 
The C-C theory is obtained from (17) and (19) by setting \(\tau=0\) and \(\tau_{1}=\tau_{0}=0\), respectively. _From odd elasticity to odd compliance_--As is well-known (e.g., [5]), the major symmetry relation \(C_{ijkl}=C_{klij}\) does not hold in odd solids. To identify the elasticity (\(C_{ijkl}\)) and compliance (\(S_{ijkl}\)) tensors for an isotropic planar odd solid, we begin with \[\sigma_{ij}=K_{ijkl}u_{k,l}\,,\quad i,j,k,l=1,2, \tag{20}\] where \(K_{ijkl}\) is the tensor (with \(\epsilon_{ij}\) the Levi-Civita symbol) \[\begin{array}{l}K_{ijkl}=B\delta_{ij}\delta_{kl}-A\epsilon_{ij}\delta_{kl}+ \mu\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}-\delta_{ij}\delta_{kl} \right)\\ +K^{0}\left(\epsilon_{ik}\delta_{jl}+\epsilon_{jl}\delta_{ik}\right).\end{array} \tag{21}\] Hence, we find explicitly \[\begin{array}{l}\sigma_{11}=\left(B+\mu\right)u_{1,1}+K^{0}u_{1,2}+K^{0}u_{2, 1}+\left(B-\mu\right)u_{2,2}\\ \sigma_{12}=-\left(A+K^{0}\right)u_{1,1}+\mu u_{1,2}+\mu u_{2,1}+\left(-A+K^{0 }\right)u_{2,2}\\ \sigma_{21}=\left(A-K^{0}\right)u_{1,1}+\mu u_{1,2}+\mu u_{2,1}+\left(A+K^{0 }\right)u_{2,2}\\ \sigma_{22}=\left(B-\mu\right)u_{1,1}-K^{0}u_{1,2}-K^{0}u_{2,1}+\left(B+\mu \right)u_{2,2}\end{array} \tag{22}\] from which we identify all \(K_{ijkl}\)'s. Unfortunately, \(K_{ijkl}\) is not invertible to a compliance form, so we cannot write \(u_{k,l}=K^{-1}_{klij}\sigma_{ij}\). In addition, (20)-(21) imply \(\sigma_{12}\neq\sigma_{21}\), which violates the angular momentum balance under the assumption of no couple-stresses present, suggesting that \(A\) has to be removed to achieve invertibility. We then have \(C_{ijkl}=K_{ijkl}\) (where \(A=0\)) with the odd elasticity property: \(C_{1112}\neq C_{1211}\) and \(C_{2212}\neq C_{1222}\). 
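The component expansion of (21) is easy to check mechanically. The sketch below builds \(K_{ijkl}\), contracts it with an arbitrary displacement gradient, and compares against the closed-form coefficients the contraction implies; the moduli values are illustrative (assumptions, not from the paper):

```python
import numpy as np

# Build K_ijkl of Eq. (21) and check its component expansion.
B, A, mu, K0 = 3.0, 0.7, 1.5, 0.4        # illustrative moduli (assumption)
d = np.eye(2)                             # Kronecker delta
eps = np.array([[0.0, 1.0], [-1.0, 0.0]]) # 2D Levi-Civita symbol

K = np.zeros((2, 2, 2, 2))
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                K[i, j, k, l] = (B * d[i, j] * d[k, l]
                                 - A * eps[i, j] * d[k, l]
                                 + mu * (d[i, k] * d[j, l] + d[i, l] * d[j, k]
                                         - d[i, j] * d[k, l])
                                 + K0 * (eps[i, k] * d[j, l] + eps[j, l] * d[i, k]))

grad_u = np.array([[0.3, -0.2], [0.5, 0.1]])   # arbitrary displacement gradient
sigma = np.einsum('ijkl,kl->ij', K, grad_u)

u11, u12, u21, u22 = grad_u[0, 0], grad_u[0, 1], grad_u[1, 0], grad_u[1, 1]
s11 = (B + mu) * u11 + K0 * u12 + K0 * u21 + (B - mu) * u22
s12 = -(A + K0) * u11 + mu * u12 + mu * u21 + (-A + K0) * u22
s21 = (A - K0) * u11 + mu * u12 + mu * u21 + (A + K0) * u22
s22 = (B - mu) * u11 - K0 * u12 - K0 * u21 + (B + mu) * u22
print(np.allclose(sigma, [[s11, s12], [s21, s22]]))
```

Note that with \(A\neq 0\) the contraction indeed gives \(\sigma_{12}\neq\sigma_{21}\), consistent with the angular-momentum argument above.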
Using \(\sigma_{12}=\sigma_{21}\) we then write this Hooke's law in matrix form \[\left(\begin{array}{c}\sigma_{11}\\ \sigma_{22}\\ \sigma_{12}\end{array}\right)=\left[\begin{array}{ccc}B+\mu&B-\mu&K^{0}\\ B-\mu&B+\mu&-K^{0}\\ -K^{0}&K^{0}&\mu\end{array}\right]\left(\begin{array}{c}e_{11}\\ e_{22}\\ 2e_{12}\end{array}\right), \tag{23}\] which specifies the elasticity matrix \([\mathbf{C}]\) mapping the vector of elastic strains \(\left(e_{11},e_{22},2e_{12}\right)\) into stresses, where the elastic strain is defined as \(e_{kl}=u_{(k,l)}\). The compliance matrix \([\mathbf{S}]=\left[\mathbf{C}\right]^{-1}\) is found as \[[\mathbf{S}]=\left[\begin{array}{ccc}\frac{(K^{0})^{2}+\mu^{2}+B\mu}{4B\left((K^{0})^{2}+\mu^{2}\right)}&\frac{(K^{0})^{2}+\mu^{2}-B\mu}{4B\left((K^{0})^{2}+\mu^{2}\right)}&-\frac{K^{0}}{2\left((K^{0})^{2}+\mu^{2}\right)}\\ \frac{(K^{0})^{2}+\mu^{2}-B\mu}{4B\left((K^{0})^{2}+\mu^{2}\right)}&\frac{(K^{0})^{2}+\mu^{2}+B\mu}{4B\left((K^{0})^{2}+\mu^{2}\right)}&\frac{K^{0}}{2\left((K^{0})^{2}+\mu^{2}\right)}\\ \frac{K^{0}}{2\left((K^{0})^{2}+\mu^{2}\right)}&-\frac{K^{0}}{2\left((K^{0})^{2}+\mu^{2}\right)}&\frac{\mu}{(K^{0})^{2}+\mu^{2}}\end{array}\right]. \tag{24}\] Here we identify the planar bulk compliance \(B^{-1}\) and the shear compliance \(S_{1212}=\mu/\big((K^{0})^{2}+\mu^{2}\big)\). In general, eigenstrains and eigenstresses can also be due to swelling, plastic or transformation strains, loss/gain of mass, or changes to the molecular structure of the phases. _Stress field invariance under a shift in compliances--_We now take the odd elastic body to occupy a simply-connected domain \(\mathcal{B}\) in the plane, with a boundary \(\partial B\) characterized by the unit outer normal vector \(n_{i}\). 
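Before proceeding, the inversion leading to (24) can be verified numerically; the moduli below are illustrative (assumed positive):

```python
import numpy as np

# Check that the compliance matrix (24) inverts the elasticity matrix (23).
B, mu, K0 = 3.0, 1.5, 0.4   # illustrative moduli (assumption)
C = np.array([[B + mu, B - mu,  K0],
              [B - mu, B + mu, -K0],
              [-K0,    K0,      mu]])
den = 4.0 * B * (K0**2 + mu**2)
S = np.array([
    [(K0**2 + mu**2 + B * mu) / den, (K0**2 + mu**2 - B * mu) / den, -K0 / (2 * (K0**2 + mu**2))],
    [(K0**2 + mu**2 - B * mu) / den, (K0**2 + mu**2 + B * mu) / den,  K0 / (2 * (K0**2 + mu**2))],
    [K0 / (2 * (K0**2 + mu**2)), -K0 / (2 * (K0**2 + mu**2)), mu / (K0**2 + mu**2)],
])
print(np.allclose(S @ C, np.eye(3)))            # S = C^{-1}
# The quoted compliances: sum of the 2x2 normal block gives the planar
# bulk compliance 1/B; the (3,3) entry is the shear compliance.
print(np.isclose(S[:2, :2].sum(), 1.0 / B))
print(np.isclose(S[2, 2], mu / (K0**2 + mu**2)))
```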
The body is assumed to be in static equilibrium (\(\sigma_{ij,j}=0\)) while subjected to traction boundary conditions on its entire boundary (\(\sigma_{ij}n_{j}=t_{i}^{(n)}\), \(\forall x_{i}\in\partial B\)), and to satisfy the global equilibrium \[\int_{\partial B}t_{i}^{(n)}dS=0\,,\quad\int_{\partial B}\epsilon_{ijk}x_{j}t_{k} ^{(n)}dS=0. \tag{26}\] If the body domain is multiply-connected, the tractions are self-equilibrated (with overall zero force and zero moment) on each internal boundary. We take the compliances and eigenstrains to be, in general, inhomogeneous in \(\mathcal{B}\) and assume them twice-differentiable in \(\mathcal{B}\). The stress invariance problem [6] is described by asking the following question: _"Given a statically equilibrated solid with a stress field \(\sigma=(\sigma_{11},\sigma_{22},\sigma_{12})\) under prescribed traction boundary conditions, can the compliance tensor \(S_{ijkl}\) be changed to a new \(\widehat{S}_{ijkl}\) in such a way that the stress field remains unchanged?"_ Now, the strain compatibility condition \(\varepsilon_{11,22}+\varepsilon_{22,11}=2\varepsilon_{12,12}\) becomes \[\begin{split}&\nabla^{2}\left[\left(B^{-1}+S_{1212}\right)\left( \sigma_{11}+\sigma_{22}\right)\right]\\ &-2\left(S_{1212,11}\,\sigma_{11}+2S_{1212,12}\,\sigma_{12}+S_{1212,22}\,\sigma_{22}\right)\\ &=4[\left(S_{1211}\sigma_{11}\right)_{,12}+\left(S_{1222}\sigma_ {22}\right)_{,12}-\left(S_{1211}\sigma_{12}\right)_{,22}\\ &\quad-\left(S_{1222}\sigma_{12}\right)_{,11}]+8\varepsilon_{12,1 2}^{*}-4\varepsilon_{11,22}^{*}-4\varepsilon_{22,11}^{*}.\end{split} \tag{27}\] Inspecting (27) we see that, for the new stress field to remain \(\sigma_{ij}\), the following relations must hold: \[\begin{split}\widehat{B}^{-1}+\widehat{S}_{1212}&=m \left(B^{-1}+S_{1212}\right),\quad\widehat{S}_{1212,11}=mS_{1212,11}\,,\\ \widehat{S}_{1212,12}&=mS_{1212,12}\,,\quad\widehat{S}_ {1212,22}=mS_{1212,22}\,,\end{split} \tag{28}\] where \(m\) is an arbitrary scalar. 
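Conditions (28) are satisfied in particular by a homogeneous shift (\(m=1\), constant \(\Lambda\)) of the matrix form given below in (32): the mutual compliance gains \(1/(2\Lambda)\) while the shear compliance loses \(1/\Lambda\), so the combination \(B^{-1}+S_{1212}\) entering (27) is untouched. A numerical sketch with illustrative values:

```python
import numpy as np

# Homogeneous shift of compliances (m = 1 in (28), constant Lambda).
B, mu, K0, Lam = 3.0, 1.5, 0.4, 5.0   # illustrative values (assumption)
C = np.array([[B + mu, B - mu,  K0],
              [B - mu, B + mu, -K0],
              [-K0,    K0,      mu]])
S = np.linalg.inv(C)
SI = (1.0 / (2.0 * Lam)) * np.array([[0.0, 1.0, 0.0],
                                     [1.0, 0.0, 0.0],
                                     [0.0, 0.0, -2.0]])
S_hat = S + SI

def bulk_compliance(Smat):
    """Planar bulk compliance: area strain per unit hydrostatic stress."""
    return Smat[:2, :2].sum()

inv_before = bulk_compliance(S) + S[2, 2]
inv_after = bulk_compliance(S_hat) + S_hat[2, 2]
# The shifted material is still a valid (invertible) compliance:
C_hat = np.linalg.inv(S_hat)
print(inv_before, inv_after)   # equal: the shift is "invisible" to (27)
```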
This implies \[\widehat{B}^{-1}=mB^{-1}+a+bx_{1}+cx_{2},\;\;\widehat{S}_{1212}=mS_{1212}-a-bx_ {1}-cx_{2}. \tag{29}\] The constants \(m\), \(a\), \(b\), and \(c\) are subject to restrictions dictating that the new compliances be non-negative. Put another way, although \(S_{ijkl}\neq S_{klij}\), the answer to the above question is affirmative for a so-called _shift_ of \(S_{ijkl}\) to \(\widehat{S}_{ijkl}\) according to \[\widehat{S}_{ijkl}=S_{ijkl}+S_{ijkl}^{I}, \tag{30}\] where \[S_{ijkl}^{I}\left(\Lambda,-\Lambda\right)=\frac{1}{2\Lambda}\delta_{ij}\delta_ {kl}-\frac{1}{4\Lambda}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk} \right) \tag{31}\] is the _shift tensor_ and \(\Lambda^{-1}\) is linear in \(x_{1}\) and \(x_{2}\), only subject to the condition that the new compliance tensor \(\widehat{S}_{ijkl}\) is positive-definite everywhere in \(\mathcal{B}\). Writing this in terms of matrices, the shift (30) with (31) is expressed as a change of the compliance matrix \([\mathbf{S}]\) to a new \(\left[\widehat{\mathbf{S}}\right]\) according to \[\left[\widehat{\mathbf{S}}\right]=\left[\mathbf{S}\right]+\left[\mathbf{S}^{I }\right],\;\;\;\left[\mathbf{S}^{I}\right]=\frac{1}{2\Lambda}\left[\begin{array} []{ccc}0&1&0\\ 1&0&0\\ 0&0&-2\end{array}\right]. \tag{32}\] Thus, a very wide range of odd thermoelastic materials with different values of material properties will have the same stress field \(\widehat{\sigma}=\sigma\) as the original material. _Hydrodynamic modes--_As an instructive example of collective phenomena in thermoelastic models, we study collective modes in the following model of thermoelasticity \[\begin{split}&\left(C_{ijkl}u_{k,l}\right)_{,j}+\left(M_{ij}T \right)_{,j}=\rho\ddot{u}_{i}\\ &\left(k_{ij}T_{,j}\right)_{,i}+T_{0}\left[M_{ij}\dot{u}_{i}\right]_{,j}=c_{ e}\dot{T},\end{split} \tag{33}\] where \(k_{ij}=k_{1}\delta_{ij}+k_{2}\epsilon_{ij}\) and \(M_{ij}=m_{1}\delta_{ij}+m_{2}\epsilon_{ij}\). 
The above system of equations can be put in the hydrodynamic form by introducing an auxiliary velocity field \(v=\dot{u}\) that plays the role of a Josephson equation in a system with a spontaneously broken translation symmetry. It is convenient to pass to the Fourier space, where the excitations are proportional to plane waves \(e^{i\left(\vec{k}\cdot\vec{x}-\omega t\right)}\). Our aim is to compute the dispersion relations \(\omega=\omega(k)\) (see e.g. [14]). We get a fifth-order equation that does not have a closed-form solution. However, for our purposes, it is enough to construct a perturbative solution in the powers of \(k\) as \(k\to 0\). We expect two pairs of sound modes plus a diffusive mode due to the temperature profile. Therefore our perturbative ansatz for the solution reads \[\left(\omega^{2}+i\omega k^{2}\Gamma_{1}(\omega,k)-v_{1}^{2}(\omega,k)k^{2}\right)\times\\ \left(\omega^{2}+i\omega k^{2}\Gamma_{2}(\omega,k)-v_{2}^{2}(\omega,k)k^{2}\right)\left(-i\omega+k^{2}D(\omega,k)\right)=0.\] In the lowest order of expansion \(\Gamma_{1}(\omega,k)=\Gamma_{1}\), \(\Gamma_{2}(\omega,k)=\Gamma_{2}\), \(D(\omega,k)=D\). These coefficients correspond to the sound attenuation and the diffusion coefficients, respectively. \(v_{1}^{2}(\omega,k)\) and \(v_{2}^{2}(\omega,k)\) correspond to non-dissipative sound velocities squared. 
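The expected mode content (two pairs of sound modes plus one diffusive mode) can be cross-checked by diagonalizing the Fourier-space system for \((u_{i},v_{i},T)\) at small \(k\). The sketch below is a direct numerical construction, not the perturbative solution itself, and the parameter values are illustrative assumptions:

```python
import numpy as np

# Linearized modes of system (33): fields ~ exp(i(k.x - w t)), with the
# auxiliary velocity v = du/dt, give a 5x5 first-order system dx/dt = A x.
rho, B, mu, K0 = 1.0, 3.0, 1.5, 0.3   # illustrative values (assumption)
m1, m2, k1, k2 = 0.2, 0.1, 1.0, 0.3
c_e, T0 = 2.0, 1.0

d = np.eye(2)
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = m1 * d + m2 * eps          # stress-temperature tensor
kappa = k1 * d + k2 * eps      # thermal conductivity tensor

Cel = np.zeros((2, 2, 2, 2))   # odd elasticity tensor, Eq. (21) with A = 0
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                Cel[i, j, k, l] = (B * d[i, j] * d[k, l]
                                   + mu * (d[i, k] * d[j, l] + d[i, l] * d[j, k]
                                           - d[i, j] * d[k, l])
                                   + K0 * (eps[i, k] * d[j, l] + eps[j, l] * d[i, k]))

def frequencies(kvec):
    """Eigenfrequencies w of dx/dt = A x, i.e. w = i * eig(A)."""
    A = np.zeros((5, 5), dtype=complex)
    A[0:2, 2:4] = np.eye(2)                                    # du/dt = v
    A[2:4, 0:2] = -np.einsum('ijkl,j,l->ik', Cel, kvec, kvec) / rho
    A[2:4, 4] = 1j * (M @ kvec) / rho                          # (M_ij T),j force
    A[4, 2:4] = 1j * T0 * (M @ kvec) / c_e                     # T0 (M_ij v_i),j
    A[4, 4] = -np.einsum('i,ij,j', kvec, kappa, kvec) / c_e    # heat conduction
    return 1j * np.linalg.eigvals(A)

q = 1e-3
w = frequencies(np.array([q, 0.0]))
# Count propagating modes (nonzero real part at small k):
n_prop = np.sum(np.abs(w.real) > 1e-4)
print(n_prop)
```

At small \(k\) the five eigenfrequencies split into four propagating sound modes and one diffusive mode, matching the ansatz above; at \(k=0\) all five frequencies vanish, as they must for hydrodynamic modes.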
The solution reads \[v_{1}^{2}=\frac{1}{\rho^{2}}\Big[-(Bc_{e}+(m_{1}^{2}-m_{2}^{2})T_{0}+2c_{e}\mu)\rho+\sqrt{\big(8m_{1}m_{2}T_{0}\kappa_{o}+4m_{1}^{2}T_{0}\mu-4m_{2}^{2}T_{0}(B+\mu)+(Bc_{e}+(m_{1}^{2}-m_{2}^{2})T_{0}+2c_{e}\mu)^{2}+4c_{e}(\kappa_{o}^{2}+\mu(B+\mu))\big)\rho}\,\Big], \tag{34a}\] \[v_{2}^{2}=\frac{1}{\rho^{2}}\Big[-(Bc_{e}+(m_{1}^{2}-m_{2}^{2})T_{0}+2c_{e}\mu)\rho-\sqrt{\big(8m_{1}m_{2}T_{0}\kappa_{o}+4m_{1}^{2}T_{0}\mu-4m_{2}^{2}T_{0}(B+\mu)+(Bc_{e}+(m_{1}^{2}-m_{2}^{2})T_{0}+2c_{e}\mu)^{2}+4c_{e}(\kappa_{o}^{2}+\mu(B+\mu))\big)\rho}\,\Big]. \tag{34b}\] Similarly, we can determine the sound attenuation coefficients \[\Gamma_{1}=-2\frac{2k_{1}(4\kappa_{o}^{2}+(2\mu-v_{1}^{2}\rho)(2B+2\mu-v_{1}^{ 2}\rho))}{v_{1}^{2}(v_{1}^{2}-v_{2}^{2})\rho^{2}}, \tag{35a}\] \[\Gamma_{2}=-2\frac{2k_{1}(4\kappa_{o}^{2}+(2\mu-v_{2}^{2}\rho)(2B+2\mu-v_{2}^{ 2}\rho))}{v_{2}^{2}(v_{2}^{2}-v_{1}^{2})\rho^{2}}, \tag{35b}\] and the diffusion coefficient \[D=\frac{2k_{1}(4\kappa_{o}^{2}+(2\mu-v_{1}^{2}\rho)(2B+2\mu-v_{1}^{2}\rho))}{ v_{1}^{2}(v_{1}^{2}-v_{2}^{2})\rho^{2}}. \tag{36}\] We note that generically in models of thermoelasticity without odd coefficients only dilatational waves are affected by thermal effects (see e.g. [15]). Here odd coefficients mix transverse and longitudinal waves, which means that shear waves also feel temperature profiles. In addition, odd coefficients also affect the dissipative responses as they modify both sound attenuation and diffusion. _Acknowledgments_--M.O.-S. thanks the Isaac Newton Institute for Mathematical Sciences, University of Cambridge, for support and hospitality as the Rothschild Distinguished Visiting Fellow during the programme "Uncertainty quantification and stochastic modelling of materials" where work on this paper was partially completed. P.S. acknowledges support from the Polish National Science Centre (NCN) Sonata Bis grant 2019/34/E/ST3/00405. 
Part of this work was performed at the Aspen Center for Physics, which is supported by the National Science Foundation grant PHY-2210452.
2309.07637
All Gauged Curvature Squared Supergravities in Five Dimensions
We present a complete basis to study gauged curvature-squared supergravity in five dimensions. We replace the conventional ungauged Riemann-squared action with a new Log-invariant, offering a comprehensive framework for all gauged curvature-squared supergravities. Our findings address long-standing challenges and have implications for precision tests in the AdS/CFT correspondence.
Gregory Gold, Jessica Hutomo, Saurish Khandelwal, Mehmet Ozkan, Yi Pang, Gabriele Tartaglino-Mazzucchelli
2023-09-14T12:03:14Z
http://arxiv.org/abs/2309.07637v2
# All Gauged Curvature Squared Supergravities in Five Dimensions

###### Abstract

We present a complete basis to study gauged curvature-squared supergravity in five dimensions. We replace the conventional ungauged Riemann-squared action with a new Log-invariant, offering a comprehensive framework for all gauged curvature-squared supergravities. Our findings address long-standing challenges and have implications for precision tests in the AdS/CFT correspondence.

_Introduction._--Twenty-six years after its discovery, the AdS/CFT correspondence has entered a new era in which precision tests beyond the leading order have become increasingly important, owing to developments in both field theory and gravity. On the one hand, integrability and localization techniques allow one to compute the observables in superconformal field theories (SCFT) exactly at finite couplings. On the other hand, the development of superconformal tensor calculus and superspace techniques - see the reviews [1; 2; 3; 4] - in conjunction with the computational capabilities offered by computer algebra programs, has significantly advanced the construction of exact, off-shell higher-derivative supergravity models. In this letter, we present all gauged curvature-squared supergravity invariants in five dimensions based on the off-shell dilaton Weyl multiplet. After going on-shell, our invariants describe the most general four-derivative corrections to the five-dimensional minimal gauged supergravity, which is a universal sector to all string compactifications preserving at least eight supercharges. The gauged aspect is necessary to accommodate a supersymmetric anti-de Sitter (AdS) solution, and thus is of broad interest in holography. 
In particular, due to recent advancements in AdS black hole microstate counting [5; 6; 7; 8; 9; 10; 11; 12] using the dual CFT, a precise matching between the gravity and CFT results at the next to leading order clearly requires the knowledge of the complete curvature-squared supergravity actions. Previous works have made attempts to compute four-derivative corrections based on partial results in the literature, and certain assumptions were made. Using our full results, it can be shown that, in fact, some of the assumptions are invalid, thus finally furnishing the stage for new, next-to-leading order analyses on the gravity side of the AdS/CFT correspondence. The construction of gauged curvature-squared invariants is notoriously hard, as opposed to their ungauged counterparts, which have been fully known for more than a decade [13; 14; 15]. The primary difficulty stems from the absence of a straightforward transition from ungauged to gauged theories. In fact, the complete basis of invariants must be constructed from completely different starting points. For instance, certain ungauged curvature-squared models are attainable through the application of superconformal tensor calculus, utilizing the dilaton Weyl multiplet. In contrast, their gauged counterparts necessitate the use of a modified version of the same multiplet [16], which has an entirely different field content and transformation rules. Furthermore, the deformation necessary for the construction of gauged supergravity models renders certain established higher-derivative supergravity building techniques impractical, thus further complicating the task. In fact, it takes an interplay between superconformal tensor calculus [16] and superspace techniques [17], together with a series of new, daunting computations finalized only in the results presented here, to yield the complete set of gauged curvature-squared invariants. 
This letter aims to explicitly show that the past challenges can be overcome by changing the basis of curvature-squared supergravities, which previously employed the Weyl tensor squared, Riemann tensor squared, and Ricci scalar squared as fundamental building blocks. We demonstrate that by replacing the Riemann-squared action with the Log-invariant, in which the leading term comes with the Ricci tensor squared, it is possible to explicitly establish all gauged curvature-squared supergravities in five dimensions. The outcomes presented in our letter mark a significant advancement, paving the way to a complete study of physical results beyond the leading supergravity approximation in five dimensions. This development holds particular promise for precision tests of the AdS\({}_{5}\)/CFT\({}_{4}\) correspondence. In this context, we derive the anomaly coefficients in the dual SCFT\({}_{4}\), which apparently depend on all curvature-squared couplings. _Construction of the invariants._--We start by introducing the field content of the standard Weyl multiplet of conformal supergravity in five dimensions [18]. Our notation and conventions correspond to that of [18]. We denote the spacetime indices by \(\mu,\nu,\cdots\), Lorentz indices by \(a,b,\cdots\), SU(2) indices by \(i,j,\cdots\), and spinor indices by \(\alpha,\beta,\cdots\). The multiplet is described by a set of independent gauge fields: the vielbein \(e_{\mu}{}^{a}\), the gravitino \(\psi_{\mu}{}^{i}\), the SU(2) gauge fields \(V_{\mu}{}^{ij}\), and a dilatation gauge field \(b_{\mu}\). The other gauge fields associated with the remaining symmetries, including the spin connection \(\omega_{\mu}{}^{ab}\), the \(S\)-supersymmetry connection \(\phi_{\mu}{}^{i}_{\alpha}\), and the special conformal connection \(f_{\mu}{}^{a}\), are composite fields, i.e., they are determined in terms of the other fields by imposing certain curvature constraints. 
The standard Weyl multiplet also contains a set of matter fields: a real antisymmetric tensor \(T_{ab}\), a fermion \(\chi^{i}_{\alpha}\), and a real scalar \(D\). A more detailed discussion of the superconformal transformations of the various fields can be found, e.g., in [16; 18]. Below we will make use of a variant multiplet of conformal supergravity, known as the gauged dilaton Weyl multiplet [19; 16]. For this multiplet, the independent gauge fields remain the same as the standard Weyl multiplet, but the matter content is replaced with \(\{\sigma,C_{\mu},B_{\mu\nu},L_{ij},E_{\mu\nu\rho},N,\psi^{i},\varphi^{i}\}\). This is obtained by coupling the standard Weyl multiplet to on-shell vector and linear multiplets. The vector multiplet consists of a scalar field \(\sigma\), the gaugino \(\psi^{i}_{\alpha}\), an abelian gauge vector \(C_{\mu}\) with field strength \(G_{\mu\nu}=2\partial_{\mu}C_{\nu}\), and an SU(2) triplet of auxiliary fields \(Y^{ij}=Y^{(ij)}\). The linear multiplet contains an SU(2) triplet of scalars \(L^{ij}=L^{(ij)}\), a gauge three-form \(E_{\mu\nu\rho}\), a scalar \(N\), and an SU(2) doublet \(\varphi^{i}_{\alpha}\). The bosonic matter fields of the vector and the standard Weyl multiplet are then expressed as follows [16] \[Y^{ij} = -\tfrac{g}{2}\sigma^{-1}L^{ij}+\text{f.t.}\,\] \[T_{ab} = \tfrac{1}{8}\sigma^{-1}G_{ab}+\tfrac{1}{48}\sigma^{-2}\epsilon_{ abcde}H^{cde}+\text{f.t.}\,\] \[D = \tfrac{1}{4}\sigma^{-1}\nabla^{a}\nabla_{a}\sigma+\tfrac{1}{8} \sigma^{-2}(\nabla^{a}\sigma)\nabla_{a}\sigma-\tfrac{1}{32}R \tag{1}\] \[-\tfrac{1}{16}\sigma^{-2}G^{ab}G_{ab}-(\tfrac{26}{3}T^{ab}-2 \sigma^{-1}G^{ab})T_{ab}\] \[+\tfrac{g}{4}\sigma^{-2}N+\tfrac{g^{2}}{16}\sigma^{-4}L^{2}+ \text{f.t.}\,\] where "f.t." 
stands for omitted fermionic terms and \(H_{abc}=e_{a}{}^{\mu}e_{b}{}^{\nu}e_{c}{}^{\rho}H_{\mu\nu\rho}\) denotes the three-form field strength \(H_{\mu\nu\rho}:=3\partial_{[\mu}B_{\nu\rho]}+\tfrac{3}{2}C_{[\mu}G_{\nu\rho]} +\tfrac{1}{2}gE_{\mu\nu\rho}\). In the above, the covariant derivative is denoted by \[\nabla_{a}=e_{a}{}^{\mu}\big{(}\partial_{\mu}-\omega_{\mu}{}^{bc}M_{bc}-b_{\mu }\mathbb{D}-V_{\mu}{}^{ij}U_{ij}\big{)}\, \tag{2}\] with \(M_{ab}\), \(\mathbb{D}\), and \(U_{ij}\) being the Lorentz, dilatation, and SU(2) generators, respectively. The dilatation connection \(b_{\mu}\) is pure gauge and will be set to zero throughout. The mapping (1) allows us to easily convert every invariant involving a coupling to the standard Weyl multiplet to that written in terms of the gauged dilaton Weyl multiplet. The ungauged map and the models can simply be obtained by setting \(g=0\) in (1). In this case, the fields of the linear multiplet decouple from the map (1), and the multiplet reduces to the ungauged dilaton Weyl multiplet with \(32+32\) off-shell degrees of freedom [21; 18]. In the superconformal tensor calculus, the so-called BF action principle plays a fundamental role in the construction of general supergravity-matter couplings, see [21; 22; 23; 24; 25; 17; 25] for the 5D case. It is based on an appropriate product of a linear multiplet with an Abelian vector multiplet: \[e^{-1}\mathcal{L}_{\text{BF}}=\,A_{a}E^{a}+\rho N+\mathcal{Y}_{ij}L^{ij}+ \text{f.t.}. \tag{3}\] Here we use \(\{\rho,A_{\mu},\mathcal{Y}_{ij},\lambda^{i}_{\alpha}\}\) to denote the field content in an arbitrary vector multiplet, and the bosonic part of the constrained vector \(E_{a}\) is related to the three-form gauge field \(E_{abc}\) via \(E_{a}=-\tfrac{1}{12}\epsilon_{abcde}\nabla^{b}E^{cde}\). 
In any construction that involves composite expressions for the fields of the linear multiplet in terms of the vector multiplet, the BF-action yields a vector-coupled action in the (gauged) dilaton Weyl background. Using the off-shell map given in [15], a vector multiplet can be identified with fields in the gauged dilaton Weyl multiplet as \[\mathcal{Y}^{ij} \to\frac{1}{4}\text{i}\sigma^{-1}\bar{\psi}^{i}\psi^{j}-\frac{ g}{2}\sigma^{-1}L^{ij}\,, \qquad\rho\to\sigma\,,\] \[A_{\mu} \to C_{\mu}\,, \lambda^{i}\to\psi^{i}\,, \tag{4}\] which gives rise to off-shell models that are purely expressed in terms of the fields of the (gauged) dilaton Weyl multiplet. By appropriately choosing primary composite linear multiplets, eq. (3) becomes the building block for constructing various curvature-squared invariants. In the superconformal approach, the off-shell formulation of minimal 5D supergravity can be achieved by coupling the standard Weyl multiplet to two off-shell conformal compensators: a vector multiplet and a linear multiplet. Within this setup, supersymmetric completions of the Weyl tensor squared and Ricci scalar squared were constructed in [26] and [14], respectively. The Weyl tensor-squared invariant is based on a composite linear multiplet comprised solely of standard Weyl multiplet fields. To construct the Ricci scalar-squared invariant, one starts by defining a composite vector multiplet in terms of a linear multiplet. This composite vector multiplet is then substituted into the vector multiplet action obtained using (3). While the Weyl tensor-squared and Ricci scalar-squared actions based on the dilaton Weyl multiplet were presented as ungauged models [14], they can simply be gauged by using the maps (1) and (4). At this point, it is worthwhile to mention that the on-shell results for these gauged actions differ from those presented by [27]. The reason is that Ref. 
[27] assumes that the map between the standard Weyl and the dilaton Weyl multiplet is not modified in the gauged case. However, as shown in [16] and presented as in eq. (1), these expressions are indeed deformed. For completeness, we present the off-shell gauged results in the Supplemental Material. The third invariant necessary to obtain all the curvature-squared models in five dimensions was constructed as the Riemann-squared invariant in the ungauged dilaton Weyl basis in [13]. However, this model does not refer to the standard Weyl multiplet; hence, the prescription to obtain gauged models cannot be applied. Furthermore, the construction methodology cannot be extended to the gauged dilaton Weyl multiplet. Alternatively, a third independent, locally superconformal invariant containing the Ricci tensor-squared term can be constructed [17; 28], which provides the correct basis to study gauged curvature-squared supergravity, as we shall discuss momentarily. In this case, the lowest component of the composite linear multiplet is given by the field \(L_{\rm Log}^{ij}\). This is obtained by making use of the standard Weyl multiplet and by acting with six \(Q\)-supersymmetry transformations on the field \(\log\rho\), with \(\rho\) being the lowest component of a compensating vector multiplet [29]. The rest of the composite "Log multiplet" is then obtained by acting with up to two more \(Q\)-supersymmetry transformations on \(L_{\rm Log}^{ij}\). Due to the complexity of computing up to eight supersymmetry transformations, the explicit form of the Log multiplet, including all fermionic terms, has been obtained only recently with the aid of the _Cadabra_ software [30; 31]. These lengthy results will be published elsewhere [32], see also [28], and [33] for the complete analysis of the gauged supergravity case. Inserting the resulting composite multiplet into (3) yields the explicit form of a new "Log invariant" which will be presented in [32] in the standard Weyl basis. 
Then, the Log invariant in the gauged dilaton Weyl background can be obtained by employing the maps (1) and (4). For the purpose of this letter, it suffices to present its bosonic sector in the gauge \[\sigma=1\,,\qquad b_{\mu}=0\,,\qquad\psi^{i}=0. \tag{5}\] The gauged Log invariant in the dilaton Weyl background, which includes a Ricci-squared term, reads \[e^{-1}{\cal L}_{\rm Log} = -\tfrac{1}{6}R_{ab}R^{ab}+\tfrac{1}{24}R^{2}+\tfrac{1}{6}R^{ab}G _{ab}^{2}+\tfrac{1}{3}RH_{ab}G^{ab}-\tfrac{4}{3}R_{ab}H^{ac}G^{b}{}_{c}- \tfrac{1}{3}RH^{2}-\tfrac{1}{12}\epsilon^{abcde}C_{a}V_{bc}{}^{ij}V_{deij} \tag{6}\] \[+\tfrac{1}{6}V^{abij}V_{abij}-2(H^{2})^{2}+\tfrac{16}{3}H_{ab}^{2 }H^{ac}G^{b}{}_{c}-\tfrac{4}{3}H^{2}H_{ab}G^{ab}+\tfrac{2}{3}H_{ab}H_{cd} \big{(}G^{ab}G^{cd}-2G^{ac}G^{bd}\big{)}\] \[+\tfrac{2}{3}H^{2}G^{2}-\tfrac{4}{3}H^{2ab}G_{ab}^{2}-\tfrac{1}{3 }H_{ab}G^{ab}G^{2}+G_{ab}^{2}H^{ac}G^{b}{}_{c}-\tfrac{1}{48}(G^{2})^{2}-\tfrac {1}{24}G^{4}-\tfrac{1}{6}\nabla_{c}G^{ac}\nabla^{b}G_{ab}\] \[+2\nabla_{a}H_{bc}\nabla^{[a}H^{bc]}+\tfrac{1}{48}\epsilon^{abcde }\nabla^{f}G_{ef}(4H_{ab}-G_{ab})(4H_{cd}-G_{cd})\] \[+\tfrac{g}{6}\Big{(}RN-4NH_{ab}G^{ab}-2NG^{2}+V_{ab}\,^{ij}L_{ij} (G^{ab}+4H^{ab})+12NH^{2}-6\nabla^{a}\nabla_{a}N\Big{)}\] \[-\tfrac{g^{2}}{24}\Big{(}2RL^{2}-L^{2}(G^{2}-4G^{ab}H_{ab}-24H^{2 })+4N^{2}+6\nabla^{a}L^{ij}\nabla_{a}L_{ij}\Big{)}+\tfrac{2}{3}NL^{2}g^{3}+ \tfrac{5}{24}L^{4}g^{4}\,\] where \(V_{ab}{}^{ij}=2\partial_{[a}V_{b]}{}^{ij}-2V_{[a}{}^{k(i}V_{b]k}{}^{j)}\). Furthermore, we have used the following notations: \(H^{ab}=-\tfrac{1}{12}\epsilon^{abcde}H_{cde}\), \(H^{2}=H^{ab}H_{ab}\), \(G^{2}=G^{ab}G_{ab}\), \(H^{2}_{ab}:=H_{a}{}^{c}H_{bc}\), \(G^{2}_{ab}:=G_{a}{}^{c}G_{bc}\), \(G^{4}=G^{2ab}G^{2}_{ab}\), and \(H^{4}=H^{2ab}H^{2}_{ab}\). 
Note that the supersymmetric Riemann-squared invariant can be obtained by taking the following linear combination of the ungauged Weyl-squared invariant presented in [13] and setting \(g=0\) in the Log invariant (6) \[{\cal L}_{\rm Riem^{2}}={\cal L}_{\rm Weyl^{2}}+2{\cal L}_{\rm Log}|_{g=0}. \tag{7}\] The resulting action is identical to the one presented in [13] up to total derivatives. _Going on shell and dual CFT._--Now let us study a certain linear combination of the Einstein-Hilbert and all three curvature-squared invariants \[(16\pi G){\cal L}_{2\partial+4\partial}={\cal L}_{\rm EH}+\lambda_{1}{\cal L }_{\rm Weyl^{2}}+\lambda_{2}{\cal L}_{\rm Log}+\lambda_{3}{\cal L}_{R^{2}}\,, \tag{8}\] where \(G\) is Newton's constant and all the invariants are given in the gauged dilaton Weyl multiplet background. \({\cal L}_{\rm Weyl^{2}}\) and \({\cal L}_{R^{2}}\) respectively denote the Weyl tensor squared and Ricci scalar squared actions which are obtained by employing the maps (1) and (4) in the standard Weyl multiplet results of [14]. Their explicit form is not crucial here, but they are given in the Supplemental Material for the reader's convenience. The two-derivative Lagrangian \({\cal L}_{\rm EH}\) is obtained by using the linear multiplet action in the standard Weyl multiplet basis [15; 16] and the sequential use of the maps (1) and (4). Note that, in this section, we rescaled the Lagrangians such that the coefficient of their leading curvature-squared term is normalized to unity. To go on-shell, we fix the gauge according to (5) and break SU(2) down to U(1) by choosing \[L_{ij}=\tfrac{1}{\sqrt{2}}\delta_{ij}L\,,\qquad\quad V^{ij}_{a}=V^{{}^{\prime} ij}_{a}+\tfrac{1}{2}\delta^{ij}V_{a}. 
\tag{9}\] Consequently, the two-derivative Lagrangian becomes \[e^{-1}{\cal L}_{\rm EH}=L(R-\tfrac{1}{2}G_{ab}G^{ab}+4H_{ab}H^{ ab}+2V^{{}^{\prime}ij}_{a}V^{{}^{\prime}a}_{ij})\] \[+L^{-1}\partial_{a}L\partial^{a}L-2L^{-1}E_{a}E^{a}-2\sqrt{2}E_{a }V^{a}-2N^{2}L^{-1}\] \[-4gC_{a}E^{a}-2gNL-4gN-\tfrac{1}{2}g^{2}L^{3}+2g^{2}L^{2}. \tag{10}\] From the total Lagrangian (8), several auxiliary fields can be solved from their field equations up to \({\cal O}(\lambda_{i})\) \[N = -\tfrac{1}{2}gL(2+L)+{\cal O}(\lambda_{i})\,\] \[E_{a} = {\cal O}(\lambda_{i})\,,\quad V^{{}^{\prime}ij}_{a}={\cal O}( \lambda_{i}). \tag{11}\] To arrive at the five-dimensional gauged minimal supergravity, we first dualize \(B_{\mu\nu}\) to a new 1-form gauge field \(\widetilde{C}_{\mu}\) following the procedure in [15; 16]. We then truncate the model consistently by imposing \[L=1+{\cal O}(\lambda_{i})\,,\quad\widetilde{C}_{a}=C_{a}+{\cal O}(\lambda_{i}). \tag{12}\] Following (12), the field equation of \(E_{abc}\) now implies \[V_{a}=-\tfrac{3}{\sqrt{2}}gC_{a}+\mathcal{O}(\lambda_{i}). \tag{13}\] Plugging (11)-(13) back to the total Lagrangian (8), one obtains the on-shell theory up to first order in \(\lambda_{i}\). It is important to note that in the procedure outlined above, the \(\mathcal{O}(\lambda_{i})\) terms arising from substituting (11)-(13) to the two-derivative action either vanish (proportional to the leading order equations of motion of auxiliary fields) or can be removed by field redefinitions [34]. To recover the standard convention of minimal supergravity, we rescale the graviphoton and the U(1) coupling according to \(C_{a}\to\tfrac{1}{\sqrt{3}}C_{a},\,g\to\sqrt{2}g\). To conclude, following [35], the resulting Lagrangian can be further simplified by redefining the metric and the U(1) gauge field. 
Eventually, the on-shell model is recast in the form below \[(16\pi G)e^{-1}\mathcal{L}_{2\partial+4\partial}=c_{0}R+12c_{1}g^{2}-\tfrac{1}{4}c_{2}G_{ab}G^{ab}+\tfrac{1}{12\sqrt{3}}c_{3}\epsilon^{abcde}C_{a}G_{bc}G_{de}+\lambda_{1}\mathcal{L}_{\rm GB}|_{\rm onshell}\,, \tag{14}\] where the various coefficients are \[c_{0}=1+(\tfrac{28}{9}\lambda_{1}-20\lambda_{2}-4\lambda_{3})g^{2}\,,\quad c_{1}=1+(\tfrac{50}{9}\lambda_{1}-\tfrac{28}{3}\lambda_{2}+\tfrac{52}{3}\lambda_{3})g^{2}\,,\] \[c_{2}=1+(\tfrac{64}{9}\lambda_{1}-\tfrac{92}{3}\lambda_{2}-\tfrac{76}{3}\lambda_{3})g^{2}\,,\quad c_{3}=1-12(\lambda_{1}+3\lambda_{2}+3\lambda_{3})g^{2}\,, \tag{15}\] and the on-shell Gauss-Bonnet invariant is given by \[\mathcal{L}_{\rm GB}|_{\rm onshell}=R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^{2}+\tfrac{1}{8}G^{4}-\tfrac{1}{2}W_{abcd}G^{ab}G^{cd}+\tfrac{1}{2\sqrt{3}}\epsilon^{abcde}C_{a}R_{bc}{}^{fg}R_{defg}\,, \tag{16}\] where \(W_{abcd}\) is the Weyl tensor. This on-shell action is consistent with the generic result presented in [36] for the proper choice of parameters. Based on the on-shell model (14), we find that the AdS\({}_{5}\) radius receives corrections from the higher-derivative terms and is given by \[\ell=g^{-1}(1+\tfrac{8}{9}g^{2}\lambda_{1}-\tfrac{16}{3}g^{2}\lambda_{2}-\tfrac{32}{3}g^{2}\lambda_{3}). \tag{17}\] The effective Newton's constant from Eq. (14) is then \[G_{\rm eff}=G+G(-\tfrac{28}{3}\lambda_{1}+20\lambda_{2}+4\lambda_{3})g^{2}. \tag{18}\] The AdS\({}_{5}\) vacuum preserves the maximal eight supercharges [37; 38] and the dual field theory should be a \(D=4\), \(\mathcal{N}=1\) CFT. 
Utilizing (17) and (18), the \(a\) and \(c\) Weyl anomaly coefficients of the dual CFT can be obtained via the standard holographic renormalization procedure [39; 40] \[a=\frac{\pi}{8g^{3}G}-\frac{9\pi(\lambda_{2}+\lambda_{3})}{2gG}\,,\quad c=\frac{\pi}{8g^{3}G}+\frac{\pi(2\lambda_{1}-9\lambda_{2}-9\lambda_{3})}{2gG}\,, \tag{19}\] from which one finds that the results above are consistent with the \(R\)-symmetry anomaly, whose coefficients are related to those of the Weyl anomaly via [41; 42] \[5a-3c=\frac{\pi c_{3}}{4g^{3}G}\,,\quad a-c=-\frac{\pi\lambda_{1}}{gG}. \tag{20}\] _Conclusions and outlook._--In this letter, we provide the correct and complete basis to study curvature-squared gauged supergravity in five dimensions. Based on the new results, we successfully computed the anomaly coefficients governing dual four-dimensional SCFTs. As four-dimensional SCFTs are characterized by two anomaly coefficients, one would naturally anticipate the emergence of only two independent linear combinations among the three four-derivative couplings. Indeed, our analysis confirms this expectation, with \(\lambda_{2}\) and \(\lambda_{3}\) consistently appearing together in the anomaly coefficients as the combination \(\lambda_{2}+\lambda_{3}\). However, when examining the on-shell action (14), it becomes evident that \(\lambda_{2}\) and \(\lambda_{3}\) do not share this combination. This suggests that while two curvature-squared invariants may suffice for calculating BPS quantities [43; 44; 45; 36], the computation of generic physical parameters may require the incorporation of all three invariants. We can also generalize the results by coupling multiple vector multiplets, which also enjoy an off-shell formulation. 
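The two relations in (20) are exact algebraic consequences of (15) and (19); a quick numerical check (our own sketch, with arbitrary illustrative values for \(g\), \(G\) and the couplings \(\lambda_{i}\)):

```python
import math
import random

# Arbitrary illustrative values; any positive g, G and small lambdas work.
random.seed(1)
g, G = 0.7, 0.013
l1, l2, l3 = (random.uniform(-0.1, 0.1) for _ in range(3))

# Weyl anomaly coefficients of the dual CFT, Eq. (19)
a = math.pi / (8 * g**3 * G) - 9 * math.pi * (l2 + l3) / (2 * g * G)
c = math.pi / (8 * g**3 * G) + math.pi * (2 * l1 - 9 * l2 - 9 * l3) / (2 * g * G)

# Chern-Simons coefficient c3 from Eq. (15)
c3 = 1 - 12 * (l1 + 3 * l2 + 3 * l3) * g**2

# R-symmetry anomaly relations, Eq. (20)
assert math.isclose(5 * a - 3 * c, math.pi * c3 / (4 * g**3 * G), rel_tol=1e-9)
assert math.isclose(a - c, -math.pi * l1 / (g * G), rel_tol=1e-9)
```

Both identities hold to machine precision for any sampled values, consistent with the statement that the anomalies only see the two combinations \(\lambda_{1}\) and \(\lambda_{2}+\lambda_{3}\).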
Given the simple form of the 6D ungauged Gauss-Bonnet invariant [46; 47; 48] and the relation between dilaton Weyl multiplets in these two dimensions [13], it may be feasible to reformulate the 5D Gauss-Bonnet invariant into a more elegant expression that facilitates the construction of intriguing solutions. For instance, the non-existence of supersymmetric AdS\({}_{5}\) black ring solutions in the two-derivative theory [49] raises the intriguing question of whether this situation changes in the presence of higher-derivative interactions. Our new invariants enable the computation of corrections to the entropy of \(\tfrac{1}{16}\)-BPS black holes [37; 38; 50; 51; 52], thereby extending the precision test of black hole microstate counting to the next-to-leading order. It is also interesting to extend the recently proposed equivariant localization [53; 54] beyond the leading two-derivative cases. _Acknowledgements._--The work of G.G., J.H., S.K., and G.T.-M. was supported by the Australian Research Council (ARC) Future Fellowship FT180100353, and by the Capacity Building Package of the University of Queensland. G.G. and S.K. are supported by postgraduate scholarships at the University of Queensland. M.O. acknowledges the support by the Outstanding Young Scientist Award of the Turkish Academy of Sciences (TUBA-GEBIP). The work of Y.P. is supported by National Natural Science Foundation of China (NSFC) under grant No. 12175164 and the National Key Research and Development Program under grant No. 2022YFE0134300.
2309.15233
A Highly Efficient and Pure Few-Photon Source on Chip
We report on multi-photon statistics of correlated twin beams produced in a periodically poled micro-ring resonator on thin-film lithium niobate. Owing to high cavity confinement and near perfect quasi-phase matching, the photon pairs are produced efficiently in single modes at rates reaching 27 MHz per $\mu$W pump power. By using a pump laser whose pulse width impedance matches with the cavity, those photons are further created in single longitudinal modes with purity reaching 99\%, without relying on later-on filtering. With a dual-channel photon-number resolving detection system, we obtain directly the joint detection probabilities of multi-photon states up to three photons, with high coincidence to accidental contrast for each. Used as a single photon source, it gives heralded $g_H^{(2)}(0)$ around 0.04 at a single photon rate of 650 kHz on chip. The findings of our research highlight the potential of this nanophotonic platform for generating non-classical, few-photon states with ideal indistinguishability, for fundamental quantum optics studies and information applications.
Zhaohui Ma, Jia-Yang Chen, Malvika Garikapati, Zhan Li, Chao Tang, Yong Meng Sua, Yu-Ping Huang
2023-09-26T19:54:57Z
http://arxiv.org/abs/2309.15233v1
# A Highly Efficient and Pure Few-Photon Source on Chip ###### Abstract We report on multi-photon statistics of correlated twin beams produced in a periodically poled microring resonator on thin-film lithium niobate. Owing to high cavity confinement and near perfect quasi-phase matching, the photon pairs are produced efficiently in single modes at rates reaching 27 MHz per \(\mu\)W pump power. By using a pump laser whose pulse width impedance matches with the cavity, those photons are further created in single longitudinal modes with purity reaching 99%, without relying on later-on filtering. With a dual-channel photon-number resolving detection system, we obtain directly the joint detection probabilities of multi-photon states up to three photons, with high coincidence to accidental contrast for each. Used as a single photon source, it gives heralded \(g_{H}^{(2)}(0)\) around 0.04 at a single photon rate of 650 kHz on chip. The findings of our research highlight the potential of this nanophotonic platform for generating non-classical, few-photon states with ideal indistinguishability, for fundamental quantum optics studies and information applications. ## I Introduction Discrete photon-number and quantum entangled states are among the cornerstones of quantum optics and its many information processing applications. Limited by photon creation and measurement technology, most quantum applications hitherto have been designed based on the use of their lowest-order forms: single photons or two of them in pairs. For example, quantum key distribution based on BB84 uses antibunched single photon states [1], while quantum teleportation takes advantage of two-photon entanglement [2]. Lately, the emergence of photon-number resolving (PNR) capability in photon detection has opened the door to a new paradigm of quantum optics, where nonclassical states containing multiple photons promise to offer significant advantages in computing and sensing. 
In this pursuit, encouraging progress has been made in the generation of multiphoton quantum states [3; 4], quantum interferometry using N00N states [5; 6], quantum sensing using photon-number squeezing [7], and quantum computing [8]. To capitalize on the quantum benefits of multiphoton states, it is desirable to embed them in single optical modes. In bulky photon sources of spontaneous parametric downconversion (SPDC) or four-wave mixing, meeting this condition usually requires ultra-narrow-band filtering or the use of ultra-short, broadband pump pulses [9; 4; 10], either of which adds significantly to system complexity and footprint. In contrast, nanophotonic circuits with high-Q cavities can create photons intrinsically in single spatial and temporal modes of high purity. For example, a \(\chi^{(3)}\) microring was shown to produce squeezed states in good single modes, albeit suffering from parametric fluorescence emission into multiple cavity lines [11]. Here, we demonstrate an on-chip \(\chi^{(2)}\) source of multiphoton states in quasi-phase matched microrings of lithium niobate on insulator (LNOI). Due to subwavelength lateral confinement, the photons are created in single transverse (spatial) modes of high purity. With a high cavity Q and by using a pump laser whose pulse width impedance-matches with the cavity, those photons are further created in single longitudinal (time-frequency) modes with purity reaching 99%, without relying on later-on filtering. Such high purity in both spatial and time-frequency modes gives rise to high indistinguishability, as desirable for many quantum computing, teleportation, and sensing applications. Aided by nearly perfect quasi phase matching through periodic poling, the photon generation efficiency is exceptional, where only microwatt pump power is required to create single, double, and triplet photon states of high correlation and at megahertz rates. 
Such high purity and high efficiency contribute to the device scaling and wide deployment. Together with the narrow cavity bandwidth, they suppress background noise created through, e.g., Raman scattering or fluorescence emission. On detection, we use photon-number resolving, superconducting nanowire single-photon detectors (PNR-SNSPDs) built in a parallel circuit configuration to accurately characterize the photon number statistics and time correlation of multiphoton states with picosecond resolution. Our results show high coincidence-to-accidental ratios for photon counts in one, two, and three photon states. Finally, we show how this system can be used for heralded single-photon generation at a 10 MHz clock speed [12; 13]. **Device Calibration and Experiment Setup**. Figure 1 gives device details of the on-chip multiphoton source. As shown in Fig. 1(a), it is a periodically poled microring cavity fabricated on a Z-cut LNOI wafer (by NANOLN Inc.), with a 600-nm thick lithium niobate thin film bonded onto a 2-\(\mu\)m silicon dioxide layer above a silicon substrate. Utilizing our standard fabrication method [14], a periodically poled lithium niobate (PPLN) microring with a top width of 1.6 \(\mu\)m and a radius of 80 \(\mu\)m is etched, with a pulley bus waveguide as the coupler. The loaded quality factor (\(Q_{l}\)) is measured for each mode, and the coupling (\(Q_{c}\)) and intrinsic (\(Q_{0}\)) factors are each calculated by fitting the resonance spectra; see the results in Fig. 1(b). The chip is fiber coupled, with the fiber-chip-fiber coupling losses measured to be 9.2 \(\pm\) 0.2 dB at 1553.93 nm and 11.5 \(\pm\) 0.3 dB at 776.96 nm, respectively. The overall optical nonlinearity is characterized by second harmonic generation (SHG), similar to our previous measurement [15]. With an on-chip pump power \(P_{p}\) of 4.78 \(\mu\)W, \(P_{\rm SH}=75\) nW of second harmonic light is coupled out into the bus waveguide. 
The SHG efficiency is thus \(\eta_{\rm SHG}=P_{\rm SH}/P_{p}^{2}=0.33\%/\mu W\), supporting highly efficient SPDC using only microwatt pumping. In a single-mode cavity, the effective Hamiltonian describing quasi-phase matched, non-degenerate spontaneous parametric downconversion can be written as follows: \[\hat{H}_{\bf eff}=\hbar g(\hat{a}_{s}\hat{a}_{i}\hat{b}_{p}^{\dagger}+\hat{a}_{s}^{\dagger}\hat{a}_{i}^{\dagger}\hat{b}_{p}), \tag{1}\] where \(\hat{a}_{s}\), \(\hat{a}_{i}\), and \(\hat{b}_{p}\) denote the annihilation operators for the signal, idler, and pump photons, and \(g\) is the nonlinear coupling coefficient between the pump and photon pairs. By periodic poling, the current lithium niobate microring resonator can achieve phase matching while attaining the largest overlap between the fundamental quasi-transverse magnetic (quasi-TM) cavity modes in the infrared bands for the signal and idler photons and the visible band for the pump. Meanwhile, it provides access to the largest \(\chi^{(2)}\) nonlinear tensor element \(d_{33}\) of lithium niobate. All of this contributes to a large effective nonlinear coupling coefficient \(g\), which is given by [14] \[g=\sqrt{\frac{\hbar\omega_{p}\omega_{s}\omega_{i}}{2\epsilon_{0}\epsilon_{p}\epsilon_{s}\epsilon_{i}}}\,\frac{2}{\pi}\,\frac{d_{\rm eff}\zeta}{\sqrt{V_{\rm eff}}}, \tag{2}\] where \(\omega_{j}\) is the angular frequency, with \(j=p,s,i\) indicating the pump, signal, and idler modes, respectively; \(\epsilon_{0}\) is the vacuum permittivity; \(\epsilon_{j}\) are the relative permittivities; \(d_{\rm eff}\) is the effective nonlinear susceptibility including the quasi-phase-matching discount; \(\zeta\) is the mode-overlapping factor; and \(V_{\rm eff}\) is the effective mode volume. For the current microring device, the calculated single-photon coupling strength \(g\) is 2.98 MHz. Figure 2 illustrates the experiment setup for generating and detecting photons. 
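As a small arithmetic check of the quoted normalized efficiency \(\eta_{\rm SHG}=P_{\rm SH}/P_{p}^{2}\) (our sketch, using the measured powers stated above):

```python
P_p = 4.78    # on-chip pump power, in microwatts
P_SH = 0.075  # 75 nW of generated second harmonic, in microwatts

eta_SHG = P_SH / P_p**2             # normalized SHG efficiency, in 1/uW
print(f"{100 * eta_SHG:.2f} %/uW")  # prints 0.33 %/uW
```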
A visible pulse train with a pulse duration of \(\tau\)=300 ps and a repetition rate of 10 MHz is created with a bulk SHG system made of a periodically poled lithium niobate waveguide, to match the cavity lifetime and ensure single-mode operation; see details in Appendix A. Its power is varied by using a visible fiber attenuator (OZ OPTICS), and its polarization is controlled using fiber polarization controllers (FPCs). The output is fed into the microring cavity to excite the quasi-TM visible mode at 776.96 nm with a bandwidth of 1.14 GHz. There, signal and idler photons are created through SPDC into 1551.85 nm and 1555.93 nm quasi-TM modes, respectively, each with a bandwidth of 1.68 GHz. The SPDC efficiency is tracked and maximized by temperature tuning using a temperature electronic controller (TEC). Subsequently, the generated photon pairs are filtered using an inline long-pass filter featuring an 80 dB extinction ratio and a 0.5 dB insertion loss (IL), which eliminates the pump power while transmitting the generated photon pairs. In order to separate the signal and idler photons, cascaded dense wavelength division multiplexing (DWDM) filters with a full width at half maximum (FWHM) transmission bandwidth of 1.6 nm are employed, resulting in a transmission loss of approximately 0.3 dB. A pair of FPCs are then utilized to independently prepare the signal and idler photons in the best polarization states to be detected by the SNSPDs with the maximum detection efficiency.

Figure 1: (a) Schematic of the Z-cut periodically poled microring resonator, where the pump (\(\omega_{P}\)) couples into the microring and generates signal (\(\omega_{S}\)) and idler (\(\omega_{I}\)) photons. A pulley coupler is designed for overcoupling all light waves for high photon-extraction efficiency. The inset shows an SEM image of the microring with the pulley waveguide. (b) Typical spectra of the interacting TM\({}_{00}\) cavity modes at (i) 776.96 nm, (ii) 1551.85 nm, and (iii) 1555.93 nm.

Figure 2: Experiment setup. VA, variable attenuator; DWDM, dense wavelength-division multiplexing; TT, time tagger unit; Sync, synchronization cable.

The two-channel PNR-SNSPDs (ID281, ID Quantique) feature a dark count rate of 50 to 100 Hz and detection efficiencies of 70% (corresponding to 1.55 dB loss) and 82% (0.86 dB loss), respectively. The detector outputs are fed to a synchronized time-tagging unit (Swabian Instrument). Accounting for all insertion losses, chip-fiber coupling losses, and finite detection efficiencies, the signal and idler channels experience a total loss of \(\eta_{\text{S}}\)=7.55 dB and \(\eta_{\text{I}}\)=6.76 dB, respectively. **Photon Number Statistics**. Upon carefully calibrating the chip device and SNSPDs, we proceed to measure the photon number statistics of the signal and idler photons while varying the input pump power. For the signal channel, the measurement results are shown in Fig. 3, where the normalized probabilities of detecting 0, 1, 2, and 3 photons are plotted with error bars (assuming shot noise) for various mean photon numbers. As shown, the overall photon number distribution follows a thermal distribution, as expected. As the mean photon number increases, the relative probabilities of multiple photon events increase. For example, when the mean photon number is 0.0028, the normalized probabilities for one, two, and three photons are \(2.80\times 10^{-3}\), \(7.85\times 10^{-6}\), and \(3.03\times 10^{-7}\), respectively. As we increase it to 0.0137, they each become \(1.37\times 10^{-2}\), \(1.91\times 10^{-4}\), and \(1.77\times 10^{-6}\). Because multiphoton SPDC starts to saturate at a mean photon number of 0.0137, there is no obvious further increase in the three-photon case. In the figure, the error bars for three-photon events are larger because of far fewer detection events, so that the Poissonian noise is more pronounced. 
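The thermal scaling of the quoted probabilities can be reproduced from the single-mode thermal distribution \(P(n)=\bar{n}^{n}/(1+\bar{n})^{n+1}\); a short sketch (ours) comparing it with a Poissonian (coherent-state) distribution of the same mean:

```python
import math

nbar = 0.0137  # mean photon number from the text

def thermal(n, nbar):
    # Single-mode thermal distribution P(n) = nbar^n / (1 + nbar)^(n+1)
    return nbar**n / (1 + nbar)**(n + 1)

def coherent(n, nbar):
    # Poissonian distribution of a coherent state with the same mean
    return math.exp(-nbar) * nbar**n / math.factorial(n)

for n in range(4):
    print(n, thermal(n, nbar), coherent(n, nbar))
```

At the same mean, the \(n\)-photon probability of thermal light exceeds the Poissonian one by roughly a factor \(n!\) at low mean photon number, which is what separates the two fits compared later in Fig. 4.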
To further show that our SPDC source indeed operates in the single-mode regime, in Fig. 4 we compare the measurement results with the ideal thermal light distribution (TLD) in a single mode for two mean photon number cases: 0.0137 and 0.008. The TLD follows \(P(n)=\bar{n}^{n}/(1+\bar{n})^{n+1}\), where \(n\) and \(\bar{n}\) denote the photon number and its mean, respectively. As seen, in both cases the measurement results agree well with the TLD for the one and two photon cases. Compared with the coherent light distribution, there is a clear deviation. For the three photon case, there is a noticeable discrepancy, which can primarily be ascribed to the threshold sensitivity encountered in higher photon-number situations for the present SNSPD system. These results verify that our SPDC photons are in single modes, as desirable for many quantum information and quantum computing processes.

Figure 3: Photon number statistics for different mean photon numbers.

Figure 4: (a) Detected photon distribution, thermal light and coherent light fitting at a mean photon number of approximately 0.0137. (b) Detected photon distribution, thermal light and coherent light fitting at a mean photon number of approximately 0.008.

**Photon correlation**. Next, we characterize the one-photon and two-photon pair generation by measuring their rates in each individual channel and jointly over paired SPDC channels. Specifically, we record the events of detecting one and two photons in the signal channel, with rates \(N_{S}\) and \(N_{SS}\), respectively, and in the idler channel with \(N_{I}\) and \(N_{II}\). Simultaneously, we record the one-photon coincident events where there is one photon detected in each channel, with rate \(N_{SI}\), as well as two-photon coincident events for two photons per channel, with rate \(N_{SSII}\). From these rates, the on-chip generation rate for the one-photon pairs is estimated to first order as \(P_{SI}=N_{S}N_{I}/N_{SI}\). 
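The first-order estimator \(P_{SI}=N_{S}N_{I}/N_{SI}\) is insensitive to channel losses: if each member of a pair reaches its detector with probability \(t_{S}\) or \(t_{I}\), the transmissions cancel in the ratio. A toy illustration (ours; the on-chip rate below is hypothetical, chosen only to mimic the reported figures, and accidentals and dark counts, which bias the estimator at high rates, are ignored):

```python
P_true = 7.0e6             # assumed on-chip pair rate (illustrative)
t_S = 10 ** (-7.55 / 10)   # signal transmission from the 7.55 dB total loss
t_I = 10 ** (-6.76 / 10)   # idler transmission from the 6.76 dB total loss

N_S = t_S * P_true         # detected singles rate, signal channel
N_I = t_I * P_true         # detected singles rate, idler channel
N_SI = t_S * t_I * P_true  # detected pair-coincidence rate

P_est = N_S * N_I / N_SI   # the transmissions cancel
print(P_est)               # recovers the on-chip rate up to rounding
```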
The results are plotted as a function of the on-chip SPDC pump power in Fig. 5(a). As shown, \(P_{SI}\) increases linearly with the power, as expected. Only 220 nW of power is needed to create 7 million pairs per second. By linear regression, the brightness, defined as the pair generation per unit pump power, is obtained from the slope of the fitting curve as 27 MHz/\(\mu\)W, which is among the highest across all SPDC sources in various materials. The detection rate is ten times higher than our previous result [16], which is ascribed to the higher efficiencies in both photon pair generation and detection. Similarly, for the two-photon pairs (i.e., two signal photons and two idler photons generated simultaneously in pairs), the on-chip rate under the first-order approximation is \(P_{\rm SSII}=N_{\rm SS}N_{\rm II}/N_{\rm SSII}\). The results as a function of the on-chip pump power are plotted in Fig. 5(b). In contrast to the one-photon pair case, here the rate increases quadratically with the power, because the underlying process is of second order in SPDC. At 220 nW pumping, the two-photon pair on-chip rate is \(8.6\times 10^{4}\), and it increases to \(9.5\times 10^{6}\) at 1.12 \(\mu\)W. The above results are from simple calculations under the first-order approximation. To further characterize the multiphoton correlation, we count the joint events of mixed photon numbers and use loss inversion to calculate the inferred joint states of photon numbers [4]. The results for a 0.137 mean photon number on chip are shown in Fig. 6, where we neglect the contributions from detector dark counts and ambient photons (about 100 Hz). As seen, while the photon numbers in the signal and idler channels are correlated, the correlation is not strong. This is mainly due to the high total loss of each channel (7.55 dB and 6.76 dB) and the low coincidence events of multiphoton states, because of which the loss inversion calculation is not very accurate. 
To get a better measurement, we examine the coincident detection of the various multiphoton states. The results for the same pump power as in Fig. 6 are given in Table 1 as the coincidence-to-accidental counting rates of one, two, and three photons in each channel. Here, the coincidence rates between S(\(n\)) and I(\(m\)) are for event occurrences of simultaneously detecting \(n\) signal photons and \(m\) idler photons in the same time slot (in this case each of 400 ps width). The accidental rates are for those events occurring in a different slot, set 100 ns apart to avoid any correlation. As seen, the coincidence-to-accidental detection ratio is about 10 for single-photon pairs, and 100 for two-photon pairs, which shows high correlation. Over our total acquisition period of 120 seconds, we record 3 coincidences of three-photon pairs, but no accidental events. Interestingly, in the figure, the coincidence rates are not maximized on the diagonal. For example, the coincident detection of one signal photon and two idler photons is more likely than that of two signal and two idler photons. This is because, although signal and idler photons are created on chip with strong photon number correlation, the total loss is about 7 dB per channel, so that only a fraction of them can be detected, thus blurring the correlation. From Table 1, the mutual correlation function can be calculated as \(g^{(n,m)}=\langle\hat{a}^{\dagger n}\hat{a}^{n}\hat{b}^{\dagger m}\hat{b}^{m}\rangle/\langle\hat{a}^{\dagger}\hat{a}\rangle^{n}\langle\hat{b}^{\dagger}\hat{b}\rangle^{m}\). To satisfy the non-classicality criteria [3; 17], the following condition must be met: \(\gamma=g^{(1,2)}/\sqrt{g^{(2,2)}g^{(0,2)}}>1\). For pump powers ranging from 220 nW to 1.12 \(\mu\)W, we have calculated \(\gamma\) to be between 1.3 and 1.6, indicating good quantum correlation. We next study the prospective use of this source for heralded single photon generation. 
Figure 5: On-chip generation rates for one-photon pairs (a) and two-photon pairs (b), respectively, along with their curve fitting results.

Figure 6: Coincidence photon probability at an average pump power around 1.1 \(\mu\)W.

Figure 7 plots the heralded photon correlation for both channels under various pump powers. In contrast to a standard Hanbury Brown and Twiss (HBT) measurement using a beamsplitter, here we utilize the multiphoton statistics collected directly by the PNR-SNSPDs. In this case, the second-order correlation function at \(\tau\)=0 without heralding, denoted as \(g^{(2)}(0)\), is given by \(g^{(2)}(0)=\sum n(n-1)P(n)/(\sum nP(n))^{2}\). The results are around 1.99 to 2.25 (see Appendix B), verifying the thermal statistics of each SPDC channel under the single-mode condition. In the heralding case, on the other hand, the same statistics are collected only when there is a one-photon click event in the paired channel. In this case, the correlation becomes \(g_{H}^{(2)}(0)=\sum n(n-1)P(n|1)/(\sum nP(n|1))^{2}\), where \(P(n|m)=P(n,m)/P(m)\) is the conditional probability of detecting \(n\) photons in one channel upon detecting \(m\) photons in the other, computed from the joint detection probability of \(m\) and \(n\) photons in the two channels and that of a single one. With the coincidence counts from the two PNR-SNSPDs, we can easily compute \(P(n|1)\) for both signal and idler channels. As seen in the figure, for both channels, \(g_{H}^{(2)}(0)\) is about 0.01 when the mean photon number is 0.003 per pulse, and it approaches 0.05 as the mean photon number increases to 0.014. Finally, we compare the time-frequency mode purity obtained here with competing sources. The results are summarized in Table 2. In waveguides, obtaining single modes typically requires picosecond pump pulses matched to the optical filters for the generated photons. 
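The suppression of \(g_{H}^{(2)}(0)\) by heralding can be illustrated with a minimal model (our own sketch, not the authors' analysis): thermal pair-number statistics followed by independent binomial loss in each channel, with the conditional statistics evaluated exactly as defined above. All parameter values below are illustrative:

```python
import math

def thinned(k, n, t):
    # P(detect n of k photons, each surviving with probability t)
    return math.comb(k, n) * t**n * (1 - t)**(k - n)

mu = 0.01              # mean pair number per pulse (illustrative)
t_s, t_i = 0.18, 0.21  # transmissions, roughly the 7.55 / 6.76 dB losses
K, M = 40, 6           # truncation of pair number and detected photon number

# Joint probability P[n][m] of detecting n signal and m idler photons,
# for a thermal pair-number distribution with independent binomial loss.
P = [[sum(mu**k / (1 + mu)**(k + 1) * thinned(k, n, t_s) * thinned(k, m, t_i)
          for k in range(max(n, m), K)) for m in range(M)] for n in range(M)]

def g2(p):
    # g^(2)(0) = sum n(n-1) p(n) / (sum n p(n))^2 for a normalized p
    return (sum(n * (n - 1) * x for n, x in enumerate(p))
            / sum(n * x for n, x in enumerate(p)) ** 2)

g2_unc = g2([sum(row) for row in P])    # unheralded marginal: thermal, ~2
P_herald = [P[n][1] for n in range(M)]  # herald on exactly one idler photon
g2_her = g2([x / sum(P_herald) for x in P_herald])

print(g2_unc, g2_her)
```

With these illustrative parameters the model gives \(g^{(2)}\approx 2\) unheralded and a heralded value of a few percent, the same qualitative behavior as Fig. 7.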
In comparison, for sources based on resonators, such as the present microrings, the pump pulses can be much longer, ranging from a few hundred picoseconds to nanoseconds, in order to match the cavity's lifetime for single modes. In this device, the effective mode number \(K=1/[g^{(2)}(0)-1]\) is 1.01, which is very close to the ideal case of \(K=1\)[4]. This represents a mode purity of \(1/K=99\pm 4.9\%\), indicating an optimal condition for single-mode photon production, as desirable for many applications. In conclusion, we have demonstrated photon statistics with a two-channel PNR-SNSPD system, characterizing single-photon and multiphoton pair generation. Utilizing an ideally quasi-phase-matched lithium niobate microring in Z cut, we have achieved a ten-fold enhancement in the SPDC generation rate of single-photon pairs [16]. We measured joint photon probabilities of multiphoton states up to three photons in a channel. Also, we have performed coincidence-to-accidental photon detection for multiphoton states using time-delayed measurement, for the first time. Our results highlight an SPDC source for multiphoton entanglement with both high efficiency and mode purity, as needed for many quantum information processing applications with multiphoton states. This work paves the way for the development of advanced quantum photonic devices and systems with good performance and versatility. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline Reference & Material Structure & Quality Factor & Pulse Width & \(g^{(2)}\) & \(K\) & Purity \\ \hline \hline Eckstein[9] & PPKTP\({}^{\rm a}\) Waveguide & N/A & 1 ps & 1.95 & 1.05 & 95\% \\ \hline Harder[4] & PPKTP Waveguide & N/A & 1 ps & 1.89 & 1.12 & 89\% \\ \hline Stasi[13] & PPKTP Waveguide & N/A & 1 ps & 1.99 & 1.01 & 99\% \\ \hline Vaidya[11] & \(Si_{3}N_{4}\)\(\mu\)-ring & \(8\times 10^{5}\) & 1.5 ns & 1.95 & 1.05 & 95\% \\ \hline This work & PPLN \(\mu\)-ring & \(1.15\times 10^{5}\) & 300 ps & 1.99 & 1.01 & 99\% \\ \hline \end{tabular} \end{table} Table 2: Mode purity in various photon sources. Figure 7: Heralded \(g_{H}^{(2)}(0)\) for signal (a) and idler photons (b). ###### Acknowledgements. The research was supported in part by the Office of Naval Research (Award No. N00014-21-1-2898) and by ACC-New Jersey (Contract No. W15QKN-18-D-0040). Device fabrication was performed at ASRC, CUNY. ## Appendix A SPDC Pump Generation To create the SPDC pump pulses in the visible band, a single-channel picosecond EOM (electro-optical modulator) driver (Highland Technology T130, 250 ps-30 ns pulse width, 0-50 MHz pulse rate) supplies radio-frequency pulses (10 MHz) to the EOM. An IR power meter monitors the EOM output. An erbium-doped fiber amplifier (EDFA) in the telecom C band further amplifies the weak signal, followed by two DWDM filters to clean the beam. The resulting signal is then coupled into a bulk PPLN crystal to create the visible pulsed light as the SPDC pump. Two low-pass filters (IL \(\sim\) 0.5 dB; extinction ratio, ER \(\sim\) 50 dB) and narrow band-pass filters (Alluxa, 3 nm, IL \(\sim\) 1 dB, ER \(>\) 120 dB) reject the pump signal while passing the light at 776.96 nm. In Figure 9, we measure the cross-correlation between the photons created by the signal cavity and the synchronized electronic pulse from the EO driver by using a time tagger. The full width at half maximum is around 320 ps. 
Due to the EO driver jitter (10 ps) and the PNR-SNSPD jitter (54 ps), it is slightly wider than the electronic pulse (300 ps). ## Appendix B Signal and Idler \(g^{(2)}(0)\) Measurement Figures 10(a) and 10(b) plot the photon correlation measurements of the signal and idler channels using two PNR-SNSPDs, before heralding. Here the second-order correlation function is calculated from the PNR-SNSPD results as \(g^{(2)}_{\rm unc}(0)=\sum n(n-1)P(n)/(\sum nP(n))^{2}\). As seen, \(g^{(2)}_{\rm unc}(0)\approx 2\) for each channel at different mean photon numbers. Figure 8: Setup for generating the SPDC pump. Blue and red lines depict the telecom light path and visible path, respectively. FPC, fiber-polarization controller; EOM, electro-optic modulator; PM, power meter; EDFA, erbium-doped fiber amplifier; WDM, wavelength division multiplexing module; LP, low-pass filter; BP, band-pass filter; DUT, device under test. Figure 10: (a) and (b): Unheralded two-photon correlation in the signal and idler channels. Figure 9: Pulse width measurement.
2309.13642
One sided a-idempotent, one sided a-equivalent and SEP elements in a ring with involution
In order to study the properties of SEP elements, we propose the concepts of one sided a-idempotent and one sided a-equivalent. Under the condition that an element in a ring is both group invertible and MP-invertible, some equivalent conditions for such an element to be an SEP element are given based on these two concepts, as well as based on projections and the second and third powers of some products of elements.
Hua Yao, Junchao Wei
2023-09-24T14:03:04Z
http://arxiv.org/abs/2309.13642v1
One sided \(a-\)idempotent, one sided \(a-\)equivalent and \(SEP\) elements in a ring with involution ###### Abstract In order to study the properties of \(SEP\) elements, we propose the concepts of one sided \(a-\)idempotent and one sided \(a-\)equivalent. Under the condition that an element in a ring is both group invertible and \(MP-\)invertible, some equivalent conditions for such an element to be an \(SEP\) element are given based on these two concepts, as well as based on projections and the second and third powers of some products of elements. **2020 Mathematics Subject Classification:** 15A09; 16U99; 16W10 **Keywords:**\(SEP\) elements, \(a-\)idempotent, \(a-\)equivalent, projection. ## 1 Introduction An _involution_\(a\mapsto a^{*}\) in a ring \(R\) is an anti-isomorphism of degree \(2\), that is, \[(a^{*})^{*}=a,\ \ (a+b)^{*}=a^{*}+b^{*},\ \ (ab)^{*}=b^{*}a^{*}.\] A ring \(R\) with an involution \(*\) is called a \(*-\)_ring_. Throughout this paper, unless otherwise stated, the ring \(R\) considered is a \(*\)-ring which is also associative with an identity. An element \(a\in R\) is said to be _Moore\(-\)Penrose invertible_ (_MP\(-\)invertible_ for short) [7] if there exists some \(b\in R\) such that the following Penrose equations hold: \[(1)\ aba=a,\ \ (2)\ bab=b,\ \ (3)\ ab=(ab)^{*},\ \ (4)\ ba=(ba)^{*}.\] There is at most one \(b\) such that the above conditions hold. We call it the _Moore\(-\)Penrose inverse_ (_MP\(-\)inverse_ for short) of \(a\) and denote it by \(a^{\dagger}\). The set of all MP\(-\)invertible elements of \(R\) is denoted by \(R^{\dagger}\). Following [1], an element \(a\in R\) (not necessarily in a \(*\)-ring) is said to be _group invertible_ if there is some \(b\in R\) satisfying the following conditions: \[aba=a,\;\;bab=b,\;\;ab=ba.\] There is at most one \(b\) such that the above conditions hold. We call it the _group inverse_ of \(a\) and denote it by \(a^{\#}\). 
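For a concrete instance of the definition, take the \(*\)-ring of real \(2\times 2\) matrices with transposition as the involution; the four Penrose equations can then be verified directly (our illustration, not part of the paper's argument):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def star(A):
    # transpose: the involution * on real matrices
    return [list(row) for row in zip(*A)]

def close(A, B, eps=1e-12):
    return all(abs(x - y) < eps for r1, r2 in zip(A, B) for x, y in zip(r1, r2))

a = [[1.0, 1.0], [1.0, 1.0]]
b = [[0.25, 0.25], [0.25, 0.25]]  # candidate Moore-Penrose inverse of a

# The four Penrose equations (1)-(4), so b = a^dagger:
assert close(mat_mul(mat_mul(a, b), a), a)        # (1) aba = a
assert close(mat_mul(mat_mul(b, a), b), b)        # (2) bab = b
assert close(mat_mul(a, b), star(mat_mul(a, b)))  # (3) ab = (ab)*
assert close(mat_mul(b, a), star(mat_mul(b, a)))  # (4) ba = (ba)*
```

Uniqueness of such a \(b\) is exactly the statement quoted above, so writing \(a^{\dagger}=\frac{1}{4}a\) for this \(a\) is unambiguous.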
The set of all group invertible elements of \(R\) is denoted by \(R^{\#}\). Let \(a\in R^{\#}\cap R^{\dagger}\). If \(a^{\#}=a^{\dagger}\), then \(a\) is called an \(EP-\)element of \(R\). The set of all \(EP-\)elements of \(R\) is denoted by \(R^{EP}\). There are many interesting properties of \(EP-\)elements, which can be found in the previous literature [2, 4, 5, 6, 10, 11, 12]. An element \(a\) is called a partial isometry if \(a\in R^{\dagger}\) and \(a^{*}=a^{\dagger}\). The set of all partial isometries in a ring \(R\) is denoted by \(R^{PI}\). An element \(e\in R\) is called a projection if \(e^{2}=e=e^{*}\). Denote the set of all projections of a ring \(R\) by \(PE(R)\). Clearly, \(e\in R\) is a projection if and only if \(e=ee^{*}\) if and only if \(e=e^{*}e\). If a partial isometry \(a\in R\) is also an \(EP\) element, that is \(a^{*}=a^{\dagger}=a^{\#}\), we call it a strongly \(EP\) element, \(SEP\) element for short. Denote by \(R^{SEP}\) the set of all \(SEP\) elements. Recently, the properties of \(SEP\) elements have been investigated [8, 9, 12]. In this paper, some new properties of \(SEP\) elements are studied further. In Section 2, some equivalent conditions for an element \(a\in R^{\#}\cap R^{\dagger}\) to be an \(SEP\) element are given based on some other elements being projections. In Section 3, modeled on the concept of an idempotent, we introduce the concept of one sided \(a-\)idempotent, including left \(a-\)idempotent and right \(a-\)idempotent. Then some equivalent conditions for an element \(a\in R^{\#}\cap R^{\dagger}\) to be an \(SEP\) element are given based on these concepts. Equivalent conditions based on the second and the third power of some products of some elements for an element to be an \(SEP\) element are provided in Section 4. Finally, in Section 5, we propose the concept of one sided \(a-\)equivalent, including left \(a-\)equivalent and right \(a-\)equivalent. 
On this basis, we give some equivalent conditions for an element \(a\in R^{\#}\cap R^{\dagger}\) to be an \(SEP\) element. ## 2 Equivalent conditions based on projections for an element to be an \(SEP\) element In [8, Theorem 2.3], it is shown that, given \(a\in R^{\#}\cap R^{\dagger}\), we have \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\). This motivates the following theorem. **Theorem 2.1**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}a^{\#}\in PE(R)\)._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP}\), \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\) by [8, Theorem 2.3]. This gives \[a(a^{\#})^{*}a^{\dagger}a^{\#}=a^{\dagger}a\in PE(R).\] \(\Leftarrow\) From the assumption, one gets \[a(a^{\#})^{*}a^{\dagger}a^{\#}=(a(a^{\#})^{*}a^{\dagger}a^{\#})^{*}(a(a^{\#})^{*}a^{\dagger}a^{\#})=(a^{\#})^{*}(a^{\dagger})^{*}a^{\#}a^{*}a(a^{\#})^{*}a^{\dagger}a^{\#}.\] Multiplying the equality on the right by \(a^{2}a^{*}a^{\dagger}a\), one has \(a=(a^{\#})^{*}(a^{\dagger})^{*}a^{\#}a^{*}a\). Noting that \((a^{\#})^{*}=a^{\dagger}a(a^{\#})^{*}\), one obtains \(a=a^{\dagger}a^{2}\) and so \(a\in R^{EP}\). It follows that \[(a^{\dagger})^{*}=aa^{\dagger}(a^{\dagger})^{*}=((a^{\#})^{*}(a^{\dagger})^{*}a^{\#}a^{*}a)a^{\dagger}(a^{\dagger})^{*}=(a^{\#})^{*}(a^{\dagger})^{*}a^{\#}\] and \[(a^{\dagger})^{*}a=(a^{\#})^{*}(a^{\dagger})^{*}a^{\#}a=(a^{\#})^{*}(a^{\dagger})^{*}.\] Applying the involution to the equality, one obtains \[a^{*}a^{\dagger}=a^{\dagger}a^{\#}=a^{\dagger}a^{\dagger}.\] By [9, Corollary 2.10], \(a\in R^{PI}\). Thus \(a\in R^{SEP}\). **Theorem 2.2**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a^{\dagger}a(a^{\dagger})^{*}a^{\dagger}\in PE(R)\)._ Proof.: \(\Rightarrow\) Suppose that \(a\in R^{SEP}\). Then \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\) by [8, Theorem 2.3] and \(a^{\dagger}=a^{\#}\). 
It follows that \[a^{\dagger}a(a^{\dagger})^{*}a^{\dagger}=a^{\dagger}aa^{\#}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}aa^{\#}a^{\dagger}a^{2}=a^{\dagger}a\in PE(R).\] \(\Leftarrow\) From the hypothesis, one obtains \[a^{\dagger}a(a^{\dagger})^{*}a^{\dagger}=a^{\dagger}a(a^{\dagger})^{*}a^{\dagger}(a^{\dagger}a(a^{\dagger})^{*}a^{\dagger})^{*}=a^{\dagger}a(a^{\dagger})^{*}a^{\dagger}(a^{\dagger})^{*}a^{\dagger}a^{\dagger}a.\] Multiplying the equality on the left by \(a^{*}aa^{\#}\), one gets \(a^{\dagger}=a^{\dagger}(a^{\dagger})^{*}a^{\dagger}a^{\dagger}a\) and \[a^{*}=a^{*}aa^{\dagger}=a^{*}aa^{\dagger}(a^{\dagger})^{*}a^{\dagger}a^{\dagger}a=a^{\dagger}a^{\dagger}a.\] So \(a^{*}a^{\dagger}=a^{\dagger}a^{\dagger}\). By [9, Corollary 2.10], \(a\in R^{PI}\), which implies \((a^{\dagger})^{*}=a\). Now \[a^{\dagger}=a^{\dagger}(a^{\dagger})^{*}a^{\dagger}a^{\dagger}a=a^{\dagger}aa^{\dagger}a^{\dagger}a=a^{\dagger}a^{\dagger}a.\] Hence \(a\in R^{EP}\) and so \(a\in R^{SEP}\). By [3, Theorem 1.5.1], \(a\in R^{\dagger}\) is a partial isometry if and only if \(aa^{*}\in PE(R)\). This implies the following theorem. **Theorem 2.3**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). 
Then \(a\in R^{SEP}\) if and only if \(aa^{*}a^{\dagger}a^{\dagger}a^{2}\in PE(R)\)._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP}\), \(a^{*}=a^{\dagger}=a^{\#}\), and it follows that \[aa^{*}a^{\dagger}a^{\dagger}a^{2}=aa^{\#}a^{\#}a^{\#}a^{2}=aa^{\#}=aa^{\dagger}\in PE(R).\] \(\Leftarrow\) From \(aa^{*}a^{\dagger}a^{\dagger}a^{2}\in PE(R)\), one gets \[aa^{*}a^{\dagger}a^{\dagger}a^{2}=(aa^{*}a^{\dagger}a^{\dagger}a^{2})^{2}=aa^{*}a^{\dagger}a^{\dagger}a^{3}a^{*}a^{\dagger}a^{\dagger}a^{2}.\] Multiplying the equality on the left by \((a^{\#})^{*}a^{\dagger}\), one has \[a^{\dagger}a^{\dagger}a^{2}=a^{\dagger}a^{\dagger}a^{3}a^{*}a^{\dagger}a^{\dagger}a^{2}.\] It follows from [9, Lemma 2.11] that \[a^{\dagger}a^{2}=a^{\dagger}a^{3}a^{*}a^{\dagger}a^{\dagger}a^{2}\] and \[a=aa^{\#}a^{\dagger}a^{2}=aa^{\#}a^{\dagger}a^{3}a^{*}a^{\dagger}a^{\dagger}a^{2}=a^{2}a^{*}a^{\dagger}a^{\dagger}a^{2}.\] This implies \[a^{\#}a=aa^{*}a^{\dagger}a^{\dagger}a^{2}\in PE(R).\] Hence \(a\in R^{EP}\) by [3, Theorem 1.1.3]. It follows that \[aa^{*}=aa^{*}a^{\dagger}a^{\dagger}a^{2}\in PE(R).\] So \(a\in R^{PI}\) and thus \(a\in R^{SEP}\). Let \(a\in R^{\#}\cap R^{\dagger}\) and write \(\chi_{a}=\{a,a^{\#},a^{\dagger},a^{*},(a^{\dagger})^{*},(a^{\#})^{*}\}.\) We obtain the following corollary. **Corollary 2.4**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(aa^{*}a^{\dagger}xx^{\dagger}a\in PE(R)\) for some \(x\in\chi_{a}\)._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP}\), \(aa^{*}a^{\dagger}a^{\dagger}a^{2}\in PE(R)\) by Theorem 2.3. Choosing \(x=a^{\dagger}\), we are done. 
\(\Leftarrow\) If there exists \(x_{0}\in\chi_{a}\) such that \(aa^{*}a^{\dagger}x_{0}x_{0}^{\dagger}a\in PE(R)\), then (1) if \(x_{0}\in\tau_{a}=\{a,a^{\#},(a^{\dagger})^{*}\}\), then \(x_{0}x_{0}^{\dagger}=aa^{\dagger}\), so \[aa^{*}a^{\dagger}a=aa^{*}a^{\dagger}aa^{\dagger}a=aa^{*}a^{\dagger}x_{0}x_{0}^{\dagger}a\in PE(R).\] Arguing as in the proof of Theorem 2.3, \(a\in R^{SEP}\). (2) if \(x_{0}\in\gamma_{a}=\{a^{\dagger},a^{*},(a^{\#})^{*}\}\), then \(x_{0}x_{0}^{\dagger}=a^{\dagger}a\), so \[aa^{*}a^{\dagger}a^{\dagger}a^{2}=aa^{*}a^{\dagger}x_{0}x_{0}^{\dagger}a\in PE(R).\] By Theorem 2.3, \(a\in R^{SEP}\). **Theorem 2.5**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a^{\dagger}a^{3}a^{*}a^{\dagger}\in PE(R)\)._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP}\), \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\) by [8, Theorem 2.3]. This implies \[a^{\dagger}a^{3}a^{*}a^{\dagger}=a(a^{\#})^{*}a^{\dagger}aa^{*}a^{\dagger}=aa^{\dagger}\in PE(R).\] \(\Leftarrow\) From \(a^{\dagger}a^{3}a^{*}a^{\dagger}\in PE(R)\), one gets \[a^{\dagger}a^{3}a^{*}a^{\dagger}=(a^{\dagger}a^{3}a^{*}a^{\dagger})^{*}=(a^{\dagger})^{*}aa^{*}a^{\dagger}a=((a^{\dagger})^{*}aa^{*}a^{\dagger}a)a^{\dagger}a=a^{\dagger}a^{3}a^{*}a^{\dagger}a^{\dagger}a.\] Multiplying the equality on the left by \((a^{\#})^{*}a^{\dagger}a^{\#}\), one gets \(a^{\dagger}=a^{\dagger}a^{\dagger}a\). Thus \(a\in R^{EP}\), which leads to \(a^{2}a^{*}a^{\dagger}=a^{\dagger}a^{3}a^{*}a^{\dagger}\in PE(R)\). Hence \[a^{2}a^{*}a^{\dagger}=(a^{2}a^{*}a^{\dagger})^{2}=a^{2}a^{*}a^{\dagger}a^{2}a^{*}a^{\dagger}=a^{2}a^{*}aa^{*}a^{\dagger}.\] Multiplying the equality on the left by \(a^{\dagger}a^{\#}\) and on the right by \(a\), one obtains \(a^{*}=a^{*}aa^{*}\). This implies \(a\in R^{PI}\). Thus \(a\in R^{SEP}\). Note that \(xx^{\dagger}=\begin{cases}aa^{\dagger},\,x\in\tau_{a},\\ a^{\dagger}a,\,x\in\gamma_{a}.\end{cases}\) Then Theorem 2.5 yields the following corollary. 
**Corollary 2.6**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then the following are equivalent:_ _(1) \(a\in R^{SEP}\);_ _(2) \(xx^{\dagger}a^{2}a^{*}a^{\dagger}\in PE(R)\) for some \(x\in\gamma_{a}\);_ _(3) \(x^{\dagger}xa^{2}a^{*}a^{\dagger}\in PE(R)\) for some \(x\in\tau_{a}\)._ **Corollary 2.7**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a^{2}a^{*}a^{\#}\in PE(R)\)._ Proof.: \(\Rightarrow\) Assume that \(a\in R^{SEP}\). Then \(a\in R^{EP}\) and \(a^{\dagger}a^{3}a^{*}a^{\dagger}\in PE(R)\) by Theorem 2.5. This implies \(a^{2}a^{*}a^{\#}=a^{\dagger}a^{3}a^{*}a^{\dagger}\in PE(R)\). \(\Leftarrow\) From the assumption, we have \[a^{2}a^{*}a^{\#}=(a^{2}a^{*}a^{\#})^{*}=(a^{\#})^{*}aa^{*}a^{*}=(a^{\#})^{*}aa^{*}a^{*}aa^{\dagger}=a^{2}a^{*}a^{\#}aa^{\dagger}.\] Multiplying the equality on the left by \((a^{\dagger})^{*}a^{\dagger}a^{\#}\), one has \(a^{\#}=a^{\#}aa^{\dagger}\). Hence \(a\in R^{EP}\) by [3, Theorem 1.2.1]. Now we have \[a^{\dagger}a^{3}a^{*}a^{\dagger}=a^{2}a^{*}a^{\#}\in PE(R).\] By Theorem 2.5, \(a\in R^{SEP}\). **Lemma 2.8**.: _Let \(a\in R^{\#}\cap R^{\dagger}\) and \(x\in PE(R)\). If \(x=aa^{\dagger}xa^{\dagger}a\), then \(a^{\dagger}axaa^{\dagger}\in PE(R)\)._ Proof.: Since \(x\in PE(R)\), \(x=x^{*}\). Then \[a^{\dagger}axaa^{\dagger}=a^{\dagger}ax^{*}aa^{\dagger}=(aa^{\dagger}xa^{\dagger}a)^{*}=x^{*}=x\in PE(R).\] Corollary 2.7 and Lemma 2.8 imply **Corollary 2.9**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \(a^{\dagger}a^{3}a^{*}a^{\#}aa^{\dagger}\in PE(R).\)_ Lemma 2.8 implies that if \(a^{\dagger}axaa^{\dagger}\in PE(R)\) and \(x^{*}=x,\) then \(aa^{\dagger}xa^{\dagger}a\in PE(R).\) Hence Theorem 2.5 yields the following corollary. **Corollary 2.10**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \(a^{2}a^{*}a^{\#}\in PE(R).\)_ ## 3 Equivalent conditions based on \(a-\)idempotents for an element to be an \(SEP\) element Let \(e,a\in R.\) If \(e^{2}=ae,\) then \(e\) is called a _left \(a-\)idempotent_. 
Similarly, \(e\) is called a _right \(a-\)idempotent_ if \(e^{2}=ea.\) The following lemma is evident. **Lemma 3.1**.: \(e\) _is a left \(a-\)idempotent if and only if \(a-e\) is a right \(a-\)idempotent._ **Theorem 3.2**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) is a left \(a^{\dagger}a^{2}-\)idempotent._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP},\)\(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\) by [8, Theorem 2.3]. So \(a(a^{\#})^{*}a^{\dagger}\) is a left \(a^{\dagger}a^{2}-\)idempotent. \(\Leftarrow\) From the assumption, one has \((a(a^{\#})^{*}a^{\dagger})^{2}=a^{\dagger}a^{2}(a(a^{\#})^{*}a^{\dagger}).\) Hence \[a(a^{\#})^{*}(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{3}(a^{\#})^{*}a^{\dagger}.\] Multiplying the equality on the right by \(aa^{*}a^{\dagger},\) one gets \[a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{3}a^{\dagger}=a^{\dagger}a(a^{\dagger}a^{3}a^{\dagger})=a^{\dagger}a^{2}(a^{\#})^{*}a^{\dagger}.\] Multiplying the last equality on the right by \(aa^{*}a^{\dagger}a,\) one gets \(a=a^{\dagger}a^{2}.\) Hence \(a\in R^{EP}.\) It follows that \[a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{3}a^{\dagger}=a\] and \[a^{*}=a^{*}a^{\dagger}a=a^{*}a^{\dagger}(a(a^{\#})^{*}a^{\dagger})=a^{\dagger}.\] Hence \(a\in R^{PI}\) and so \(a\in R^{SEP}.\) Similarly, we have the following theorem by [8, Theorem 2.3]. **Theorem 3.3**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then the following are equivalent:_ _(1) \(a\in R^{SEP}\);_ _(2) \(a(a^{\#})^{*}a^{\dagger}\) is a right \(a^{\dagger}a^{2}-\)idempotent;_ _(3) \(a^{\dagger}a^{2}\) is a left \(a(a^{\#})^{*}a^{\dagger}-\)idempotent;_ _(4) \(a^{\dagger}a^{2}\) is a right \(a(a^{\#})^{*}a^{\dagger}-\)idempotent._ By Lemma 3.1, Theorems 3.2 and 3.3, we have the following theorem. **Theorem 3.4**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). 
Then the following are equivalent:_ _(1) \(a\in R^{SEP}\);_ _(2) \(a(a^{\#})^{*}a^{\dagger}-a^{\dagger}a^{2}\) is a right \(-a^{\dagger}a^{2}-\)idempotent;_ _(3) \(a(a^{\#})^{*}a^{\dagger}-a^{\dagger}a^{2}\) is a left \(-a^{\dagger}a^{2}-\)idempotent;_ _(4) \(a^{\dagger}a^{2}-a(a^{\#})^{*}a^{\dagger}\) is a right \(-a(a^{\#})^{*}a^{\dagger}-\)idempotent;_ _(5) \(a^{\dagger}a^{2}-a(a^{\#})^{*}a^{\dagger}\) is a left \(-a(a^{\#})^{*}a^{\dagger}-\)idempotent._ **Theorem 3.5**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) is a right \(a-\)idempotent._ Proof.: \(\Rightarrow\) It is an immediate result of Theorem 3.3 because \(a^{\dagger}a^{2}=a\). \(\Leftarrow\) From the assumption, we have \((a(a^{\#})^{*}a^{\dagger})^{2}=a(a^{\#})^{*}a^{\dagger}a\). Thus \[a(a^{\#})^{*}(a^{\#})^{*}a^{\dagger}=a(a^{\#})^{*}a^{\dagger}a.\] Multiplying the equality on the left by \(a^{*}a^{\dagger}\), one obtains \((a^{\#})^{*}a^{\dagger}=a^{\dagger}a\) and \[a^{\dagger}=a^{*}(a^{\#})^{*}a^{\dagger}=a^{*}a^{\dagger}a.\] Hence \(a\in R^{SEP}\) by [3, Theorem 1.5.3]. Similarly, we have the following theorem. **Theorem 3.6**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a^{\dagger}a^{2}\) is a left \((a^{\dagger})^{*}-\)idempotent._ ## 4 Equivalent conditions based on the second and the third power of some products of some elements for an element to be an \(SEP\) element **Theorem 4.1**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \((a(a^{\#})^{*}a^{\dagger})^{k}=(a^{\dagger}a^{2})^{k}\) for \(k=2,3\)._ Proof.: \(\Rightarrow\) It is an immediate result of [8, Theorem 2.3]. 
\(\Leftarrow\) From the assumption, one gets \[a^{\dagger}a^{4}=(a^{\dagger}a^{2})^{3}=(a(a^{\#})^{*}a^{\dagger})^{3}=(a(a^{ \#})^{*}a^{\dagger})^{2}a(a^{\#})^{*}a^{\dagger}=(a^{\dagger}a^{2})^{2}a(a^{ \#})^{*}a^{\dagger}=a^{\dagger}a^{4}(a^{\#})^{*}a^{\dagger}\] and \[a=a^{\#}a^{\#}a^{\dagger}a^{4}=a^{\#}a^{\#}a^{\dagger}a^{4}(a^{\#})^{*}a^{ \dagger}=a(a^{\#})^{*}a^{\dagger}.\] This gives \[a^{\dagger}a=a^{\dagger}a(a^{\#})^{*}a^{\dagger}=(a^{\#})^{*}a^{\dagger}\] and \[a^{*}a^{\dagger}a=a^{*}(a^{\#})^{*}a^{\dagger}=a^{\dagger}.\] By [3, Theorem 1.5.3], \(a\in R^{SEP}\) **Theorem 4.2**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \((aa^{*}a^{\dagger}a^{\dagger}a^{2})^{k}\in PE(R)\) for \(k=2,3.\)_ Proof.: \(\Rightarrow\) By the proof of Theorem 2.3, \(aa^{*}a^{\dagger}a^{\dagger}a^{2}=aa^{\dagger}.\) Thus \[(aa^{*}a^{\dagger}a^{\dagger}a^{2})^{k}=aa^{\dagger}\in PE(R)\] for \(k=2,3.\) \(\Leftarrow\) The condition \((aa^{*}a^{\dagger}a^{\dagger}a^{2})^{2}\in PE(R)\) implies \[(aa^{*}a^{\dagger}a^{\dagger}a^{2})^{2} = ((aa^{*}a^{\dagger}a^{\dagger}a^{2})^{2})^{*}=(a^{*}a^{\dagger}a( a^{\dagger})^{*}aa^{*})^{2}\] \[= (a^{*}a^{\dagger}a(a^{\dagger})^{*}aa^{*})^{2}aa^{\dagger}=(aa^{* }a^{\dagger}a^{\dagger}a^{2})^{2}aa^{\dagger}.\] Multiplying the equality on the left by \((a^{\#})^{*}a^{\dagger}a^{\#}(aa^{\#})^{*}a(a^{\#})^{*}a^{\dagger},\) one obtains \[a^{\dagger}a^{\dagger}a^{2}=a^{\dagger}a^{\dagger}a^{3}a^{\dagger},\] which gives \(a^{\dagger}a^{2}=a^{\dagger}a^{3}a^{\dagger}\) by [9, Lemma 2.11]. 
Multiplying the equality on the left by \(a^{\#},\) one has \(aa^{\#}=aa^{\dagger}.\) Hence \(a\in R^{EP}.\) This implies \((aa^{*})^{k}=(aa^{*}a^{\dagger}a^{\dagger}a^{2})^{k}\in PE(R)\) for \(k=2,3.\) So \((aa^{*})^{4}=(aa^{*})^{2}\) and \((aa^{*})^{6}=(aa^{*})^{3}.\) Hence \[(aa^{*})^{4}=(aa^{*})^{2}(aa^{*})^{2}=(aa^{*})^{4}(aa^{*})^{2}=(aa^{*})^{6}=(aa^{*})^{3}.\] Thus \((aa^{*})^{2}=(aa^{*})^{3}.\) Multiplying the equality on the left by \(((a^{\dagger})^{*}a^{\dagger})^{2},\) one gets \(aa^{\dagger}=aa^{*}.\) By [3, Theorem 1.5.1], \(a\in R^{PI}.\) Hence \(a\in R^{SEP}.\) **Theorem 4.3**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \((a(a^{\#})^{*}a^{\dagger})^{k}\) is a left \((a^{\dagger}a^{2})^{k}-\)idempotent for \(k=2,3.\)_ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP},a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}.\) Hence \((a(a^{\#})^{*}a^{\dagger})^{k}\) is a left \((a^{\dagger}a^{2})^{k}-\)idempotent for \(k=2,3.\) \(\Leftarrow\) From the assumption, one has \[a((a^{\#})^{*})^{4}a^{\dagger}=(a(a^{\#})^{*}a^{\dagger})^{4}=(a^{\dagger}a^{2})^{2}(a(a^{\#})^{*}a^{\dagger})^{2}=a^{\dagger}a^{4}(a^{\#})^{*}(a^{\#})^{*}a^{\dagger} \tag{4.1}\] and \[a((a^{\#})^{*})^{6}a^{\dagger}=(a(a^{\#})^{*}a^{\dagger})^{6}=(a^{\dagger}a^{2})^{3}(a(a^{\#})^{*}a^{\dagger})^{3}=a^{\dagger}a^{5}((a^{\#})^{*})^{3}a^{\dagger}. \tag{4.2}\] Multiplying (4.1) on the left by \(a^{\dagger}a,\) one gets \[a((a^{\#})^{*})^{4}a^{\dagger}=a^{\dagger}a^{2}((a^{\#})^{*})^{4}a^{\dagger}. \tag{4.3}\] Multiplying (4.3) on the right by \(a(a^{*})^{4}a^{\dagger},\) one obtains \[aa^{\dagger}=a^{\dagger}a^{2}a^{\dagger}. \tag{4.4}\] So \(a\in R^{EP}\) and (4.1) and (4.2) turn into the following equalities. \[a((a^{\#})^{*})^{4}a^{\dagger}=a^{3}(a^{\#})^{*}(a^{\#})^{*}a^{\dagger}. \tag{4.5}\] \[a((a^{\#})^{*})^{6}a^{\dagger}=a^{4}((a^{\#})^{*})^{3}a^{\dagger}. 
\tag{4.6}\] Multiplying (4.5) and (4.6) on the left by \(a^{\dagger}\) and on the right by \(a\), one has \[((a^{\#})^{*})^{4}=a^{2}(a^{\#})^{*}(a^{\#})^{*}\] and \[((a^{\#})^{*})^{6}=a^{3}((a^{\#})^{*})^{3}.\] Then \[a^{2}((a^{\#})^{*})^{4}=a^{2}(a^{\#})^{*}(a^{\#})^{*}((a^{\#})^{*})^{2}=((a^{\#})^{*})^{4}((a^{\#})^{*})^{2}=((a^{\#})^{*})^{6}=a^{3}((a^{\#})^{*})^{3}.\] Multiplying the above equality on the left by \((a^{\#})^{2}\) and on the right by \((a^{*})^{3}\), and noting that \(a\in R^{EP}\), one gets \((a^{\#})^{*}=a\). So \(a\in R^{SEP}\). ## 5 Equivalent conditions based on the \(a-\)equivalence relation for an element to be an \(SEP\) element Let \(a,b,c\in R\). If \(ab=ac\), then \(b\) and \(c\) are said to be _left \(a-\)equivalent_. Correspondingly, if \(ba=ca\), then \(b\) and \(c\) are said to be _right \(a-\)equivalent_. **Theorem 5.1**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(a-\)equivalent._ Proof.: \(\Rightarrow\) Since \(a\in R^{SEP}\), \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}\) by [8, Theorem 2.3]. So \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(a-\)equivalent. \(\Leftarrow\) From the assumption, we have \(a^{2}(a^{\#})^{*}a^{\dagger}=aa^{\dagger}a^{2}=a^{2}\). Multiplying the equality on the left by \(a^{*}a^{\dagger}a^{\#}\), one gets \(a^{\dagger}=a^{*}a^{\dagger}a\). Hence \(a\in R^{SEP}\) by [3, Theorem 1.5.3]. **Theorem 5.2**.: _Let \(a\in R^{\#}\cap R^{\dagger}\). Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(x-\)equivalent for some \(x\in\rho_{a}=\{a,a^{\#},a^{\dagger},a^{*},(a^{\dagger})^{*},(a^{\#})^{*},(a^{\#})^{\dagger},(a^{\dagger})^{\#}\}\)._ Proof.: \(\Rightarrow\) By Theorem 5.1, \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(a-\)equivalent. Choosing \(x=a\), we are done. 
\(\Leftarrow\) From the assumption, there exists \(x\in\rho_{a}\) such that \(xa(a^{\#})^{*}a^{\dagger}=xa^{\dagger}a^{2}\). If \(x\in\tau_{a}\), then \(x^{\dagger}x=a^{\dagger}a\). It follows that \[(a^{\#})^{*}a^{\dagger} = a^{\dagger}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{\#}aa^{\dagger}a^{2}(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{\#}a(x^{\dagger}x)a(a^{\#})^{*}a^{\dagger}\] \[= a^{\dagger}a^{\#}ax^{\dagger}xa^{\dagger}a^{2}=a^{\dagger}a^{\#}aa^{\dagger}aa^{\dagger}a^{2}=a^{\dagger}a^{\#}a^{2}=a^{\dagger}a.\] Hence \[a^{*}a^{\dagger}a=a^{*}(a^{\#})^{*}a^{\dagger}=a^{\dagger}.\] By [3, Theorem 1.5.3], \(a\in R^{SEP}\). If \(x\in\gamma_{a}\), then \(x^{\dagger}x=aa^{\dagger}.\) It follows that \[(a^{\#})^{*}a^{\dagger} = a^{\dagger}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}aa^{\dagger}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}x^{\dagger}xa(a^{\#})^{*}a^{\dagger}\] \[= a^{\dagger}x^{\dagger}xa^{\dagger}a^{2}=a^{\dagger}aa^{\dagger}a^{\dagger}a^{2}=a^{\dagger}a^{\dagger}a^{2}\] and \[a^{\dagger}=a^{*}(a^{\#})^{*}a^{\dagger}=a^{*}a^{\dagger}a^{\dagger}a^{2}=(a^{*}a^{\dagger}a^{\dagger}a^{2})a^{\dagger}a=a^{\dagger}a^{\dagger}a.\] Thus \(a\in R^{EP}.\) This implies \((a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{\dagger}a^{2}=a^{\dagger}a.\) Hence \(a\in R^{SEP}.\) If \(x=(a^{\#})^{\dagger}=a^{\dagger}a^{3}a^{\dagger},\) then one gets \(a^{\dagger}a^{3}a^{\dagger}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{3}a^{\dagger}a^{2}.\) Thus \[a^{\dagger}a^{3}(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{3}a^{\dagger}a^{\dagger}a^{2}.\] Multiplying the equality on the left by \(a^{\dagger}a^{\#},\) one has \((a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{\dagger}a^{2}.\) Hence \(a\in R^{SEP}\) by the above proof. 
If \(x=(a^{\dagger})^{\#}=(aa^{\#})^{*}a(aa^{\#})^{*},\) then \[(aa^{\#})^{*}a(aa^{\#})^{*}a(a^{\#})^{*}a^{\dagger}=(aa^{\#})^{*}a(aa^{\#})^{*}a^{\dagger}a^{2}=(aa^{\#})^{*}a^{2}.\] Multiplying the equality on the left by \(a^{\dagger}a^{\dagger},\) one obtains \((a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{\dagger}a^{2}.\) Hence \(a\in R^{SEP}.\) **Theorem 5.3**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(a^{\dagger}a^{2}-\)equivalent._ Proof.: \(\Rightarrow\) It is clear because \(a(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}.\) \(\Leftarrow\) From the hypothesis, we have \[a^{\dagger}a^{3}(a^{\#})^{*}a^{\dagger}=a^{\dagger}a^{2}a^{\dagger}a^{2}=a^{\dagger}a^{3}.\] Multiplying the equality on the left by \(a^{\#},\) one obtains \(a(a^{\#})^{*}a^{\dagger}=a\) and \[a^{\dagger}a=a^{\dagger}a(a^{\#})^{*}a^{\dagger}=(a^{\#})^{*}a^{\dagger}.\] Hence \(a\in R^{SEP}.\) **Theorem 5.4**.: _Let \(a\in R^{\#}\cap R^{\dagger}.\) Then \(a\in R^{SEP}\) if and only if \(a(a^{\#})^{*}a^{\dagger}\) and \(a^{\dagger}a^{2}\) are left \(a^{\dagger}aa^{\#}-\)equivalent._ Proof.: \(\Rightarrow\) It is clear by [8, Theorem 2.3]. \(\Leftarrow\) From the assumption, we have \[a^{\dagger}aa^{\#}a(a^{\#})^{*}a^{\dagger}=a^{\dagger}aa^{\#}a^{\dagger}a^{2}=a^{\dagger}a,\] \[a=aa^{\dagger}a=a(a^{\dagger}aa^{\#}a(a^{\#})^{*}a^{\dagger})=a(a^{\#})^{*}a^{\dagger}.\] Hence \(a\in R^{SEP}\) by the proof of Theorem 5.3. ### Acknowledgements This work is supported in part by the Natural Science Foundation of Henan Province of China under Grant No. 222300420499.
2301.11844
A Comment on the Classical Electron Self-Energy
This paper is devoted to the analysis of the divergence of the electron self-energy in classical electrodynamics. To do so, we appeal to the theory of distributions and a method for obtaining corresponding extensions. At first sight, electrostatics implies a divergence once we treat the electron as a charged point particle. However, our construction shows that its self-energy turns out to be an undetermined constant upon renormalization. Appealing to empirical results we may fix its value, demanding, for example, that all its mass comes from an electrostatic origin.
H. R. de Assis, B. F. Rizzuti
2022-12-23T21:01:13Z
http://arxiv.org/abs/2301.11844v2
# A (not so) short comment on the classical electron self-energy ###### Abstract This paper is devoted to the analysis of the divergence of the electron self-energy in classical electrodynamics. To do so, we develop the basics of the theory of distributions and a method for obtaining corresponding extensions. At first sight, electrostatics implies a divergence once we treat the electron as a charged point particle. However, our construction shows that its self-energy turns out to be an undetermined constant upon renormalization. Appealing to empirical results we may fix its value, demanding, for example, that all its mass comes from an electrostatic origin. **Keywords**: Theory of distributions. Extension of distributions. Electron self-energy. ## 1 Introduction One of the most precise matches between theory and experiment in physics belongs to Quantum Electrodynamics (QED), as the precision of the electron magnetic moment goes far beyond what one might expect [1]. Since the early days of QED computations, it became clear that the behavior of fields was more singular than that of ordinary functions. In turn, this has led the community to regard fields not as maps, but as distributions. In fact, for the case of the electric field \(\vec{E}(\vec{x},t)\) originating from a point particle, for instance, one would expect an ultraviolet divergence at the origin, while \(\int d^{3}\vec{x}dt\vec{E}(\vec{x},t)f(\vec{x})=\vec{E}(f)\) is well behaved [2]. Here, \(f(\cdot)\) is a smooth function of compact support. We are interested, in this manuscript, in the self-energy of the electron. While it has a fascinating history, as depicted in [3], involving an entire war and a new generation of physicists developing regularization and renormalization techniques, the classical counterpart is often set aside, justifying our approach here. 
Simply put, the self-energy of a charged particle, such as the electron, is the energy it carries when freed from any other interaction, be it with other particles or with given fields. One finds in the study of classical electrodynamics that, in addition to the kinetic and potential energies the particles might have, a system composed of _charged_ particles has a quantity of energy related to the electromagnetic field it generates [4]. The electromagnetic system we wish to examine could be seen, at first, as the simplest one: that of an electron, stationary, free from any other interaction. Seen as a _point particle_ - that is, supposing it has no internal structure and can be solely described by the position in which its whole charge is stored - which is the standard picture one first encounters [4, 5], the electric field and potential are given by \[\mathbf{E}(\mathbf{r})=\frac{1}{4\pi\epsilon_{0}}\,\frac{e}{r^{2}}\mathbf{\hat{r}},\qquad V(\mathbf{r})=\frac{1}{4\pi\epsilon_{0}}\frac{e}{r}, \tag{1}\] where \(e\) denotes the strength of its charge. Meanwhile, the expression for the self-energy for a system with electric field \(\mathbf{E}\) and magnetic field \(\mathbf{B}\) is \[E=\frac{\epsilon_{0}}{2}\int_{\mathds{R}^{3}}\left(\mathbf{E}^{2}+c^{2}\mathbf{B}^{2}\right)d\tau, \tag{2}\] so that, using (1), we obtain \[E_{0}=\frac{\epsilon_{0}}{2}\int_{\mathds{R}^{3}}\left(\frac{1}{4\pi\epsilon_{0}}\right)^{2}\left(\frac{e}{r^{2}}\right)^{2}d\tau.\] We denote by \(E_{0}\) the self-energy of interest here, since the magnetic field may be neglected, our interest lying only in the static case. 
Using spherical coordinates,1 Footnote 1: Since different texts might use different notations concerning the polar coordinates \(\theta\) and \(\phi\), we make explicit that we are considering here \(\begin{cases}x=r\sin\theta\cos\phi,\\ y=r\sin\theta\sin\phi,\\ z=r\cos\theta,\end{cases}\) where \(\theta\in[0,\pi]\), \(\phi\in[0,2\pi)\). \[E_{0} =\frac{\epsilon_{0}}{2}\,\frac{e^{2}}{(4\pi\epsilon_{0})^{2}}\left(\int_{0}^{2\pi}d\phi\right)\left(\int_{0}^{\pi}\sin(\theta)d\theta\right)\left(\int_{0}^{+\infty}\frac{1}{r^{2}}dr\right) \tag{3}\] \[=\frac{e^{2}}{8\pi\epsilon_{0}}\left[\frac{1}{r}\right]_{+\infty}^{0}\] \[=+\infty.\] The conclusion we then arrive at is that there is an infinite amount of energy stored in the field of a single electron positioned at the origin of our system, if one considers it to be a stationary point particle. Needless to say, any satisfactory field theory, whether classical or quantum, must resolve this type of divergence. Should we discard the assumption that the electron has no spatial extension? Both theoretical and experimental results seem to point the opposite way [6, 7], indicating that we should seek an improvement in the theory and in the conception of self-energy itself. Therefore, here we present one of the available methods for the "removal" of such infinite quantities, a process known as _renormalization_[8]. The main idea of renormalization is that these infinities can be justified by attributing them to quantities which we cannot directly measure (something that can be seen as a parallel with the acceptance of complex numbers in the formalism of quantum mechanics). Take, for example, our case of the electron and its infinite self-energy. In calculating \(E_{0}\), we are, simultaneously, calculating its mass \(m_{elec}\) arising from the electric field of such a particle, since Einstein's relativity theory affirms that mass and energy are but two manifestations of the same phenomenon. 
In light of this, we conclude that the divergence of \(E_{0}\) implies that the electron possesses infinite inertia. If, however, we assume there exists another contribution to the _effective mass_ (that is, the one we can actually measure), originating from some unknown effect other than electromagnetism, then we might conceive that this new contribution is negative enough to oppose the infinity arising from \(m_{elec}\). The hypothesis of another source contributing to the effective mass is not difficult to justify, since we know that neutral bodies are also provided with mass and generate no electric or magnetic field. Thus, assuming this new contribution to the mass of the particle, independently of where it comes from, we can "erase" the infinity we have just found, obtaining the so called _mass renormalization_ of the electron. This initial method of renormalization gave rise to new and more advanced approaches which came to be used in the renormalization of other infinite quantities, especially in Quantum Field Theory (QFT), which advanced considerably in the decades following the emergence of quantum mechanics and presented similar problems with divergent integrals in its equations [9]. For this new theory, the methods had to be refined in their mathematical formulation and, meanwhile, the development of approaches such as _constructive quantum field theory_[2] and _causal perturbation theory_[10, 11] made clear that distribution theory is a crucial subject for the understanding of such new ideas. It was also through the latter that the connection was perceived between the appearance of divergent quantities and the product of distributions, whereas distribution theory is strictly linear, as we shall see. This obstacle can be overcome using the idea of extension of distributions, which we will encounter in Section 3.
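Before turning to distributions, the divergence found in (3) can be made concrete by regulating the radial integral with a lower cutoff \(r_{min}\), which gives \(E_{0}(r_{min})=e^{2}/(8\pi\epsilon_{0}r_{min})\): the stored energy grows without bound as the cutoff is removed. The numerical sketch below is ours, not part of the original argument, and uses CODATA values for the constants; note that cutting off precisely at the classical electron radius \(r_{e}=e^{2}/(4\pi\epsilon_{0}m_{e}c^{2})\) reproduces half the electron rest energy:

```python
import math

e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def E0(r_min):
    """Self-energy of the point electron with the radial
    integral in (3) cut off below r_min; result in joules."""
    return e**2 / (8 * math.pi * eps0 * r_min)

# Shrinking the cutoff by a factor of 10 multiplies the energy by 10:
# the r_min -> 0 limit does not exist.
for r in (1e-10, 1e-12, 1e-14):
    print(f"r_min = {r:.0e} m  ->  E0 = {E0(r):.3e} J")

# At the classical electron radius the stored energy equals (1/2) m_e c^2.
r_e = 2.8179403262e-15   # classical electron radius (m)
m_e, c = 9.1093837015e-31, 2.99792458e8
print(E0(r_e) / (0.5 * m_e * c**2))  # ~1.0
```

This is the quantitative face of the mass-renormalization discussion above: the electromagnetic contribution to the mass depends entirely on the unobservable cutoff.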
What we shall present here is a direct consequence of the work of Epstein and Glaser [11] and texts which adapted their ideas [12, 13, 14, 15]. In view of this, we seek, in the present work, to develop the theory of distributions, elaborating next a method for extending certain distributions and, finally, exemplifying how it can be used in the renormalization of the self-energy of the electron which we have just introduced. This whole chain was written to be both self-contained and a pedestrian introduction to the theme. ## 2 Distributions Looking through the available literature, we may find different approaches to the theory of distributions, ranging from superficial ones [2, 4] to more advanced and detailed studies on the subject [16, 17, 18]. Here, we shall confine ourselves to a more peripheral point of view, since the reader interested in our work is expected to be, at a certain level, familiar with these ideas. Moreover, the available literature is rich enough to fill the gaps we might leave along the way. In any case, the manuscript contains the standard definitions/results on the subject and a couple of examples. They are intended to make the text as self-contained as possible. ### Space of test functions First of all, we must make clear our notation for derivations, since we shall deal quite often with multi-variable functions. For the Euclidean space \(\mathds{R}^{n}\), we will consider the norm \(\|\cdot\|:\mathds{R}^{n}\to\mathds{R}_{+}\) to be \[\|x\|=\sqrt{x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}},\] arising from the scalar product \[\langle x,y\rangle=x_{1}y_{1}+x_{2}y_{2}+\cdots+x_{n}y_{n},\] and the topology will be the one derived from the metric our norm provides. We define a _multi-index_\(\beta\) as an n-tuple \((\beta_{1},\cdots,\beta_{n})\) of natural numbers (also written \(\beta\in\mathds{N}_{0}^{n}\)). 
Given a multi-index \(\beta\) and a function \(\varphi:\mathds{R}^{n}\to\mathds{C}\), we shall denote the composition of partial derivatives as follows \[D^{\beta}\varphi(x):=\frac{\partial^{|\beta|}\varphi}{\partial x_{1}^{\beta_{ 1}}\cdots\partial x_{n}^{\beta_{n}}}(x),\] whenever \(\varphi\) is such that this new function exists. Here, \(|\beta|\) is the _order_ of \(\beta\), given by \(|\beta|=\beta_{1}+\cdots+\beta_{n}\). Furthermore, we define the _factorial_ of \(\beta\) as \(\beta!=\beta_{1}!\cdots\beta_{n}!\), a notation that will come in handy later. As elements of the n-dimensional space of natural numbers, \(\beta\in\mathds{N}_{0}^{n}\), two multi-indices can be added to produce a new one, \(\alpha+\beta=(\alpha_{1}+\beta_{1},\cdots,\alpha_{n}+\beta_{n})\), from which we obtain the operation \(D^{\alpha}(D^{\beta}\varphi)=D^{(\alpha+\beta)}\varphi\). For functions which are smooth enough to permit changes in the order of derivations, and this will always be the case here, this operation is guaranteed to be associative. Moreover, we are able to define, over \(\mathds{N}_{0}^{n}\), a notion of partial order, establishing \(\alpha\leq\beta\) when \(\alpha_{i}\leq\beta_{i}\) for all \(i\in\{1,\cdots,n\}\). From two multi-indices, \(\alpha\) and \(\beta\), we set \[\min\{\alpha,\beta\}=(\min\{\alpha_{1},\beta_{1}\},\cdots,\min\{\alpha_{n}, \beta_{n}\})\] and analogously for \(\max\{\alpha,\beta\}\). **Remark 2.1**.: As is customary in the literature, for the multi-index \(\alpha=(0,\cdots,0)\), we set \(D^{\alpha}\varphi=\varphi\).
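This bookkeeping is easy to mechanize. As a quick sanity check of the definitions above, here is a minimal sketch in code (the function names are our own):

```python
from math import factorial

def order(beta):
    # |beta| = beta_1 + ... + beta_n
    return sum(beta)

def mi_factorial(beta):
    # beta! = beta_1! * beta_2! * ... * beta_n!
    result = 1
    for b in beta:
        result *= factorial(b)
    return result

def add(alpha, beta):
    # componentwise sum, so that D^alpha (D^beta phi) = D^(alpha+beta) phi
    return tuple(a + b for a, b in zip(alpha, beta))

def leq(alpha, beta):
    # the partial order: alpha <= beta iff alpha_i <= beta_i for every i
    return all(a <= b for a, b in zip(alpha, beta))

def mi_min(alpha, beta):
    return tuple(min(a, b) for a, b in zip(alpha, beta))

def mi_max(alpha, beta):
    return tuple(max(a, b) for a, b in zip(alpha, beta))
```

Note that `leq` is only a partial order: the multi-indices \((1,2)\) and \((2,1)\) are incomparable, which is precisely why \(\min\) and \(\max\) must be taken componentwise.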
**Remark 2.2**.: It is possible to demonstrate an interesting parallel between the Leibniz rule applied to multi-variable derivatives and Newton's binomial expansion \[D^{\beta}(\varphi\psi)=\sum_{0\leq\alpha\leq\beta}\frac{\beta!}{\alpha!(\beta- \alpha)!}D^{\alpha}(\varphi)D^{\beta-\alpha}(\psi).\] **Example 2.1**.: As an example of the use of the multi-index notation in dealing with smooth functions, consider two multi-indices \(\beta\), \(\alpha\) and let \(f\in C^{\infty}(\mathds{R}^{n})\) be the function \[f(x)=x^{\beta}=x_{1}^{\beta_{1}}\cdots x_{n}^{\beta_{n}}.\] In this case, \[D^{\alpha}f(x)=\left(\frac{\partial^{\alpha_{1}}}{\partial x_{1}^{\alpha_{1}} }x_{1}^{\beta_{1}}\right)\left(\frac{\partial^{\alpha_{2}}}{\partial x_{2}^{ \alpha_{2}}}x_{2}^{\beta_{2}}\right)\cdots\left(\frac{\partial^{\alpha_{n}}}{ \partial x_{n}^{\alpha_{n}}}x_{n}^{\beta_{n}}\right).\] We can thus see that, if \(\alpha_{i}>\beta_{i}\) for some \(i\in\{1,2,\cdots,n\}\), then the derivative of order \(\alpha_{i}\) of the \(x_{i}^{\beta_{i}}\) term will result in the null function, meaning \(D^{\alpha}f=0\). If, on the other hand, we have \(\alpha\leq\beta\), then \[D^{\alpha}f(x)=\left(\frac{\beta_{1}!}{(\beta_{1}-\alpha_{1})!}x_{1}^{\beta_{1} -\alpha_{1}}\right)\left(\frac{\beta_{2}!}{(\beta_{2}-\alpha_{2})!}x_{2}^{ \beta_{2}-\alpha_{2}}\right)\cdots\left(\frac{\beta_{n}!}{(\beta_{n}-\alpha_{ n})!}x_{n}^{\beta_{n}-\alpha_{n}}\right),\] or, in other words, \[D^{\alpha}f(x)=\frac{\beta!}{(\beta-\alpha)!}x^{\beta-\alpha}.\] In particular, \[(D^{\alpha}f)(0)=\begin{cases}\alpha!,&\text{if }\alpha=\beta,\\ 0,&\text{if }\alpha\neq\beta.\end{cases} \tag{4}\] As we mentioned briefly in the last paragraph, we will always be considering complex functions which are smooth enough.
More precisely, we shall work in a subspace of the vector space of infinitely differentiable functions, \(C^{\infty}(\mathds{R}^{n})\), namely the subspace of functions which vanish outside some compact \(K\subset\mathds{R}^{n}\). To make our words more exact and mathematically grounded, we define first the concept of the _support_ of a function \(\varphi\in C^{\infty}(\mathds{R}^{n})\). That is the smallest closed set containing all the points where \(\varphi\) does not vanish, in other words, the closure of \(\{x\in\mathds{R}^{n}\;;\;\varphi(x)\neq 0\}\). We shall denote the support of \(\varphi\) by _supp_ \(\varphi\). We may now define in precise terms the space of functions we deal with when introducing distributions. **Definition 2.1**.: We define \(\mathcal{D}(\mathds{R}^{n})\), the space of test functions, as the space of elements \(\varphi:\mathds{R}^{n}\to\mathds{C}\) of \(C^{\infty}(\mathds{R}^{n})\), meaning infinitely differentiable functions, whose support is compact. Thus, we can write \[\mathcal{D}(\mathds{R}^{n}):=\{\varphi\in C^{\infty}(\mathds{R}^{n})\;;\quad \text{supp }\varphi\text{ is a compact set}\}.\] We encounter no difficulty in proving that \(\mathcal{D}(\mathds{R}^{n})\) is indeed a vector space. The sum of two functions \(\varphi\), \(\psi\in\mathcal{D}(\mathds{R}^{n})\), with supports \(K_{1}=\mathit{supp}\ \varphi\) and \(K_{2}=\mathit{supp}\ \psi\), will have its support contained in the set \(K=K_{1}\cup K_{2}\), which is again compact. Besides that, it is trivial that the support of the function \(z\varphi\) equals \(\mathit{supp}\ \varphi\) for every \(z\neq 0\), from which we see that multiplication by a scalar is closed in \(\mathcal{D}(\mathds{R}^{n})\). The reader familiar with the area of functional analysis may recognize \(\mathcal{D}(\mathds{R}^{n})\) by the name of \(C^{\infty}_{0}(\mathds{R}^{n})\) and might question our choice of symbols.
Our notation is justified, however, by the notion of convergence we shall impose over \(\mathcal{D}(\mathds{R}^{n})\), something that is not touched upon when the focus is other than the theory of distributions. Here, a sequence \((\varphi_{k})_{k\in\mathds{N}}\) of functions \(\varphi_{k}\in\mathcal{D}(\mathds{R}^{n})\) is said to converge to a function \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) when

* There exists a compact set \(K\subset\mathds{R}^{n}\) and a natural number \(k_{0}\) such that, for every \(k\geq k_{0}\), we have \(\mathit{supp}\ \varphi_{k}\subset K\).
* For every multi-index \(\beta\), \(D^{\beta}\varphi_{k}\) converges uniformly to \(D^{\beta}\varphi\) in \(K\).

**Example 2.2**.: Consider \(h>0\) and let \(\psi_{h}:(-h,h)\longrightarrow\mathds{R}\) be given by \(\psi_{h}(x)=\frac{1}{x^{2}-h^{2}}\). First of all, since \(x^{2}-h^{2}\to 0^{-}\) as \(x\to h^{-}\), we observe that \[\lim_{x\to h^{-}}\psi_{h}(x)=\lim_{x\to h^{-}}\frac{1}{x^{2}-h^{2}}=-\infty \tag{5}\] and the same will happen with \(\lim_{x\to-h^{+}}\psi_{h}(x)=-\infty\), by reasons of symmetry. Moreover, in the interval of definition, \(\psi_{h}\) is smooth, since it is the composition of smooth functions. For that reason, \(\xi_{h}(x)=e^{\psi_{h}(x)}\) shall also belong to \(C^{\infty}((-h,h))\) and, by (5), \(\lim_{x\to h^{-}}\xi_{h}(x)=\lim_{x\to-h^{+}}\xi_{h}(x)=0\). Taking the first derivative of \(\xi_{h}\), we obtain \[\xi_{h}^{\prime}(x)=\psi_{h}^{\prime}(x)e^{\psi_{h}(x)}=-\left\{\frac{2x}{(h^ {2}-x^{2})^{2}}\right\}e^{\psi_{h}(x)},\] whose limits in \(-h\) and \(h\) will both be zero,2 Footnote 2: From the fact that \(\psi_{h}\) is an even function, \(\psi_{h}^{\prime}\) will be odd, so that \(\lim_{x\to-h^{+}}\xi_{h}^{\prime}(x)=-\lim_{x\to h^{-}}\xi_{h}^{\prime}(x)=0\).
\[\lim_{x\to h^{-}}\xi_{h}^{\prime}(x)=-\lim_{x\to h^{-}}\left\{\frac{2x}{(h^{2}-x^ {2})^{2}}\right\}e^{\psi_{h}(x)}=0.\] Taking one more derivative, we find \[\xi_{h}^{\prime\prime}(x)=\left\{\frac{4x^{2}}{(h^{2}-x^{2})^{4}}-\frac{8x^{2} }{(h^{2}-x^{2})^{3}}-\frac{2}{(h^{2}-x^{2})^{2}}\right\}e^{\psi_{h}(x)}. \tag{6}\] Once again, taking the limits \(x\to h^{-}\) and \(x\to-h^{+}\), we shall again obtain zero, since the exponential term dominates any power of the vanishing denominator. Generally speaking, we can extend this result to any derivative of \(\xi_{h}(x)\), \[\lim_{x\to-h^{+}}\xi_{h}^{(k)}(x)=(-1)^{k}\lim_{x\to h^{-}}\xi_{h}^{(k)}(x)=0, \tag{7}\] which permits us to define the following test function \(\eta_{h}\in\mathcal{D}(\mathds{R})\) \[\eta_{h}(x)=\begin{cases}0,&\text{if}\;\;|x|\geq h,\\ \xi_{h}(x),&\text{if}\;\;|x|<h,\end{cases} \tag{8}\] whose support is, by definition, \[\text{\emph{supp}}\;\eta_{h}=[-h,h].\] By itself, Example 2.2 should be interesting enough to give us the general look of test functions. Besides that, we can use \(\eta_{h}\) to construct new test functions which will be even more useful ahead. **Example 2.3**.: Let \(M,h>0\) be any two positive constants. We define the test function \(\eta_{M,h}\in\mathcal{D}(\mathds{R})\) by simply translating and rescaling the function given in (8), \[\eta_{M,h}(x)=\begin{cases}0,&\text{if}\;x\notin(M,M+h),\\ \frac{1}{C_{h}}\xi_{h}(2(x-M)-h),&\text{if}\;x\in(M,M+h),\end{cases} \tag{9}\] where \(C_{h}\) is a normalization constant, i.e., \[C_{h}=\int_{-\infty}^{\infty}\xi_{h}(2(t-M)-h)dt,\] which implies \[\int_{-\infty}^{\infty}\eta_{M,h}(t)dt=1.\] As we said, we have translated and rescaled the support of our test function, so that now we have _supp_ \(\eta_{M,h}=[M,M+h]\). With this, when considering the integral \[\int_{-\infty}^{x}\eta_{M,h}(t)dt,\] we obtain zero if \(x\leq M\) and unity when \(x\geq M+h\).
Another fact easily seen is that this integral, as a function of \(x\), is also infinitely differentiable. We cannot say, however, that it belongs to \(\mathcal{D}(\mathds{R})\), since it does not possess compact support (it equals \(1\) for every \(x\geq M+h\)), the same happening to one minus this integral. If, on the other hand, we consider \(\mathds{R}^{n}\) and take the radial coordinate \(\|x\|=\sqrt{x_{1}^{2}+\cdots+x_{n}^{2}}\) as input, we can define a function \(\zeta_{M,h}:\mathds{R}^{n}\to\mathds{R}\) which will be a test function. Accordingly, we define \(\zeta_{M,h}\) by \[\zeta_{M,h}(x)=1-\int_{-\infty}^{\|x\|}\eta_{M,h}(t)dt, \tag{10}\] obtaining \(\zeta_{M,h}\) infinitely differentiable and with support \[\text{\emph{supp}}\;\zeta_{M,h}=B_{M+h}(0),\] where \(B_{M+h}(0)\) denotes the closed ball centered at the origin and with radius \(M+h\). From that, it follows that \(\zeta_{M,h}\in\mathcal{D}(\mathds{R}^{n})\). The importance of this example is better appreciated when we consider the product \(\zeta_{M,h}g\), where we can, for now, consider \(g\) to be only locally integrable. In this case, we obtain a function which is equal to \(g\) in \(B_{M}(0)\) and zero outside the ball \(B_{M+h}(0)\), also satisfying \(|\zeta_{M,h}(x)g(x)|\leq|g(x)|\) for all \(x\in\mathds{R}^{n}\). Since, in our definition of \(\zeta_{M,h}\), \(M\) and \(h\) were arbitrary positive real numbers, we can make the domain of coincidence of \(\zeta_{M,h}g\) and \(g\) as big as we want and the ring \(B_{M+h}(0)\backslash B_{M}(0)\) as narrow as we wish.
Therefore, if we integrate this product and take the limit \(h\to 0\), we have, by the Lebesgue Dominated Convergence Theorem (see [19]), \[\begin{split}\lim_{h\to 0^{+}}\int_{\mathds{R}^{n}}\zeta_{M,h}(x)g( x)d^{n}x&=\int_{\mathds{R}^{n}}\left(\lim_{h\to 0}\zeta_{M,h}(x)g(x) \right)d^{n}x\\ &=\int_{\mathds{R}^{n}}g(x)\chi_{B_{M}(0)}(x)d^{n}x,\end{split} \tag{11}\] where \(\chi_{B_{M}(0)}(x)\) is the characteristic function of the ball \(B_{M}(0)\), defined by \[\chi_{B_{M}(0)}(x)=\begin{cases}1\;,&\text{if }x\in B_{M}(0),\\ 0\;,&\text{if }x\notin B_{M}(0).\end{cases} \tag{12}\] In other words, \[\lim_{h\to 0^{+}}\int_{\mathds{R}^{n}}\zeta_{M,h}(x)g(x)d^{n}x=\int_{B_{M}(0) }g(x)d^{n}x.\] This fact will be important later on. We now have all the necessary ingredients to define our main objects of study. **Definition 2.2**.: A **distribution** is an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\), meaning it is a linear functional over \(\mathcal{D}(\mathds{R}^{n})\) which is continuous with respect to the notion of convergence defined in \(\mathcal{D}(\mathds{R}^{n})\).3 Footnote 3: As usual, \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) stands for the topological dual space of \(\mathcal{D}(\mathds{R}^{n})\). **Remark 2.3**.: The action of a given distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) over an element \(\varphi\) of \(\mathcal{D}(\mathds{R}^{n})\) may be written in different ways and here we shall denote it by \[\varphi\longmapsto\langle T,\varphi\rangle=T(\varphi),\] referencing the inner product notation. This convention will be justified in more detail below.

### Distributions

Given a first look at Definition 2.2, it may not be clear how one can say that distributions are a generalization of locally integrable functions. We can, however, demonstrate that this is so by working on our first example of an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\).
Given a function \(f\in L^{1}_{loc}(\mathds{R}^{n})\), let us define the following operation, \[\text{for each }\varphi\in\mathcal{D}(\mathds{R}^{n})\;,\quad T_{f}(\varphi)= \int_{\mathds{R}^{n}}f(x)\varphi(x)\ d^{n}x. \tag{13}\] The condition that \(f\) be locally integrable is clearly necessary for this application to be well defined: since \(\varphi\) vanishes outside some compact set, the integration is only performed over this set and, since \(\varphi\) is continuous and hence bounded there, \(f\varphi\) will be integrable. We affirm that expression (13) defines a distribution. Indeed, the operations of multiplication by \(f\) and integration are well known to be linear, resulting in the linearity of the composition of both. For the continuity, taking a sequence \((\varphi_{k})_{k\in\mathds{N}}\subset\mathcal{D}(\mathds{R}^{n})\) converging to \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), it follows from our notion of convergence in \(\mathcal{D}(\mathds{R}^{n})\) that there exists a compact set \(K\) containing _supp_ \(\varphi_{k}\), for all \(k\) sufficiently big, as well as _supp_ \(\varphi\). Furthermore, since this convergence is uniform, for every \(\varepsilon>0\) there exists \(k_{0}\in\mathds{N}\) such that, for \(k\geq k_{0}\) and every \(x\in K\), \[\left|\varphi_{k}(x)\right|\leq\left|\varphi(x)\right|+\left|\varphi_{k}(x)- \varphi(x)\right|\leq C+\varepsilon,\] where \(C=\sup\left\{\left|\varphi(x)\right|,\;x\in K\right\}\).
Thus, \(\left|f\varphi_{k}\right|\) is bounded by \((C+\varepsilon)\left|f\right|\chi_{K}\) for all \(k\geq k_{0}\) and, since this bound is independent of \(k\) and integrable, we can apply the Dominated Convergence Theorem to conclude that \[\lim_{k\to\infty}\langle T_{f},\varphi_{k}\rangle=\lim_{k\to \infty}\int_{\mathds{R}^{n}}f(x)\varphi_{k}(x)d^{n}x =\int_{\mathds{R}^{n}}\left(\lim_{k\to\infty}f(x)\varphi_{k}(x) \right)d^{n}x\] \[=\int_{\mathds{R}^{n}}f(x)\varphi(x)d^{n}x.\] In other words, we have obtained \[\lim_{k\to\infty}\langle T_{f},\varphi_{k}\rangle=\langle T_{f},\varphi\rangle,\] proving that \(T_{f}\) is indeed a linear continuous functional over \(\mathcal{D}(\mathds{R}^{n})\). Such distributions, characterized by a locally integrable function \(f\), are called _regular distributions_. This shows that we can construct, from a function \(f\in L^{1}_{loc}(\mathds{R}^{n})\), a distribution \(T_{f}\in\mathcal{D}^{\prime}(\mathds{R}^{n})\). Still, it does not yet tell us how we can view such functions as proper elements of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). Could we, for instance, have different functions representing the same regular distribution? The answer is _yes_ and is justified by the fact that changing the integrand on a discrete set does not alter the value of the integral. More generally, any change on a null measure set leaves the integral unaltered. This fact implies that two functions which are equal almost everywhere4 define the same regular distribution. We can prove, however, that this is the only way we get this coincidence. Footnote 4: With that we mean that they are equal outside a null measure set.
**Lemma 2.1**.: _Two functions \(f,g\in L^{1}_{loc}(\mathds{R}^{n})\) define the same distribution, in the sense that \(T_{f}=T_{g}\), if and only if \(f\) is equal to \(g\) almost everywhere._ Proof.: (\(\Leftarrow\)) Firstly, if \(A\) is the null measure set where \(f\) differs from \(g\) and \(B=\mathds{R}^{n}\backslash A=\mathds{R}^{n}\cap A^{c}\), then for each \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), \[\begin{split}\langle T_{f},\varphi\rangle=\int_{\mathds{R}^{n}}f( x)\varphi(x)d^{n}x&=\int_{A}f(x)\varphi(x)d^{n}x+\int_{B}f(x)\varphi(x)d^{n}x \\ &=0+\int_{B}f(x)\varphi(x)d^{n}x\\ &=\int_{B}g(x)\varphi(x)d^{n}x=\langle T_{g},\varphi\rangle,\end{split} \tag{14}\] where we have used \(f=g\) in \(B\). Since this is valid for every \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), we achieve \(T_{f}=T_{g}\). (\(\Rightarrow\)) If we suppose now that \(f\) and \(g\) define the same distribution, then \(\langle f,\varphi\rangle=\langle g,\varphi\rangle\) for every test function \(\varphi\in\mathcal{D}(\mathds{R}^{n})\). In particular, given \(x_{0}\in\mathds{R}^{n}\), we can take \(\varphi(x)=\zeta_{M,h}(x-x_{0})\in\mathcal{D}(\mathds{R}^{n})\), a translate of the function constructed in Example 2.3, characterized by the following: \(0\leq\varphi(x)\leq 1\) for all \(x\in\mathds{R}^{n}\); \(\varphi(x)=1\) for \(x\) in the ball centered at \(x_{0}\) and of radius \(M>0\); \(\varphi(x)=0\) for \(x\) outside the ball centered at \(x_{0}\) and of radius \(M+h\), with \(h>0\). Therefore, we have \[\langle f,\varphi\rangle=\langle g,\varphi\rangle\;\Rightarrow\;\int_ {\mathds{R}^{n}}f(x)\zeta_{M,h}(x-x_{0})d^{n}x=\int_{\mathds{R}^{n}}g(x)\zeta_{M,h} (x-x_{0})d^{n}x.\] By the conclusion of Example 2.3, taking the limit \(h\to 0\) we obtain at last \[\int_{B_{M}(x_{0})}f(x)d^{n}x=\int_{B_{M}(x_{0})}g(x)d^{n}x,\] which in turn implies that \(f\) and \(g\) are equal outside a null measure set in \(B_{M}(x_{0})\) (see, for example, Corollary 4.10 in [19]).
Since \(x_{0}\) is arbitrary and we can cover \(\mathds{R}^{n}\) with a countable number of balls of radius \(M\), we obtain the desired result. In view of Lemma 2.1, if we consider elements of \(L^{1}_{loc}(\mathds{R}^{n})\) to be equivalence classes,5 defined by the requirement that two functions are equivalent if, and only if, they are equal almost everywhere, then we shall have a one-to-one correspondence between regular distributions and elements \(f\in L^{1}_{loc}(\mathds{R}^{n})\). For this reason, no confusion arises when we refer to a regular distribution by the function which characterizes it, which also justifies the name _generalized functions_, sometimes attributed to distributions. Footnote 5: Still denoting them by \(f\in L^{1}_{loc}(\mathds{R}^{n})\), meaning we identify each class with one of its representatives. Regular distributions are not, however, the only class of elements in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). Those which are not given as in (13) by some \(f\in L^{1}_{loc}(\mathds{R}^{n})\) are called _singular distributions_. **Example 2.4**.: The first example of a singular distribution we can give is the famous _Dirac delta distribution_ \(\delta\), which finally puts a well defined meaning to symbols such as \(\delta(x)\). We define the distribution \(\delta\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) by the expression \[\delta(\varphi)=\varphi(0),\quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}). \tag{15}\] If \(\varphi,\psi\) are functions in \(\mathcal{D}(\mathds{R}^{n})\) and \(z\in\mathds{C}\) is a complex number, then \[\delta(z\varphi+\psi)=(z\varphi+\psi)(0)=z\varphi(0)+\psi(0)=z\delta(\varphi)+ \delta(\psi).\] Besides that, if \(\varphi_{k}\to\varphi\) in \(\mathcal{D}(\mathds{R}^{n})\), then the pointwise convergence \(\varphi_{k}(0)\to\varphi(0)\) is promptly assured, so that \(\langle\delta,\varphi_{k}\rangle\to\langle\delta,\varphi\rangle\). With this, we prove the continuity of \(\delta\).
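The contrast between (13) and (15) can be sketched numerically: a regular distribution acts by integrating against its defining function, while \(\delta\) is pure evaluation, with no integral involved. A minimal sketch, using the bump of Example 2.2 with \(h=1\) as test function (the quadrature grid and this choice of \(h\) are ours):

```python
import math

def bump(x):
    # the test function of Example 2.2 with h = 1: supp bump = [-1, 1]
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

def T_f(f, phi, a=-1.0, b=1.0, n=100000):
    # pairing (13) of a regular distribution T_f with a test function,
    # approximated by a midpoint quadrature over the support of phi
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * phi(a + (k + 0.5) * h) for k in range(n)) * h

def delta(phi, x0=0.0):
    # the Dirac distribution (15): evaluation at x0, no integration at all
    return phi(x0)

regular_even = T_f(lambda x: x * x, bump)  # <T_{x^2}, bump>: strictly positive
regular_odd = T_f(lambda x: x, bump)       # <T_x, bump>: vanishes by symmetry
at_zero = delta(bump)                      # bump(0) = e^{-1}
```

The vanishing of `regular_odd` illustrates that \(T_{f}\) only "sees" \(f\) through integrals, in line with Lemma 2.1, whereas `delta` depends on a single point value, which is exactly why no locally integrable function can represent it.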
**Remark 2.4**.: To leave things clear, the notation \(\delta(x)\) and expressions of the form \[\delta(\varphi)=\int_{-\infty}^{+\infty}\delta(x)\varphi(x)dx \tag{16}\] are only abuses of notation. It is not possible to define a legitimate function \(\delta:\mathds{R}\longrightarrow\mathds{R}\) such that \(\delta=T_{\delta(x)}\), making (16) nothing more than a sometimes useful convention of notation. Indeed, if we suppose there exists such a function \(\delta(x)\), then we can show that, in \(\mathds{R}\backslash\{0\}\), it must be equal to the null function outside a null measure set. This is given by the fact that, for test functions \(\varphi\) such that _supp_ \(\varphi\subset\mathds{R}\backslash\{0\}\), \[\int_{A}\delta(x)\varphi(x)dx=\varphi(0)=0,\] where \(A\) is a subset of \(\mathds{R}\backslash\{0\}\) containing _supp_ \(\varphi\). Thus, if \(N\) is the mentioned null measure set, \(N\cup\{0\}\) remains of null measure. Therefore, we would have \[\delta(\varphi)=\int_{-\infty}^{+\infty}\delta(x)\varphi(x)dx=0,\] now for every \(\varphi\) in \(\mathcal{D}(\mathds{R})\). This contradicts definition (15) of \(\delta\). **Remark 2.5**.: We shall utilize the symbol \(\delta_{x_{0}}\), with \(x_{0}\in\mathds{R}^{n}\), to represent the singular distribution \[\delta_{x_{0}}(\varphi)=\langle\delta_{x_{0}},\varphi\rangle=\varphi(x_{0})\;,\quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}).\] With this, the delta distribution as defined in (15) is nothing more than \(\delta_{0}\). We will, however, keep the more compact form \(\delta\) for convenience, only writing the point \(x_{0}\) of evaluation explicitly when it is different from the origin. Continuing on the topic of the delta distribution, we may find materials where it is introduced as the limit of a sequence of smooth functions [4, 20], providing but an intuition of the meaning of \(\delta(x)\).
Now that we have seen the rigorous definition of \(\delta\) and expressed the space in which it lives, we can in fact solidify this idea of convergence. As a consequence, we also obtain, as expected, a generalization of the convergence in the function spaces we have defined. **Definition 2.3**.: Let \((T_{k})_{k\in\mathds{N}}\) be a sequence of distributions in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) and let \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\). We say that \(T_{k}\) converges to \(T\) if, for every \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), we have the convergence (in \(\mathds{C}\)) \(\lim_{k\to\infty}\langle T_{k},\varphi\rangle=\langle T,\varphi\rangle\). We denote this convergence simply by \(T_{k}\to T\) or \(T_{k}\xrightarrow{\mathcal{D}^{\prime}(\mathds{R}^{n})}T\). With this definition, the idea of defining the Dirac delta distribution as a limit of _bona fide_ functions becomes rigorous. Such a convergence cannot happen pointwise, but only when we view a sequence of functions \(f_{k}\) as a sequence of regular distributions in \(\mathcal{D}^{\prime}(\mathds{R})\). With this in mind, we next prove a result which allows us to obtain an infinity of sequences of regular distributions converging to \(\delta\). These are called _Dirac sequences_.
**Theorem 2.1**.: _Let \(f:\mathds{R}^{n}\to\mathds{R}\) be an integrable function such that_ \[\int_{\mathds{R}^{n}}f(x)d^{n}x=1.\] _Then, the sequence \(f_{k}\) defined by \(f_{k}(x)=k^{n}f(kx)\) is such that \(f_{k}\to\delta\), as a sequence of distributions._ Proof.: If \(\varphi\) is an element of \(\mathcal{D}(\mathds{R}^{n})\), then \[\begin{split}\int_{\mathds{R}^{n}}k^{n}f(kx)\varphi(x)d^{n}x& \stackrel{{ y=kx}}{{=}}\int_{\mathds{R}^{n}}f(y) \varphi(y/k)d^{n}y\\ &=\int_{\mathds{R}^{n}}f(y)\left(\varphi(y/k)-\varphi(0)\right) d^{n}y+\int_{\mathds{R}^{n}}f(y)\varphi(0)d^{n}y.\end{split} \tag{17}\] Now, since \(\varphi\) is continuous by definition, it follows that \[\lim_{k\to\infty}f(y)\left(\varphi(y/k)-\varphi(0)\right)=0,\quad\forall\;y \in\mathds{R}^{n}\] and, since \[\left|f(y)\left(\varphi(y/k)-\varphi(0)\right)\right|\leq 2\left\|\varphi \right\|\left|f(y)\right|,\] we may apply the Lebesgue Dominated Convergence Theorem, obtaining \[\lim_{k\to\infty}\int_{\mathds{R}^{n}}f(y)\left(\varphi(y/k)-\varphi(0)\right) d^{n}y=\int_{\mathds{R}^{n}}\lim_{k\to\infty}f(y)\left(\varphi(y/k)-\varphi(0) \right)d^{n}y=0.\] Thus, by (17), we have \[\lim_{k\to\infty}\int_{\mathds{R}^{n}}k^{n}f(kx)\varphi(x)d^{n}x=\lim_{k\to \infty}\varphi(0)\int_{\mathds{R}^{n}}f(y)d^{n}y=\varphi(0)=\langle\delta, \varphi\rangle,\] as we wished. **Remark 2.6**.: Note that Theorem 2.1 claims we can actually construct a Dirac sequence from any integrable function \(f\) whose integral is different from zero. To see this, we need only to take \(g=Cf\), where \(C\in\mathds{R}\) is the constant \(\left(\int_{\mathds{R}^{n}}f(x)d^{n}x\right)^{-1}\), and use then \(g_{k}(x)=k^{n}g(kx)\) as the sequence contained in Theorem 2.1. 
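Theorem 2.1 can be tested numerically. Below is a sketch with the normalized Gaussian \(f(x)=e^{-x^{2}}/\sqrt{\pi}\) in \(n=1\); the quadrature grid and the stand-in test function are our own choices:

```python
import math

def f(x):
    # a normalized Gaussian: its integral over R is 1
    return math.exp(-x * x) / math.sqrt(math.pi)

def f_k(k, x):
    # the rescaled sequence of Theorem 2.1 with n = 1: f_k(x) = k f(kx)
    return k * f(k * x)

def pair(k, phi, a=-10.0, b=10.0, n=100001):
    # <f_k, phi> approximated by a midpoint quadrature
    h = (b - a) / n
    return sum(f_k(k, a + (j + 0.5) * h) * phi(a + (j + 0.5) * h) for j in range(n)) * h

mass = pair(1, lambda x: 1.0)   # integral of f itself: should come out close to 1
phi = lambda x: math.cos(x)     # smooth bounded stand-in for a test function
vals = [pair(k, phi) for k in (1, 4, 16)]  # should approach phi(0) = 1 as k grows
```

As \(k\) grows, the mass of \(f_{k}\) concentrates near the origin and the pairings approach \(\varphi(0)\), exactly the convergence \(f_{k}\to\delta\) of the theorem, only visible through pairings and never pointwise.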
**Example 2.5**.: With Theorem 2.1 and our last remark, we can now easily obtain some examples of Dirac sequences: \[f_{k}(x)=\frac{k}{\sqrt{2\pi}}e^{-k^{2}x^{2}/2},\qquad g_{k}(x)=\frac{1}{\pi} \frac{k}{1+k^{2}x^{2}},\qquad h_{k}(x)=\frac{1}{\pi k}\frac{\sin^{2}(kx)}{x^{2}}.\]

### New distributions from old ones

What we wish to do now is construct some of the operators that take elements of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) and return new elements of the same space, much like the operations of sum and product by a scalar \(z\in\mathds{C}\). We are used to such operations acting on function spaces, with derivatives, products or convolutions being the main cases. Here we are going to translate some of these into the language of distributions. It must be clear that, since we are dealing with a generalization of functions, the operations we want to define should also be generalizations of the ones we already know and this, in reality, gives us the insight we need to construct said operations. We begin with the concept of the derivative of a distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\).
Given a function \(f\in L^{1}_{loc}(\mathds{R}^{n})\) which admits a first derivative with respect to some variable \(x_{i}\) and whose derivative \(\frac{\partial f}{\partial x_{i}}\) also belongs to \(L^{1}_{loc}(\mathds{R}^{n})\), we have, for \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), \[\left\langle\frac{\partial f}{\partial x_{i}},\varphi\right\rangle=\int_{ \mathds{R}^{n}}\frac{\partial f}{\partial x_{i}}(x)\varphi(x)\ d^{n}x=\int_{ \mathds{R}^{n-1}}\left(\int_{-\infty }^{+\infty}\frac{\partial f}{\partial x_{i}}(x)\varphi(x)dx_{i}\right)\prod_{j\neq i}dx_{j},\] integrating first in the variable \(x_{i}\) and then in the remaining \(n-1\) variables. Using integration by parts, \[\begin{split}\int_{-\infty}^{+\infty}\frac{\partial f}{\partial x _{i}}(x)\varphi(x)dx_{i}&=f(x)\varphi(x)\Big{|}_{x_{i}=-\infty}^ {x_{i}=\infty}-\int_{-\infty}^{+\infty}f(x)\frac{\partial\varphi}{\partial x _{i}}(x)\ dx_{i}\\ &=-\int_{-\infty}^{+\infty}f(x)\frac{\partial\varphi}{\partial x _{i}}(x)\ dx_{i},\end{split} \tag{18}\] where the boundary term vanishes because \(\varphi\) has compact support. Since \(\frac{\partial\varphi}{\partial x_{i}}\) is again a test function, we can write \[\left\langle\frac{\partial f}{\partial x_{i}},\varphi\right\rangle=-\bigg{\langle} f,\frac{\partial\varphi}{\partial x_{i}}\bigg{\rangle}.\] By induction, we can easily extend this result to any multi-index \(\beta\), since any derivative of a test function will again be a test function. Paying attention to the factor of \((-1)\) we must insert at each step, we have at last \[\big{\langle}D^{\beta}f,\varphi\big{\rangle}=(-1)^{|\beta|}\big{\langle}f,D^ {\beta}\varphi\big{\rangle}.
\tag{19}\] Thus, the logical extension of this result to a general distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) is given by the following **Definition 2.4**.: For any multi-index \(\beta\), the derivative \(D^{\beta}\) of a distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) is a new distribution \(D^{\beta}T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) defined by \[\big{\langle}D^{\beta}T,\varphi\big{\rangle}=(-1)^{|\beta|}\big{\langle}T,D^{ \beta}\varphi\big{\rangle},\quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}). \tag{20}\] The fact that equation (20) indeed defines a new distribution is a straightforward consequence of the facts that the operation \(D^{\beta}\) is linear over \(\mathcal{D}(\mathds{R}^{n})\) and that the convergence \(\varphi_{n}\to\varphi\) in \(\mathcal{D}(\mathds{R}^{n})\) implies, by construction, the convergence \(D^{\beta}\varphi_{n}\to D^{\beta}\varphi\), again in \(\mathcal{D}(\mathds{R}^{n})\), whatever \(\beta\) we may have. **Remark 2.7**.: Perhaps the most interesting aspect of Definition 2.4 (something that can actually be seen as the most interesting aspect of the theory of distributions itself) is that, since every \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) is infinitely differentiable, \(D^{\beta}T\) is well defined for every \(\beta\). In other words, distributions are infinitely differentiable objects, and this is given from the start. We have no need to restrict the space we work with in order to obtain infinite smoothness, something which is very much desired in almost any theory. **Example 2.6**.: Consider the one-dimensional case where \(f\) is sectionally differentiable, that is, \(f^{\prime}\) exists at every point of \(\mathds{R}\) outside a finite set, say \(\{x_{1},x_{2},\cdots,x_{m}\}\). Suppose, further, that \(f\) is such that the lateral limits \(f(x_{i}^{+})=\lim_{x\to x_{i}^{+}}f(x)\) and \(f(x_{i}^{-})=\lim_{x\to x_{i}^{-}}f(x)\) exist.
The derivative of the regular distribution \(T_{f}\) is thus given by \[{T_{f}}^{\prime}=T_{f^{\prime}}+\sum_{i=1}^{m}\sigma_{i}\delta_{x_{i}}, \tag{21}\] where \(\sigma_{i}=f(x_{i}^{+})-f(x_{i}^{-})\). Indeed, we know that \[\left\langle{T_{f}}^{\prime},\varphi\right\rangle =-\int_{-\infty}^{\infty}f(x)\varphi^{\prime}(x)dx\] \[=-\int_{-\infty}^{x_{1}}f(x)\varphi^{\prime}(x)dx-\sum_{i=1}^{m- 1}\int_{x_{i}}^{x_{i+1}}f(x)\varphi^{\prime}(x)dx-\int_{x_{m}}^{\infty}f(x) \varphi^{\prime}(x)dx\] \[=-\lim_{\varepsilon\to 0}\left(\int_{-\infty}^{x_{1}- \varepsilon}f(x)\varphi^{\prime}(x)dx+\sum_{i=1}^{m-1}\int_{x_{i}+\varepsilon }^{x_{i+1}-\varepsilon}f(x)\varphi^{\prime}(x)dx\right.\] \[\left.\quad+\int_{x_{m}+\varepsilon}^{\infty}f(x)\varphi^{\prime }(x)dx\right).\] Applying integration by parts to each piece, we obtain \[\int_{-\infty}^{x_{1}-\varepsilon}f(x)\varphi^{\prime}(x)dx =f(x)\varphi(x)\Big{|}_{-\infty}^{x_{1}-\varepsilon}-\int_{- \infty}^{x_{1}-\varepsilon}f^{\prime}(x)\varphi(x)dx,\] \[\int_{x_{m}+\varepsilon}^{\infty}f(x)\varphi^{\prime}(x)dx =f(x)\varphi(x)\Big{|}_{x_{m}+\varepsilon}^{\infty}-\int_{x_{m}+ \varepsilon}^{\infty}f^{\prime}(x)\varphi(x)dx,\] \[\int_{x_{i}+\varepsilon}^{x_{i+1}-\varepsilon}f(x)\varphi^{ \prime}(x)dx =f(x)\varphi(x)\Big{|}_{x_{i}+\varepsilon}^{x_{i+1}-\varepsilon}- \int_{x_{i}+\varepsilon}^{x_{i+1}-\varepsilon}f^{\prime}(x)\varphi(x)dx,\] where the boundary terms at \(\pm\infty\) vanish because \(\varphi\) has compact support. Collecting the remaining boundary terms, which pair up around each point \(x_{i}\), and passing the limit \(\varepsilon\to 0\), we arrive at \[\left\langle{T_{f}}^{\prime},\varphi\right\rangle =\int_{-\infty}^{\infty}f^{\prime}(x)\varphi(x)dx+\lim_{ \varepsilon\to 0}\sum_{i=1}^{m}\Big{(}f(x_{i}+\varepsilon)\varphi(x_{i}+ \varepsilon)-f(x_{i}-\varepsilon)\varphi(x_{i}-\varepsilon)\Big{)}\] \[=\int_{-\infty}^{\infty}f^{\prime}(x)\varphi(x)dx+\sum_{i=1}^{m} \sigma_{i}\varphi(x_{i}).\] This is exactly the equality of distributions we wished to prove, (21). On the other hand, we have no reason to believe that general results such as (21) may be obtained for singular distributions.
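Formula (21) can be probed numerically in its simplest instance: the Heaviside step \(H\) (a single jump \(\sigma_{1}=1\) at \(x_{1}=0\), with \(H^{\prime}=0\) away from it), for which (21) predicts \(T_{H}^{\prime}=\delta\). A sketch follows, with a Gaussian standing in for a test function (our choice; it is not compactly supported, but decays fast enough for the numerics):

```python
import math

H = lambda x: 1.0 if x > 0 else 0.0            # Heaviside step: single jump sigma = 1 at x = 0
phi = lambda x: math.exp(-x * x)               # Gaussian stand-in for a test function
dphi = lambda x: -2.0 * x * math.exp(-x * x)   # its derivative phi'

def pair_derivative(a=-10.0, b=10.0, n=100000):
    # <T_H', phi> = -<T_H, phi'> = -integral of H(x) phi'(x) dx, per Definition 2.4,
    # approximated by a midpoint quadrature
    h = (b - a) / n
    return -sum(H(a + (k + 0.5) * h) * dphi(a + (k + 0.5) * h) for k in range(n)) * h

val = pair_derivative()  # (21) predicts sigma_1 * phi(0) = phi(0) = 1, i.e. <delta, phi>
```

The numerical pairing reproduces \(\varphi(0)\), as expected from \(T_{H}^{\prime}=\delta\), even though \(H^{\prime}\) vanishes at every point where it exists.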
The only information we have about \(D^{\beta}T\) when \(T\) is singular is expression (20), defining it as a continuous functional over \(\mathcal{D}(\mathds{R}^{n})\). **Example 2.7**.: The derivative of order \(m\) of the Dirac delta distribution \(\delta\in\mathcal{D}^{\prime}(\mathds{R})\), for example, is given by \[\left\langle\delta^{(m)},\varphi\right\rangle=(-1)^{m}\varphi^{(m)}(0). \tag{22}\] This follows directly from equation (20). Now, the next operation we define over \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) is the multiplication of distributions by smooth functions. We construct these new elements as we have done for \(D^{\beta}T\), dealing first with regular distributions and then generalizing to the general case. If \(T=T_{f}\) and \(h\) is an infinitely differentiable function, we can write \[\left\langle hf,\varphi\right\rangle=\int_{\mathds{R}^{n}}h(x)f(x)\varphi(x)\ dx=\langle f,h\varphi\rangle,\] since \(h\varphi\) is again an infinitely differentiable function whose support is contained in the support of \(\varphi\), i.e., \(h\varphi\in\mathcal{D}(\mathds{R}^{n})\). If this is the relation we seek to preserve from the case of regular distributions, our definition of the product of a distribution with a smooth function must clearly be the following. **Definition 2.5**.: Given a distribution \(T\) and a function \(h\in C^{\infty}(\mathds{R}^{n})\), the product \(hT\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) is defined by \[\left\langle hT,\varphi\right\rangle=\left\langle T,h\varphi\right\rangle, \quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}). \tag{23}\] As in the case of derivatives of distributions, the linearity of \(hT\) follows from the linearity of the operation \(\varphi\mapsto h\varphi\), whereas the continuity is assured by the preservation of the convergence in \(\mathcal{D}(\mathds{R}^{n})\) under this operation. This last assertion requires a little more mathematical rigor, which we give now.
**Lemma 2.2**.: _For any \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) and \(h\in C^{\infty}(\mathds{R}^{n})\), expression (23) defines a new distribution._ Proof.: Indeed, if \(\varphi_{k}\to\varphi\) in \(\mathcal{D}(\mathds{R}^{n})\), \[\left|D^{\beta}\left\{h(\varphi_{k}-\varphi)\right\}(x)\right|\leq\sum_{ \alpha\leq\beta}\frac{\beta!}{\alpha!(\beta-\alpha)!}\left|D^{\beta-\alpha}h(x )\right|\left|D^{\alpha}(\varphi_{k}-\varphi)(x)\right|.\] Thus, if \(K\subset\mathds{R}^{n}\) is the compact set such that _supp_\(\varphi_{k}\subset K\) for \(k\) sufficiently large, then we have \[\left|D^{\beta}\left\{h(\varphi_{k}-\varphi)\right\}(x)\right|\leq\sum_{ \alpha\leq\beta}\frac{\beta!}{\alpha!(\beta-\alpha)!}C_{\alpha}\left\|\varphi _{k}-\varphi\right\|_{\beta},\] where \(C_{\alpha}=\sup\{\left|(D^{\beta-\alpha}h)(x)\right|,\ x\in K\}\) are constants independent of \(k\) (finite, since \(K\) is compact). Therefore, \[\lim_{k\to+\infty}\left\|h(\varphi_{k}-\varphi)\right\|_{\beta}\leq\sum_{\alpha \leq\beta}\frac{\beta!}{\alpha!(\beta-\alpha)!}C_{\alpha}\left(\lim_{k\to+ \infty}\left\|\varphi_{k}-\varphi\right\|_{\beta}\right)=0,\] proving that \(h\varphi_{k}\) converges in \(\mathcal{D}(\mathds{R}^{n})\) to \(h\varphi\) and, consequently, \(\langle hT,\varphi_{k}\rangle\) converges to \(\langle hT,\varphi\rangle\). We conclude from this that \(hT\) is, indeed, a linear and continuous functional over \(\mathcal{D}(\mathds{R}^{n})\). The main example is the one related to the Dirac delta.
**Example 2.8**.: Given any \(h\in C^{\infty}(\mathds{R}^{n})\) and a test function \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), we have \[\langle h\delta_{x_{0}},\varphi\rangle=\langle\delta_{x_{0}},h\varphi\rangle =h(x_{0})\varphi(x_{0})=h(x_{0})\langle\delta_{x_{0}},\varphi\rangle.\] We see, then, that the only value needed to define \(h\delta_{x_{0}}\) is \(h(x_{0})\); that is, \[h\delta_{x_{0}}=h(x_{0})\delta_{x_{0}}.\] A particular result, and one of great importance, is when \(n=1\), \(x_{0}=0\) and \(h(x)=x\). In this case, we obtain \[x\delta=0.\] ## 3 Extension of distributions As we have hinted in Section 1, the developments of QFT showed that renormalization in causal perturbation theory depends heavily on distribution theory, more specifically on the procedures for obtaining extensions of distributions whose behavior at the origin (this nomenclature will become clear later) does not allow them to be applied to test functions whose support contains the origin. Our main problem then becomes: _Given a distribution \(T_{0}\) which is only well defined when we apply it on test functions \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) for which \(0\notin\text{supp }\varphi,\) how can we construct a new distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) which is an extension of \(T_{0},\) that is, such that \(\langle T_{0},\varphi\rangle=\langle T,\varphi\rangle\) whenever \(0\notin\text{supp }\varphi\)_? The method for constructing such extensions is our goal in this section. The ideas presented here originate mainly from [12, 13, 21] and references therein. Some passages in those papers, however, may be too terse for an unfamiliar reader, perhaps because of the audience they are directed to, namely researchers already acquainted with the area. For that reason, we try here to fill some of the gaps one could find during those reads, providing a more accessible text to a less experienced audience.
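The obstruction itself can be made concrete before any machinery is developed. Take \(T_{0}=1/x\) on \(\mathds{R}\backslash\{0\}\), a standard illustration (our choice here, not one taken from [12, 13, 21]): for a test function equal to \(1\) on a neighborhood of the origin (such functions exist), the naive pairing picks up the contribution \(\int_{\varepsilon}^{1}dx/x=-\log\varepsilon\), which diverges as \(\varepsilon\to 0^{+}\). A sketch of this computation with sympy:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)

# contribution to the naive pairing <1/x, phi> from a region where phi = 1
core = sp.integrate(1 / x, (x, eps, 1))

assert sp.simplify(core + sp.log(eps)) == 0   # core = -log(eps)
assert sp.limit(core, eps, 0, '+') == sp.oo   # diverges as eps -> 0+
```

This is precisely why \(1/x\) is only well defined on test functions whose support avoids the origin, and why an extension procedure (here, the principal value) is needed.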
### Distributions with dependence on one parameter Here, we deviate slightly from the common approach adopted by most of the literature, which takes what we will do now as known, giving more space to certain results crucial in later moments. Our main focus now will be distributions with dependence on a real parameter \(\mu\). With this, we mean to say that we will study a family of distributions in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) of the form \[\{T_{\mu}\in\mathcal{D}^{\prime}(\mathds{R}^{n})\;;\quad\mu\in\mathds{R}\},\] and explore the analytic properties of this dependence on \(\mu\). Firstly, let us see that, for any distinct \(\mu_{1},\mu_{2}\in\mathds{R}\), \[F_{\mu_{1},\mu_{2}}=\frac{T_{\mu_{1}}-T_{\mu_{2}}}{\mu_{1}-\mu_{2}}\] is well defined as an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\), with action given by \[\langle F_{\mu_{1},\mu_{2}},\varphi\rangle=\frac{1}{\mu_{1}-\mu_{2}}\left[ \langle T_{\mu_{1}},\varphi\rangle-\langle T_{\mu_{2}},\varphi\rangle\right] \in\mathds{C}.\] If, moreover, for all test functions \(\varphi\) and for all \(\overline{\mu}\in\mathds{R}\), the limit \(\lim\limits_{\delta\mu\to 0}\langle F_{\overline{\mu}+\delta\mu, \overline{\mu}},\varphi\rangle\) exists, then we are capable of defining the distribution \[\left\langle\frac{d}{d\mu}T_{\overline{\mu}},\varphi\right\rangle\coloneqq \lim\limits_{\delta\mu\to 0}\frac{1}{\delta\mu}\langle T_{\overline{\mu}+\delta\mu}-T_{ \overline{\mu}},\varphi\rangle,\;\forall\;\overline{\mu}\in\mathds{R}\,\; \forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}),\] which will be the limit, in the sense of distributions, of \(F_{\overline{\mu}+\delta\mu,\overline{\mu}}\). It should be clear that this new distribution amounts to the differentiation of the mapping \(\mu\mapsto T_{\mu}\) and should not be confused with our previous definition of the derivative \(D^{\alpha}T\) of \(T\).
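For a concrete family, take \(T_{\mu}=\delta_{\mu}\), so that \(\langle T_{\mu},\varphi\rangle=\varphi(\mu)\); the difference quotient then converges to \(\varphi^{\prime}(\mu)\), which identifies \(\frac{d}{d\mu}T_{\mu}=-\delta^{\prime}_{\mu}\). A symbolic check of this limit with sympy (the Gaussian below is a smooth stand-in for a compactly supported test function, an assumption made only for computability):

```python
import sympy as sp

mu, dmu = sp.symbols('mu delta_mu', real=True)
eta = sp.exp(-mu**2)       # eta(mu) = <T_mu, phi> = phi(mu) for T_mu = delta_mu

# difference quotient <F_{mu + dmu, mu}, phi> = (phi(mu + dmu) - phi(mu)) / dmu
quotient = (eta.subs(mu, mu + dmu) - eta) / dmu
deriv = sp.limit(quotient, dmu, 0)

# the limit is phi'(mu), i.e. <d/dmu T_mu, phi> = <-delta'_mu, phi>
assert sp.simplify(deriv - sp.diff(eta, mu)) == 0
```

The point of the computation is that the \(\mu\)-derivative of the family acts through the parameter alone, with no derivative falling on the variables of \(\varphi\).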
Both represent new objects belonging to \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) and are constructed from \(T\), but whereas the former is given by a differentiation in relation to the parameter \(\mu\) of the family \(T_{\mu}\), without any mention of test functions, the latter depends entirely upon derivations on each \(\varphi\) and its variables. Let us consider now the following: given the application \(\mu\mapsto T_{\mu}\), we can construct a new, complex, function for each \(\varphi\in\mathcal{D}(\mathds{R}^{n})\). For this, we take \[\eta\colon\mathds{R} \longrightarrow\mathds{C}\] \[\mu \longmapsto\eta(\mu)=\langle T_{\mu},\varphi\rangle. \tag{24}\] It follows from this composition that \[\frac{d\eta}{d\mu}(\overline{\mu})=\lim\limits_{\delta\mu\to 0}\frac{1}{ \delta\mu}\left(\langle T_{\overline{\mu}+\delta\mu},\varphi\rangle-\langle T _{\overline{\mu}},\varphi\rangle\right)=\lim\limits_{\delta\mu\to 0}\langle F_{ \overline{\mu}+\delta\mu,\overline{\mu}},\varphi\rangle,\] that means, \[\frac{d\eta}{d\mu}(\overline{\mu})=\bigg{\langle}\frac{d}{d\mu}T_{\overline{ \mu}},\varphi\bigg{\rangle}. \tag{25}\] This is an important result, justifying then its reiteration in an alternative form: _For any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) and \(\overline{\mu}\in\mathds{R}\), we have_ \[\left\langle\frac{d}{d\mu}T_{\overline{\mu}},\varphi\right\rangle=\frac{d}{d \mu}\langle T_{\overline{\mu}},\varphi\rangle. \tag{26}\] Besides that, we can, through \(\eta\), define a new distribution. If \(\eta\) is continuous, we know that, for \(a\in\mathds{R}\), \[H(\mu)=\int_{a}^{\mu}\eta(\overline{\mu})d\:\overline{\mu}\] is a well defined complex function.
For a fixed \(\mu\in\mathds{R}\), we have thus obtained a new element \(I\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), whose action on each test function \(\varphi\) is characterized by \[\langle I,\varphi\rangle=H(\mu)=\int_{a}^{\mu}\eta(\overline{\mu})d\: \overline{\mu}\in\mathds{C}.\] In reality, we shall utilize another symbol to reference \(I\), defining \[\int_{a}^{\mu}d\:\overline{\mu}\:T_{\overline{\mu}}=I.\] This nomenclature permits us to write \[\left\langle\int_{a}^{\mu}d\:\overline{\mu}\:T_{\overline{\mu}},\varphi \right\rangle=\int_{a}^{\mu}d\:\overline{\mu}\:\langle T_{\overline{\mu}}, \varphi\rangle.\] In particular, if we take the integration of \(\frac{d}{d\mu}T_{\mu}\), we obtain, due to (26), \[\begin{split}\left\langle\int_{a}^{\mu}d\:\overline{\mu}\: \left(\frac{d}{d\mu}T_{\overline{\mu}}\right),\varphi\right\rangle& =\int_{a}^{\mu}d\:\overline{\mu}\:\left\langle\frac{d}{d\mu}T_{ \overline{\mu}},\varphi\right\rangle\\ &=\int_{a}^{\mu}d\:\overline{\mu}\:\frac{d}{d\mu}\langle T_{ \overline{\mu}},\varphi\rangle\\ &=\eta(\mu)-\eta(a).\end{split} \tag{27}\] We see then that we have obtained a version of the Fundamental Theorem of Calculus, \[\int_{a}^{\mu}d\:\overline{\mu}\frac{d}{d\mu}T_{\overline{\mu}}=T_{\mu}-T_{a}, \tag{28}\] for the mapping \(\mu\mapsto T_{\mu}\) from \(\mathds{R}\) to \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). This result will be an important piece for our main theorems in the next section. ### Extensions of distributions Let us consider a regular distribution \(T_{f}\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), given by a function \(f\in L^{1}_{loc}(\mathds{R}^{n})\) such that \(f(x)=0\) for every \(x\) belonging to a subset \(U\subset\mathds{R}^{n}\). It is then evident that, for all \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) such that _supp_\(\varphi\subset U\) (we write, in this case, \(\varphi\in\mathcal{D}(U)\)), we have \(\langle f,\varphi\rangle=0\). This can be, therefore, a form of characterizing the support _supp_\(f\) of the function \(f\).
With the intent of extending this notion to distributions \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), we say that \(T\) is zero in a subset \(U\subset\mathds{R}^{n}\) when \[\langle T,\varphi\rangle=0\;,\quad\forall\;\varphi\in\mathcal{D}(U).\] We are thus able to define the **support of a distribution**\(T\). **Definition 3.1**.: For a given \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), the support of \(T\) is the subset _supp_\(T\subset\mathds{R}^{n}\) given by \[\textit{supp}\;T:=\{x\in\mathds{R}^{n}\;;\;x\text{ does not possess a neighborhood in which $T$ is zero}\}.\] In an equivalent manner, _supp_\(T\) can be seen as the complement of the largest open subset in which \(T\) is zero. This definition, in turn, allows us to define an important subspace of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). For any subset \(U\subset\mathds{R}^{n}\), the subspace of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) given by the distributions such that _supp_\(T\subset U\) will be denoted by \(\mathcal{D}^{\prime}(U)\). This notation comes from the clear idea that we can associate this subset with the space of distributions whose arguments are test functions in \(\mathcal{D}(U)\). **Example 3.1**.: Given the distribution \(\delta_{x_{0}}\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), we know that, for any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) such that \(x_{0}\notin\textit{supp}\;\varphi\) (so that, in particular, \(\varphi(x_{0})=0\)), we have \[\langle\delta_{x_{0}},\varphi\rangle=\varphi(x_{0})=0.\] It follows from this that _supp_\(\delta_{x_{0}}=\{x_{0}\}\). We can obtain a converse to this last result with the following lemma (for the proof, see for example [17]). **Lemma 3.1**.: _If \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) is such that \(\textit{supp}\;T=\{x_{0}\}\), then there exists \(m\in\mathds{N}\) and constants \(c_{\nu}\), \(|\nu|\leq m\), such that_ \[T=\sum_{|\nu|\leq m}c_{\nu}D^{\nu}\delta_{x_{0}}.
\tag{29}\] In what follows, we define a quantity which probes the behavior of a distribution \(T\) at the origin, in terms of singularities. Since we have drawn a clear distinction between _bona fide_ functions (regular distributions) and general (singular) distributions, we must develop for the latter the idea of studying the behavior of \(T\) at some point in \(\mathds{R}^{n}\). It is on this ground that we introduce the concept of _pull-back_. This definition is characterized by a transformation \(\Phi:\mathds{R}^{n}\to\mathds{R}^{n}\), which we consider here to be invertible for simplification. For a function \(f\in L^{1}_{loc}(\mathds{R}^{n})\), the pull-back \(\Phi^{*}f:\mathds{R}^{n}\to\mathds{C}\) of \(f\) over \(\Phi\) is a new complex function given by \[\Phi^{*}f(x)=f(\Phi(x)).\] Therefore, viewing \(f\) as a regular distribution, we will have, for every \(\varphi\in\mathcal{D}(\mathds{R}^{n})\),6 Footnote 6: Here the symbol \(|DF(y)|\) represents the Jacobian of \(F:\mathds{R}^{n}\longrightarrow\mathds{R}^{n}\). \[\langle\Phi^{*}f,\varphi\rangle=\int_{\mathds{R}^{n}}f(\Phi(x))\varphi(x)d^{n} x=\int_{\mathds{R}^{n}}f(y)\varphi(\Phi^{-1}(y))\left|D\Phi^{-1}(y)\right|d^{n}y\] and, finally, \[\langle\Phi^{*}f,\varphi\rangle=\langle f,\left|D\Theta(y)\right|\Theta^{*} \varphi\rangle\;,\quad\Theta=\Phi^{-1}.\] With this in mind, we can finally define the pull-back of a distribution. **Definition 3.2**.: The pull-back of \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) over an invertible transformation \(\Phi:\mathds{R}^{n}\rightarrow\mathds{R}^{n}\) is a new distribution \(\Phi^{*}T\), defined by \[\langle\Phi^{*}T,\varphi\rangle=\langle T,\left|D\Theta(y)\right|\Theta^{*} \varphi\rangle\;,\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}), \tag{30}\] where \(\Theta=\Phi^{-1}\).
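For a regular distribution, the defining identity (30) is exactly the change of variables above. It can be checked symbolically in one dimension for \(\Phi(x)=\lambda x\); a sketch with sympy, where the two Gaussians are illustrative stand-ins for \(f\in L^{1}_{loc}(\mathds{R})\) and for a test function:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
lam = sp.Symbol('lam', positive=True)
f = sp.exp(-x**2)
phi = sp.exp(-4 * x**2)

# <Phi* f, phi> with Phi(x) = lam * x ...
lhs = sp.integrate(f.subs(x, lam * x) * phi, (x, -sp.oo, sp.oo))
# ... equals <f, |D Theta| Theta* phi> with Theta(y) = y / lam, |D Theta| = 1/lam
rhs = sp.integrate(f * phi.subs(x, x / lam) / lam, (x, -sp.oo, sp.oo))

assert sp.simplify(lhs - rhs) == 0
```

For singular \(T\), (30) is the definition; the computation above is the consistency check that motivates it.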
The notation \(T(\Phi(x))\) for the distribution \(\Phi^{*}T\) is common (and we shall employ it from here on, for uniformity), making reference to the definition of the pull-back of a function. **Remark 3.1**.: We reiterate that the use of an invertible \(\Phi\) is a particular case of a definition that can be made more general. In the general case, \(\Phi^{*}T\) is defined as the limit, in the distributional sense, of the regular distributions \(\Phi^{*}f_{n}\), where \(f_{n}\) is a sequence converging to \(T\). The reader may consult [17] for deeper reading on the matter. **Remark 3.2**.: Let us take here some more lines to achieve a better understanding of Definition 3.2. Despite the indication given by the notation \(T(\Phi(x))\), the pull-back of a distribution should not be read as an ordinary function. If \(T\) is singular, then \(\Phi^{*}T\) is only well defined as a new distribution, which will again be singular. Thus, we should not interpret \(T(\Phi(x))\) as an object which varies with \(x\in\mathds{R}^{n}\), but actually as a distribution dependent on the transformation \(\Phi\). Nonetheless, when regarding simple cases, such as \(\Phi(x)=\lambda x\) (\(\lambda>0\)), it is easier and perhaps more didactic to express the transformation directly as the argument of \(T\). For regular distributions, \(T_{f}(\Phi(x))\) will indeed be a _bona fide_ function, just like \(T_{f}\) itself.
Moreover, let us see that, if \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) is a test function such that _supp_\(\varphi\subset B_{M}(0)\), then _supp_\(\varphi(\lambda^{-1}x)\subset B_{\lambda M}(0)\), so that \[\begin{split}\langle f(\lambda x),\varphi\rangle=\int_{\mathds{R }^{n}}f(\lambda x)\varphi(x)d^{n}x&=\int_{\mathds{R}^{n}}f(x) \varphi(\lambda^{-1}x)\lambda^{-n}d^{n}x\\ &=\lambda^{-n}\int_{B_{\lambda M}(0)}f(x)\varphi(\lambda^{-1}x)d^ {n}x.\end{split} \tag{31}\] As we take the limit \(\lambda\to 0^{+}\), the integration is performed over a ball with ever decreasing radius; that means, we evaluate the behavior of \(f\) in smaller and smaller neighborhoods of the origin, as indicated in our discussion preceding Definition 3.2. After such remarks, we present the following definition. **Definition 3.3**.: Let \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) be a distribution. The **scaling degree** of \(T\) is the real number (or \(\pm\infty\)), denoted here by \(\sigma(T)\), such that \[\sigma(T)=\inf\{s\in\mathds{R}\;;\;\lambda^{s}T(\lambda x)\xrightarrow{\lambda \to 0^{+}}0\},\] where the convergence is understood in the sense of distributions. The **singular order** of \(T\), denoted by \(\omega(T)\), is the value \[\omega(T)=[\sigma(T)]-n,\] where \([m]\) denotes the largest integer less than or equal to \(m\). **Example 3.2**.: Let us see some examples of distributions and their respective scaling degrees, making our definitions clearer. The examples will be useful to our further discussions as well. 1. If \(T=\delta\in\mathcal{D}^{\prime}(\mathds{R})\), then \(\sigma(\delta)=1\). This is given by the result \(\delta(\lambda x)=\lambda^{-1}\delta(x)\), which follows directly from (30).
More generally, dealing with the \(n\)-dimensional case, \(\delta\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), we have \[\big{|}D\Phi^{-1}(y)\big{|}=\lambda^{-n},\text{ if }\Phi(x)=\lambda x,\] implying that \[\langle\delta(\lambda x),\varphi\rangle=\lambda^{-n}\langle\delta,\varphi\rangle,\] that means, \(\sigma(\delta)=n\). 2. If \(T=P\frac{1}{x}\), then \[\lambda^{s}\langle T(\lambda x),\varphi\rangle=\lambda^{s-1}P\int_{-\infty}^{ +\infty}\frac{1}{x}\varphi(\lambda^{-1}x)dx=\lambda^{s-1}P\int_{-\infty}^{+ \infty}\varphi(y)\frac{\lambda dy}{\lambda y}.\] Therefore, \(\lambda^{s}T(\lambda x)\xrightarrow{\lambda\to 0^{+}}0\) if, and only if, \(s>1\), that is, \(\sigma(T)=1\), again. 3. If \(f\) is a continuous function, of one variable, homogeneous with degree \(m\), meaning \(f(\lambda x)=\lambda^{m}f(x)\), then \[\langle\lambda^{s}f(\lambda x),\varphi\rangle=\lambda^{s}\int_{-\infty}^{+ \infty}\lambda^{m}f(x)\varphi(x)dx=\lambda^{s+m}\langle f,\varphi\rangle.\] It follows immediately from this that \(\sigma(f)=-m\). Thus, _the scaling degree of a homogeneous function is the (additive) inverse of its homogeneity degree_. **Example 3.3**.: We have seen that \(\sigma(\delta)=n\) and, therefore, \(\omega(\delta)=0\). Let us see now what occurs for a derivative \(T=D^{\alpha}\delta\) of the delta distribution. We know that, for any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), \[\langle D^{\alpha}\delta,\varphi\rangle=(-1)^{|\alpha|}\langle\delta,D^{\alpha }\varphi\rangle. \tag{32}\] With this, we can verify that, for \(\lambda>0\), \(\langle T(\lambda x),\varphi\rangle=(1/\lambda^{n+|\alpha|})\langle D^{\alpha}\delta,\varphi\rangle\).
Indeed, from (32) and given Example 3.2, \[\lambda^{s}\langle D^{\alpha}\delta(\lambda x),\varphi\rangle=(-1)^{|\alpha|} \lambda^{s-n}\big{\langle}\delta,D^{\alpha}\big{[}\varphi(\lambda^{-1}x)\big{]}\big{\rangle}.\] From this, and from the equality \[D^{\alpha}\big{[}\varphi(\lambda^{-1}x)\big{]}=\lambda^{-|\alpha|}(D^{\alpha}\varphi)(\lambda^{-1}x),\] we conclude that \[\lambda^{s}\langle D^{\alpha}\delta(\lambda x),\varphi\rangle=\lambda^{s-(n+| \alpha|)}\langle D^{\alpha}\delta,\varphi\rangle.\] This implies that \(\sigma(T)=n+|\alpha|\) and, thus, \(\omega(T)=|\alpha|\). In other words, the derivative \(D^{\alpha}\) increases the initial scaling degree of the delta distribution by \(|\alpha|\). Now that we have gone through some examples to fixate these new concepts and definitions, we shall cite a lemma which gathers their main properties. Despite its importance, we feel it is not necessary that we give here the complete demonstration of this result. For the idea of its demonstration, we refer the reader to [22]. **Lemma 3.2**.: _Consider \(T,S\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), \(c\in\mathds{C}\) and \(\beta\) a multi-index. Then, we have the following_ 1. \(\sigma(x^{\beta}T)=\sigma(T)-|\beta|\)_._ 2. \(\sigma(D^{\beta}T)=\sigma(T)+|\beta|\)_._ 3. \(\sigma(cT)=\sigma(T)\)_._ 4. \(\sigma(\varphi)\leq 0\) _and_ \(\sigma(\varphi T)\leq\sigma(T),\) _for every_ \(\varphi\in\mathcal{D}(\mathds{R}^{n})\)_._ 5. \(\sigma(T+S)\leq\max\{\sigma(T),\sigma(S)\}\)_._ **Example 3.4**.: The proof of (5.) of Lemma 3.2 comes from the evident fact that, if \(s\in\mathds{R}\) is such that \(\lambda^{s}T(\lambda x)\) and \(\lambda^{s}S(\lambda x)\) both converge to zero, then we cannot have anything other than \(\lambda^{s}(T+S)(\lambda x)\to 0\), in the sense of distributions. The converse, however, is not necessarily true, which gives rise to the inequality.
However, for the particular case when \(T\) and \(S\) are both derivatives of the Dirac delta, say \(T=D^{\alpha_{1}}\delta\) and \(S=D^{\alpha_{2}}\delta\), then we obtain the equality. Indeed, let us suppose, without loss of generality, that \(|\alpha_{1}|>|\alpha_{2}|\), from which \(\max\{\sigma(T),\sigma(S)\}=\sigma(T)=n+|\alpha_{1}|\), by Example 3.3. Now, taking \(s<n+|\alpha_{1}|\), we have, for \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), \[\begin{split}\lambda^{s}\langle(T+S)(\lambda x),\varphi\rangle &=\lambda^{s}\left[\langle D^{\alpha_{1}}\delta(\lambda x),\varphi \rangle+\langle D^{\alpha_{2}}\delta(\lambda x),\varphi\rangle\right]\\ &=\lambda^{s}\left[(-1)^{|\alpha_{1}|}\lambda^{-n-|\alpha_{1}|} \langle\delta,D^{\alpha_{1}}\varphi\rangle+(-1)^{|\alpha_{2}|}\lambda^{-n-| \alpha_{2}|}\langle\delta,D^{\alpha_{2}}\varphi\rangle\right]\\ &=(-1)^{|\alpha_{1}|}\lambda^{s-n-|\alpha_{1}|}D^{\alpha_{1}}\varphi(0 )+(-1)^{|\alpha_{2}|}\lambda^{s-n-|\alpha_{2}|}D^{\alpha_{2}}\varphi(0). \end{split} \tag{33}\] Hence, if \(\varphi\) is such that \(D^{\alpha_{1}}\varphi(0)\neq 0\), then \[\lim_{\lambda\to 0^{+}}\lambda^{s-n-|\alpha_{1}|}D^{\alpha_{1}}\varphi(0)=\pm\infty,\] that means, \(\lambda^{s}\langle(T+S)(\lambda x),\varphi\rangle\) diverges. This proves that \(n+|\alpha_{1}|\) must be a lower bound to \(\{s\in\mathds{R}\;;\;\lambda^{s}(T+S)(\lambda x)\xrightarrow{\lambda\to 0^{+}}0\}\), which implies \[\max\{\sigma(T),\sigma(S)\}\leq\sigma(T+S)\] and, by Lemma 3.2, \[\max\{\sigma(T),\sigma(S)\}=\sigma(T+S). \tag{34}\] We now possess the appropriate tools for proving the results that concern the proper extension of distributions. As we shall see, we must separate our problem into two cases, differentiated by a condition on the singular order of \(T_{0}\). The biggest difference between the two cases, which are characterized by \(\omega(T)<0\) and \(\omega(T)\geq 0\), is in the uniqueness of our extension.
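Before turning to the extension theorems, the scaling computation of Example 3.2 (3.) can be reproduced symbolically. A sketch with sympy, taking the locally integrable homogeneous function \(x^{-1/2}\) and \(e^{-x}\) on \((0,\infty)\) as stand-ins (both are illustrative assumptions of ours; \(e^{-x}\) is not compactly supported, but all integrals converge):

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
lam, s = sp.symbols('lam s', positive=True)
m = sp.Rational(-1, 2)
f = x**m                     # homogeneous of degree m = -1/2
phi = sp.exp(-x)             # stand-in test function on (0, oo)

# lam^s <f(lam x), phi>; expand_power_base splits (lam*x)**m into lam**m * x**m
scaled = lam**s * sp.integrate(
    sp.expand_power_base(f.subs(x, lam * x), force=True) * phi, (x, 0, sp.oo))
plain = sp.integrate(f * phi, (x, 0, sp.oo))

# homogeneity: lam^s <f(lam x), phi> = lam^(s+m) <f, phi>, so sigma(f) = -m = 1/2
assert sp.simplify(scaled - lam**(s + m) * plain) == 0
assert sp.limit(scaled.subs(s, sp.Rational(3, 4)), lam, 0, '+') == 0    # s > 1/2: vanishes
assert sp.limit(scaled.subs(s, sp.Rational(1, 4)), lam, 0, '+') == sp.oo  # s < 1/2: diverges
```

The two limits exhibit \(\sigma(f)=1/2\) as the threshold value of \(s\), exactly as Definition 3.3 prescribes.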
**Theorem 3.1**.: _Let \(T_{0}\in\mathcal{D}^{\prime}(\mathds{R}^{n}\backslash\{0\})\) be such that \(\sigma(T_{0})=s<n\). Then there exists a unique distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) such that \(\sigma(T)=s\) and_ \[\langle T,\varphi\rangle=\langle T_{0},\varphi\rangle\;,\quad\forall\;\varphi \in\mathcal{D}(\mathds{R}^{n}\backslash\{0\}).\] Proof.: We first prove uniqueness. Indeed, if \(T_{1},T_{2}\) are both extensions of \(T_{0}\) in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) with scaling degree \(s\), then \(\langle T_{1}-T_{2},\varphi\rangle=0\) for any function \(\varphi\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\). On one hand, _supp_\((T_{1}-T_{2})\subset\{0\}\) and, according to Lemma 3.1, we have \(T_{1}-T_{2}=\sum_{|\nu|\leq m}c_{\nu}D^{\nu}\delta\), for some set of complex constants \(\{c_{\nu}\;;\;|\nu|\leq m\}\). Supposing that some of these constants are not zero, we can use Example 3.4 to conclude that \(\sigma(T_{1}-T_{2})\geq n\). On the other hand, Lemma 3.2 affirms that \[\sigma(T_{1}-T_{2})\leq\max\{\sigma(T_{1}),\sigma(T_{2})\}<n.\] This is a contradiction, and from that we conclude that \(T_{1}-T_{2}\equiv 0\). For the existence, we first take \(\chi\in\mathcal{D}(\mathds{R}^{n})\) a test function such that \(\chi(x)=1\) in a neighborhood of the origin. For every \(\mu>0\), \(\chi(\mu x)=1\) in a neighborhood of the origin as well, so that, for any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), the product \((1-\chi(\mu x))\varphi(x)\) vanishes in a neighborhood of the origin, from which \((1-\chi(\mu x))\varphi(x)\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\) and, therefore, the distribution \[T_{\mu}=(1-\chi(\mu x))T_{0} \tag{35}\] is well defined as an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\).
Moreover, for every \(\varphi\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\), taking \(\mu_{0}>0\) large enough, we have \[\chi(\mu x)\varphi(x)=0\;,\quad\forall\;x\in\mathds{R}^{n}\;\text{ and for }\mu\geq\mu_{0},\] thus the limit \(T=(T_{\mu})_{\mu\to\infty}\) is defined for every argument in \(\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\) and \[\lim_{\mu\to\infty}\langle T_{\mu},\varphi\rangle=\langle T_{0},\varphi \rangle\;,\quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\}).\] For the proof that \(T\) is defined over the whole space of test functions, take any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\). By the result (28) of subsection 3.1, we can rewrite \(T_{\mu}\) as \[T_{\mu}=T_{1}+\int_{1}^{\mu}d\;\overline{\mu}\;\frac{d}{d\mu}T_{\overline{ \mu}}.\] On the other hand, we know from (26) that \[\left\langle\frac{d}{d\mu}T_{\mu},\varphi\right\rangle=\frac{d}{d\mu}\langle T _{\mu},\varphi\rangle=\frac{d}{d\mu}\left(\langle T_{0},(1-\chi(\mu x))\varphi \rangle\right).
\tag{36}\] The dependence of this term on \(\mu\) is now entirely within the argument and, since \(T_{0}\) is a continuous functional, we are able to transfer the derivation to the inside of the brackets, giving \[\begin{split}\frac{d}{d\mu}\left(\langle T_{0},(1-\chi(\mu x)) \varphi\rangle\right)&=-\left\langle T_{0},\frac{d}{d\mu}(\chi( \mu x)\varphi(x))\right\rangle\\ &=-\sum_{i=1}^{n}\langle T_{0},x_{i}(\partial_{i}\chi)(\mu x) \varphi(x)\rangle,\end{split} \tag{37}\] where it was used that \[\frac{d}{d\mu}(\chi(\mu x))=\sum_{i=1}^{n}\frac{d}{d\mu}(\mu x_{i})\frac{ \partial\chi}{\partial x_{i}}(\mu x).\] Now, for each term of the sum (37), we see that \[\begin{split}\langle T_{0},x_{i}(\partial_{i}\chi)(\mu x) \varphi(x)\rangle&=\mu^{-1}\langle\varphi T_{0},(\mu x_{i})( \partial_{i}\chi)(\mu x)\rangle\\ &=\mu^{-(n+1)}\big{\langle}(\varphi T_{0})(\mu^{-1}x),x_{i}( \partial_{i}\chi)(x)\big{\rangle}.\end{split} \tag{38}\] Thus, from (36), (37) and (38), we obtain \[\left\langle\frac{d}{d\mu}T_{\mu},\varphi\right\rangle=-\mu^{-(n+1)}\sum_{i=1 }^{n}\bigl{\langle}(\varphi T_{0})(\mu^{-1}x),x_{i}(\partial_{i}\chi)(x) \bigr{\rangle}. \tag{39}\] Let us then take \(\varepsilon\) such that \(\sigma(T_{0})<\varepsilon<n\), \(\lambda=\mu^{-1}\) and \(\psi_{i}(x)=x_{i}(\partial_{i}\chi)(x)\). By Lemma 3.2, we know that \(\sigma(\varphi T_{0})\leq\sigma(T_{0})\) and, by the definition of scaling degree of \(T_{0}\), \[\lim_{\mu\to\infty}\mu^{-\varepsilon}\big{\langle}(\varphi T_{0})(\mu^{-1}x),x _{i}(\partial_{i}\chi)(x)\big{\rangle}=\lim_{\lambda\to 0^{+}}\lambda^{ \varepsilon}\langle(\varphi T_{0})(\lambda x),\psi_{i}\rangle=0, \tag{40}\] for all \(i\in\{1,\cdots,n\}\). 
Therefore, for \(\lambda\) small enough, that is, for \(\mu\) large enough, it follows from (39) and (40) that \[\mu^{n+1-\varepsilon}\left|\left\langle\frac{d}{d\mu}T_{\mu},\varphi\right\rangle \right|\leq 1,\] that means, \[\left|\int_{1}^{\mu}d\;\overline{\mu}\;\left\langle\frac{d}{d\mu}T_{\overline {\mu}},\varphi\right\rangle\right|\leq\int_{1}^{\mu}(\overline{\mu})^{ \varepsilon-n-1}d\;\overline{\mu}=\frac{1}{\varepsilon-n}(\mu^{\varepsilon-n }-1),\] a bound which remains finite as \(\mu\to\infty\), since \(\varepsilon<n\). We have just proven that \(T=(T_{\mu})_{\mu\to\infty}\) exists as an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) and, by definition of \(T_{\mu}\) in (35), \(T\) will also have scaling degree equal to \(s\). Furthermore, we have already seen that \(T\) will be the only extension of \(T_{0}\) with such scaling degree. Theorem 3.1 will also be used to help us prove our next result, which has the same objective as our last, but now for distributions such that \(\sigma(T)\geq n\). Before that, however, we shall need to define some more concepts which will be an important part of our proof. We will denote by \(\mathcal{D}_{\omega}(\mathds{R}^{n})\) the subspace of \(\mathcal{D}(\mathds{R}^{n})\) composed of functions whose derivatives up to order \(\omega\) vanish at the origin. Thus, for some natural number \(\omega>0\), \[\mathcal{D}_{\omega}(\mathds{R}^{n})=\{\varphi\in\mathcal{D}(\mathds{R}^{n}) \;;D^{\alpha}\varphi(0)=0\;,|\alpha|\leq\omega\}.\] Now, for every function \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), its formal Taylor expansion about the origin is \[\varphi(x)\sim\sum_{\nu}\frac{x^{\nu}}{\nu!}D^{\nu}\varphi(0), \tag{41}\] from which, separating the terms whose multi-index has norm \(|\nu|>\omega\), we have \[\varphi(x)\sim\sum_{|\nu|\leq\omega}\frac{x^{\nu}}{\nu!}D^{\nu}\varphi(0)+\sum_{ |\nu|>\omega}\frac{x^{\nu}}{\nu!}D^{\nu}\varphi(0). \tag{42}\] (The expansion is only formal: a test function is not analytic in general, so the series need not converge to \(\varphi\); only the coefficients \(D^{\nu}\varphi(0)\) matter here.) Therefore, the inclusion \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\) is equivalent to saying that the first summation in (42) is zero.
The rigorous version of this remark, in which the remainder of the expansion is controlled explicitly, is obtained from the Lagrange remainder formula and reads as follows. **Lemma 3.3**.: _Any function \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\) can be written as_ \[\varphi(x)=\sum_{|\alpha|=\omega+1}x^{\alpha}g_{\alpha}(x),\] _where \(g_{\alpha}\in\mathcal{D}(\mathds{R}^{n})\) for all multi-indices in this summation._ Proof.: To prove this, apply, for every \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\), the Taylor Theorem with Remainder in the multi-variable case,9 so that we have Footnote 9: For the reader interested in the proof of this version of the Taylor Theorem, see, for example, [23]. \[\varphi(x)=\sum_{|\beta|\leq\omega}\frac{D^{\beta}\varphi(0)}{\beta!}x^{ \beta}+\sum_{|\alpha|=\omega+1}\frac{x^{\alpha}}{\alpha!}f_{\alpha}(x),\] where \[f_{\alpha}(x)=(\omega+1)\int_{0}^{1}(1-t)^{\omega}D^{\alpha}\varphi(tx)dt.\] Since \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\), the first summation term vanishes for any \(x\in\mathds{R}^{n}\). Furthermore, being \(\varphi\) an infinitely differentiable function with compact support, each \(f_{\alpha}(x)\) must also be infinitely differentiable. They may, however, not be of compact support. Nonetheless, we can take a function \(\psi(x)\in\mathcal{D}(\mathds{R}^{n})\) such that \(\psi(x)=1\) for \(x\in\text{supp }\varphi\). We shall have \(\psi(x)\varphi(x)=\varphi(x)\) and the products \(g_{\alpha}=\psi f_{\alpha}/\alpha!\in\mathcal{D}(\mathds{R}^{n})\) will satisfy \[\varphi(x)=\sum_{|\alpha|=\omega+1}x^{\alpha}g_{\alpha}(x),\] as desired. Not only does the subspace \(\mathcal{D}_{\omega}(\mathds{R}^{n})\) play a crucial role for us, but so does the projection of \(\mathcal{D}(\mathds{R}^{n})\) on \(\mathcal{D}_{\omega}(\mathds{R}^{n})\).
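The construction in the proof of Lemma 3.3 can be verified symbolically in one variable, where it says \(\varphi(x)=x^{\omega+1}f_{\omega+1}(x)/(\omega+1)!\). A sketch with sympy, taking \(\omega=1\) and \(\sin x-x\) as a smooth stand-in for an element of \(\mathcal{D}_{1}(\mathds{R})\) (compact support is ignored here, an assumption made only to keep the computation explicit; the cutoff \(\psi\) of the proof plays no role in the algebraic identity):

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
t = sp.Symbol('t', real=True)
omega = 1
phi = sp.sin(x) - x    # phi(0) = phi'(0) = 0, i.e. "phi in D_1" up to support

# Lagrange remainder: f(x) = (omega+1) * int_0^1 (1-t)^omega * phi^(omega+1)(t x) dt
f = (omega + 1) * sp.integrate(
    (1 - t)**omega * sp.diff(phi, x, omega + 1).subs(x, t * x), (t, 0, 1))

# reconstruction: phi(x) = x^(omega+1) * f(x) / (omega+1)!
reconstruction = x**(omega + 1) / sp.factorial(omega + 1) * f
assert sp.simplify(reconstruction - phi) == 0
```

Here \(f\) comes out as \(2(\sin x-x)/x^{2}\), smooth at the origin, exactly as the lemma requires.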
Let \(\mathcal{W}=\mathcal{D}_{\omega}(\mathds{R}^{n})^{\perp}\) be the orthogonal complement of \(\mathcal{D}_{\omega}(\mathds{R}^{n})\), so that \(\mathcal{D}(\mathds{R}^{n})=\mathcal{D}_{\omega}(\mathds{R}^{n})\oplus \mathcal{W}\).10 The direct sum of both subspaces guarantees that functionals in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) may be written as Footnote 10: \(\mathcal{D}_{\omega}(\mathds{R}^{n})\) is evidently closed due to the continuity of any \(D^{\alpha}\varphi\). \[T=T_{\omega}\oplus l\;,\quad T_{\omega}\in\mathcal{D}^{\prime}_{\omega}( \mathds{R}^{n})\;,\quad l\in\mathcal{W}^{\prime}.\] We ask, then, what the functionals in \(\mathcal{W}^{\prime}\) are. They are characterized as elements \(l\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) such that \(\langle l,\varphi\rangle=0\) for any \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\). Hence, we have, in particular, \[\langle l,\varphi\rangle=0\text{ for every }\varphi\in\mathcal{D}(\mathds{R}^{n} \backslash\{0\})\subset\mathcal{D}_{\omega}(\mathds{R}^{n})\] and, therefore, \(\text{supp }l\subset\{0\}\). According to Lemma 3.1, it also follows that \[l=\sum_{|\alpha|\leq m}c_{\alpha}D^{\alpha}\delta\;,\quad m\in\mathds{N},\;c_ {\alpha}\in\mathds{C}.\] We claim that \(m\leq\omega\). In fact, if we suppose that \(m>\omega\), with \(c_{\beta}\neq 0\) for some \(|\beta|=m\), then we may take \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\) such that \(D^{\beta}\varphi(0)\neq 0\), with \(|\beta|=m\), which implies \(\langle l,\varphi\rangle\neq 0\). This contradicts \(l\in\mathcal{W}^{\prime}\). Reciprocally, if \[l=\sum_{|\alpha|\leq\omega}c_{\alpha}D^{\alpha}\delta,\] then \(\langle l,\varphi\rangle\) will only involve derivatives of \(\varphi\) at \(x=0\) of order less than or equal to \(\omega\). Thus, if \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\), \(\langle l,\varphi\rangle=0\), that is, \(l\in\mathcal{W}^{\prime}\).
Thereby we have just proved the following **Lemma 3.4**.: _For an arbitrary natural number \(\omega\), we have_ \[\mathcal{W}^{\prime}=\{l\in\mathcal{D}^{\prime}(\mathds{R}^{n})\;;\;l=\sum_{ |\alpha|\leq\omega}c_{\alpha}D^{\alpha}\delta\;,\quad c_{\alpha}\in\mathds{C}\}.\] Since \(\mathcal{W}^{\prime}\) is also a linear space, it is a consequence of the last lemma that \(\mathcal{B}=\{D^{\alpha}\delta\;;|\alpha|\leq\omega\}\) is a basis of it. Actually, we won't use this set of distributions to represent the orthogonal projection of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\) on \(\mathcal{D}^{\prime}_{\omega}(\mathds{R}^{n})\), but another one that depends upon a particular test function \(w\in\mathcal{D}(\mathds{R}^{n})\). We will show that, for any function \(w\in\mathcal{D}(\mathds{R}^{n})\) such that \(w(0)\neq 0\), the set11\(\mathcal{C}=\{D^{\alpha}\delta(w^{-1}\cdot)\;;|\alpha|\leq\omega\}\) is a basis of \(\mathcal{W}^{\prime}\) as well. The number of elements of \(\mathcal{C}\) equals the number of elements in \(\mathcal{B}\); thus, it remains to show that the elements of the latter may be generated by \(\mathcal{C}\). We use the Leibniz rule pointed out in Observation 2.2, which implies that, for any multi-index \(\alpha\) and any \(\psi\in\mathcal{D}(\mathds{R}^{n})\) that does not vanish at the point \(x\in\mathds{R}^{n}\), Footnote 11: This means the action of an element of \(\mathcal{C}\) over a \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) is \(\langle D^{\alpha}\delta(w^{-1}\cdot),\varphi\rangle=\langle D^{\alpha}\delta,w^{-1}\varphi\rangle\).
\[D^{\alpha}\varphi(x)=\frac{1}{\psi(x)}D^{\alpha}(\psi\varphi)(x)-\frac{1}{ \psi(x)}\sum_{\begin{subarray}{c}0\leq\beta\leq\alpha\\ \beta\neq\alpha\end{subarray}}\frac{\alpha!}{\beta!(\alpha-\beta)!}(D^{\beta }\varphi)(x)(D^{\alpha-\beta}\psi)(x).\] Hence, if \(\langle D^{\gamma}\delta,\varphi\rangle=(-1)^{|\gamma|}D^{\gamma}\varphi(0)\) may be written as a linear combination of terms such as \(\left\langle D^{\beta}\delta(w^{-1}\cdot),\varphi\right\rangle=(-1)^{|\beta| }D^{\beta}(w^{-1}\varphi)(0)\) for arbitrary12\(\gamma<\alpha\), then the same holds for \(D^{\alpha}\varphi(0)\). We conclude our proof by noting that the result is valid for \(\alpha=(0,\cdots,0)\), Footnote 12: This inequality means that \(\gamma\leq\alpha\). However \(\gamma\neq\alpha\), that is, at least one of its coordinates \(\gamma_{i}\) is strictly smaller than \(\alpha_{i}\). \[\langle D^{\alpha}\delta,\varphi\rangle=\varphi(0)=w(0)\frac{\varphi(0)}{w(0) }=w(0)\big{\langle}D^{\alpha}\delta(w^{-1}\cdot),\varphi\big{\rangle}.\] Since every element of \(\mathcal{B}\) may be written as a linear combination of elements of \(\mathcal{C}\), the latter generates \(\mathcal{W}^{\prime}\). In addition, the set \(\mathcal{E}=\{\frac{(-1)^{|\alpha|}}{\alpha!}w(x)x^{\alpha}\;;|\alpha|\leq\omega\}\) generates \(\mathcal{W}\) and is a basis dual to \(\mathcal{C}\), since13 Footnote 13: We have used the generalization of the Kronecker delta for multiple variables, which is \(1\) whenever \(\alpha=\beta\), and zero otherwise. 
\[\left\langle D^{\alpha}\delta(w^{-1}\cdot),\frac{(-1)^{|\beta|}}{\beta!}w(x)x^ {\beta}\right\rangle=\frac{(-1)^{|\alpha|+|\beta|}}{\beta!}(D^{\alpha}x^{\beta })(0)=\delta_{\alpha,\beta}.\] Then, we can write the projection operator of \(\mathcal{D}(\mathds{R}^{n})\) on \(\mathcal{D}_{\omega}(\mathds{R}^{n})\) in the form \[W_{(\omega;w)}\colon\mathcal{D}(\mathds{R}^{n}) \longrightarrow\mathcal{D}_{\omega}(\mathds{R}^{n})\] \[\varphi(x) \longmapsto\varphi(x)-w(x)\sum_{|\alpha|\leq\omega}\frac{x^{ \alpha}}{\alpha!}\left(D^{\alpha}\frac{\varphi}{w}\right)(0), \tag{43}\] for any \(w\in\mathcal{D}(\mathds{R}^{n})\) such that \(w(0)\neq 0\). In fact, if \(\varphi\in\mathcal{D}(\mathds{R}^{n})\) is arbitrary, then fixing \(\gamma\) a multi-index such that \(|\gamma|\leq\omega\), we have \[(D^{\gamma}W_{(\omega;w)}\varphi)(0)=D^{\gamma}\varphi(0)-\sum_{|\alpha|\leq \omega}\left(D^{\gamma}w\frac{x^{\alpha}}{\alpha!}\right)(0)\left(D^{\alpha} \frac{\varphi}{w}\right)(0)\] and we know, due to Observation 2.2, that \[\left(D^{\gamma}w(x)\frac{x^{\alpha}}{\alpha!}\right)(0) =\sum_{\beta\leq\gamma}\frac{\gamma!}{\beta!(\gamma-\beta)!}(D^{ \gamma-\beta}w)(0)\left(D^{\beta}\frac{x^{\alpha}}{\alpha!}\right)(0)\] \[=\sum_{\beta\leq\gamma}\frac{\gamma!}{\beta!(\gamma-\beta)!}(D^{ \gamma-\beta}w)(0)\delta_{\beta,\alpha} \tag{44}\] \[=\frac{\gamma!}{\alpha!(\gamma-\alpha)!}(D^{\gamma-\alpha}w)(0).\] That way, we obtain \[(D^{\gamma}W_{(\omega;w)}\varphi)(0) =D^{\gamma}\varphi(0)-\sum_{|\alpha|\leq\omega}\frac{\gamma!}{ \alpha!(\gamma-\alpha)!}(D^{\gamma-\alpha}w)(0)\left(D^{\alpha}\frac{\varphi} {w}\right)(0) \tag{45}\] \[=D^{\gamma}\varphi(0)-D^{\gamma}(w\frac{\varphi}{w})(0)\] \[=D^{\gamma}\varphi(0)-D^{\gamma}\varphi(0)=0.\] Once again we have utilized, in the second equality, what we obtained in the Observation 2.2. Thus, we have just proved that \((W_{(\omega;w)}\varphi)(x)\) is indeed a test function whose derivatives up to order \(\omega\) vanish at the origin. 
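As a concrete sanity check of the operator \(W_{(\omega;w)}\) defined in (43), note that \(\frac{x^{\alpha}}{\alpha!}\left(D^{\alpha}\frac{\varphi}{w}\right)(0)\) is just the \(\alpha\)-th Taylor term of \(\varphi/w\) at the origin. The sketch below — our own numerical illustration, not part of the construction itself — works in one dimension with truncated Taylor coefficients and exact rational arithmetic, and verifies that \(W_{(\omega;w)}\varphi\) has vanishing derivatives at \(0\) up to order \(\omega\), together with the idempotence \(W^{2}=W\). The particular coefficient lists for \(\varphi\) and \(w\) are arbitrary choices.

```python
from fractions import Fraction as F

def mul(a, b):
    # Cauchy product of truncated Taylor series (same length)
    n = len(a)
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

def div(a, b):
    # Taylor coefficients of a/b at 0, assuming b[0] != 0
    q = []
    for m in range(len(a)):
        s = a[m] - sum(b[k] * q[m - k] for k in range(1, m + 1))
        q.append(s / b[0])
    return q

def W(phi, w, omega):
    # (43): subtract w(x) times the Taylor polynomial of phi/w up to order omega
    q = div(phi, w)
    taylor = [q[k] if k <= omega else F(0) for k in range(len(phi))]
    wt = mul(w, taylor)
    return [phi[k] - wt[k] for k in range(len(phi))]

omega = 2
phi = [F(1), F(2), F(3), F(4), F(5), F(6)]   # Taylor coefficients of a sample function
w   = [F(2), F(1), F(0), F(1), F(0), F(0)]   # w(0) = 2 != 0

proj = W(phi, w, omega)
assert all(proj[k] == 0 for k in range(omega + 1))  # derivatives up to omega vanish at 0
assert W(proj, w, omega) == proj                    # W is the identity on its image
```

The second assertion is the finite-dimensional shadow of the idempotence \(W^{2}=W\): once the first \(\omega+1\) Taylor coefficients vanish, the subtracted Taylor polynomial of \(\varphi/w\) is identically zero.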
We may check that \(W\) is indeed a projection by showing that it is idempotent, \(W^{2}=W\). For that, we observe that on \(\mathcal{D}_{\omega}(\mathds{R}^{n})\), \(W\) is the identity operator. In fact, for a function \(\varphi\in\mathcal{D}_{\omega}(\mathds{R}^{n})\), we have \(\left(D^{\alpha}\frac{\varphi}{w}\right)(0)=0\) for any \(|\alpha|\leq\omega\), so that \[(W_{(\omega;w)}\varphi)(x)=\varphi(x)-w(x)\sum_{|\alpha|\leq\omega}\frac{x^{ \alpha}}{\alpha!}\left(D^{\alpha}\frac{\varphi}{w}\right)(0)=\varphi(x).\] We chose \(w^{-1}\) instead of the function \(w\) itself because the former allows us to write an interesting and useful property, to be used ahead. Namely, the operator \(W\) satisfies \[W_{(\omega;w)}(w\varphi)=wW_{(\omega;1)}(\varphi). \tag{46}\] We point out that there is no problem at all in using \(W_{(\omega;\psi)}\), with \(\psi\) being the constant function identically equal to one. Although \(W_{(\omega;1)}\varphi\) is not a test function, since its support is not compact, it is infinitely differentiable and, hence, its product with \(w\) will be, in fact, in \(\mathcal{D}(\mathds{R}^{n})\). In accordance with (46), we will have, if \(|\alpha|\leq\omega\), \[W_{(\omega;w)}(wx^{\alpha})=wW_{(\omega;1)}x^{\alpha},\] that is, \[W_{(\omega;w)}(wx^{\alpha})(x)=w(x)\left[x^{\alpha}-\sum_{|\beta|\leq\omega} \frac{x^{\beta}}{\beta!}\left(D^{\beta}x^{\alpha}\right)(0)\right].\] By Example 2.1, the sum reduces to \(x^{\alpha}\), so that \[W_{(\omega;w)}(wx^{\alpha})\equiv 0. \tag{47}\] Our next result is the following **Theorem 3.2**.: _Let \(T_{0}\in\mathcal{D}^{\prime}(\mathds{R}^{n}\backslash\{0\})\) be such that \(\sigma(T_{0})=s\geq n\) and set \(\omega=\omega(T_{0})=s-n\). Moreover, let \(w\in\mathcal{D}(\mathds{R}^{n})\), with \(w(0)\neq 0\), and constants \(C^{\alpha}\in\mathds{C}\) be given for every multi-index \(\alpha\) with \(|\alpha|\leq\omega\). 
There exists one, and only one, distribution \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) such that \(\sigma(T)=s\) and satisfying_ 1. \(\langle T,\varphi\rangle=\langle T_{0},\varphi\rangle\)_,_ \(\quad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\)_._ 2. \(\langle T,wx^{\alpha}\rangle=C^{\alpha}\)_._ _Specifically, \(T\) is given by_ \[\langle T,\varphi\rangle=\left\langle T_{\omega},W_{(\omega;w)}\varphi \right\rangle+\sum_{|\alpha|\leq\omega}\frac{C^{\alpha}}{\alpha!}\left(D^{ \alpha}\frac{\varphi}{w}\right)(0), \tag{48}\] _where \(T_{\omega}\) is the only extension guaranteed by Theorem 3.1 and \(W_{(\omega;w)}\) is the operator \(W\), defined in (43)._ Proof.: We shall begin, once again, with the uniqueness, assuming the existence for the moment. If \(T_{1}\) and \(T_{2}\) are both extensions of \(T_{0}\) in \(\mathcal{D}(\mathds{R}^{n})\), then, for \(\varphi\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\), \[\langle T_{1}-T_{2},\varphi\rangle=\langle T_{1},\varphi\rangle-\langle T_{2},\varphi\rangle=0,\] which implies \(\text{supp }(T_{1}-T_{2})\subset\{0\}\). Again, according to Lemma 3.1, \(T_{1}-T_{2}=\sum_{|\nu|\leq m}c_{\nu}D^{\nu}\delta\), so that \(\sigma(T_{1}-T_{2})=n+m\) and we must have, by hypothesis, \(m\leq\omega\). Thus, we have, for any \(\alpha\) such that \(|\alpha|\leq\omega\), \[\langle T_{1}-T_{2},wx^{\alpha}\rangle=\langle T_{1},wx^{\alpha}\rangle- \langle T_{2},wx^{\alpha}\rangle=C^{\alpha}-C^{\alpha}=0.\] On the other hand, if we first take \(|\alpha|=m\), then \[\langle T_{1}-T_{2},wx^{\alpha}\rangle=\sum_{|\nu|\leq m}c_{\nu}(D^{\nu}wx^{ \alpha})(0)\] and, with Example 2.1, we have \[\begin{split}\langle T_{1}-T_{2},wx^{\alpha}\rangle&=\sum_ {|\nu|\leq m}c_{\nu}\left(\sum_{\beta\leq\nu}\frac{\nu!}{\beta!(\nu-\beta)!}(D ^{\beta}x^{\alpha})(0)(D^{\nu-\beta}w)(0)\right)\\ &=\sum_{|\nu|=m}c_{\nu}\left(\sum_{\beta\leq\nu}\frac{\nu!}{\beta!(\nu-\beta)!}(D^{\beta}x^{\alpha})(0)(D^{\nu-\beta}w)(0)\right),\end{split} \tag{49}\] where we have used that all derivatives \((D^{\beta}x^{\alpha})(0)\) vanish whenever \(|\nu|<|\alpha|=m\). Furthermore, the only non-vanishing term appears when \(\beta=\nu=\alpha\), so that we are left with only \[\langle T_{1}-T_{2},wx^{\alpha}\rangle=c_{\alpha}(D^{\alpha}x^{\alpha})(0)w(0 )=\alpha!\;c_{\alpha}\;w(0)=0.\] We have obtained that \(c_{\nu}=0\) whenever \(|\nu|=m\). Analogously, if we consider \(|\alpha|=m-i\), \(1\leq i\leq m\), then we will find the same result, canceling all the constants \(c_{\alpha}\), finally getting \(T_{1}=T_{2}\). For the existence, we first restrict \(T_{0}\) to the subspace \(\mathcal{D}_{\omega}(\mathds{R}^{n}\backslash\{0\})\) of \(\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\) and, denoting this restriction by \(\tilde{T}_{0}\), we have, by Lemma 3.3, \[\big{\langle}\tilde{T}_{0},\varphi\big{\rangle}=\sum_{|\alpha|=\omega+1} \langle x^{\alpha}T_{0},g_{\alpha}\rangle.\] Using now the properties established in Lemma 3.2, we obtain \(\sigma(\tilde{T}_{0})\leq\sigma(T_{0})-\omega-1<n\). Therefore, the restriction of \(T_{0}\) to \(\mathcal{D}_{\omega}(\mathds{R}^{n}\backslash\{0\})\) has, due to Theorem 3.1, an extension14 in \(\mathcal{D}^{\prime}_{\omega}(\mathds{R}^{n})\), which we denote by \(T_{\omega}\). However, we seek an extension over the whole space \(\mathcal{D}^{\prime}(\mathds{R}^{n})\), so that we still need to extend \(T_{\omega}\) to general elements of \(\mathcal{D}(\mathds{R}^{n})\). If we obtain such a \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\), it is evident that it will be an extension of \(T_{0}\), since \(\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\subset\mathcal{D}_{\omega}( \mathds{R}^{n})\). 
Footnote 14: The more careful reader may wonder whether this extension will in fact belong to \(\mathcal{D}^{\prime}_{\omega}(\mathds{R}^{n})\), since Theorem 3.1 affirms only that the extension will be an element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). For that, we note that our construction made no reference to the behavior of \(\varphi\in\mathcal{D}(\mathds{R}^{n})\). From that, we see that the restrictions on the action of \(\tilde{T}_{0}\in\mathcal{D}^{\prime}_{\omega}(\mathds{R}^{n}\backslash\{0\})\) will be inherited by \(T_{\omega}\in\mathcal{D}^{\prime}_{\omega}(\mathds{R}^{n})\). Now, since we are dealing with a closed orthogonal subspace of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\), extensions of \(T_{\omega}\) will simply be characterized by \[T=T_{\omega}\oplus l\;,\quad l\in\mathcal{W}^{\prime}.\] It is through the operator \(W\), which projects functions from \(\mathcal{D}(\mathds{R}^{n})\) onto the subspace \(\mathcal{D}_{\omega}(\mathds{R}^{n})\), that we are capable of applying \(T_{\omega}\) to any \(\varphi\in\mathcal{D}(\mathds{R}^{n})\), interposing the action of \(W_{(\omega;w)}\). Since the projection operator is unique (once we have chosen \(w\)), each extension \[T=T_{\omega}\circ W_{(\omega;w)}\oplus l\] will be unique up to the choice of \(l\). Since \(\mathcal{C}=\{D^{\alpha}\delta(w^{-1}\cdot)\;;|\alpha|\leq\omega\}\) is a basis for \(\mathcal{W}^{\prime}\), the constants \(C^{\alpha}\) determine such a functional \(l\), which implies that the distribution \(T\) given by \[\langle T,\varphi\rangle=\left\langle T_{\omega},W_{(\omega;w)}\varphi\right\rangle+\sum_{|\alpha|\leq\omega}\frac{C^{\alpha}}{\alpha!}\left(D^{\alpha}\frac{\varphi}{w}\right)(0)\] is the only extension of \(T_{0}\) satisfying \(\left\langle T,wx^{\alpha}\right\rangle=C^{\alpha}\). **Remark 3.3**.: We can simplify our calculations of the extension of \(T_{0}\) even further if we restrict the class of functions permitted for \(w\). 
More specifically, if we take \(w\) such that \((D^{\alpha}w)(0)=\delta_{\alpha,0}\), which holds, in particular, when \(w\) equals \(1\) in a neighborhood of the origin, we have \[\begin{split}(D^{\alpha}\frac{\varphi}{w})(0)&= \sum_{0\leq\beta\leq\alpha}\frac{\alpha!}{\beta!(\alpha-\beta)!}(D^{\beta }\varphi)(0)(D^{\alpha-\beta}w^{-1})(0)\\ &=D^{\alpha}\varphi(0),\end{split} \tag{50}\] since any derivative of \(w^{-1}\) of nonzero order carries some derivative of \(w\) itself and therefore vanishes at the origin. From that, it follows that \(W\) reduces to \[(W_{(\omega;w)}\varphi)(x)=\varphi(x)-w(x)\sum_{|\alpha|\leq\omega}\frac{x^{ \alpha}}{\alpha!}D^{\alpha}\varphi(0)\] and the extension \(T\) given by Theorem 3.2 can, therefore, be written as \[\left\langle T,\varphi\right\rangle=\left\langle T_{\omega},W_{(\omega;w)} \varphi\right\rangle+\sum_{|\alpha|\leq\omega}\frac{C^{\alpha}}{\alpha!}\left( D^{\alpha}\varphi\right)(0). \tag{51}\] Such functions, satisfying \((D^{\alpha}w)(0)=\delta_{\alpha,0}\), are called Epstein-Glaser functions (see, for example, [14]). ### Dependence of the extension on the test function \(w(x)\) We have seen that, for the case when \(\sigma(T_{0})\geq n\), it does not seem possible to get rid of the dependence of the extension \(T\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) on the test function \(w\in\mathcal{D}(\mathds{R}^{n})\) chosen to construct the projection of \(\mathcal{D}(\mathds{R}^{n})\) onto \(\mathcal{D}_{\omega}(\mathds{R}^{n})\). We can, nonetheless, study the behavior of this dependence, mainly through the term \(\left\langle T_{\omega},W_{(\omega;w)}\varphi\right\rangle\), which we call, following [21], the _integral kernel of the extension_. Here, we will stick with the supposition that \(w\) is an Epstein-Glaser function, in the sense defined above. 
Thus, choosing two Epstein-Glaser test functions \(w_{1},w_{2}\in\mathcal{D}(\mathds{R}^{n})\), we have \[(W_{(\omega;w_{1})}\varphi)(x) =\varphi(x)-w_{1}(x)\sum_{|\alpha|\leq\omega}\frac{x^{\alpha}}{ \alpha!}D^{\alpha}\varphi(0) \tag{52}\] \[=\varphi(x)-w_{2}(x)\sum_{|\alpha|\leq\omega}\frac{x^{\alpha}}{ \alpha!}D^{\alpha}\varphi(0)+(w_{2}(x)-w_{1}(x))\sum_{|\alpha|\leq\omega}\frac {x^{\alpha}}{\alpha!}D^{\alpha}\varphi(0)\] \[=(W_{(\omega;w_{2})}\varphi)(x)+(w_{2}(x)-w_{1}(x))\sum_{|\alpha| \leq\omega}\frac{x^{\alpha}}{\alpha!}D^{\alpha}\varphi(0)\] and, applying \(T_{\omega}\) to this expression, we obtain (bearing in mind that \(T_{\omega}\) is unique, by Theorem 3.1) \[\big{\langle}T_{\omega},W_{(\omega;w_{1})}\varphi\big{\rangle}=\big{\langle} T_{\omega},W_{(\omega;w_{2})}\varphi\big{\rangle}+\sum_{|\alpha|\leq\omega} \left\langle T_{\omega},(w_{2}(x)-w_{1}(x))\frac{x^{\alpha}}{\alpha!}\right\rangle D^{\alpha}\varphi(0). \tag{53}\] We therefore conclude that the applications of \(T_{\omega}\) to different projections differ only by a linear combination of terms of the form \(\langle D^{\alpha}\delta,\varphi\rangle\), that is, by the application to the test function \(\varphi\) of functionals belonging to \(\mathcal{W}^{\prime}\). Our goal ahead will be, however, to get rid of the restriction that \(w\) be a test function. As we have already mentioned, this seems to be a crucial condition for defining \(W\) as a projection operator, since it is responsible for the fact that the term \(w(x)\sum_{|\alpha|\leq\omega}\frac{x^{\alpha}}{\alpha!}D^{\alpha}\varphi(0)\) has compact support. Nonetheless, very often (as will be the case ahead) a distribution which is not well behaved at the origin will behave nicely at infinity. To be more precise, we mean that such distributions will be well defined when applied to functions \(\varphi\in C^{\infty}(\mathds{R}^{n})\) whose support may not be compact. 
In that sense, if we take a sequence \((w_{k})\subset\mathcal{D}(\mathds{R}^{n})\) whose pointwise limit \(w\) is a function15 in \(C^{\infty}(\mathds{R}^{n})\) and \(T_{0}\) is such that Footnote 15: Some works, such as [21], go even further and only ask that \(w\in\mathcal{D}^{\prime}(\mathds{R}^{n})\) be a distribution. We will not need this generality, so we have preferred to omit this possibility. \[\lim_{k\to\infty}\langle T_{\omega},W_{(\omega;w_{k})}\varphi\rangle\in \mathds{C},\quad\forall\quad\varphi\in\mathcal{D}(\mathds{R}^{n}),\] then there is no reason not to consider \(w\) in our renormalization scheme. We shall see that this is an important ingredient in our application of this method of extension of distributions. ## 4 Application to the electron self-energy In this section, we finally attack the self-energy problem of the electron, seen as a point particle. We now possess sufficient machinery to treat it as a pathology to be handled by extending distributions defined over \(\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\). To clarify our aims, we will translate the issues of electrostatics into the language of distributions. The first and most standard example is the charge distribution of an electron, seen as a charged point particle, which is represented by the Dirac delta \(\delta(\cdot)\), centered where we suppose the whole electric charge of the particle is concentrated. Actually, this particular case is not the only one where we consider the charge distribution \(\rho\) as a generalized function. After all, it would be an incredible coincidence in nomenclature if the charge distributions appearing in the electrodynamics realm could not be represented by distributions \(\rho\in\mathcal{D}^{\prime}(\mathds{R}^{3})\). This is one of the main contributions of the theory of generalized functions to electrodynamics. 
It not only incorporates more general charge distributions \(\rho\) (which could not be defined just as real functions in \(\mathds{R}^{3}\)), but also eases their manipulation, employing the properties described in Section 2. **Example 4.1**.: To illustrate our last paragraph, let us analyze the representation of an electric dipole as a distribution in \(\mathcal{D}^{\prime}(\mathds{R}^{3})\). The pure dipole, considered an idealization just as point charges are, is constructed as follows: start by setting two charges, \(q\) and \(-q\), separated by a fixed distance \(\varepsilon\). Suppose that they lie on the \(x\) axis, with \(-q\) at the origin and \(q\) at \(x=\varepsilon\). That way, the distribution \(\rho\in\mathcal{D}^{\prime}(\mathds{R})\) will be \[\rho=-q\delta_{0}+q\delta_{\varepsilon}=q(\delta_{\varepsilon}-\delta_{0}).\] If we set \(q=1/\varepsilon\), then in the limit \(\varepsilon\to 0^{+}\) one might expect to obtain the zero distribution, since the total charge vanishes. This is not correct, nonetheless: a null charge distribution would lead to a zero electric field, which is not the case for a dipole (see [4] for the expression of \(\vec{E}\) in this case). This simple argument indicates that the distribution formalism is indeed necessary for an accurate description of even the basics of electrostatics. Now, we apply \(\rho_{\varepsilon}=\varepsilon^{-1}(\delta_{\varepsilon}-\delta_{0})\) to any \(\varphi\in\mathcal{D}(\mathds{R})\). We have \[\langle\rho_{\varepsilon},\varphi\rangle=\frac{1}{\varepsilon}\langle\delta_{ \varepsilon}-\delta_{0},\varphi\rangle=\frac{\varphi(\varepsilon)-\varphi(0) }{\varepsilon}\] and, therefore, we obtain, in the limit \(\varepsilon\to 0^{+}\), \[\langle\rho_{\varepsilon},\varphi\rangle\to\varphi^{\prime}(0)=-\langle \delta^{\prime},\varphi\rangle,\] that is, \[\rho_{dip}=-\delta^{\prime}.\] The generalization to the three-dimensional case is straightforward. 
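Before passing to three dimensions, the one-dimensional limit just computed is easy to probe numerically: the difference quotient \(\langle\rho_{\varepsilon},\varphi\rangle=(\varphi(\varepsilon)-\varphi(0))/\varepsilon\) approaches \(\varphi^{\prime}(0)=-\langle\delta^{\prime},\varphi\rangle\) as \(\varepsilon\to 0^{+}\). A minimal sketch, with \(\varphi(x)=xe^{-x^{2}}\) standing in for a test function (it is smooth and rapidly decaying, though not compactly supported — an illustrative simplification):

```python
import math

def phi(x):
    # smooth, rapidly decaying stand-in for a test function; phi'(0) = 1
    return x * math.exp(-x * x)

def rho_eps(eps):
    # action of (delta_eps - delta_0)/eps on phi
    return (phi(eps) - phi(0.0)) / eps

dphi0 = 1.0  # phi'(0) for this choice of phi
errors = [abs(rho_eps(10.0 ** (-k)) - dphi0) for k in (1, 3, 5)]
assert errors[0] > errors[1] > errors[2]  # convergence as eps -> 0+
assert errors[2] < 1e-4                   # close to phi'(0) = -<delta', phi>
```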
In this case, the dipole moment is \(\mathbf{p}\coloneqq q\vec{\varepsilon}\), where \(\vec{\varepsilon}\) is the displacement vector that connects the negative to the positive charge. Once again we take the limit \(|\vec{\varepsilon}|\to 0\). With \(q\) defined as above, we guarantee that the dipole moment is kept constant, even as the charges get arbitrarily close. With this premise, the charge distribution will be given by \[\langle\rho_{\varepsilon},\varphi\rangle\to\frac{\partial\varphi}{\partial\mathbf{p }}(0)\;,\qquad\forall\;\varphi\in\mathcal{D}(\mathds{R}^{3}),\] which we denote by \[\rho_{dip}=-\mathbf{p}\cdot\nabla\delta\in\mathcal{D}^{\prime}(\mathds{R}^{3}).\] We return to the main problem we intend to attack, namely, the divergence of the electron self-energy. In this context, the fact that the charge is fully concentrated at the origin is seen through the application of \(q\delta\) to different test functions \(\varphi\in\mathcal{D}(\mathds{R}^{3})\). For any test function whose support does not contain the origin, we have \[\int_{\text{\it supp }\varphi}\rho(x)\varphi(x)d^{3}x=0. \tag{54}\] At the same time, the distribution possesses a finite charge, since integrating \(\rho\) over \(\mathds{R}^{3}\) is equivalent to applying \(q\delta\) to a test function \(\varphi\in\mathcal{D}(\mathds{R}^{3})\) whose support contains the origin and such that \(\varphi(0)=1\). \[\int_{\mathds{R}^{3}}\rho(x)\varphi(x)\ d^{3}x=\int_{\text{\it supp }\varphi}\rho(x)\varphi(x)\ d^{3}x=\langle q\delta,\varphi\rangle=q. \tag{55}\] This charge distribution produces both a potential \(V\) and an electric field \(\mathbf{E}\), which are also new distributions. 
We haven't considered vector fields as distributions so far, such as \(\mathbf{E}\), but this generalization is quite natural, and \(\mathbf{E}\) acts on an element of \(\mathcal{D}(\mathds{R}^{3})\) according to \[\langle\mathbf{E},\varphi\rangle=(\langle E_{x},\varphi\rangle,\langle E_{y}, \varphi\rangle,\langle E_{z},\varphi\rangle).\] The application of vector fields as distributions was mentioned here just for completeness. It will no longer be necessary henceforth. We can see that, in fact, \(V\) represents an element of \(\mathcal{D}^{\prime}(\mathds{R}^{3})\). The explicit formula for the potential is given by \(V(\mathbf{r})=\frac{e}{r}\), where \(e\) is the electron charge and we are using units in which \(4\pi\epsilon_{0}=1\), with no implications for the final results whatsoever. As usual, \(r\) represents the radial coordinate of a spherical coordinate system centered on the charge. \(V(\cdot)\) is a smooth function for any \(\mathbf{r}\neq 0\). Hence, we just have to be concerned with the convergence of \(\langle V,\varphi\rangle\), for an arbitrary \(\varphi\in\mathcal{D}(\mathds{R}^{3})\). In effect, if \(R>0\) is such that \(K=B_{R}(0)\supset\text{\it supp }\varphi\) and \(M=\max_{x\in\mathds{R}^{3}}|\varphi(x)|\), then \[\begin{split}\left|\int_{\mathds{R}^{3}}V(x)\varphi(x)d^{3}x \right|&\leq M\int_{K}V(x)d^{3}x\\ &=M\left(\int_{0}^{2\pi}d\phi\int_{0}^{\pi}\sin\theta\,d\theta\right)\int_{0}^{R}\frac{e}{r}r^{2}dr\\ &=(2\pi Me)R^{2}<\infty.\end{split} \tag{56}\] An analogous consideration may be made for the electric field, since \(\mathbf{E}\sim\frac{1}{r^{2}}\), in view of which the radial integral will converge as well. We can thus turn our attention to the stored self-energy of a charged system. As we have already seen, for the particular case of an electron there is a divergence at the origin. We are considering here only the static case, so we don't have to worry about magnetic fields. 
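Returning for a moment to the estimate (56): after the angular integration contributes \(4\pi\), the radial integrand \((e/r)r^{2}=er\) is bounded near \(r=0\), which is why \(\langle V,\varphi\rangle\) converges despite the singular potential. A small numerical sketch of this (with \(e=1\), \(\varphi\equiv 1\) on the ball of radius \(R\), so \(M=1\), and a simple midpoint rule — purely illustrative):

```python
import math

e, R = 1.0, 2.0

def radial_integral(n=100000):
    # midpoint rule for \int_0^R (e/r) r^2 dr = \int_0^R e*r dr = e*R^2/2
    h = R / n
    return sum(e * ((i + 0.5) * h) * h for i in range(n))

value = 4.0 * math.pi * radial_integral()  # the angular part gives 4*pi
exact = 2.0 * math.pi * e * R * R          # the bound (2*pi*M*e)*R^2 with M = 1
assert abs(value - exact) < 1e-9 * exact   # finite, despite V ~ 1/r at the origin
```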
We can consider, however, other cases where such a divergence does not appear and we are thus able to calculate the system self-energy. If we consider, for example, the electron as a uniformly charged spherical shell of radius \(a\), then \[\mathbf{E}(\mathbf{r})=\begin{cases}0&,\,\text{if }r\leq a,\\ (e/r^{2})\ \mathbf{\hat{r}}&,\,\text{if }r>a.\end{cases} \tag{57}\] Therefore, the self-energy, see eq. (2), will be given by \[W=\frac{1}{8\pi}\int_{\mathds{R}^{3}}\mathbf{E}^{2}d\tau=\frac{1}{8\pi}(4\pi) \int_{a}^{\infty}\frac{e^{2}}{r^{2}}dr=\frac{1}{2}\frac{e^{2}}{a}. \tag{58}\] In distribution parlance, the last equation is but the application of the regular distribution \(\frac{1}{8\pi}\mathbf{E}^{2}\) to the function \(\varphi(x)\equiv 1\). We point out that, even with \(\varphi\notin\mathcal{D}(\mathds{R}^{3})\), \(\left\langle\mathbf{E}^{2},\varphi\right\rangle\) does exist. It is a consequence of the behavior of \(\mathbf{E}^{2}\) at infinity, which is good enough that we do not need to restrict the range of integration to a compact set. For general distributions, with unknown behavior at infinity, this restriction is enforced by supposing that \(\varphi\) is a test function. We have already mentioned, see Sec. 3.3, that it is often advantageous (or even necessary) to work with functions that are not compactly supported. This is allowed provided our distribution satisfies the necessary conditions so that its application to this larger class of functions is well behaved. For instance, the Dirac delta may be applied to any function \(\varphi\) continuous at the origin. In our specific case, we will see how the behavior of the electron self-energy density (\(\sim\mathbf{E}^{2}\)) far from the origin permits such a loosening of the conditions on \(\varphi\). More precisely, \[\mathbf{E}^{2}=\frac{e^{2}}{r^{4}},\quad r>0 \tag{59}\] is not well defined as a distribution. 
In effect, for any test function obeying \(\varphi(x)=1\) for \(x\) in a neighborhood \(\mathcal{V}\) of the origin, say, a ball, we have \[\left\langle\mathbf{E}^{2},\varphi\right\rangle=\int_{\mathcal{V}}\frac{e^{2} }{r^{4}}d^{3}x+\int_{\mathds{R}^{3}\backslash\mathcal{V}}\frac{e^{2}}{r^{4}} \varphi(x)d^{3}x=+\infty,\] since the first term is linearly divergent due to the fourth-order homogeneity of \(\mathbf{E}^{2}\).16 Footnote 16: The classification as a linear divergence may be justified in polar coordinates. Taking \(R=1/r\), we find \(\int_{0}^{+\infty}\frac{dr}{r^{2}}=\int_{0}^{+\infty}dR\). See [3] for details. Outside the origin, however, \(\mathbf{E}^{2}\) is a smooth function and, as such, locally integrable. Hence, we have, at least, \(\mathbf{E}^{2}\in\mathcal{D}^{\prime}(\mathds{R}^{3}\backslash\{0\})\). For this reason, we may extend (renormalize) the distribution \(\mathbf{E}^{2}\) to a new distribution \(U\in\mathcal{D}^{\prime}(\mathds{R}^{3})\) with the methods presented previously in Sec. 3. In that way, we expect that the electron self-energy \(E_{0}\) will be well defined as the application of \(\frac{1}{8\pi}U\) to the function \(\varphi\equiv 1\), \[E_{0}=\frac{1}{8\pi}\langle U,1\rangle. \tag{60}\] In line with what we have done so far, let us first determine the scaling degree and the singular order of \(\mathbf{E}^{2}\), which are key to Theorems 3.1 and 3.2. We observe that it is a homogeneous function of order \(-4\), so that, as in Example 3.2, \[\lambda^{s}\mathbf{E}^{2}(\lambda x)=\lambda^{s}\frac{e^{2}}{(\lambda r)^{4}}= \lambda^{s-4}\mathbf{E}^{2},\] which implies \[\lambda^{s}\mathbf{E}^{2}(\lambda x)\xrightarrow{\lambda\to 0^{+}}0\iff s>4.\] That is, \(\sigma(\mathbf{E}^{2})=4\), and also \(\omega(\mathbf{E}^{2})=\sigma(\mathbf{E}^{2})-n=1\). 
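Both statements — the linear divergence and \(\sigma(\mathbf{E}^{2})=4\) — can be checked with a few lines of arithmetic: \(\lambda^{s}\mathbf{E}^{2}(\lambda x)=\lambda^{s-4}\mathbf{E}^{2}(x)\) tends to \(0\) as \(\lambda\to 0^{+}\) precisely when \(s>4\), while the regularized radial integral \(\int_{\varepsilon}^{1}dr/r^{2}=1/\varepsilon-1\) grows linearly in the inverse cutoff \(1/\varepsilon\). A brief illustrative sketch:

```python
# Scaling behavior of f(r) = 1/r^4 at a fixed point with r = |x| = 1:
def scaled(s, lam):
    return lam ** s * (1.0 / lam ** 4)  # lambda^s * f(lambda * x)

lams = [10.0 ** (-k) for k in (1, 2, 3)]
assert scaled(4.5, lams[0]) > scaled(4.5, lams[1]) > scaled(4.5, lams[2])  # s > 4: -> 0
assert all(abs(scaled(4.0, lam) - 1.0) < 1e-12 for lam in lams)            # s = 4: constant
assert scaled(3.5, lams[0]) < scaled(3.5, lams[2])                          # s < 4: blows up

# Linear divergence of the radial integral near the origin:
def cutoff_integral(eps):
    return 1.0 / eps - 1.0  # \int_eps^1 dr / r^2

assert abs(cutoff_integral(1e-6) * 1e-6 - 1.0) < 1e-5  # grows like 1/eps
```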
Therefore, for a test function \(w\in\mathcal{D}(\mathds{R}^{3})\) and constants \(C^{0},C^{(1,0,0)}\equiv C^{1}\), \(C^{(0,1,0)}\equiv C^{2}\), \(C^{(0,0,1)}\equiv C^{3}\), we obtain, according to Theorem 3.2, a distribution \(U\in\mathcal{D}^{\prime}(\mathds{R}^{3})\) defined by \[\begin{split}\langle U,\varphi\rangle&=\left\langle \mathbf{E}^{2}_{1},W_{(1;w)}\varphi\right\rangle+\sum_{|\alpha|\leq 1}\frac{C^{ \alpha}}{\alpha!}\left(D^{\alpha}\frac{\varphi}{w}\right)(0)\\ &=\left\langle\mathbf{E}^{2}_{1},W_{(1;w)}\varphi\right\rangle+C^ {0}\frac{\varphi(0)}{w(0)}+\sum_{i=1}^{3}C^{i}\partial_{x_{i}}\left(\frac{ \varphi}{w}\right)(0)\end{split} \tag{61}\] satisfying \[\langle U,\varphi\rangle=\left\langle\mathbf{E}^{2},\varphi\right\rangle, \quad\forall\quad\varphi\in\mathcal{D}(\mathds{R}^{3}\backslash\{0\}), \tag{62}\] \[\langle U,wx^{\alpha}\rangle=C^{\alpha}\;,\quad|\alpha|\leq 1. \tag{63}\] Moreover, if \(w\) is chosen as an Epstein-Glaser function, we have \[\langle U,\varphi\rangle=\left\langle\mathbf{E}^{2}_{1},W_{(1;w)}\varphi \right\rangle+C^{0}\varphi(0)+\sum_{i=1}^{3}C^{i}(\partial_{x_{i}}\varphi)(0). \tag{64}\] Before moving on, a comment is in order. The charge distribution of a (charged) point particle possesses spherical symmetry. This is not only due to the concentration of charge in a single point. In fact, an electric dipole also has a distribution whose support is contained in the origin, although the would-be spherical symmetry is broken, since the moment \(\mathbf{p}\) defines a privileged direction. The additional fact that our point particle model admits no internal structure imposes the constraint of having no special direction. Since we would like to preserve such symmetry when extending \(\mathbf{E}^{2}\), we must choose \(C^{i}=0\), for \(i=1,2,3\). This is justified because the last term in (64) does not behave like a scalar under rotations of our coordinate system, unless the three constants vanish. 
We can promptly see this by writing \[\sum_{i=1}^{3}C^{i}(\partial_{x_{i}}\varphi)(0)=\mathbf{C}\cdot(\nabla\varphi)(0) \;,\quad\mathbf{C}=(C^{1},C^{2},C^{3}).\] Now, \(\nabla\varphi\) does behave as a vector; however, we cannot say the same for \(\mathbf{C}\). For this reason, the only way to keep this sum invariant under rotations is to set \(\mathbf{C}=\mathbf{0}\). From this, we may rewrite \(U\) as \[\langle U,\varphi\rangle=\left\langle\mathbf{E}_{1}^{2},W_{(1;w)}\varphi\right\rangle +C^{0}\varphi(0). \tag{65}\] Then we seek, much as in Sec. 3.3, to relax the conditions on \(w\) employed in the renormalization of \(\mathbf{E}^{2}\). Our path will be to take a sequence of test functions \(w_{M}\) that converges pointwise to \(w(x)=1\in C^{\infty}(\mathds{R}^{3})\), obtaining a well defined distribution given by \[\langle U,\varphi\rangle=\lim_{M\to\infty}\left\langle\mathbf{E}_{1}^{2},W_{( 1;w_{M})}\varphi\right\rangle+C^{0}\varphi(0). \tag{66}\] The last equation suggests that we take all \(w_{M}\) as Epstein-Glaser functions. Specifically, each \(w_{M}\) shall be taken as in Example 2.3, \[w_{M}(x)=\zeta_{M,1}(x)=1-\int_{-\infty}^{r}\eta_{M,h}(t)dt,\] where \(r=\|x\|\) is a radial coordinate and \(\eta_{M,h}(t)\) is given in (9). We can write this sequence of functions in a convenient manner that will be useful ahead. The comments in Example 2.3 show that \(w_{M}\) equals \(1\) within the ball \(B_{M}(0)\) and \(0\) outside the ball \(B_{M+1}(0)\). In the ring \(B_{M+1}(0)\backslash B_{M}(0)\), _i. e._, for \(M\leq r\leq M+1\), we may write \(w_{M}\) as a radial smooth function17\(\chi(r-M)\) such that \(|\chi(s)|\leq 1\), \(s\in[0,1]\), \(\chi(0)=1\) and \(\chi(1)=0\), Footnote 17: which shall not be confused with the characteristic function \(\chi_{A}\) on a set \(A\subset\mathds{R}^{3}\), also present in Example 2.3. 
\[w_{M}(x)=\begin{cases}1,&\text{if }x\in B_{M}(0),\\ \chi(r-M),&\text{if }x\in B_{M+1}(0)\backslash B_{M}(0),\\ 0,&\text{if }x\notin B_{M+1}(0).\end{cases} \tag{67}\] Therefore, we have \((D^{\alpha}w_{M})(0)=\delta_{\alpha,0}\) and \(w_{M}(x)\to 1\) for any \(x\in\mathds{R}^{3}\) in the limit \(M\to\infty\). Moreover, due to the result (53), for any two naturals \(M_{1},M_{2}\in\mathds{N}\) (say, \(M_{1}<M_{2}\)), we have \[\left\langle\mathbf{E}_{1}^{2},W_{(1;w_{M_{2}})}\varphi\right\rangle=\left\langle \mathbf{E}_{1}^{2},W_{(1;w_{M_{1}})}\varphi\right\rangle+\sum_{|\alpha|\leq 1 }\left\langle\mathbf{E}_{1}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))\frac{x^{\alpha}}{ \alpha!}\right\rangle D^{\alpha}\varphi(0),\] wherein \[w_{M_{1}}(x)-w_{M_{2}}(x)=\begin{cases}0,&\text{if }x\in B_{M_{1}}(0),\\ \chi(r-M_{1})-1,&\text{if }x\in B_{M_{1}+1}(0)\backslash B_{M_{1}}(0),\\ -1,&\text{if }x\in B_{M_{2}}(0)\backslash B_{M_{1}+1}(0),\\ -\chi(r-M_{2}),&\text{if }x\in B_{M_{2}+1}(0)\backslash B_{M_{2}}(0),\\ 0,&\text{if }x\notin B_{M_{2}+1}(0).\end{cases} \tag{68}\] Thus, denoting by \((a_{M})\) the real sequence whose elements are \(\left\langle\mathbf{E}_{1}^{2},W_{(1;w_{M})}\varphi\right\rangle\), we will show that it converges by proving that \((a_{M})\) is a Cauchy sequence. In fact, we have just seen that18 Footnote 18: Since \(w_{M_{1}}(x)-w_{M_{2}}(x)\in\mathcal{D}(\mathds{R}^{n}\backslash\{0\})\), the action of \(\mathbf{E}_{1}^{2}\) may be replaced by \(\mathbf{E}^{2}\). \[\begin{split}a_{M_{2}}-a_{M_{1}}&=\left\langle \mathbf{E}_{1}^{2},W_{(1;w_{M_{2}})}\varphi\right\rangle-\left\langle\mathbf{E }_{1}^{2},W_{(1;w_{M_{1}})}\varphi\right\rangle\\ &=\sum_{|\alpha|\leq 1}\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_ {2}}(x))\frac{x^{\alpha}}{\alpha!}\right\rangle\!D^{\alpha}\varphi(0),\end{split} \tag{69}\] so that, if we bound this sum, then we will also bound the difference \(|a_{M_{1}}-a_{M_{2}}|\). 
Now, given \(\varepsilon>0\), we take \(M\in\mathds{N}\) such that \(\frac{1}{M}<\frac{\varepsilon}{8\pi e^{2}}\) and \(M_{1},M_{2}\in\mathds{N}\), with \(M\leq M_{1}<M_{2}\). Since \(\mathbf{E}^{2}\) acts on test functions whose support does not contain the origin, we can employ the formula (59), obtaining \[\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))\right\rangle=\int_{ \mathds{R}^{3}}\frac{e^{2}}{r^{4}}(w_{M_{1}}(x)-w_{M_{2}}(x))d^{3}x,\] \[\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))x_{i}\right\rangle=\int_ {\mathds{R}^{3}}\frac{e^{2}}{r^{4}}(w_{M_{1}}(x)-w_{M_{2}}(x))x_{i}d^{3}x.\] Now, since **i.**\(w_{M_{1}}-w_{M_{2}}\) is radial, **ii.** has support in \(B_{M_{2}+1}(0)\backslash B_{M_{1}}(0)\) and **iii.** assumes values only in \([-1,0]\), we have \[\left|\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))\right\rangle \right|\leq(4\pi e^{2})\int_{M_{1}}^{M_{2}+1}\frac{1}{r^{2}}dr=(4\pi e^{2}) \left[\frac{1}{r}\right]_{M_{2}+1}^{M_{1}}\leq(8\pi e^{2})\frac{1}{M_{1}},\] that is, \[\left|\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))\right\rangle \right|\leq\frac{8\pi e^{2}}{M}<\varepsilon. \tag{70}\] Meanwhile, \[\left\langle\mathbf{E}^{2},(w_{M_{1}}(x)-w_{M_{2}}(x))x_{i}\right\rangle=\int_ {\mathds{R}^{3}}\frac{e^{2}}{r^{4}}(w_{M_{1}}(r)-w_{M_{2}}(r))x_{i}d^{3}x=0, \tag{71}\] because for any \(i=1,2,3\), corresponding to the three Euclidean axes \(x,y,z\), respectively, the integration in \(\phi\) or in \(\theta\) will vanish.19 In fact, for \(i=1\) and \(i=2\), the integrals in \(\phi\) vanish, \(\int_{0}^{2\pi}d\phi\cos\phi=\int_{0}^{2\pi}d\phi\sin\phi=0\). Now, for \(i=3\), we find \(\int_{0}^{\pi}d\theta\cos\theta\sin\theta=0\). Footnote 19: This is only possible because \(w_{M}\) is a sequence of radial functions and, as such, the integration in \(r\), \(\phi\) and \(\theta\) can be factored. 
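The bound (70) can also be tested with a concrete cutoff profile. Below we take \(\chi(s)=\cos^{2}(\pi s/2)\) — an arbitrary illustrative choice satisfying \(\chi(0)=1\), \(\chi(1)=0\) and \(|\chi|\leq 1\), standing in for the smooth \(\chi\) of (67) — and evaluate the radial integral for \(\langle\mathbf{E}^{2},w_{M_{1}}-w_{M_{2}}\rangle\) with \(e=1\):

```python
import math

def chi(s):
    # illustrative cutoff profile: chi(0) = 1, chi(1) = 0, |chi| <= 1
    return math.cos(math.pi * s / 2.0) ** 2

def w(M, r):
    # the radial profile w_M of (67)
    if r <= M:
        return 1.0
    if r <= M + 1:
        return chi(r - M)
    return 0.0

def pairing(M1, M2, n=100000):
    # <E^2, w_M1 - w_M2> = 4*pi \int_{M1}^{M2+1} (w_M1 - w_M2)(r) / r^2 dr, e = 1
    a, b = float(M1), M2 + 1.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        total += (w(M1, r) - w(M2, r)) / r ** 2 * h
    return 4.0 * math.pi * total

for M1, M2 in [(5, 9), (10, 30), (50, 200)]:
    p = pairing(M1, M2)
    assert p < 0.0                        # w_M1 - w_M2 <= 0 for M1 < M2
    assert abs(p) <= 8.0 * math.pi / M1   # the bound (70), with e = 1
```

The first-moment pairings (71) need no numerical check, since they vanish identically by the angular integration; only the \(\alpha=0\) term is constrained by (70).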
Hence, with the help of (70) and (71), we may conclude that \[|a_{M_{1}}-a_{M_{2}}|\leq\varepsilon|\varphi(0)|, \tag{72}\] which, in turn, implies that \((a_{M})\) is a Cauchy sequence, hence a convergent one. All the previous development allows us to show that the distribution defined in (66), which we will denote simply by \[\left\langle U,\varphi\right\rangle=\left\langle\mathbf{E}_{1}^{2},W_{(1;1)}\varphi\right\rangle+C^{0}\varphi(0), \tag{73}\] is well defined. Thus, we may finally use the eq. (60) to obtain the renormalized electron self-energy, \[E_{0}=\frac{1}{8\pi}\langle\mathbf{E}_{1}^{2},W_{(1;1)}1\rangle+\frac{1}{8\pi}C^{0}=\frac{C^{0}}{8\pi}. \tag{74}\] Although simple, the eq. (74) carries great physical meaning and summarizes our results. We have shown that, by defining the electron self-energy as the application of the distribution \(\frac{1}{8\pi}U\in\mathcal{D}^{\prime}(\mathds{R}^{3})\), which extends (or renormalizes) \(\mathbf{E}^{2}\), to the constant function \(1\), we get rid of the divergence previously obtained. This divergence appeared when one directly considered \(\mathbf{E}^{2}\), which cannot be seen as an actual distribution.20 Now, \(E_{0}\) becomes an undetermined constant, which we may adjust to fit our model. This is the very kernel of a renormalization method. If, for instance, we assume that the self-energy is, alone, responsible for the electron mass, we shall take Footnote 20: The statement that \(\mathbf{E}^{2}\notin\mathcal{D}^{\prime}(\mathds{R}^{3})\) is a consequence of the fact that the product of distributions is not, in general, well defined. An alternative method to skirt the electron self-energy divergence is related to a generalization of the very concept of distributions, working with the _generalized Colombeau functions_. For details, see [24, 25] and references therein. \[C^{0}=8\pi m_{e}c^{2},\] in such a way that \(E_{0}=m_{e}c^{2}\).
To summarize, we have seen how the self-energy problem originates in the fact that \(\mathbf{E}^{2}\) is not a proper element of \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). On the other hand, \(\mathbf{E}^{2}\) is, outside the origin, a smooth function. Thus, we at least have \(\mathbf{E}^{2}\in\mathcal{D}^{\prime}(\mathds{R}^{n}\backslash\{0\})\), which means we can use Theorem 3.2 to extend it to a distribution in \(\mathcal{D}^{\prime}(\mathds{R}^{n})\). ## 5 Conclusion The main objective of this work was to analyze (and renormalize) a simple but central problem in classical electrodynamics: the electron self-energy. Although the electrostatic model of a charged point particle implies a linear divergence of the self-energy, we may skirt this infinity with an extension of the corresponding distribution. In more detail, **1.** We have developed a self-contained study of the theory of distributions. The basic aspects, main definitions and examples, operations and key results were all included. Of course, our notes are not supposed to replace the standard and seminal literature, such as [16, 26, 27], which remains essential. However, we provide here the minimum needed for the interested reader to handle such a powerful tool for analyzing, for instance, classical and quantum field theoretical models. **2.** The leading results on the extension of distributions, that is, the corresponding renormalization, were all stated and proved, following the lines of [22]. We have focused on a particular subspace of the set of test functions, namely, the one whose elements vanish in an arbitrary neighborhood of the origin. We have investigated the behavior of different distributions at the origin and how one could recover such distributions, in the sense of making them continuous linear functionals defined over the whole space of test functions. Our thorough demonstrations may serve as an auxiliary/pedagogical pathway on the subject.
**3.** We have applied the concepts of distributions and the corresponding extensions to the classical electron self-energy. At first sight, electrostatics implies a divergence when we treat the electron as a charged point particle. However, our construction shows that its self-energy turns out to be an undetermined constant upon renormalization, so that our parameters might be fixed, for example, by appealing to empirical results.
2310.20300
Pre-Lie algebras with divided powers and the Deligne groupoid in positive characteristic
The purpose of this paper is to develop a deformation theory controlled by pre-Lie algebras with divided powers over a ring of positive characteristic. We show that every differential graded pre-Lie algebra with divided powers comes with operations, called weighted braces, which we use to generalize the classical deformation theory controlled by Lie algebras over a field of characteristic $0$. Explicitly, we define the Maurer-Cartan set, as well as the gauge group, and prove that there is an action of the gauge group on the Maurer-Cartan set. This new deformation theory moreover admits a Goldman-Millson theorem which remains valid on the integers. As an application, we give the computation of the $\pi_0$ of a mapping space $\text{Map}(B^c(\mathcal{C}),\mathcal{P})$ with $\mathcal{C}$ and $\mathcal{P}$ suitable cooperad and operad respectively.
Marvin Verstraete
2023-10-31T09:13:55Z
http://arxiv.org/abs/2310.20300v2
# Pre-Lie algebras with divided powers and the Deligne groupoid in positive characteristic ###### Abstract The purpose of this paper is to develop a deformation theory controlled by pre-Lie algebras with divided powers over a ring of positive characteristic. We show that every differential graded pre-Lie algebra with divided powers comes with operations, called weighted braces, which we use to generalize the classical deformation theory controlled by Lie algebras over a field of characteristic \(0\). Explicitly, we define the Maurer-Cartan set, as well as the gauge group, and prove that there is an action of the gauge group on the Maurer-Cartan set. This new deformation theory moreover admits a Goldman-Millson theorem which remains valid over the integers. As an application, we give the computation of the \(\pi_{0}\) of a mapping space \(\mathrm{Map}(B^{c}(\mathcal{C}),\mathcal{P})\) with \(\mathcal{C}\) and \(\mathcal{P}\) suitable cooperad and operad respectively. ###### Contents * Introduction * Acknowledgements * 1 Recollections on pre-Lie algebras with divided powers * 1.1 Pre-Lie algebras and the rooted tree operad * 1.2 Pre-Lie algebras with divided powers * 2 Deformation theory of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras * 2.1 Differential graded pre-Lie algebras with divided powers * 2.2 The gauge group * 2.3 Maurer-Cartan elements and the Deligne groupoid * 2.4 An integral Goldman-Millson theorem * 3 Application in homotopy theory for operads * 3.1 Infinitesimal compositions and decompositions of an operad and a cooperad * 3.2 \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure of the convolution operad * 3.3 Computation of \(\pi_{0}(\mathrm{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\) ## Introduction An important result in deformation theory asserts that every deformation problem over a field of characteristic 0 can be encoded by a differential graded Lie algebra (see [13] and [15]). 
More precisely, any deformation problem can be described by a solution of the _Maurer-Cartan equation_: \[d(x)+\frac{1}{2}[x,x]=0,\] in some differential graded Lie algebra. The group obtained by the integration of the differential graded Lie algebra into a Lie group, called the _gauge group_, moreover acts on the Maurer-Cartan set. The orbits of this action give isomorphism classes of deformation problems. In [6], Dotsenko-Shadrin-Vallette showed that if the differential graded Lie algebra comes from a differential graded pre-Lie algebra, then the Maurer-Cartan equation, the gauge group and its action on the Maurer-Cartan set can be described in terms of pre-Lie operations. A differential graded pre-Lie algebra is a vector space \(L\) with a bilinear operation \(\star:L\otimes L\longrightarrow L\) such that \[(x\star y)\star z-x\star(y\star z)=(-1)^{|y||z|}((x\star z)\star y-x\star(z \star y)),\] and which satisfies the Leibniz rule with respect to the differential. Every differential graded pre-Lie algebra is in particular a differential graded Lie algebra with the graded commutator: \[[x,y]=x\star y-(-1)^{|x||y|}y\star x.\] Dotsenko-Shadrin-Vallette showed in particular that given a pre-Lie algebra \(L\), the pre-Lie exponential map \(exp:L^{0}\longrightarrow(1+L^{0})\) induces an isomorphism between the gauge group and the group \((1+L^{0},\odot,1)\) with \(\odot\) the circular product defined by \[a\odot(1+b)=\sum_{n\geq 0}\frac{1}{n!}a\{\underbrace{b,...,b}_{n}\},\] where \(-\{-,...,-\}\) denote the _symmetric braces_ determined by the pre-Lie structure \(\star\), starting with \(x\{y\}=x\star y\). 
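As a concrete toy model (our own illustration, not taken from [6]): \(\mathbb{K}[x]\) with \(f\star g=f^{\prime}g\) is a pre-Lie algebra, coming from vector fields on the line. The sketch below encodes polynomials as integer coefficient lists, checks that the associator is symmetric in its last two arguments, and implements the symmetric braces by the standard recursion (recalled after Definition 1.1 below), verifying that they are indeed symmetric in the \(b_{i}\):

```python
from itertools import permutations

def deriv(f):                       # d/dx on a coefficient list
    return [i * c for i, c in enumerate(f)][1:]

def mul(f, g):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def sub(f, g):
    n = max(len(f), len(g))
    out = [(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)
           for i in range(n)]
    while out and out[-1] == 0:     # normalize: drop trailing zeros
        out.pop()
    return out

def star(f, g):                     # pre-Lie product f * g = f' g
    return mul(deriv(f), g)

# the pre-Lie identity: the associator is symmetric in its last two arguments
def assoc(f, g, h):
    return sub(star(star(f, g), h), star(f, star(g, h)))

f, g, h = [0, 0, 1], [0, 1], [1, 1]            # x^2, x, 1 + x
assert assoc(f, g, h) == assoc(f, h, g)

# symmetric braces via the recursion, starting from a{b} = a * b
def brace(a, bs):
    if not bs:
        return a
    if len(bs) == 1:
        return star(a, bs[0])
    init, bn = list(bs[:-1]), bs[-1]
    out = star(brace(a, init), bn)
    for i in range(len(init)):
        out = sub(out, brace(a, init[:i] + [star(init[i], bn)] + init[i + 1:]))
    return out

a, bs = [0, 0, 0, 1], [[0, 1], [1, 1], [0, 0, 1]]   # a = x^3; b's = x, 1+x, x^2
ref = brace(a, bs)
for p in permutations(bs):
    assert brace(a, list(p)) == ref                  # braces are fully symmetric
```

The full \(\Sigma_{3}\)-symmetry of \(a\{b_{1},b_{2},b_{3}\}\) checked at the end is exactly the property that makes these operations "symmetric" braces.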
Then, by writing the Maurer-Cartan equation as a zero-square equation, they prove that the action of the gauge group on the Maurer-Cartan set can be computed in terms of the circular product \(\odot\) as \[e^{\lambda}.\alpha=(e^{\lambda}\star\alpha)\odot e^{-\lambda},\] which gives an easier way to compute the Deligne groupoid associated to any differential graded pre-Lie algebra over a field of characteristic 0. The aim of this paper is to develop a deformation theory in positive characteristic which generalizes the deformation theory controlled by pre-Lie algebras over a field of characteristic 0 developed in [6]. Our idea is to use differential graded pre-Lie algebras with divided powers. The notion of a pre-Lie algebra with divided powers (or \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra) has been studied by Cesaro in [3]. He showed in particular that every pre-Lie algebra with divided powers comes equipped with weighted brace operations \(-\{-,...,-\}_{r_{1},...,r_{n}}\), for each collection of integers \(r_{1},...,r_{n}\geq 0\), which satisfy identities similar to those satisfied by the quantities \[x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}=\frac{1}{\prod_{i}r_{i}!}x\{\underbrace{y_{1},...,y_{1}}_{r_{1}},...,\underbrace{y_{n},...,y_{n}}_{r_{n}}\}\] in a pre-Lie algebra over a field of characteristic 0 (see [3, Propositions 5.9-5.10] for a precise list of these identities). Every differential graded pre-Lie algebra with divided powers \(L\) comes equipped with analogous weighted brace operations \(-\{-,...,-\}_{r_{1},...,r_{n}}\) which satisfy a graded version of the identities satisfied by weighted braces in the non graded framework. In this context, we have an analogue of the Maurer-Cartan equation: \[d(x)+x\{x\}_{1}=0.\] With suitable convergence hypotheses, we also get that the circular product can be written as \[a\odot(1+b)=\sum_{n\geq 0}a\{b\}_{n},\] and gives rise to a group structure on \(1+L^{0}\). This group is called the _gauge group_ of \(L\).
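In the degenerate case where the pre-Lie product comes from an associative nilpotent product (e.g. strictly upper-triangular matrices, placed in degree \(0\) with \(d=0\)), the associator vanishes, so all braces of length \(\geq 2\) vanish and the circular product collapses to the associative product: the gauge group is the unipotent group, and the gauge action of Theorem A below becomes conjugation. A sketch of this special case (our own illustration):

```python
N = 3

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

I = [[int(i == j) for j in range(N)] for i in range(N)]
Z = [[0] * N for _ in range(N)]

def unip(a):                                    # 1 + a
    return [[I[i][j] + a[i][j] for j in range(N)] for i in range(N)]

mu = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]          # strictly upper triangular
b  = [[0, 4, 5], [0, 0, 6], [0, 0, 0]]

# circular product = matrix product here, since braces of length >= 2 vanish
prod = mm(unip(mu), unip(b))
assert prod[0][1] == mu[0][1] + b[0][1]

# inverse in the gauge group: (1 + mu)^{-1} = 1 - mu + mu^2   (mu^3 = 0)
mu2 = mm(mu, mu)
inv = [[I[i][j] - mu[i][j] + mu2[i][j] for j in range(N)] for i in range(N)]
assert mm(unip(mu), inv) == I and mm(inv, unip(mu)) == I

# Maurer-Cartan with d = 0: alpha{alpha}_1 = alpha^2 = 0; the gauge action
# reduces to conjugation (1 + mu) alpha (1 + mu)^{-1}
alpha = [[0, 5, 0], [0, 0, 0], [0, 0, 0]]
assert mm(alpha, alpha) == Z
new = mm(mm(unip(mu), alpha), inv)
assert mm(new, new) == Z                         # still a Maurer-Cartan element
```

The last assertion checks that the gauge action preserves the Maurer-Cartan set, as Theorem A below asserts in general.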
As in characteristic 0, we also show that this group acts on the Maurer-Cartan set of \(L\). **Theorem A**.: _Let \(\mathbb{K}\) be a ring._ 1. _In any differential graded pre-Lie algebra with divided powers_ \(L\)_, the circular product_ \(\odot\)_, defined as above, endows the set_ \(1+L^{0}\) _with a group structure._ 2. _Suppose that_ \(\mu\{\alpha,\alpha\}_{1,1}=0\) _for every_ \(\mu\in L^{0}\) _and_ \(\alpha\in L\) _with odd degree. If we denote by_ \(d\) _the differential of_ \(L\)_, then this group acts on the Maurer-Cartan set via the formula_ \[(1+\mu).\alpha=(\alpha+\mu\{\alpha\}_{1}-d(\mu))\odot(1+\mu)^{-1},\] _where \((1+\mu)^{-1}\) denotes the inverse of \(1+\mu\) in the group \((1+L^{0},\odot,1)\)._ We prove that this new deformation theory satisfies an analogue of the Goldman-Millson theorem given in [11, §2.4]. Let \(Deligne(L,A)\) be the Deligne groupoid of the dg pre-Lie algebra with divided powers \(L\otimes\mathfrak{m}_{A}\), where \(L\) is a dg pre-Lie algebra with divided powers and \(\mathfrak{m}_{A}\) the maximal ideal of a local artinian \(\mathbb{K}\)-algebra \(A\). We precisely get the following result. **Theorem B**.: _Let \(\mathbb{K}\) be a noetherian integral domain. Let \(L\) and \(\overline{L}\) be two \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras. Suppose that \(L\) and \(\overline{L}\) are free as \(\mathbb{K}\)-modules and that there is no 2-torsion. Let \(\varphi:L\longrightarrow\overline{L}\) be a morphism of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras such that \(H^{0}(\varphi)\) and \(H^{1}(\varphi)\) are isomorphisms and \(H^{2}(\varphi)\) a monomorphism. Then for every local artinian \(\mathbb{K}\)-algebra \(A\), the induced functor \(\varphi_{*}:Deligne(L,A)\longrightarrow Deligne(\overline{L},A)\) is an equivalence of groupoids._ Other approaches to generalize the usual deformation theory in the positive characteristic framework have been proposed recently in the literature.
We have for instance a deformation theory in an associative context, via \(\mathcal{A}_{\infty}\)-algebras, which is used to study deformations of group representations (see [14]). Another approach is given by (spectral) partition Lie algebras to get a full generalization of the Lurie-Pridham correspondence in the setting of a field with positive characteristic (see [1, 2]). The main motivation for the approach developed in this paper is that operadic deformation problems are expressed in terms of pre-Lie structures. The goal is then to compute the \(\pi_{0}\) of a mapping space \(\mathrm{Map}(B^{c}(\mathcal{C}),\mathcal{P})\), where we take any augmented dg operad \(\mathcal{P}\) on the target and the operad \(B^{c}(\mathcal{C})\) given by the cobar of a dg coaugmented cooperad \(\mathcal{C}\) on the source. Recall simply that \(B^{c}(\mathcal{C})\) defines a cofibrant operad when \(\mathcal{C}\) is cofibrant as a symmetric sequence (\(\Sigma_{*}\)-cofibrant). It is well known that, over a field of characteristic \(0\), the \(\pi_{0}\) of this mapping space is the set of isomorphism classes of the Deligne groupoid of the Lie algebra \(\mathrm{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\). Using the pre-Lie algebra structure of \(\mathrm{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\), this can be seen as a consequence of the computations in [6]. To extend this result, we first show that \(\mathrm{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) admits a structure of dg pre-Lie algebra with divided powers given in terms of the dg brace algebra structure of \(\mathrm{Hom}(\overline{\mathcal{C}},\overline{\mathcal{P}})\). Then we get the following statement. **Theorem C**.: _Let \(\mathbb{K}\) be a field. Suppose that \(\mathcal{C}\) is a \(\Sigma_{*}\)-cofibrant coaugmented dg cooperad which comes with a weight decomposition and \(\mathcal{P}\) an augmented dg operad. 
We then have an isomorphism:_ \[\pi_{0}(\mathrm{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\simeq\pi_{0}\mathrm{Deligne}(\mathrm{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})),\] _where \(\pi_{0}\mathrm{Deligne}(\mathrm{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}}))\) denotes the set of isomorphism classes of the Deligne groupoid._ This theorem gives a first step toward the calculation of the homotopy groups of a mapping space \(\mathrm{Map}(B^{c}(\mathcal{C}),\mathcal{P})\) over any field. In the first part of this paper, we recall some definitions and properties of pre-Lie algebras and pre-Lie algebras with divided powers: in §1.1 we briefly review the definition of the notion of a pre-Lie algebra and the construction of the corresponding operad; in §1.2, we review the definition of a pre-Lie algebra with divided powers and of the weighted brace operations. In the second part, we develop the deformation theory for differential graded pre-Lie algebras with divided powers: in §2.1, we study pre-Lie algebras with divided powers in the dg framework; in §2.2, we define the circular product and prove assertion \((i)\) of Theorem A; in §2.3, we define the Maurer-Cartan set and prove assertion \((ii)\) of Theorem A; in §2.4, we finally prove our analogue of the Goldman-Millson theorem (Theorem B) for this new deformation theory. We conclude this article with our application of this deformation theory to operadic deformation problems: in §3.1, we introduce some basic definitions on symmetric sequences and operads which will be useful to write our formulas; in §3.2, we study the structure of differential graded pre-Lie algebra with divided powers of the convolution operad; in §3.3, we finally give a proof of Theorem C. ## Acknowledgements I acknowledge support from the Labex CEMPI (ANR-11-LABX-0007-01) and from the FNS-ANR project OCHoTop (ANR-18CE93-0002-01).
I also acknowledge Benoît Fresse for his advice and support throughout the writing of this article. ## 1 Recollections on pre-Lie algebras with divided powers We first recall some definitions and basic properties of pre-Lie algebras and pre-Lie algebras with divided powers. Pre-Lie algebras were introduced in deformation theory by Gerstenhaber in [10], while pre-Lie algebras with divided powers were introduced by Cesaro in [3]. In §1.1, we give brief recollections on the notion of a pre-Lie algebra. We will more particularly see pre-Lie algebras as algebras over an operad introduced by Chapoton-Livernet in [5], the rooted tree operad, of which we also recall the definition in this subsection. In §1.2, we give recollections on the notion of a pre-Lie algebra with divided powers. These objects can be seen as pre-Lie algebras with some extra operations. We will focus on some of these operations, called weighted braces, which mimic the quantities that appear in the definition of the circular product. ### 1.1 Pre-Lie algebras and the rooted tree operad We will use the following basic definitions. **Definition 1.1**.: _A pre-Lie algebra over a ring \(\mathbb{K}\) is a \(\mathbb{K}\)-module \(L\) endowed with a bilinear morphism \(\star:L\otimes L\longrightarrow L\) such that_ \[(x\star y)\star z-x\star(y\star z)=(x\star z)\star y-x\star(z\star y).\] Any pre-Lie algebra structure on \(L\) gives rise to multilinear operations denoted by \(-\{-,...,-\}\), called _symmetric braces_, and defined by induction on the length of the brace by \[a\{\} = a,\] \[a\{b_{1}\} = a\star b_{1},\] \[\forall n\geq 1,\quad a\{b_{1},...,b_{n}\} = a\{b_{1},...,b_{n-1}\}\{b_{n}\}-\sum_{i=1}^{n-1}a\{b_{1},...,b_{i-1},b_{i}\{b_{n}\},b_{i+1},...,b_{n-1}\},\] for all \(a,b_{1},...,b_{n}\in L\). For our purposes, it will be more convenient to see pre-Lie algebras as algebras over an operad. This operad can be described in terms of rooted trees as follows.
**Definition 1.2**.: (see [5, §1.5]) _We call an \(n\)-rooted tree a non-planar tree with \(n\) vertices equipped with a numbering from \(1\) to \(n\), together with a distinguished vertex called the root. By convention, we choose to put the root at the bottom in any representation of a tree._ _We let \(RT(n)\) be the set of all rooted trees with \(n\) vertices, and \(\mathcal{P}re\mathcal{L}ie(n)=\mathbb{K}[RT(n)]\)._ The collection \(\mathcal{P}re\mathcal{L}ie\) is endowed with an operad structure. The action of \(\Sigma_{n}\) on \(\mathcal{P}re\mathcal{L}ie(n)\) for all \(n\geq 1\) is given by the permutation of the indices attached to the vertices. The \(i\)-th partial composition \(S\circ_{i}T\in\mathcal{P}re\mathcal{L}ie(p+q-1)\) of \(S\in RT(p)\) and \(T\in RT(q)\) is given by the sum of all the possible trees obtained by putting \(T\) in the vertex \(i\) of \(S\), with the obvious choice of the numbering (see an example in [5, §1.5]). This operad is also called the _rooted tree operad_. One can show that the algebras over the rooted tree operad are precisely the pre-Lie algebras (see [5, §1.9]). In particular, the symmetric braces are given by \[T_{n}(x,y_{1},...,y_{n})=x\{y_{1},...,y_{n}\},\] where \(T_{n}\in\mathcal{P}re\mathcal{L}ie(n+1)\) is the _corolla with \(n\) leaves_, the rooted tree whose root (carrying the label \(1\)) is directly attached to \(n\) leaves (carrying the labels \(2,...,n+1\)), and \(T_{n}(x,y_{1},...,y_{n})\) denotes the application of \(T_{n}\) on the tensor \(x\otimes y_{1}\otimes...\otimes y_{n}\). ### 1.2 Pre-Lie algebras with divided powers In this part, we recall the notion of a pre-Lie algebra with divided powers.
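Before going further, a quick cross-check of the rooted tree description of \(\mathcal{P}re\mathcal{L}ie\) above: the number of labeled rooted trees on \(n\) vertices is \(n^{n-1}\) by Cayley's formula, so \(\dim\mathcal{P}re\mathcal{L}ie(n)=n^{n-1}\) when \(\mathbb{K}\) is a field. A brute-force enumeration (our own sketch: a rooted labeled tree is a parent map on the non-root vertices whose iteration always reaches the root):

```python
from itertools import product

def count_rooted_trees(n):
    vertices = range(n)
    total = 0
    for root in vertices:
        others = [v for v in vertices if v != root]
        for parents in product(vertices, repeat=len(others)):
            p = dict(zip(others, parents))
            ok = True
            for v in others:               # follow parents; we must reach the root
                seen, cur = set(), v
                while cur != root:
                    if cur in seen:        # a cycle: not a tree
                        ok = False
                        break
                    seen.add(cur)
                    cur = p[cur]
                if not ok:
                    break
            total += ok
    return total

assert [count_rooted_trees(n) for n in (1, 2, 3, 4)] == [1, 2, 9, 64]  # n^(n-1)
```

The enumeration is exponential and only meant as a sanity check for small \(n\).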
We obtain this definition as a particular case of a general construction, for algebras over an operad, which we briefly recall. Every operad \(\mathcal{P}\) on a suitable monoidal category \(C\) gives a functor \(\mathcal{S}(\mathcal{P},-):C\longrightarrow C\), called the _Schur functor_, defined by \[\mathcal{S}(\mathcal{P},V)=\bigoplus_{n\geq 0}\mathcal{P}(n)\otimes_{\Sigma_{n}}V^{\otimes n},\] where we consider, in the direct sum, the coinvariants of \(\mathcal{P}(n)\otimes V^{\otimes n}\) under the diagonal action of \(\Sigma_{n}\) given by its action on \(\mathcal{P}(n)\) and its action by permutation on the tensor product \(V^{\otimes n}\). This functor defines a monad and the category of algebras over this monad is the usual category of algebras over the operad \(\mathcal{P}\). In particular, pre-Lie algebras in the sense of Definition 1.1 are identified with \(\mathcal{S}(\mathcal{P}re\mathcal{L}ie,-)\)-algebras. In the above definition, one can choose to take invariants instead of coinvariants. We obtain a new functor \(\Gamma(\mathcal{P},-):C\longrightarrow C\) defined by \[\Gamma(\mathcal{P},V)=\bigoplus_{n\geq 0}\mathcal{P}(n)\otimes^{\Sigma_{n}}V^{\otimes n}.\] If \(\mathcal{P}(0)=0\), this functor also gives a monad (see [8, §1.1.18]). The algebras over this monad are called \(\mathcal{P}\)_-algebras with divided powers_. The motivation for this terminology comes from the fact that, in the case of the commutative operad \(\mathcal{P}=\mathcal{C}om\), the \(\Gamma(\mathcal{C}om,-)\)-algebras are precisely the usual commutative and associative algebras over \(\mathbb{K}\) with divided powers. Note that if \(\mathbb{K}\) is a field of characteristic \(0\), the above monads are in fact isomorphic, with an isomorphism given by the trace map \(Tr:\mathcal{S}(\mathcal{P},V)\longrightarrow\Gamma(\mathcal{P},V)\). This morphism is no longer an isomorphism in general when \(char(\mathbb{K})\neq 0\).
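A minimal illustration of this failure (our own sketch, for the commutative operad in characteristic \(2\), where the operad factor is trivial and \(Tr\) sends the class of a word to the sum over its full \(\Sigma_{n}\)-orbit, counted with multiplicity): the invariant \(x\otimes x\) is not in the image of \(Tr\), since \(Tr(x\cdot x)=2\,x\otimes x=0\).

```python
from itertools import permutations
from collections import Counter

p = 2   # work over a field of characteristic 2

def trace(word):
    """Tr on a coinvariant class: sum of sigma.word over all sigma in Sigma_n,
    with coefficients reduced mod p."""
    out = Counter()
    for perm in permutations(word):
        out[perm] = (out[perm] + 1) % p
    return {k: v for k, v in out.items() if v}

assert trace(('x', 'y')) == {('x', 'y'): 1, ('y', 'x'): 1}  # the invariant x(x)y + y(x)x
assert trace(('x', 'x')) == {}   # 2.x(x)x = 0 mod 2: the invariant x(x)x is not hit
```

So \(Tr\) is not surjective onto the invariants in characteristic \(2\), which is exactly why coinvariants and invariants give genuinely different monads in positive characteristic.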
In the case \(C=\mathbb{K}Mod\) and \(\mathcal{P}=\mathcal{P}re\mathcal{L}ie\), if \(V\) is free, we however have an isomorphism of modules given by the _orbit morphism_\(\mathcal{O}:\mathcal{S}(\mathcal{P}re\mathcal{L}ie,V)\longrightarrow\Gamma( \mathcal{P}re\mathcal{L}ie,V)\) defined as follows. Let \(n\geq 1\) and \(t\in\mathcal{P}re\mathcal{L}ie(n)\otimes V^{\otimes n}\) be a basis element. We set \[\mathcal{O}(t)=\sum_{\sigma\in\Sigma_{n}/\mathrm{Stab}(t)}\sigma.t,\] where \(\mathrm{Stab}(t)\) is the stabilizer of \(t\) under the diagonal action of \(\Sigma_{n}\) on \(\mathcal{P}re\mathcal{L}ie(n)\otimes V^{\otimes n}\). The map \(\mathcal{O}\) is then extended by linearity on \(\mathcal{P}re\mathcal{L}ie(n)\otimes V^{\otimes n}\). **Theorem 1.3**.: (A. Cesaro, [3]) _Every pre-Lie algebra with divided powers \(L\) comes equipped with operations \(-\{-,...,-\}_{r_{1},...,r_{n}}:L^{\times n+1}\longrightarrow L\) called weighted braces which satisfy the following identities:_ 1. \(x\{y_{\sigma(1)},...,y_{\sigma(n)}\}_{r_{\sigma(1)},...,r_{\sigma(n)}}=x\{y_{ 1},...,y_{n}\}_{r_{1},...,r_{n}},\)__ 2. \(x\{y_{1},...,y_{i-1},y_{i},y_{i+1},...,y_{n}\}_{r_{1},...,r_{i-1},0,r_{i+1},...,r_{n}}=x\{y_{1},...,y_{i-1},y_{i+1},...,y_{n}\}_{r_{1},...,r_{i-1},r_{i+1},...,r_{n}},\)__ _._ 3. \(x\{y_{1},...,\lambda y_{i},...,y_{n}\}_{r_{1},...,r_{i},...,r_{n}}=\lambda^{r_{i}}x \{y_{1},...,y_{i},...,y_{n}\}_{r_{1},...,r_{i},...,r_{n}}\), 4. \(x\{y_{1},...,y_{i},y_{i},...,y_{n}\}_{r_{1},...,r_{i},r_{i+1},...,r_{n}}={r_{i} +r_{i+1}\choose r_{i}}x\{y_{1},...,y_{i},...,y_{n}\}_{r_{1},...,r_{i-1},r_{i}+r _{i+1},r_{i+2},...,r_{n}}\), 5. \(x\{y_{1},...,y_{i}+\widetilde{y}_{i},...,y_{n}\}_{r_{1},...,r_{i},...,r_{n}}= \sum_{s=0}^{r_{i}}x\{y_{1},...,y_{i},\widetilde{y}_{i},...,y_{n}\}_{r_{1},..., s,r_{i}-s,...,r_{n}}\), 6. 
\(x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}\{z_{1},...,z_{m}\}_{s_{1},...,s_{m}}=\) \[\sum_{s_{i}=\beta_{i}+\sum\alpha_{i}^{\cdot\cdot}}{1\over\prod_{j}(r_{j})!}x \{y_{1}\{z_{1},...,z_{m}\}_{\alpha_{1}^{1,1},...,\alpha_{m}^{1,1}},...,y_{1} \{z_{1},...,z_{m}\}_{\alpha_{1}^{1,r_{1}},...,\alpha_{m}^{1,r_{1}}}},\] \[...,y_{n}\{z_{1},...,z_{m}\}_{\alpha_{1}^{n,1},...,\alpha_{m}^{n,1}},...,y_{n} \{z_{1},...,z_{m}\}_{\alpha_{1}^{n,r_{n}},...,\alpha_{m}^{n,r_{n}}},z_{1},...,z_{m}\}_{1,...,1,\beta_{1},...,\beta_{m}},\] _for all \(n,m\geq 0\), \(r_{1},...,r_{n},s_{1},...,s_{m}\geq 0\), \(1\leq i\leq n\), \(\sigma\in\Sigma_{n}\) and \(x,y_{1},...,y_{n},z_{1},...,z_{m}\in L\)._ Note that the formula \((vi)\) is written in a form that uses fractions for more convenience, but can be reduced to \(\mathbb{Z}\) using the other formulas. The process works as follows. Let \(i\) such that \(1\leq i\leq n\). In the sum, we first fix \(\beta_{1},...,\beta_{m}\) and \(\alpha_{j}^{p,q}\) for \(1\leq j\leq m\), \(1\leq q\leq r_{j}\) and \(p\neq i\). We obtain a sum with \((\alpha_{1}^{i,1},...,\alpha_{m}^{i,1},...,\alpha_{1}^{i,r_{1}},...,\alpha_{m} ^{i,r_{i}})\) as variables. We identify this last tuple with a tuple of tuples of the form \(((\alpha_{1}^{i,1},...,\alpha_{m}^{i,1});...;(\alpha_{1}^{i,r_{i}},...,\alpha_{ m}^{i,r_{i}}))\). Let \(u\) be one of these tuples and suppose \(u=(\underbrace{\widetilde{u_{1}},...,\widetilde{u_{1}}}_{t_{1}},..., \underbrace{\widetilde{u_{q}},...,\widetilde{u_{q}}}_{t_{q}})\) up to permutation. Note that, if \(\widetilde{u_{1}},...,\widetilde{u_{q}}\) are given, we exactly have \({r_{i}!\over t_{1}!...t_{q}!}\) such terms occurring in the sum. 
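This count is the standard multinomial one: among the ordered tuples of length \(r_{i}\), exactly \(r_{i}!/(t_{1}!\cdots t_{q}!)\) realize a given multiset with multiplicities \(t_{1},...,t_{q}\). A brute-force check (our own sketch, with abstract stand-in values playing the role of the tuples \(\widetilde{u_{k}}\)):

```python
from itertools import product
from math import factorial
from collections import Counter

values = ['u', 'v', 'w']        # stand-ins for the possible tuples u_k
r = 4                           # length of the tuple, playing the role of r_i

# group all ordered tuples of length r by their underlying multiset
groups = Counter(tuple(sorted(t)) for t in product(values, repeat=r))

for multiset, count in groups.items():
    expected = factorial(r)                    # r! / (t_1! ... t_q!)
    for t in Counter(multiset).values():
        expected //= factorial(t)
    assert count == expected
```

For instance, the multiset \(\{u,u,v,w\}\) arises from \(4!/2!=12\) ordered tuples.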
Then, by using the symmetry formula \((i)\), the formula \((iv)\) and by summing over all such tuples, we have in the sum: \[{1\over\prod_{j}(r_{j})!}{r_{i}!\over t_{1}!...t_{q}!}t_{1}!...t_{q}!\ x\{y_{1} \{z_{1},...,z_{m}\}_{\alpha_{1}^{1,1},...,\alpha_{m}^{1,1}},...,y_{1}\{z_{1},...,z_{m}\}_{\alpha_{1}^{1,r_{1}},...,\alpha_{m}^{1,r_{1}}},...,\] \[y_{i}\{z_{1},...,z_{m}\}_{\overline{u_{1}}},...,y_{i}\{z_{1},...,z_{m}\}_{ \overline{u_{q}}},...\] \[...,y_{n}\{z_{1},...,z_{m}\}_{\alpha_{1}^{n,1},...,\alpha_{m}^{n,1}},...,y_{n} \{z_{1},...,z_{m}\}_{\alpha_{1}^{n,r_{n}},...,\alpha_{m}^{n,r_{n}}},z_{1},...,z_{m}\}_{1,...,t_{1},...,t_{q},...,1,\beta_{1},...,\beta_{m}},\] where we have set \(y_{i}\{z_{1},...,z_{m}\}_{\overline{u_{k}}}=y_{i}\{z_{1},...,z_{m}\}_{\alpha_{ 1},...,\alpha_{m}}\) if \(\widetilde{u_{k}}=(\alpha_{1},...,\alpha_{m})\). Hence, it gives: \[{1\over\prod_{j\neq i}(r_{j})!}x\{y_{1}\{z_{1},...,z_{m}\}_{\alpha_{1}^{1,1},...,\alpha_{m}^{1,1}},...,y_{1}\{z_{1},...,z_{m}\}_{\alpha_{1}^{1,r_{1}},..., \alpha_{m}^{1,r_{1}}},...,\] \[y_{i}\{z_{1},...,z_{m}\}_{\overline{u_{1}}},...,y_{i}\{z_{1},...,z_{m}\}_{ \overline{u_{q}}},...\] \[...,y_{n}\{z_{1},...,z_{m}\}_{\alpha_{1}^{n,1},...,\alpha_{m}^{n,1}},...,y_{n} \{z_{1},...,z_{m}\}_{\alpha_{1}^{n,r_{n}},...,\alpha_{m}^{n,r_{n}}},z_{1},...,z_{m}\}_{1,...,t_{1},...,t_{q},...,1,\beta_{1},...,\beta_{m}}.\] By iterating this argument on the other terms, we obtain an expression over \(\mathbb{Z}\). The reader can find an example of such a reduction of the formula \((vi)\) in [3, Example 5.11], as well as a proof of the previous theorem (see [3, Propositions 5.9-5.10]). We give the explicit construction of the weighted braces. 
**Construction 1.4**.: _We regard the weighted braces \(x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}\) as the action of the corolla \(F_{\sum_{i}r_{i}}\) on the tensor \(x\otimes\underbrace{y_{1}\otimes...\otimes y_{1}}_{r_{1}}\otimes...\otimes\underbrace{y_{n}\otimes...\otimes y_{n}}_{r_{n}}\) where we distinguish all the \(y_{i}\)'s. If \(y_{i}\neq y_{j}\) for all \(i\neq j\), then we precisely set_ \[x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}=\gamma(\mathcal{O}F_{\sum_{i}r_{i}}(x\otimes\underbrace{y_{1}\otimes...\otimes y_{1}}_{r_{1}}\otimes...\otimes\underbrace{y_{n}\otimes...\otimes y_{n}}_{r_{n}})),\] _where \(\gamma\) is the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure on \(L\)._ _In order to include the case where some of the \(y_{i}\)'s might be the same, let \(E_{n}\) be the free \(\mathbb{K}\)-module generated by a basis \(e,e_{1},...,e_{n}\). We have an obvious morphism \(\psi_{x,y_{1},...,y_{n}}:\Gamma(\mathcal{P}re\mathcal{L}ie,E_{n})\longrightarrow\Gamma(\mathcal{P}re\mathcal{L}ie,L)\) which sends \(e\) to \(x\) and each \(e_{i}\) to \(y_{i}\). We then apply the orbit map at the source, followed by this morphism, to obtain a well-defined notion of the weighted braces._ **Remark 1.5**.: _The converse of the previous theorem is also true, provided that \(L\) is free as a \(\mathbb{K}\)-module. In fact, by using the same arguments as in [3, Construction 5.14], we can more generally assert that if we have brace operations \(-\{-,...,-\}_{r_{1},...,r_{n}}:L^{\times n+1}\longrightarrow M\) which satisfy formulas \((i)-(vi)\), where \(L\) and \(M\) are \(\mathbb{K}\)-modules with \(L\) free, then these operations extend to a morphism \(\Gamma(\mathcal{P}re\mathcal{L}ie,L)\longrightarrow M\)._ ## 2 Deformation theory of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras The main goal of this section is to extend the results proved by Dotsenko-Shadrin-Vallette in [6] to the context of a ring of positive characteristic.
The main idea is that the formulas which define the circular product and the gauge action can be written in terms of weighted brace operations. In §2.1, we revisit the definition of pre-Lie algebras with divided powers in the dg framework. In particular, we give the analogue of the weighted brace operations. We then make explicit an example of differential graded pre-Lie algebras with divided powers given by differential graded brace algebras. In §2.2, we define the circular product in terms of weighted brace operations, generalizing the one given in [6]. We then show that this induces a group, called the gauge group associated to the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra. In §2.3, we define the Maurer-Cartan equation in a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra, and then the Maurer-Cartan set. We also see that the gauge group acts on the Maurer-Cartan set by a formula similar to the one given in [6]. In §2.4, we finally motivate this new deformation theory with an analogue of the Goldman-Millson theorem. This theorem, in particular, has the advantage of holding over the integers. ### 2.1 Differential graded pre-Lie algebras with divided powers As we are dealing with differential graded modules, our first goal is to define and study differential graded pre-Lie algebras with divided powers. In the following sections, we assume that dg modules are equipped with a cohomological grading convention. We will denote by \(\otimes\) the usual tensor product of graded modules over any ring \(\mathbb{K}\). This induces a symmetric monoidal category that we will denote by dg\(\mathbb{K}Mod\). If there is no possible confusion, then we will denote by \(\pm\) any sign produced by the Koszul sign rule. #### 2.1.1 Weighted braces on \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras Our main goal here is to extend [3, Proposition 5.13] to the context of dg modules. We begin with a basic definition.
**Definition 2.1**.: _A differential graded pre-Lie algebra is an algebra over the monad \(\mathcal{S}(\mathcal{P}re\mathcal{L}ie,-):\mathrm{dg}\mathbb{K}Mod\longrightarrow \mathrm{dg}\mathbb{K}Mod\)._ Equivalently, we can easily see that a differential graded pre-Lie algebra is a graded module \(L=\bigoplus_{n\geq 0}L^{n}\) endowed with a morphism of graded modules \(\star:L\otimes L\longrightarrow L\) such that \[(x\star y)\star z-x\star(y\star z)=\pm((x\star z)\star y-x\star(z\star y))\] and a differential \(d:L^{n}\longrightarrow L^{n+1}\), which satisfies \[d(x\star y)=d(x)\star y\pm x\star d(y),\] where \(\pm\) is the sign yielded by the permutation of \(x\) and \(d\). We now define the notion of a pre-Lie algebra with divided powers in the dg framework. **Definition 2.2**.: _A differential graded pre-Lie algebra with divided powers is an algebra over the monad \(\Gamma(\mathcal{P}re\mathcal{L}ie,-):dg\mathbb{K}Mod\longrightarrow dg \mathbb{K}Mod\)._ As in the non graded case, we have the orbit map \[\mathcal{O}:\mathcal{S}(\mathcal{P}re\mathcal{L}ie,L)\longrightarrow\Gamma( \mathcal{P}re\mathcal{L}ie,L),\] for every \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra \(L\) that is free as a \(\mathbb{K}\)-module, which is defined by the same formula as in SS1.2. More precisely, we use the diagonal action of \(\Sigma_{n}\) on \(\mathcal{P}re\mathcal{L}ie(n)\otimes V^{\otimes n}\) where \(\Sigma_{n}\) acts on \(V^{\otimes n}\) by permuting the elements of the tensor and produces a sign given by the Koszul sign rule. **Proposition 2.3**.: _Let \(\mathbb{K}\) be a ring. 
A (differential) graded pre-Lie algebra with divided powers \(L=\bigoplus_{n\geq 0}L^{n}\) over \(\mathbb{K}\) comes equipped with operations, called weighted braces, which have the following form._

* _If_ \(char(\mathbb{K})=2\)_, weighted braces are maps_ \[-\{-,...,-\}_{r_{1},...,r_{n}}:L^{\times(n+1)}\longrightarrow L,\] _defined for any collection of integers_ \(r_{1},...,r_{n}\geq 0\)_, which satisfy all formulas of Theorem_ 1.3 _and preserve the grading in the sense that_ \[L^{k}\{L^{k_{1}},...,L^{k_{n}}\}_{r_{1},...,r_{n}}\subset L^{k+k_{1}r_{1}+...+ k_{n}r_{n}}.\]
* _If_ \(char(\mathbb{K})\neq 2\)_, by setting_ \(L^{ev}=\bigoplus_{n\geq 0}L^{2n}\) _and_ \(L^{odd}=\bigoplus_{n\geq 0}L^{2n+1}\)_, weighted braces are maps_ \[-\{\underbrace{-,...,-}_{p},\underbrace{-,...,-}_{q}\}_{r_{1},...,r_{p},1,...,1 }:L\times(L^{ev})^{\times p}\times(L^{odd})^{\times q}\longrightarrow L,\] _defined for any collection of integers_ \(p,q,r_{1},...,r_{p}\geq 0\)_, which satisfy all formulas of Theorem_ 1.3 _up to a sign and preserve the grading._

_Proof._ We basically do the same thing as in [3, Proposition 5.10]. Let \(x,y_{1},...,y_{n}\in L\). Let \(E_{n}\) be the graded \(\mathbb{K}\)-module generated by \(e,e_{1},...,e_{n}\), placed in the same degrees as \(x,y_{1},...,y_{n}\). We have an obvious morphism of graded modules from \(E_{n}\) to \(L\) sending \(e\) to \(x\) and \(e_{i}\) to \(y_{i}\). This gives rise by functoriality to a morphism \(\psi_{x,y_{1},...,y_{n}}:\Gamma(\mathcal{P}re\mathcal{L}ie,E_{n}) \longrightarrow\Gamma(\mathcal{P}re\mathcal{L}ie,L)\). Let \(p=\sum_{i=1}^{n}r_{i}\). We set

\[x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}:=\gamma(\psi_{x,y_{1},...,y_{n}}( \mathcal{O}F_{p}(e\otimes\underbrace{e_{1}\otimes...\otimes e_{1}}_{r_{1}} \otimes...\otimes\underbrace{e_{n}\otimes...\otimes e_{n}}_{r_{n}}))),\]

where \(\gamma\) is the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure on \(L\). One can check, in both cases, that all the desired formulas are satisfied.
\(\square\)

As in the non-graded case, the converse is also true, provided that \(L\) is free as a \(\mathbb{K}\)-module in each degree. We also see that the weighted braces satisfy the Leibniz rule with respect to the differential.

**Proposition 2.4**.: _For all differential graded pre-Lie algebras with divided powers \(L=\bigoplus_{n\geq 0}L^{n}\), the differential \(d\) is compatible with the weighted braces in the sense that_

\[d(x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}})=d(x)\{y_{1},...,y_{n}\}_{r_{1},...,r_ {n}}+\sum_{k=1}^{n}(-1)^{\varepsilon_{k}}x\{y_{1},...,y_{k},d(y_{k}),...,y_{n }\}_{r_{1},...,r_{k}-1,1,...,r_{n}},\]

_where \(\varepsilon_{k}=|x|+|y_{1}|+...+|y_{k-1}|\)._

_Proof._ This proposition follows directly from the definition of the weighted braces and the commutation of \(d\) with the monadic composition. \(\square\)

We then deduce from Propositions 2.3 and 2.4 that every differential graded pre-Lie algebra with divided powers is in particular a differential graded pre-Lie algebra, with

\[x\star y=x\{y\}_{1}.\]

**Remark 2.5**.: _If \(\mathbb{Q}\subset\mathbb{K}\) and if \(L\) is a differential graded pre-Lie algebra, then \(L\) is a differential graded pre-Lie algebra with divided powers whose weighted braces are explicitly given by_

\[x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}}=\frac{1}{\prod_{i}r_{i}!}x\{\underbrace {y_{1},...,y_{1}}_{r_{1}},...,\underbrace{y_{n},...,y_{n}}_{r_{n}}\}\]

_in terms of symmetric braces._

**Remark 2.6**.: _Every morphism of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras preserves the weighted braces:_

\[f(x\{y_{1},...,y_{n}\}_{r_{1},...,r_{n}})=f(x)\{f(y_{1}),...,f(y_{n})\}_{r_{1},...,r_{n}}.\]

In the following sections, we will need a convergence hypothesis in order to make sense of infinite series. There are several ways to formulate such a hypothesis.
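Before fixing such a hypothesis, here is a small computational sanity check of Remark 2.5 in a classical \(\mathbb{Q}\)-linear pre-Lie algebra — an illustrative example of our own, not taken from the paper: polynomials with zero constant term under \(a\star b=a'b\), truncated modulo \(t^{N}\) (the \(t\)-adic filtration is then a complete filtration of the kind introduced below). The symmetric braces are computed via the Guin-Oudom recursion, which we assume agrees with the braces of Theorem 1.3; in this algebra they reduce to \(a\{b_{1},...,b_{n}\}=a^{(n)}b_{1}\cdots b_{n}\).

```python
from fractions import Fraction
from math import factorial

N = 8  # we work in the pre-Lie algebra t*Q[t], truncated modulo t^N

def poly(*coeffs):
    """Coefficient list of c0 + c1*t + ..., truncated modulo t^N."""
    cs = [Fraction(c) for c in coeffs[:N]]
    return cs + [Fraction(0)] * (N - len(cs))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def mul(a, b):
    c = [Fraction(0)] * N
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < N:
                c[i + j] += x * y
    return c

def deriv(a, n=1):
    for _ in range(n):
        a = [Fraction(k + 1) * a[k + 1] for k in range(N - 1)] + [Fraction(0)]
    return a

def star(a, b):
    """Right pre-Lie product a * b = a'b; its associator a''bc is symmetric in b, c."""
    return mul(deriv(a), b)

def brace(a, bs):
    """Symmetric braces via the Guin-Oudom recursion (assumed to match Theorem 1.3):
    x{y_1,...,y_n} = x{y_1,...,y_{n-1}} * y_n - sum_i x{y_1,...,y_i * y_n,...,y_{n-1}}."""
    if not bs:
        return a
    head, bn = bs[:-1], bs[-1]
    res = star(brace(a, head), bn)
    for i in range(len(head)):
        res = sub(res, brace(a, head[:i] + [star(head[i], bn)] + head[i + 1:]))
    return res

def weighted_brace(a, b, r):
    """x{y}_r = (1/r!) x{y,...,y}, as in Remark 2.5 (divided powers exist since Q lies in K)."""
    return [c / factorial(r) for c in brace(a, [b] * r)]

# In this algebra the braces close up as a{b_1,...,b_n} = a^(n) b_1 ... b_n:
a, b, c = poly(0, 1, 2, 1), poly(0, 0, 1), poly(0, 1, 1)
assert brace(a, [b, c]) == mul(deriv(a, 2), mul(b, c))
```

The last assertion checks the closed form of the braces; together with the divided-power normalization of Remark 2.5, it makes the formulas of this subsection directly computable in this toy model.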
For this paper, we will suppose that \(L\) is complete with respect to some filtration \(...\subset F_{n}L\subset F_{n-1}L\subset...\subset F_{1}L=L\), in the sense that \(L=\lim_{n\geq 1}L/F_{n}L\) in \(\mathrm{dg}\mathbb{K}Mod\). We also assume that the filtration is compatible with the weighted braces:

\[F_{k}L\{F_{k_{1}}L,...,F_{k_{n}}L\}_{r_{1},...,r_{n}}\subset F_{k+k_{1}r_{1}+...+k_{n}r_{n}}L.\]

Moreover, we will formally extend the weighted braces to \(L_{+}=\mathbb{K}1\oplus L\) by \(y\{1\}_{1}=y\) and, for \(n\geq 1\),

\[1\{y_{1},...,y_{n}\}=\left\{\begin{array}{ll}y_{1}&\text{if }n=1\\ 0&\text{if }n>1\end{array}\right..\]

We can extend the previous filtration to \(L_{+}\) by setting \(F_{0}L=L_{+}\). One can easily check that the weighted braces still preserve this new filtration, and \(L_{+}\) is obviously complete with respect to it.

#### 2.1.2 Example: differential graded brace algebras

We now show that dg brace algebras provide examples of dg pre-Lie algebras with divided powers, following the idea of the corresponding proof in the non-graded framework in [3].
**Definition 2.7**.: _A differential graded brace algebra is a differential graded module \(L\) endowed with brace operations_

\[-\langle-,...,-\rangle:L^{\otimes(n+1)}\longrightarrow L\]

_which are compatible with the differential \(d\):_

\[d(f\langle g_{1},...,g_{n}\rangle)=d(f)\langle g_{1},...,g_{n}\rangle+\sum_{k=1 }^{n}\pm f\langle g_{1},...,d(g_{k}),...,g_{n}\rangle,\]

_and such that \(f\langle\rangle=f\) and_

\[f\langle g_{1},...,g_{n}\rangle\langle h_{1},...,h_{r}\rangle=\sum\pm f\langle Y _{1},g_{1}\langle Y_{2}\rangle,...,Y_{2n-1},g_{n}\langle Y_{2n}\rangle,Y_{2n +1}\rangle,\]

_where the sum is over all consecutive subsets \(Y_{1}\sqcup Y_{2}\sqcup...\sqcup Y_{2n+1}=\{h_{1},...,h_{r}\}\), and the sign is yielded by the permutation of the \(g_{i}\)'s with the \(h_{j}\)'s._

The operad which governs brace algebras is denoted by \(Brace\), and is defined, in arity \(n\), as the \(\mathbb{K}\)-module spanned by the planar \(n\)-trees, i.e. trees with an order on the set of inputs of each vertex (see [3, §6.1] or [4, §2] for some details on the operad \(Brace\)). This operad allows us to represent all operations in brace algebras by the action of a planar tree, or by a planar tree labeled with the inputs (figure omitted).

**Remark 2.8**.: _Because the action of the symmetric groups on \(Brace\) is free, the trace map induces an isomorphism of monads \(\mathcal{S}(Brace,-)\longrightarrow\Gamma(Brace,-)\)._

We have an inclusion

\[i:\mathcal{P}re\mathcal{L}ie\hookrightarrow Brace\]

defined by the _symmetrization_ of trees. Namely, \(i\) is obtained by summing over all possible ways to write a given tree \(t\) as a planar tree.
For instance, \(i\) sends a tree to the sum of all of its planar representatives (figure omitted). The map \(i\) induces a morphism of monads that can be used to define a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure on every dg brace algebra \(L\), given by the following composition:

\[\Gamma(\mathcal{P}re\mathcal{L}ie,L)\xrightarrow{}\Gamma(Brace,L) \xleftarrow{\simeq}\mathcal{S}(Brace,L)\xrightarrow{}L\.\]

We can also compute the weighted braces explicitly.

**Theorem 2.9**.: _Every dg brace algebra \(L\) is endowed with a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure. Moreover, the weighted braces \(-\{-,...,-\}_{r_{1},...,r_{n}}\) are explicitly given by_

\[f\{g_{1},...,g_{n}\}_{r_{1},...,r_{n}}=\sum_{\sigma\in Sh(r_{1},...,r_{n})} \pm f\langle h_{\sigma^{-1}(1)},...,h_{\sigma^{-1}(r)}\rangle,\]

_where we set \(r=\sum_{i}r_{i}\) and \((h_{1},...,h_{r})=(\underbrace{g_{1},...,g_{1}}_{r_{1}},...,\underbrace{g_{n},...,g_{n}}_{r_{n}})\), and where \(\pm\) is the sign induced by the permutations of \(g_{i}\) with \(g_{j}\) when \(i\neq j\)._

_Proof._ Weighted braces are given by elements of the form \(x=\mathcal{O}T_{\sum_{i}r_{i}}(f\otimes\underbrace{g_{1}\otimes...\otimes g_{1}}_{r_{1}} \otimes...\otimes\underbrace{g_{n}\otimes...\otimes g_{n}}_{r_{n}})\in\Gamma(\mathcal{P}re\mathcal{L}ie,L)\). For more convenience, we will set \((h_{1},...,h_{r+1})=(f,\underbrace{g_{1},...,g_{1}}_{r_{1}},...,\underbrace{g _{n},...,g_{n}}_{r_{n}})\) (note that we have added \(f\) here, so that these \(h_{i}\)'s are different from the \(h_{i}\)'s of the theorem). We precisely have:

\[x=\sum_{\sigma\in\Sigma_{r+1}/\prod_{i}\Sigma_{r_{i}}}\pm(\sigma.T_{r})(h_{ \sigma^{-1}(1)},...,h_{\sigma^{-1}(r+1)})\]

because \(\text{Stab}(T_{r}(f,\underbrace{g_{1},...,g_{1}}_{r_{1}},...,\underbrace{g_{n },...,g_{n}}_{r_{n}}))=\prod_{i}\Sigma_{r_{i}}\).
Now, because \(\Sigma_{r+1}/\prod_{i}\Sigma_{r_{i}}\) is in bijection with \(Sh(1,r_{1},...,r_{n})\), we can rewrite \(x\) as

\[x=\sum_{\sigma\in Sh(1,r_{1},...,r_{n})}\pm(\sigma.T_{r})(h_{\sigma^{-1}(1)},...,h_{\sigma^{-1}(r+1)}).\]

We now embed \(\mathcal{P}re\mathcal{L}ie\) into \(Brace\). The tree \(T_{r}\) can be seen in \(Brace\) as \(\sum_{s\in\Sigma_{r}}s.\overline{T_{r}}\), where \(\overline{T_{r}}\) is the planar corolla with \(r\) leaves (figure omitted) and where we embed \(\Sigma_{r}\) into \(\Sigma_{r+1}\) by fixing \(1\). We then have, in \(\Gamma(Brace,L)\),

\[x=\sum_{\sigma\in Sh(1,r_{1},...,r_{n})}\sum_{s\in\Sigma_{r}}\pm(\sigma s. \overline{T_{r}})(h_{\sigma^{-1}(1)},...,h_{\sigma^{-1}(r+1)}).\]

Using that every \(s\in\Sigma_{r}\) admits a unique decomposition of the form \(s=\omega.\mu\) where \(\omega\in Sh(r_{1},...,r_{n})\) and \(\mu\in\prod_{i}\Sigma_{r_{i}}\), we obtain

\[x=\sum_{\sigma\in Sh(1,r_{1},...,r_{n})}\sum_{\omega\in Sh(r_{1},...,r_{n})} \sum_{\mu\in\prod_{i}\Sigma_{r_{i}}}\pm(\sigma\mu\omega.\overline{T_{r}})(h_{ \sigma^{-1}(1)},...,h_{\sigma^{-1}(r+1)}).\]

We now need to compute \(y=Tr^{-1}(x)\). We claim that

\[y=\sum_{\omega\in Sh(r_{1},...,r_{n})}(\omega.\overline{T_{r}})(f,\underbrace {g_{1},...,g_{1}}_{r_{1}},...,\underbrace{g_{n},...,g_{n}}_{r_{n}}),\]

where again we embed \(\Sigma_{r}\hookrightarrow\Sigma_{r+1}\) by fixing \(1\).
We compute

\[Tr(y)=\sum_{\omega\in Sh(r_{1},...,r_{n})}\sum_{\tau\in\Sigma_{r+1}}\pm(\tau \omega.\overline{T_{r}})(h_{\tau^{-1}(1)},...,h_{\tau^{-1}(r+1)}).\]

As before, we use that every \(\tau\in\Sigma_{r+1}\) admits a unique decomposition of the form \(\tau=\sigma.\mu\) where \(\sigma\in Sh(1,r_{1},...,r_{n})\) and \(\mu\in\prod_{i}\Sigma_{r_{i}}\). Then:

\[Tr(y)=\sum_{\omega\in Sh(r_{1},...,r_{n})}\sum_{\sigma\in Sh(1,r_{1},...,r_{n} )}\sum_{\mu\in\prod_{i}\Sigma_{r_{i}}}\pm(\sigma\mu\omega.\overline{T_{r}})(h _{\sigma^{-1}(1)},...,h_{\sigma^{-1}(r+1)})=x,\]

which gives the result. \(\square\)

In particular, we can easily compute the weighted braces in the case \(n=1\):

\[f\{g\}_{r}=f\langle\underbrace{g,...,g}_{r}\rangle.\]

**Remark 2.10**.: _Let \(\mathcal{P}\) be an operad. It is well known that \(\bigoplus_{n}\mathcal{P}(n)\) is endowed with a brace algebra structure (see [3]). If we denote by \(\gamma:\mathcal{P}\circ\mathcal{P}\longrightarrow\mathcal{P}\) the operadic composition, the brace algebra structure of \(\bigoplus_{n}\mathcal{P}(n)\) can be written as_

\[p\langle q_{1},...,q_{n}\rangle=\sum_{\sigma}\gamma(p\otimes 1\otimes...\otimes 1 \otimes q_{1}\otimes 1\otimes...\otimes 1\otimes q_{n}\otimes 1\otimes... \otimes 1\otimes\sigma)\]

_where we sum over all pointed unshuffle permutations \(\sigma\) of type \((1,...,1,m_{1},1,...,1,m_{n},1,...,1)\), with \(p\in\mathcal{P}(r)\) and \(q_{i}\in\mathcal{P}(m_{i})\). By convention, the term in the sum is \(0\) if \(n\geq r+1\)._

_Consider now the submodule \(\bigoplus_{n}\mathcal{P}(n)^{\Sigma_{n}}\)._
_It is not preserved by the brace operations in general; however, the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\) operations induced by the brace algebra structure of \(\bigoplus_{n}\mathcal{P}(n)\) preserve \(\bigoplus_{n}\mathcal{P}(n)^{\Sigma_{n}}\), so that \(\bigoplus_{n}\mathcal{P}(n)^{\Sigma_{n}}\) is also a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra._

### The gauge group

We can now define an analogue of the circular product given in [6], using the weighted brace operations.

**Definition 2.11**.: _Let \(\alpha\in L_{+}\) and \(\mu\in L^{0}\). We set_

\[\alpha\odot(1+\mu)=\sum_{n=0}^{+\infty}\alpha\{\mu\}_{n}.\]

Note that this quantity is well defined according to our convergence hypothesis. By applying this definition in the case \(\mathbb{Q}\subset\mathbb{K}\) and using the weighted braces given by Remark 2.5, we retrieve the usual circular product given in [6].

**Remark 2.12**.: _One can see that \(1\odot(1+\mu)=1+\mu=(1+\mu)\odot 1\), so that \(1\) is a unit element for \(\odot\). We thus have_

\[\forall\mu,\nu\in L^{0},\quad(1+\mu)\odot(1+\nu)=1+\nu+\sum_{n=0}^{+\infty}\mu\{\nu\}_{n},\]

_which shows that \(\odot\) preserves \(1+L^{0}\)._

**Lemma 2.13**.: _The circular product \(\odot\) is associative, in the sense that for all \(\alpha\in L_{+}\) and \(\mu,\nu\in L^{0}\),_

\[(\alpha\odot(1+\mu))\odot(1+\nu)=\alpha\odot((1+\mu)\odot(1+\nu)).\]

_Proof._ Let \(\alpha\in L_{+}\) and \(\mu,\nu\in L^{0}\).
We first have

\[(\alpha\odot(1+\mu))\odot(1+\nu)=\left(\sum_{n=0}^{+\infty}\alpha\{\mu\}_{n}\right)\odot(1+\nu)=\sum_{n,p=0}^{+\infty}\alpha\{\mu\}_{n}\{\nu\}_{p}.\]

On the other hand, we have

\[\alpha\odot((1+\mu)\odot(1+\nu))=\alpha\odot\left(1+\nu+\sum_{n=0}^{+\infty}\mu\{\nu\}_{n}\right)=\sum_{p=0}^{+\infty}\alpha\left\{\nu+\sum_{n=0}^{+\infty}\mu\{\nu\}_{n}\right\}_{p}.\]

We thus need to prove that

\[\sum_{n,p=0}^{+\infty}\alpha\{\mu\}_{n}\{\nu\}_{p}=\sum_{p=0}^{+\infty}\alpha \left\{\nu+\sum_{n=0}^{+\infty}\mu\{\nu\}_{n}\right\}_{p}.\]

To prove this identity, we use formula \((vi)\) of Theorem 1.3:

\[\alpha\{\mu\}_{n}\{\nu\}_{p}=\sum_{p=\beta+\sum_{i=1}^{n}\alpha^{i}}\frac{1}{n! }\alpha\{\mu\{\nu\}_{\alpha^{1}},...,\mu\{\nu\}_{\alpha^{n}},\nu\}_{1,...,1, \beta},\]

which gives

\[\sum_{n,p=0}^{+\infty}\alpha\{\mu\}_{n}\{\nu\}_{p}=\sum_{n=0}^{+\infty}\sum_{p =0}^{+\infty}\sum_{\beta=0}^{p}\sum_{p-\beta=\sum_{i=1}^{n}\alpha^{i}}\frac{1} {n!}\alpha\{\mu\{\nu\}_{\alpha^{1}},...,\mu\{\nu\}_{\alpha^{n}},\nu\}_{1,...,1,\beta}.\]

In this sum, because of the symmetry, some terms occur several times. For given \(p\) and \(\beta\), we count the number of decompositions \(p-\beta=\alpha^{1}+...+\alpha^{n}\) of the particular form \(r_{1}\widetilde{\alpha^{1}}+...+r_{q}\widetilde{\alpha^{q}}\). We get \(n(\widetilde{\alpha^{1}},...,\widetilde{\alpha^{q}})=\frac{n!}{r_{1}!\cdots r_{q}!}\) for this number.
We then have

\[\frac{1}{r_{1}!\cdots r_{q}!}\alpha\{\underbrace{\mu\{\nu\}_{\widetilde{\alpha^{1}}},...,\mu\{\nu\}_{\widetilde{\alpha^{1}}}}_{r_{1}},...,\underbrace{\mu\{\nu\}_{\widetilde{\alpha^{q}}},...,\mu\{\nu\}_{\widetilde{\alpha^{q}}}}_{r_{q}},\nu\}_{1,...,1,\beta}=\alpha\{\mu\{\nu\}_ {\widetilde{\alpha^{1}}},...,\mu\{\nu\}_{\widetilde{\alpha^{q}}},\nu\}_{r_{1},...,r_{q},\beta}.\]

We conclude by formula \((v)\) of Theorem 1.3. \(\square\)

We now need to find an explicit inverse of a given element \(1-\mu\), with \(\mu\in L^{0}\).

**Definition 2.14**.: _Let \(t\) be a non-labeled tree with \(n\) vertices and \(\mu\in L^{0}\). We set_

\[\mathcal{O}t(\mu)=\gamma(\mathcal{O}t(\mu^{\otimes n})),\]

_for some choice of labeling of \(t\)._

Note that because \(\mathcal{O}\) is \(\Sigma\)-invariant, this quantity does not depend on the choice of a labeling for \(t\). For example, for a suitable non-labeled tree \(t\) (figure omitted), we have

\[\mathcal{O}t(\mu)=\mu\{\mu\{\mu\}_{2},\mu\{\mu\}_{3},\mu\}_{2,1,1}.\]

**Lemma 2.15**.: _For every \(\mu\in L^{0}\), the element \(1-\mu\) has an inverse in \(1+L^{0}\) for the circular product \(\odot\), given by_

\[(1-\mu)^{\odot-1}=1+\sum_{t\in rRT^{*}}\mathcal{O}t(\mu),\]

_where \(rRT^{*}\) is the set of non-labeled rooted trees with at least one vertex._

_Proof._ We first see that this defines a right inverse for \(1-\mu\). Indeed, we first have that

\[(1-\mu)\odot\left(1+\sum_{t\in rRT^{*}}\mathcal{O}t(\mu)\right)=1+\sum_{t\in rRT^ {*}}\mathcal{O}t(\mu)-\sum_{k=0}^{+\infty}\mu\left\{\sum_{t\in rRT^{*}} \mathcal{O}t(\mu)\right\}_{k}.\]

Then, as every \(t\in rRT^{*}\) can be uniquely described by its root and its branches, every term in the first sum on the right-hand side corresponds to a unique term in the second sum, and vice versa. The formulas from Theorem 1.3 thus give the result. We now need to prove that it is a left inverse, which is slightly more difficult.
We compute

\[\left(1+\sum_{t\in rRT^{*}}\mathcal{O}t(\mu)\right)\odot(1-\mu)=1-\mu+\sum_{t \in rRT^{*}}\mathcal{O}t(\mu)+\sum_{k\geq 1}\left(\sum_{t\in rRT^{*}}\mathcal{O}t( \mu)\right)\{-\mu\}_{k}.\]

We focus on one term \(\mathcal{O}t(\mu)\) from the first sum, for some tree \(t\in rRT^{*}\). We say that a vertex of \(t\) is _extremal_ if it is not the root of \(t\) and if it is connected to one and only one other vertex of \(t\). We denote by \(m_{t}\) the number of extremal vertices. If \(m_{t}=0\), then \(t\) is the trivial tree: \(\mathcal{O}t(\mu)=\mu\). This term does not appear in the second sum (because \(k\geq 1\)) and cancels with \(-\mu\). If \(m_{t}\neq 0\), the idea is to fix a number \(1\leq k\leq m_{t}\), and to see which trees we can obtain by removing \(k\) extremal vertices of \(t\). These trees will occur in the second sum and give back \((-1)^{k}\mathcal{O}t(\mu)\) when adding \(k\) copies of \(-\mu\). Let \(X_{t}\) be the set of extremal vertices of \(t\). Let \(X_{t,k}\) be the set of unordered subsets of \(X_{t}\) with \(k\) elements. When we remove \(k\) extremal vertices, we need to take into account that we can obtain the same tree by removing a different unordered set of \(k\) extremal vertices. For example, if we take the previous tree and look at its first branch, removing the vertex at the left gives the same tree as removing the vertex at the right. Let \(t^{1}_{k},...,t^{p_{k}}_{k}\) be all the different trees that we can get from \(t\) by removing \(k\) extremal vertices. We denote by \(X^{t^{i}_{k}}_{t,k}\) the subset of \(X_{t,k}\) formed by all the subsets of vertices that lead to \(t^{i}_{k}\) when removed from \(t\). We then have a disjoint union \(X_{t,k}=\bigsqcup_{i=1}^{p_{k}}X^{t^{i}_{k}}_{t,k}\). Each term \(\mathcal{O}t^{i}_{k}(\mu)\{-\mu\}_{k}\) will then give, among other terms, \((-1)^{k}Card(X^{t^{i}_{k}}_{t,k})\mathcal{O}t(\mu)\).
When we take the sum over \(i\), we obtain \((-1)^{k}Card(X_{t,k})\mathcal{O}t(\mu)=(-1)^{k}\binom{m_{t}}{k}\mathcal{O}t(\mu)\). By taking the sum over \(k\) and using that \(\sum_{k=1}^{m_{t}}(-1)^{k}\binom{m_{t}}{k}=-1\), we therefore obtain \(-\mathcal{O}t(\mu)\), which cancels with the term \(\mathcal{O}t(\mu)\) from the first sum. \(\square\)

From Lemma 2.13 and Lemma 2.15, we deduce assertion \((i)\) of Theorem A:

**Theorem 2.16**.: _The triple \(\Gamma=(1+L^{0},\odot,1)\) is a group, called the gauge group of \(L\)._

### Maurer-Cartan elements and the Deligne groupoid

We now aim to prove assertion \((ii)\) of Theorem A. We first make explicit the definition of the Maurer-Cartan set.

**Definition 2.17**.: _An element \(\alpha\in L^{1}\) is a Maurer-Cartan element if it satisfies the Maurer-Cartan equation:_

\[d(\alpha)+\alpha\{\alpha\}_{1}=0.\]

_We let \(\mathcal{MC}(L)\) be the set of all Maurer-Cartan elements of \(L\)._

**Remark 2.18**.: _In the case \(\mathbb{Q}\subset\mathbb{K}\), we retrieve the classical definition:_

\[d(\alpha)+\frac{1}{2}[\alpha,\alpha]=0,\]

_written with the dg Lie algebra structure on \(L\)._

As in the case of characteristic zero, we expect the gauge group to act on the Maurer-Cartan set. Before seeing that, we define a new operation.

**Definition 2.19**.: _Let \(\alpha\in L_{+},\beta\in L\) and \(1+\mu\in\Gamma\)._
We set_

\[\alpha\odot(1+\mu;\beta)=\sum_{n=0}^{+\infty}\alpha\{\mu,\beta\}_{n,1}.\]

**Lemma 2.20**.: _We have the following identities:_

\[\begin{array}{lcl}(\alpha\odot(1+\mu))\{\beta\}_{1}&=&\alpha\odot(1+\mu; \beta+\mu\{\beta\}_{1}),\\ \alpha\{\beta\}_{1}\odot(1+\mu)&=&\alpha\odot(1+\mu;\beta\odot(1+\mu)),\\ d(\alpha\odot(1+\mu))&=&d(\alpha)\odot(1+\mu)+(-1)^{|\alpha|}\alpha\odot(1+ \mu;d(\mu)).\end{array}\]

_Proof._ By applying formula \((vi)\) of Theorem 1.3, we find that

\[\begin{array}{lcl}(\alpha\odot(1+\mu))\{\beta\}_{1}&=&\sum_{n=0}^{+\infty} \alpha\{\mu\}_{n}\{\beta\}_{1}\\ &=&\sum_{n=0}^{+\infty}\alpha\{\mu,\beta\}_{n,1}+\sum_{n=1}^{+\infty}\alpha\{ \mu,\mu\{\beta\}_{1}\}_{n-1,1}\\ &=&\sum_{n=0}^{+\infty}\alpha\{\mu,\beta+\mu\{\beta\}_{1}\}_{n,1}\\ &=&\alpha\odot(1+\mu;\beta+\mu\{\beta\}_{1}),\end{array}\]

as well as

\[\begin{array}{lcl}\alpha\{\beta\}_{1}\odot(1+\mu)&=&\sum_{m=0}^{+\infty}\alpha\{\beta\}_{1}\{\mu\}_{m}\\ &=&\sum_{p,q=0}^{+\infty}\alpha\{\beta\{\mu\}_{p},\mu\}_{1,q}\\ &=&\alpha\odot(1+\mu;\beta\odot(1+\mu)).\end{array}\]

Finally, by using the compatibility of \(d\) with the weighted braces, we obtain

\[\begin{array}{lcl}d(\alpha\odot(1+\mu))&=&\sum_{n=0}^{+\infty}d(\alpha)\{\mu\}_{n}+(-1)^{|\alpha|}\sum_{n=1 }^{+\infty}\alpha\{\mu,d(\mu)\}_{n-1,1}\\ &=&d(\alpha)\odot(1+\mu)+(-1)^{|\alpha|}\alpha\odot(1+\mu;d(\mu)),\end{array}\]

which concludes the proof of the lemma. \(\square\)

We can now prove assertion \((ii)\) of Theorem A.

**Theorem 2.21**.: _Suppose that \(\mu\{\alpha,\alpha\}_{1,1}=0\) for every \(\mu\in L^{0}\) and \(\alpha\in L\) with odd degree.
Then the gauge group \(\Gamma\) acts on the Maurer-Cartan set \(\mathcal{MC}(L)\) by_

\[(1+\mu).\alpha=(\alpha+\mu\{\alpha\}_{1}-d(\mu))\odot(1+\mu)^{\odot-1}\]

_for all \((1+\mu)\in\Gamma\) and \(\alpha\in\mathcal{MC}(L)\)._

Note that this theorem is false if we do not assume \(\mu\{\alpha,\alpha\}_{1,1}=0\) for \(\mu\in L^{0}\) and \(\alpha\in L\) with odd degree. The reason is that if we set \(\beta=(1+\mu).\alpha\) following the assertion of the theorem, then the proof of the theorem exactly gives the equality \(d(\beta)+\beta\{\beta\}_{1}=(\mu\{\alpha,\alpha\}_{1,1})\odot(1+\mu)^{\odot-1}\). The hypothesis thus ensures that the action preserves \(\mathcal{MC}(L)\). In most cases, this hypothesis is satisfied. We single out three particular situations where it holds:

* if \(char(\mathbb{K})=2\), because \(\mu\{\alpha,\alpha\}_{1,1}=2\mu\{\alpha\}_{2}=0\);
* if \(L\) has no \(2\)-torsion (e.g. if \(2\in\mathbb{K}^{\times}\)), because by symmetry \(\mu\{\alpha,\alpha\}_{1,1}=-\mu\{\alpha,\alpha\}_{1,1}\);
* if the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure of \(L\) is induced by a brace algebra structure (in the sense of Theorem 2.9).

_Proof._ We first need to prove that \(\beta=(1+\mu).\alpha\) is indeed a Maurer-Cartan element.
For this, we first remark that applying \(d\) to each side of the equality \(d(\mu)=\alpha+\mu\{\alpha\}_{1}-\beta\odot(1+\mu)\), and using that \(d(\alpha)=-\alpha\{\alpha\}_{1}\) together with the previous lemma, we have

\[d(\beta)\odot(1+\mu)=-\alpha\{\alpha\}_{1}-\mu\{\alpha\{\alpha\}_{1}\}_{1}+d(\mu)\{\alpha\}_{1}+\beta\odot(1+\mu;d(\mu)).\]

Moreover, again by the previous lemma, we have

\[\begin{array}{lcl}d(\mu)\{\alpha\}_{1}&=&\alpha\{\alpha\}_{1}+\mu\{\alpha\}_{1}\{\alpha\}_{1}-\beta\odot(1+\mu)\{\alpha\}_{1}\\ &=&\alpha\{\alpha\}_{1}+\mu\{\alpha\{\alpha\}_{1}\}_{1}+\mu\{\alpha,\alpha\}_{1,1}-\beta\odot(1+\mu;\alpha)-\beta\odot(1+\mu;\mu\{\alpha\}_{1}).\end{array}\]

Substituting this expression in the previous identity and using the hypothesis \(\mu\{\alpha,\alpha\}_{1,1}=0\), together with the linearity of \(\beta\odot(1+\mu;-)\) in its second argument, we obtain

\[d(\beta)\odot(1+\mu)=-\beta\odot(1+\mu;\alpha+\mu\{\alpha\}_{1}-d(\mu))=-\beta\odot(1+\mu;\beta\odot(1+\mu)).\]

Finally, by the previous lemma, this gives \(d(\beta)\odot(1+\mu)=-\beta\{\beta\}_{1}\odot(1+\mu)\), and then \((d(\beta)+\beta\{\beta\}_{1})\odot(1+\mu)=0\), that is to say \(d(\beta)+\beta\{\beta\}_{1}=0\) by composing with \((1+\mu)^{\odot-1}\) on the right. We thus have proved that \(\beta\in\mathcal{MC}(L)\). We now need to check that we have indeed an action of \(\Gamma\) on \(\mathcal{MC}(L)\). We have that \(1+0\) acts trivially on \(\mathcal{MC}(L)\), so we just need to prove that \(((1+\nu)\odot(1+\mu)).\alpha=(1+\nu).((1+\mu).\alpha)\). By hypothesis, setting \(\beta=(1+\mu).\alpha\) and \(\gamma=(1+\nu).\beta\), we have the equalities \(d(\mu)=\alpha+\mu\{\alpha\}_{1}-\beta\odot(1+\mu)\) and \(d(\nu)=\beta+\nu\{\beta\}_{1}-\gamma\odot(1+\nu)\). Let \(1+\lambda=(1+\nu)\odot(1+\mu)=1+\mu+\nu\odot(1+\mu)\).
We compute:

\[\begin{array}{rcl}\alpha+\lambda\{\alpha\}_{1}-\gamma\odot(1+\lambda)&=&\alpha+\mu\{\alpha\}_{1}+\nu\odot(1+\mu)\{\alpha\}_{1}+d(\nu)\odot(1+\mu)\\ &&-\beta\odot(1+\mu)-\nu\{\beta\}_{1}\odot(1+\mu)\end{array}\]

by the previous lemma. We then have

\[\begin{array}{rcl}\alpha+\lambda\{\alpha\}_{1}-\gamma\odot(1+\nu)\odot(1+\mu)&=&d(\mu)+d(\nu)\odot(1+\mu)+\nu\odot(1+\mu;d(\mu))\\ &=&d(\lambda),\end{array}\]

which proves the theorem. \(\square\)

We can link these results to the pre-Lie deformation theory developed in [6] by Dotsenko-Shadrin-Vallette. Indeed, recall that on a differential graded Lie algebra \(L\), we can formally add an element \(\delta\) which makes the differential internal, in the sense that \(d(\mu)=[\delta,\mu]\) for all \(\mu\in L\).
Then, when looking at \(\widetilde{L}=L\oplus\mathbb{K}\delta\), the Maurer-Cartan equation reduces to a square-zero equation:

\[[\alpha,\alpha]=0.\]

Moreover, the action of an element \(\lambda\) of the usual gauge group is described by the formula

\[\lambda.\alpha=(e^{\lambda}\star\alpha)\odot e^{-\lambda},\]

which can be written, via the variable substitution \(1+\mu=e^{\lambda}\), as

\[(1+\mu).\alpha=(\alpha+\mu\{\alpha\}_{1})\odot(1+\mu)^{\odot-1}.\]

To retrieve our formula, perform the last variable substitution \(\alpha=\overline{\alpha}+\delta\) and use that \(\delta\{x_{1},...,x_{n}\}=0\) as soon as \(n\geq 2\) to get \((1+\mu).\overline{\alpha}=(\overline{\alpha}+\mu\{\overline{\alpha}\}_{1}-d (\mu))\odot(1+\mu)^{\odot-1}\), which is precisely the action we have defined in the previous theorem. We end this section with the definition of the _Deligne groupoid_.

**Proposition-Definition 2.22**.: _Let \(L\) be a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra such that \(\mu\{\alpha,\alpha\}_{1,1}=0\) for every \(\mu\in L^{0}\) and \(\alpha\in L\) with odd degree. We let \(\mathrm{Deligne}(L)\) be the category with \(\mathcal{MC}(L)\) as set of objects and \(\mathrm{Mor}_{\mathrm{Deligne}(L)}(\alpha,\beta)=\{(1+\mu)\in\Gamma\mid(1+\mu). \alpha=\beta\}\). Then \(\mathrm{Deligne}(L)\) is a groupoid, called the Deligne groupoid of \(L\)._

_Proof_. It is a corollary of the previous theorem. \(\square\)

### An integral Goldman-Millson theorem

We conclude this part with an analogue of the Goldman-Millson theorem. This theorem gives a link between two particular groupoids when changing a dg Lie algebra \(L\) to another one \(\overline{L}\) which is quasi-isomorphic to \(L\) (see [11, §2.4]).
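Before turning to the integral Goldman-Millson theorem, it may help to see the gauge constructions in the simplest degenerate case — an illustrative example of our own, not taken from the paper. A nilpotent associative algebra is pre-Lie with \(x\star y=xy\), and associativity makes every symmetric brace \(x\{y_{1},...,y_{n}\}\) with \(n\geq 2\) vanish; hence \(\alpha\odot(1+\mu)=\alpha(1+\mu)\), the inverse of Lemma 2.15 reduces to the geometric series (only ladder trees contribute), and, with zero differential (and ignoring the grading), the gauge action becomes conjugation — which visibly preserves the square-zero equation discussed above.

```python
from fractions import Fraction

n = 4  # strictly upper triangular n x n matrices form a nilpotent associative algebra

def M(rows):    return [[Fraction(x) for x in row] for row in rows]
def Z():        return [[Fraction(0)] * n for _ in range(n)]
def I():        return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
def madd(A, B): return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]
def msub(A, B): return [[A[i][j] - B[i][j] for j in range(n)] for i in range(n)]
def mmul(A, B): return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def inv_formula(mu):
    """(1 - mu)^{o-1} = 1 + sum over trees of O t(mu): with all higher braces zero,
    only ladder trees survive and the sum is the geometric series 1 + mu + mu^2 + ..."""
    S, p = I(), mu
    while p != Z():
        S, p = madd(S, p), mmul(p, mu)
    return S

def gauge_action(mu, alpha):
    """(1 + mu).alpha = (alpha + mu{alpha}_1) o (1 + mu)^{o-1} with d = 0;
    here x o (1 + rho) = x + x rho, so the action is conjugation by 1 + mu."""
    inv = inv_formula(msub(Z(), mu))  # (1 + mu)^{-1} = sum of (-mu)^k
    return mmul(madd(alpha, mmul(mu, alpha)), inv)

mu = M([[0, 1, 2, 0], [0, 0, 1, 1], [0, 0, 0, 3], [0, 0, 0, 0]])
alpha = M([[0, 0, 1, 2], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
assert mmul(msub(I(), mu), inv_formula(mu)) == I()  # right inverse, as in Lemma 2.15
```

Since conjugation preserves \(\alpha^{2}=0\), this toy model also illustrates why the gauge action preserves the Maurer-Cartan set.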
From now on, we suppose that every \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra in this section is free as a \(\mathbb{K}\)-module and has no \(2\)-torsion. Let \(A\) be a local artinian \(\mathbb{K}\)-algebra with maximal ideal \(\mathfrak{m}_{A}\). Let \(L\) be a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra (without any convergence hypothesis). Then \(L\otimes A\) is also a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra with the following definitions:

\[(L\otimes A)^{k} = L^{k}\otimes A,\]
\[\gamma(\mathcal{O}t(x_{1}\otimes a_{1},...,x_{n}\otimes a_{n})) = \gamma(\mathcal{O}t(x_{1},...,x_{n}))\otimes a_{1}...a_{n},\]
\[d(x\otimes a) = dx\otimes a.\]

To retrieve our convergence hypothesis, we can consider the sub-\(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra \(L\otimes\mathfrak{m}_{A}\). This \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra has a filtration satisfying our convergence hypothesis, given by

\[F_{n}(L\otimes\mathfrak{m}_{A})=L\otimes\mathfrak{m}_{A}^{n},\]

which is \(0\) for \(n\) big enough, because \(\mathfrak{m}_{A}\) is nilpotent. In particular, our series reduce to finite sums. Let \(Deligne(L,A)=Deligne(L\otimes\mathfrak{m}_{A})\) be the associated Deligne groupoid. As in [11, §2.3], we remark that \(Deligne(-,-)\) defines a bifunctor such that, for all morphisms of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras \(\varphi:L\longrightarrow\overline{L}\) and all morphisms of algebras \(\psi:A\longrightarrow\overline{A}\), the following diagram

\[\begin{CD}Deligne(L,A)@>{\varphi_{*}}>{}>Deligne(\overline{L},A)\\ @V{\psi_{*}}V{}V@V{}V{\psi_{*}}V\\ Deligne(L,\overline{A})@>{}>{\varphi_{*}}>Deligne(\overline{L},\overline{A}) \end{CD}\]

is commutative. We can now prove Theorem B.

**Theorem 2.23**.: _Let \(\mathbb{K}\) be a noetherian integral domain. Let \(L\) and \(\overline{L}\) be two \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras.
Suppose that \(L\) and \(\overline{L}\) are free as \(\mathbb{K}\)-modules and have no \(2\)-torsion. Let \(\varphi:L\longrightarrow\overline{L}\) be a morphism of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras such that \(H^{0}(\varphi)\) and \(H^{1}(\varphi)\) are isomorphisms, and \(H^{2}(\varphi)\) is a monomorphism. Then for all local artinian \(\mathbb{K}\)-algebras \(A\), the induced functor \(\varphi_{*}:Deligne(L,A)\longrightarrow Deligne(\overline{L},A)\) is an equivalence of groupoids._

_Proof._ It is easy to check that the whole proof given in [11, §2.5-§2.11] remains valid when changing the commutator \([x,y]\) to \(x\star y-(-1)^{|x||y|}y\star x\). We only note that the lemma given in [11, §2.8] can be rephrased in our context by the following assertion: for all \(\alpha\in L^{1}\otimes\mathfrak{m}_{A}\), \(\eta\in L^{0}\otimes\mathfrak{m}_{A}\) and \(u\in L^{0}\otimes\mathfrak{I}\), we have

\[(1+u+\eta).\alpha=(1+\eta).\alpha-d(u).\]

It is a simple calculation, using the fact that \(\mathfrak{I}.\mathfrak{I}\subset\mathfrak{I}.\mathfrak{m}_{A}=0\):

\[\begin{array}{rcl}(\beta-d(u))\odot(1+u+\eta)&=&\sum_{n=0}^{+\infty}(\beta-d(u))\{u+\eta\}_{n}\\ &=&\sum_{n=0}^{+\infty}\sum_{k=0}^{n}(\beta-d(u))\{u,\eta\}_{k,n-k}\\ &=&\beta-d(u)+\sum_{n=1}^{+\infty}\beta\{\eta\}_{n}\\ &=&\beta\odot(1+\eta)-d(u)\\ &=&\alpha+\eta\{\alpha\}_{1}-d(\eta)-d(u)\\ &=&\alpha+(u+\eta)\{\alpha\}_{1}-d(u+\eta).\end{array}\]

The other parts of the proof can be easily transposed to the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra setting and remain valid.
\(\square\) **Definition 2.24**.: _Two \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras \(L\) and \(\overline{L}\) are quasi-isomorphic if there exists a zig-zag of morphisms of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebras_ \[L=L_{0}\longrightarrow L_{1}\longleftarrow...\longrightarrow L_{m-1} \longleftarrow L_{m}=\overline{L}\] _in which each morphism induces an isomorphism in cohomology._ **Corollary 2.25**.: _If \(L\) and \(\overline{L}\) are quasi-isomorphic, then for all local artinian \(\mathbb{K}\)-algebras \(A\), the groupoids \(Deligne(L,A)\) and \(Deligne(\overline{L},A)\) are equivalent. More precisely, we have a zig-zag of equivalences of groupoids_ \[Deligne(L,A)\longrightarrow Deligne(L_{1},A)\longleftarrow... \longrightarrow Deligne(L_{m-1},A)\longleftarrow Deligne(\overline{L},A)\] _which is natural in \(A\)._ ## Application in homotopy theory for operads The goal of this section is to establish Theorem C, which gives a computation of \(\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\), where \(\mathcal{C}\) is a \(\Sigma_{*}\)-cofibrant coaugmented cooperad, \(\mathcal{P}\) is an augmented operad and \(B^{c}\) is the cobar construction (see [7] or [12] for a definition of this construction). Over a field of characteristic \(0\), this set can be expressed in terms of the Deligne groupoid associated with the dg Lie algebra structure of \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\). We extend this result using a structure of \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra that underlies this dg Lie algebra structure. In §3.1, we define infinitesimal \(k\)-compositions and \(k\)-decompositions that generalize the usual infinitesimal composition and decomposition operations given in [12, §6.1]. These operations will be used in the next subsection to write the weighted brace operations of the convolution operad more easily. 
In §3.2, we recall the definition of the convolution operad \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\), as given in [12, §6.4.1], and study the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure of \(\operatorname{Hom}_{\Sigma}(\mathcal{C},\mathcal{P})\). This structure will be induced by a dg brace algebra structure on \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\) given by its operadic composition. In the same way that the infinitesimal composition and decomposition can be used to express the pre-Lie algebra structure of the convolution operad (see [12, Proposition 6.4.5]), we will use the infinitesimal \(k\)-compositions and \(k\)-decompositions to compute the weighted brace operations of the convolution operad. In §3.3, we use a cylinder object of \(B^{c}(\mathcal{C})\) given by Fresse in [9, §5.1] to get our result: the quotient of the Maurer-Cartan set of \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) by the gauge action computes \(\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\). ### Infinitesimal compositions and decompositions of an operad and a cooperad We first introduce some definitions which will be useful for the computations. Let \(M\) and \(N\) be two symmetric sequences. Recall that we have a monoidal structure on the category of symmetric sequences defined by \[M\circ N(n)=\bigoplus_{k\geq 0}M(k)\otimes_{\Sigma_{k}}\left(\bigoplus_{i_{1}+ \ldots+i_{k}=n}Ind_{\Sigma_{i_{1}}\times\ldots\times\Sigma_{i_{k}}}^{\Sigma_{n }}(N(i_{1})\otimes...\otimes N(i_{k}))\right),\] with unit the symmetric sequence \(I\) defined by \[I(n)=\left\{\begin{array}{ll}\mathbb{K}&\text{if }n=1\\ 0&\text{if }n\neq 1\end{array}\right..\] Every element of \(M\circ N(n)\) can be identified with a two-level tree (picture omitted) whose root is labeled by an element \(x\in M\) and whose second-level vertices are labeled by elements \(y_{1},...,y_{n}\in N\). We now generalize the definition of the infinitesimal composition/decomposition defined in [12], in order to write some formulas in a more convenient way. 
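Before turning to the generalization, the composite product above can be made concrete in low arity. The following worked identification is not taken from the source; it assumes \(N(0)=0\) and that all modules are free over \(\mathbb{K}\):

```latex
(M\circ N)(2)
  \;\simeq\; M(1)\otimes N(2)
  \;\oplus\; M(2)\otimes_{\Sigma_{2}}
     \mathrm{Ind}_{\Sigma_{1}\times\Sigma_{1}}^{\Sigma_{2}}
     \bigl(N(1)\otimes N(1)\bigr)
  \;\simeq\; M(1)\otimes N(2)\;\oplus\; M(2)\otimes N(1)^{\otimes 2}.
```

The last identification uses \(\mathrm{Ind}_{\Sigma_{1}\times\Sigma_{1}}^{\Sigma_{2}}(N(1)\otimes N(1))\simeq\mathbb{K}[\Sigma_{2}]\otimes N(1)^{\otimes 2}\), so that the \(\Sigma_{2}\)-coinvariants absorb the group algebra factor; the two summands correspond to the two possible tree shapes.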
**Definition 3.1**.: _Let \(M\) and \(N\) be two symmetric sequences. Suppose that \(N\simeq I\oplus\overline{N}\) for some other symmetric sequence \(\overline{N}\). For all \(k\geq 0\), we define a new symmetric sequence, denoted by \(M\circ_{(k)}N\) and called the \(k\)-infinitesimal composite of \(M\) and \(N\), given, in each arity \(n\), by the submodule of \(M\circ N(n)\) spanned by trees in which exactly \(k\) elements at level \(2\) are in \(\overline{N}\), and the others in \(I\)._ One can easily check that if we have morphisms of symmetric sequences \(f:M\longrightarrow\widetilde{M}\) and \(g:N\longrightarrow\widetilde{N}\), then we have a morphism \(f\circ_{(k)}g:M\circ_{(k)}N\longrightarrow\widetilde{M}\circ_{(k)} \widetilde{N}\) given by \(f\circ(id_{I}\oplus g)\). Let \(\mathcal{P}\) be an operad with composition \(\gamma:\mathcal{P}\circ\mathcal{P}\longrightarrow\mathcal{P}\) and unit \(\eta:I\longrightarrow\mathcal{P}\), and let \(\mathcal{C}\) be a cooperad with coproduct \(\Delta:\mathcal{C}\longrightarrow\mathcal{C}\circ\mathcal{C}\) and counit \(\varepsilon:\mathcal{C}\longrightarrow I\). We will suppose that \(\mathcal{P}\) is augmented, i.e. the unit \(\eta:I\longrightarrow\mathcal{P}\) admits a retraction \(\pi:\mathcal{P}\longrightarrow I\). Equivalently, there exists an operad \(\overline{\mathcal{P}}\) with \(\mathcal{P}\simeq I\oplus\overline{\mathcal{P}}\) such that the first projection on \(\mathcal{P}\) is given by \(\pi\). Similarly, we suppose that \(\mathcal{C}\) is coaugmented, i.e. the counit \(\varepsilon:\mathcal{C}\longrightarrow I\) admits a section \(s:I\longrightarrow\mathcal{C}\). Equivalently, there exists a cooperad \(\overline{\mathcal{C}}\) with \(\mathcal{C}\simeq I\oplus\overline{\mathcal{C}}\) such that the first projection is given by \(\varepsilon\). The following definition gives an extension of the usual infinitesimal composition and decomposition operations given in [12, §6.1], which correspond to the case \(k=1\). 
**Definition 3.2**.: _Let \(k\geq 1\)._ * _We define_ the infinitesimal \(k\)-composition _in_ \(\mathcal{P}\) _as the composite_ \[\gamma_{(k)}:\mathcal{P}\circ_{(k)}\mathcal{P}(n)\hookrightarrow\mathcal{P}\circ\mathcal{P}(n)\xrightarrow{\gamma}\mathcal{P}(n).\] * _We define the maps_ \[\Delta_{(0)}:\overline{\mathcal{C}}\xrightarrow{\Delta}\overline{\mathcal{C}}\circ I \oplus I\circ\overline{\mathcal{C}}\oplus\bigoplus_{k\geq 1}\overline{\mathcal{C}} \circ_{(k)}\overline{\mathcal{C}}\longrightarrow\overline{\mathcal{C}}\circ I\oplus I \circ\overline{\mathcal{C}},\] \[\Delta_{(k)}:\overline{\mathcal{C}}\xrightarrow{\Delta}\overline{\mathcal{C}} \circ I\oplus I\circ\overline{\mathcal{C}}\oplus\bigoplus_{k\geq 1}\overline{ \mathcal{C}}\circ_{(k)}\overline{\mathcal{C}}\longrightarrow\overline{\mathcal{C}}\circ_{(k)}\overline{\mathcal{C}},\] _where the second arrows are the canonical projections, for all \(k\geq 1\)._ Beware that these notations are in fact abusive and have nothing to do with the infinitesimal \(k\)-decompositions \(\Delta_{(k)}:\overline{\mathcal{C}}\longrightarrow\overline{\mathcal{C}} \circ_{(k)}\overline{\mathcal{C}}\) of the cooperad \(\overline{\mathcal{C}}\), which will not be needed in this paper. ### \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure of the convolution operad Let \(M\) and \(N\) be two symmetric sequences of differential graded \(\mathbb{K}\)-modules. We define a new symmetric sequence \(\operatorname{Hom}(M,N)\) in dg \(\mathbb{K}\)-modules by \(\operatorname{Hom}(M,N)(n)=\operatorname{Hom}(M(n),N(n))\), the differential graded module formed by the homogeneous morphisms \(f:M(n)\longrightarrow N(n)\). The differential on \(\operatorname{Hom}(M,N)\) is given by \[d(f)=d_{N}\circ f-(-1)^{deg(f)}f\circ d_{M},\] for all \(f\in\operatorname{Hom}(M,N)\). The action of \(\Sigma_{n}\) on \(\operatorname{Hom}(M(n),N(n))\) is defined by \[\forall x\in M(n),f^{\sigma}(x)=\sigma^{-1}f(\sigma x),\] for all \(\sigma\in\Sigma_{n}\). **Proposition 3.3**.: (see [12]) _Let \(\mathcal{C}\) be a cooperad and \(\mathcal{P}\) be an operad. 
Then \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\) has the structure of a dg operad, called the convolution operad of \(\mathcal{C}\) and \(\mathcal{P}\)._ We recall the operad structure on \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\). For \(f\in\operatorname{Hom}(\mathcal{C},\mathcal{P})(k)\), \(g_{1}\in\operatorname{Hom}(\mathcal{C},\mathcal{P})(i_{1}),...,g_{k}\in \operatorname{Hom}(\mathcal{C},\mathcal{P})(i_{k})\), the composition \(\gamma(f\otimes g_{1}\otimes...\otimes g_{k}\otimes id)\) is given by a composite of the coproduct of \(\mathcal{C}\), the maps \(f,g_{1},...,g_{k}\) and the composition of \(\mathcal{P}\) (diagram omitted), where \(n=\sum_{p}i_{p}\) and, for all \(\sigma\in\Sigma_{n}\), \(\gamma(f\otimes g_{1}\otimes...\otimes g_{k}\otimes\sigma)=\gamma(f\otimes g _{1}\otimes...\otimes g_{k}\otimes id)^{\sigma}\). We now suppose that \(\mathcal{P}\) and \(\mathcal{C}\) are connected, in the sense that \(\mathcal{P}(0)=\mathcal{C}(0)=0\) and \(\mathcal{P}(1)=\mathcal{C}(1)=\mathbb{K}\). We have a dg brace algebra structure on \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\). **Lemma 3.4**.: _Let \(f\in\operatorname{Hom}(\mathcal{C},\mathcal{P})(r)\) and \(g_{1}\in\operatorname{Hom}(\mathcal{C},\mathcal{P})(p_{1}),...,g_{n}\in \operatorname{Hom}(\mathcal{C},\mathcal{P})(p_{n})\). We define the (non-symmetric) braces as_ \[f\langle g_{1},...,g_{n}\rangle=\sum_{1\leq i_{1}<...<i_{n}\leq r}\sum_{ \sigma_{1},...,\sigma_{n}}((...((f\circ_{i_{n}}g_{n})^{\sigma_{n}}\circ_{i_{n- 1}}g_{n-1})^{\sigma_{n-1}}...)\circ_{i_{1}}g_{1})^{\sigma_{1}}\] _if \(n\leq r\), where each \(\sigma_{i}\) is a pointed unshuffle permutation of the form \((1,...,p_{i},...,1)\) with \(p_{i}\) placed at the same position as \(g_{i}\) in \(f\), and \(0\) if \(n>r\). 
Then this definition endows \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\) with a differential graded brace algebra structure._ _Proof._ When \(n\leq r\), we have in fact that \[f\langle g_{1},...,g_{n}\rangle=\sum_{1\leq i_{1}<...<i_{n}\leq r}\sum_{ \sigma}\gamma(f\otimes 1\otimes...\otimes 1\otimes g_{1}\otimes 1\otimes...\otimes 1 \otimes g_{n}\otimes 1\otimes...\otimes 1\otimes\sigma),\] where the second sum is taken over all pointed unshuffles \(\sigma\) of type \((1,...,1,p_{1},1,...,1,p_{n},1,...,1)\). The lemma follows from the dg operad structure of the convolution operad. \(\square\) We denote, for all \(n\geq 0\), by \(\operatorname{Hom}_{\Sigma_{n}}(M(n),N(n))\) the submodule of \(\operatorname{Hom}(M(n),N(n))\) formed by all morphisms which commute with the action of \(\Sigma_{n}\). We let \[\operatorname{Hom}_{\Sigma}(\mathcal{C},\mathcal{P})=\bigoplus_{n\geq 0} \operatorname{Hom}_{\Sigma_{n}}(\mathcal{C}(n),\mathcal{P}(n)).\] By Theorem 2.9 and Remark 2.10, \(\operatorname{Hom}_{\Sigma}(\mathcal{C},\mathcal{P})\) is endowed with a \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure. We now consider an augmented operad \(\mathcal{P}\simeq I\oplus\overline{\mathcal{P}}\) and a coaugmented cooperad \(\mathcal{C}\simeq I\oplus\overline{\mathcal{C}}\). Assume that we have a decomposition by weight \(\overline{\mathcal{C}}\simeq\bigoplus_{k\geq 1}\overline{\mathcal{C}}^{(k)}\). This hypothesis is satisfied, for instance, by the coaugmentation coideal \(\overline{\mathcal{P}^{\text{¡}}}\) of a Koszul cooperad \(\mathcal{P}^{\text{¡}}\). We thus have an isomorphism \[\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}}) \simeq\prod_{k\geq 1}\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}}^{(k)}, \overline{\mathcal{P}})\] which is compatible with the \(\Gamma(\mathcal{P}re\mathcal{L}ie,-)\)-algebra structure on \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\). 
We then define a filtration on \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) by \[F_{n}(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P} }))=\prod_{k\geq n}\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}}^{(k)}, \overline{\mathcal{P}}),\] and \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) is complete for this filtration. Because \(\mathcal{C}\) and \(\mathcal{P}\) are connected, the isomorphism and the filtration extend to \[\operatorname{Hom}_{\Sigma}(\mathcal{C},\mathcal{P})\simeq\operatorname{Hom }_{\Sigma}(I,\mathcal{P})\times\prod_{k\geq 1}\operatorname{Hom}_{\Sigma}( \overline{\mathcal{C}}^{(k)},\overline{\mathcal{P}}).\] Moreover, we have a unit element \(1\in\operatorname{Hom}_{\Sigma}(I,\mathcal{P})\) given by \(\eta:I\longrightarrow\mathcal{P}\). We can explicitly describe the weighted braces with one input in terms of infinitesimal decompositions and compositions. **Lemma 3.5**.: _Let \(\overline{f},\overline{g}\in\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\). Then \(\overline{f}\{\overline{g}\}_{k}\) is given by the composite_ \[\overline{\mathcal{C}}\xrightarrow{\Delta_{(k)}}\overline{\mathcal{C}}\circ_{(k)}\overline{\mathcal{C}} \xrightarrow{\overline{f}\circ_{(k)}\overline{g}}\overline{\mathcal{P}}\circ _{(k)}\overline{\mathcal{P}}\xrightarrow{\gamma_{(k)}}\overline{\mathcal{P}}\.\] _Proof._ By definition, we have that \(\overline{f}\{\overline{g}\}_{k}=\overline{f}\langle\underbrace{\overline{g},...,\overline{g}}_{k}\rangle=\sum_{1\leq i_{1}<...<i_{k}\leq n}\sum_{\sigma_{1 },...,\sigma_{k}}((...((\overline{f}\circ_{i_{k}}\overline{g})^{\sigma_{k}}... )^{\sigma_{k-1}})\circ_{i_{1}}\overline{g})^{\sigma_{1}}\), where \(\overline{f}\in\operatorname{Hom}_{\Sigma_{n}}(\overline{\mathcal{C}}(n),\overline{\mathcal{P}}(n))\). 
This can be written in terms of the operad structure on \(\operatorname{Hom}(\mathcal{C},\mathcal{P})\) as \[\overline{f}\{\overline{g}\}_{k}=\sum_{1\leq i_{1}<...<i_{k}\leq n}\sum_{ \sigma}\gamma(\overline{f}\otimes 1\otimes...\otimes 1\otimes\underbrace{ \overline{g}}_{i_{1}}\otimes 1\otimes...\otimes 1\otimes\underbrace{\overline{g}}_{i_{k}}\otimes 1\otimes...\otimes 1\otimes\sigma),\] which gives the desired identity. \(\square\) In particular, we find that the pre-Lie algebra structure \[\overline{f}\star\overline{g}=\overline{f}\langle\overline{g}\rangle=\sum_{i= 1}^{n}\sum_{\sigma}(\overline{f}\circ_{i}\overline{g})^{\sigma}\] is given by the composite \[\overline{\mathcal{C}}\xrightarrow{\Delta_{(1)}}\overline{\mathcal{C}}\circ_{(1)}\overline{\mathcal{C}} \xrightarrow{\overline{f}\circ_{(1)}\overline{g}}\overline{ \mathcal{P}}\circ_{(1)}\overline{\mathcal{P}}\xrightarrow{\gamma_{(1)}} \overline{\mathcal{P}}\] as shown in [12, Proposition 6.4.5]. **Theorem 3.6**.: _The circular product of two elements \(f=1+\overline{f},g=1+\overline{g}\) of the gauge group of \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) is given by_ \[f\circ g:\mathcal{C}\xrightarrow{\Delta}\mathcal{C}\circ\mathcal{C} \xrightarrow{f\circ g}\mathcal{P}\circ\mathcal{P}\xrightarrow{\gamma} \mathcal{P}\.\] _Proof._ Because \(f_{|I}=g_{|I}=1\), we have that \(f\circ g_{|I}=1\). We thus need to show the equality on \(\overline{\mathcal{C}}\). Recall that we have infinitesimal decompositions on \(\overline{\mathcal{C}}\) denoted by \(\Delta_{(0)}\) and \(\Delta_{(k)}\) for \(k\geq 1\) such that \(\Delta_{|\overline{\mathcal{C}}}=\Delta_{(0)}\oplus\bigoplus_{k\geq 1}\Delta_{(k)}\). The map \(\Delta_{(0)}\) will give \(\overline{f}+\overline{g}\), and each \(\Delta_{(k)}\) will give \(\overline{f}\{\overline{g}\}_{k}\) according to the previous lemma. We thus have that the composite in the statement of the theorem gives \[1+\overline{g}+\sum_{n\geq 0}\overline{f}\{\overline{g}\}_{n},\] which is exactly \(f\circ g\). 
\(\square\) ### Computation of \(\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\) We now extend the computation of \(\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\) over a field \(\mathbb{K}\) of positive characteristic. Recall first that we can give an explicit cylinder object for \(B^{c}(\mathcal{C})\), where \(B^{c}\) is the cobar construction of \(\mathcal{C}\), when \(\mathcal{C}\) is \(\Sigma_{*}\)-cofibrant (see for instance [9] or [12]). Explicitly, let \(K=\mathbb{K}\sigma^{0}\oplus\mathbb{K}\sigma^{1}\oplus\mathbb{K}\sigma^{01}\) where \(deg(\sigma^{0})=deg(\sigma^{1})=-1\), \(deg(\sigma^{01})=0\) and \(d(\sigma^{01})=\sigma^{1}-\sigma^{0}\). Then there exists a derivation of operads \(\partial\) such that the free dg operad \((\mathcal{F}(K\otimes\overline{\mathcal{C}}),\partial)\) is a cylinder object for \(B^{c}(\mathcal{C})\). We refer to [9, §5.1] for an explicit construction of \(\partial\) and a proof of the previous statement. **Theorem 3.7**.: _Suppose that \(\mathcal{C}\) is \(\Sigma_{*}\)-cofibrant. We then have an isomorphism:_ \[\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\simeq\pi_{0} \mathrm{Deligne}(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{ \mathcal{P}})).\] _Proof._ It is well known (see [7] for instance) that \(\pi_{0}(\operatorname{Map}(B^{c}(\mathcal{C}),\mathcal{P}))\simeq Mor(B^{c}( \mathcal{C}),\mathcal{P})/\!\sim_{h}\), where \(\sim_{h}\) is the homotopy relation. Recall also that the data of a Maurer-Cartan element \(\alpha\) in \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\) is equivalent to the data of a morphism of operads \(\phi_{\alpha}\) from \(B^{c}(\mathcal{C})\) to \(\mathcal{P}\) (see [9] or [12]). 
We just need to show that the action of the gauge group on the Maurer-Cartan set of \(\operatorname{Hom}_{\Sigma}(\overline{\mathcal{C}},\overline{\mathcal{P}})\), taking one Maurer-Cartan element \(\alpha\) to another one \(\beta\), is equivalent to the data of a homotopy from \(\phi_{\alpha}\) to \(\phi_{\beta}\). Let \(1+\lambda\) be an element of the gauge group. We define a morphism \(h:\operatorname{Cyl}(B^{c}(\mathcal{C}))\longrightarrow\mathcal{P}\) via its restriction to the generators \(h:K\otimes\overline{\mathcal{C}}\longrightarrow\mathcal{P}\) by setting \[h(\sigma^{0}\otimes\gamma)=\alpha(\gamma),\] \[h(\sigma^{1}\otimes\gamma)=\beta(\gamma),\] \[h(\sigma^{01}\otimes\gamma)=\lambda(\gamma),\] where \(\gamma\) is some element of \(\overline{\mathcal{C}}\). We claim that \((1+\lambda).\alpha=\beta\) if and only if \(h\) is a homotopy from \(\phi_{\alpha}\) to \(\phi_{\beta}\). Accordingly, we must prove the equivalence \[d(\lambda)=\alpha+\lambda\{\alpha\}_{1}-\beta\odot(1+\lambda)\Leftrightarrow d (h)=0,\] where \(d\) is the differential of \(Mor(B^{c}(\mathcal{C}),\mathcal{P})\). Because \(\alpha\) and \(\beta\) are Maurer-Cartan elements, and by definition of \(\partial\), the equality \(d(h)=0\) is always satisfied on the terms \(\sigma^{\varepsilon}\otimes\gamma\) with \(\varepsilon=0,1\) and \(\gamma\in\overline{\mathcal{C}}\). We just need to check this equality on the terms \(\sigma^{01}\otimes\gamma\) for any \(\gamma\in\overline{\mathcal{C}}\): \[\begin{array}{rcl}d(h)(\sigma^{01}\otimes\gamma)&=&d(h(\sigma^{01}\otimes \gamma))-h(\partial(\sigma^{01}\otimes\gamma))\\ &=&d(\lambda(\gamma))-\lambda(d(\gamma))-\alpha(\gamma)+\beta(\gamma)-\lambda \{\alpha\}_{1}(\gamma)+(\beta\odot(1+\lambda)(\gamma)-\beta(\gamma))\\ &=&d(\lambda)(\gamma)-\alpha(\gamma)-\lambda\{\alpha\}_{1}(\gamma)+\beta \odot(1+\lambda)(\gamma).\end{array}\] We then have the desired equivalence. \(\square\)
2309.07804
Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names?
Recent breakthroughs in pre-trained code models, such as CodeBERT and Codex, have shown their superior performance in various downstream tasks. The correctness and unambiguity of API usage among these code models are crucial for achieving desirable program functionalities, requiring them to learn various API fully qualified names structurally and semantically. Recent studies reveal that even state-of-the-art pre-trained code models struggle with suggesting the correct APIs during code generation. However, the reasons for such poor API usage performance are barely investigated. To address this challenge, we propose using knowledge probing as a means of interpreting code models, which uses cloze-style tests to measure the knowledge stored in models. Our comprehensive study examines a code model's capability of understanding API fully qualified names from two different perspectives: API call and API import. Specifically, we reveal that current code models struggle with understanding API names, with pre-training strategies significantly affecting the quality of API name learning. We demonstrate that natural language context can assist code models in locating Python API names and generalize Python API name knowledge to unseen data. Our findings provide insights into the limitations and capabilities of current pre-trained code models, and suggest that incorporating API structure into the pre-training process can improve automated API usage and code representations. This work provides significance for advancing code intelligence practices and direction for future studies. All experiment results, data and source code used in this work are available at \url{https://doi.org/10.5281/zenodo.7902072}.
Terry Yue Zhuo, Xiaoning Du, Zhenchang Xing, Jiamou Sun, Haowei Quan, Li Li, Liming Zhu
2023-09-14T15:46:41Z
http://arxiv.org/abs/2309.07804v1
# Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names? ###### Abstract Recent breakthroughs in pre-trained code models, such as CodeBERT and Codex, have shown their superior performance in various downstream tasks. The correctness and unambiguity of API usage among these code models are crucial for achieving desirable program functionalities, requiring them to learn various API fully qualified names structurally and semantically. Recent studies reveal that even state-of-the-art pre-trained code models struggle with suggesting the correct APIs during code generation. However, the reasons for such poor API usage performance are barely investigated. To address this challenge, we propose using knowledge probing as a means of interpreting code models, which uses cloze-style tests to measure the knowledge stored in models. Our comprehensive study examines a code model's capability of understanding API fully qualified names from two different perspectives: API call and API import. Specifically, we reveal that current code models struggle with understanding API names, with pre-training strategies significantly affecting the quality of API name learning. We demonstrate that natural language context can assist code models in locating Python API names and generalize Python API name knowledge to unseen data. Our findings provide insights into the limitations and capabilities of current pre-trained code models, and suggest that incorporating API structure into the pre-training process can improve automated API usage and code representations. This work provides significance for advancing code intelligence practices and direction for future studies. All experiment results, data and source code used in this work are available at [https://doi.org/10.5281/zenodo.7902072](https://doi.org/10.5281/zenodo.7902072). 
## I Introduction Recent advances in code intelligence have incorporated pre-training techniques, where models are pre-trained on large-scale unlabeled source code corpora to learn the code's representation and semantics. The pre-trained code models, such as CodeBERT [1] and GraphCodeBERT [2], can be fine-tuned for various downstream code tasks, such as code completion and translation [3, 4, 5]. Despite the improvement, there is still a significant gap between their performance and that of human developers when it comes to using APIs correctly. For instance, several studies have demonstrated that even state-of-the-art pre-trained code models, such as Codex [6], StarCoder [7] and GPT-4 [8], struggle with suggesting the correct APIs during code generation [9, 10]. However, few studies have investigated the reasons behind these models' poor API usage performance. Basically, correct API usage depends on two types of knowledge: how to invoke an API and which API to invoke, with the former being a fundamental step towards the latter. To invoke an API, one must have knowledge of the code grammar of importing libraries and composing the correct API name based on the import and call statements. However, code models are not explicitly taught code grammar. Although we often assume that the models can learn to use APIs effectively by observing a large number of examples, this assumption has been poorly validated. Therefore, we pose the following question: **Do Pre-trained Code Models Possess Knowledge of Correct API Names?** The fully qualified name of an API comprises not only the function name but also the name of the package, module, and/or class to which it belongs. To ease the presentation, we will use the term "module" to refer to these entities below. Libraries usually organize APIs into nested modules to help developers understand the available features when using the APIs. 
These modules form a hierarchical structure, with the higher-level module serving as the parent of its direct lower-level modules, and the APIs acting as the leaf nodes in the structure. The fully qualified API names are obtained by traversing the hierarchy from the root to the leaf, with all names connected by dots. Depending on the imported module's level and whether it is associated with an alias, the API name used for invocation should be adjusted accordingly. This convention can be easily understood by human developers, but code models may find it challenging to learn. In addition, the fully qualified names of APIs convey information about code modularization and API namespace design. If models can learn a good representation of API names, they could be useful for automating API namespace design by providing more precise and relevant design options. For example, consider the Python library <numpy>, which is used for mathematical operations on arrays and matrices. As shown in Figure 1, <numpy> includes several modules, including <linalg> (short for linear algebra) for APIs related to computations in linear algebra, such as <multi_dot> (dot product), <cholesky> (Cholesky decomposition), and <qr> (QR factorization), as well as <ma> (short for masked array) for APIs related to operations on masked arrays. Such namespace design can provide valuable insights when designing relevant libraries or similar libraries for other programming languages. To understand how well pre-trained code models understand API names, it is essential to analyze and interpret the internal mechanisms of pre-trained code models. However, deep neural models are generally complex and opaque [11], making it hard to fully understand their internal mechanisms. 
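Returning to the import convention discussed earlier: the dependence of the call form on what was imported can be checked directly in Python. The sketch below uses the standard-library <os.path> module in place of <numpy.linalg> (the resolution rule is the same), so it runs without third-party dependencies:

```python
# Three equivalent ways to reach the same API, depending on the import;
# the fully qualified name traverses the module hierarchy from the root.
import os.path                      # import the module itself
from os.path import basename        # import the leaf API directly
import os.path as osp               # import the module under an alias

# All three names resolve to the very same function object.
assert os.path.basename is basename is osp.basename

# The call form must match the import form:
print(os.path.basename("/tmp/data.csv"))  # fully qualified call -> data.csv
print(basename("/tmp/data.csv"))          # direct call after `from ... import`
print(osp.basename("/tmp/data.csv"))      # call through the alias
```

A model that has learned API names structurally must handle all three surface forms of the same underlying API.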
In the natural language processing community, the mainstream technique to examine what pre-trained language models know is probing, where a probe is a simple classifier that takes the model's contextual representations as input and predicts a property of interest [12]. Recently, researchers have attempted to analyze the code syntax captured by code models via attention analysis and grammatical structure probing [13, 14, 15]. However, little effort has been devoted to assessing API knowledge [16]. In this work, we explore the interpretability of code models and introduce a large-scale knowledge probing task that specifically targets fully qualified names of APIs. We design an automated and effective evaluation framework, INK, to probe code models' understanding of API names with cloze-style pop quizzes. To generate the quizzes, we first extract API fully qualified names from API calls in large-scale source code corpora that are commonly used for training code models. Note that, instead of directly extracting API names from libraries, we take into account the limitations of code models during their training. We refrain from testing the models on knowledge they have never been exposed to. Further, we derive import statements and API call statements based on the API names and mask out different tokens, with only one token masked out at a time, at each module level of these statements. For instance, when invoking the <multi_dot> module as shown in Figure 1, we consider it from two aspects: API call (<numpy.linalg.multi_dot>) and API import (<from numpy.linalg import multi_dot>). By masking the token lin with "[MASK]", we get the two quizzes <numpy.[MASK]alg.multi_dot> and <from numpy.[MASK]alg import multi_dot>. The code models are then expected to predict the masked token given an import or API call statement with the mask applied. 
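The quiz construction just described is easy to mechanize. The helper below is a hypothetical illustration of the idea (not the actual PyINK implementation): it masks a chosen sub-token inside one segment of a fully qualified API name and emits the two quiz forms:

```python
def cloze_quizzes(fqn: str, level: int, sub_token: str):
    """Mask `sub_token` inside segment `level` of a fully qualified API
    name; return the (API-call quiz, API-import quiz) pair."""
    parts = fqn.split(".")
    parts[level] = parts[level].replace(sub_token, "[MASK]", 1)
    call_quiz = ".".join(parts)
    import_quiz = "from {} import {}".format(".".join(parts[:-1]), parts[-1])
    return call_quiz, import_quiz

call_q, import_q = cloze_quizzes("numpy.linalg.multi_dot", level=1, sub_token="lin")
print(call_q)    # numpy.[MASK]alg.multi_dot
print(import_q)  # from numpy.[MASK]alg import multi_dot
```

A probed model is then asked to fill "[MASK]" in either statement; repeating this for every sub-token at every module level yields the full set of quizzes for one API.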
To better understand the capabilities and limitations of pre-trained code models in learning API names, we set out to investigate several research questions (RQs) to assess their performance and guide future improvements in this domain: * **RQ1: How well do code models understand API calls and imports?** The code models are assessed on their prediction of masked import or API call statements. The findings can help identify areas where code models may struggle and guide improvements in their API name learning by understanding this question. * **RQ2: Do code models understand API import aliases?** This RQ further assesses code models' understanding of API import aliases. The quizzes are designed based on an aliasing import statement followed by an API call based on the imported module. Tokens are selectively masked out for the statements. * **RQ3: Will natural language context affect the API name knowledge probing results?** We investigate whether the overall performance of the models can be improved by incorporating natural language queries. The findings can guide the development of techniques that leverage contextual information to enhance code models' API name learning. * **RQ4: How well can code models memorize and generalize on API names?** We divide APIs into two groups based on whether they are seen during the training phase. The outcomes indicate if the code models possess robust generalization capabilities and appropriate memorization skills. Models' ability to apply learned knowledge to unseen APIs would be helpful for API namespace design. For the evaluation, we construct the first benchmark on Python API name knowledge, PyINK, and analyze 10 pre-trained code models, including the variants of CodeBERT [1], GraphCodeBERT [2], PLBART [17]. Our work is complementary to developing more advanced techniques for modeling API representations in code, thereby enhancing the efficiency and accuracy of code models. 
Additionally, the insights gained from this work can pave the way for a deeper understanding of how pre-training impacts the performance of code models, facilitating more informed design decisions in this domain. In summary, our main contributions include: * A cloze-style evaluation framework INK, to probe and benchmark the knowledge of API names in pre-trained code models. * An implementation of INK, which lowers the bar for designing probing techniques on API name knowledge. * An evaluation dataset based on INK, PyINK, containing diverse API name cloze-style pop quizzes. * A comprehensive study on understanding the Python API name knowledge of pre-trained code models via PyINK. Fig. 1: An example of module design of <numpy> library. ## II Background and Related Work This section provides a comprehensive overview of the background and related work that form the foundation of our research. Firstly, we introduce three prominent families of pre-trained code models, namely CodeBERT [1], GraphCodeBERT [2], and PLBART [17], which have been widely adopted in recent studies. Besides, we present a review on knowledge probing in language models, which has emerged as a crucial research area for enhancing the interpretability and understanding of such models. Finally, we discuss how existing works on deep API learning are different from our study. Through these discussions, we establish the contextual and theoretical framework necessary to fully appreciate the novelty and significance of our proposed approach. ### _Pre-trained Code Models_ Pre-trained language models, such as BERT [18], are commonly utilized for knowledge transfer in various downstream natural language processing tasks [19]. These models are trained on extensive NL corpora and fine-tuned on small labeled datasets for different tasks. They capture contextual linguistic information and eliminate the necessity for task-specific models. 
Similarly, pre-trained code models, such as CodeBERT, GraphCodeBERT, and PLBART, have been developed to leverage the "naturalness" of software [20]. These models excel in various code-related downstream tasks, such as code completion, code-to-text summarization, and code-to-code translation. #### Ii-A1 CodeBERT CodeBERT is a multilingual pre-trained code model built on the BERT architecture. It learns the semantic connections between natural language (NL) and programming language (PL) through masked language modeling and replaced token detection, and achieves state-of-the-art results in NL code search and code documentation generation tasks. Pre-trained on the CSNet dataset [21], CodeBERT encompasses a diverse range of NL and PL instances, including code snippets, comments, and unimodal code. Its strong performance is achieved without relying on explicit indicators to differentiate PLs, and fine-tuning its parameters consistently yields competitive results across downstream tasks. #### Ii-A2 GraphCodeBERT GraphCodeBERT is a pre-trained code model that improves code understanding by considering the inherent structure of code. Unlike models that rely on complex abstract syntax trees (AST), GraphCodeBERT leverages data flow during pre-training to capture the semantic-level structure and encode variable relationships. By focusing on the "where-the-value-comes-from" aspect, GraphCodeBERT achieves more efficient and effective code representation [2]. Pre-trained on the same CSNet dataset as CodeBERT, GraphCodeBERT employs three distinct objectives during pre-training: masked language modeling, data flow edge prediction, and variable alignment across source code and data flow. 
This comprehensive approach enables GraphCodeBERT to excel in four key downstream tasks: text-code search, clone detection, code translation, and code refinement, outperforming CodeBERT in all of these areas. The model's superior performance validates its efficacy and establishes GraphCodeBERT as a cutting-edge solution in the field of code analysis and understanding. #### Ii-A3 Plbart PLBART is a bidirectional and autoregressive code model that exhibits exceptional proficiency in performing a wide range of code summarization, generation, and translation tasks. The design of PLBART was inspired by BART [22], which is a language model based on denoising autoencoding sequence-to-sequence pre-training. With a similar setup, PLBART is pre-trained using three denoising pre-training objectives, namely, token masking, token deletion, and token infilling. The first two strategies involve randomly sampling tokens and replacing them with a mask token or deleting them from the input sequence. Conversely, in token infilling, a Poisson distribution (\(\lambda\) = 3.5) is used to draw the lengths of text spans that are sampled and replaced with a single mask token. In each instance, 35% of the tokens are masked. In contrast to CodeBERT and GraphCodeBERT, PLBART is pre-trained on a vast collection of Java and Python functions, as well as NL descriptions sourced from GitHub and StackOverflow. Based on evaluations, PLBART has demonstrated remarkable generalization capabilities and superior performance when applied to several multilingual downstream tasks. ### _Knowledge Probing_ The evaluation of internal representations and knowledge of language models is a fundamental and critical process, which involves the technique of knowledge probing [23]. This approach entails presenting a set of questions or statements to the model to assess its understanding of specific concepts or relationships. 
The inputs are typically presented as cloze sentences with particular concepts or relationships masked as discrete prompts to test the model's performance. We formalize the knowledge probing approach by considering the input cloze sentence, denoted as \(S\), where "Alan Turing was born in [MASK]" is an example. Formally, we define the knowledge probing approach as follows, \[f(S)=\frac{1}{|S|}\sum_{k=1}^{|S|}\log P(t_{k}\mid S;\theta),\quad t_{k}\in\mathcal{V}(\theta) \tag{1}\] where \(\theta\) represents the model parameters, \(\mathcal{V}(\theta)\) denotes the vocabulary learned by the language model, and \(t_{k}\) is a token inside the model's vocabulary \(\mathcal{V}(\theta)\). The contextualized likelihood \(f(S)\) represents the probability of replacing [MASK] with the token \(t_{k}\) as per the model's prediction. The final prediction corresponds to the token \(t_{k}\) that maximizes \(f(S)\). Knowledge probing is an essential technique for identifying areas where the model requires improvement, understanding how the model processes and represents information, and exploring the underlying knowledge. Furthermore, knowledge probing enables the development of more robust and reliable language models that reflect human understanding of language and the world. Factual probing is an early application of prompting methods in natural language processing, with the goal of quantifying the factual knowledge encoded in pre-trained language models. This task involves transforming the input into a cloze prompt, either manually crafted or automatically discovered, to retrieve knowledge. Relevant datasets such as LAMA [24] and X-FACTR [25] have been established for evaluating models in fact retrieval. Researchers have explored discrete template search [24, 26, 27] and continuous template learning, as well as prompt ensemble learning [12, 28], to improve the effectiveness of factual probing. 
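The prediction rule defined by Equation (1) can be sketched with a toy log-probability table. This is purely illustrative: the candidate tokens and scores below stand in for a real model's output and are not taken from any actual system.

```python
def rank_mask_candidates(log_probs, k=3):
    """Rank candidate tokens for the [MASK] slot by log P(t_k | S; theta),
    as in Equation (1); the top-ranked token is the final prediction."""
    return sorted(log_probs, key=log_probs.get, reverse=True)[:k]

# Hypothetical log-probabilities for S = "Alan Turing was born in [MASK]".
scores = {"London": -0.9, "England": -1.4, "Paris": -2.7, "1912": -3.2}
print(rank_mask_candidates(scores))  # ['London', 'England', 'Paris']
```

In practice the log-probabilities come from the masked-language-modeling head of the probed model; only the ranking step is shown here.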
The existing body of research demonstrates that pre-trained language models contain substantial factual knowledge, which can be efficiently accessed using various prompting methods. ### _Generation-based API Recommendation_ API recommendation is a challenging task that involves providing a specific API based on an NL query. Previous research has focused on two approaches: (1) rank-based recommendation and (2) generation-based recommendation. Rank-based API recommendation utilizes a knowledge graph or knowledge base to search for the most suitable APIs based on the semantic meaning of the NL query [29, 30, 31]. As these methods do not require learning, we discuss another line of work, closer to the scope of our study, where deep-learning-based approaches are used to generate API sequences from NL queries. DeepAPI [32] was the first to tackle this task by formulating it as a machine translation problem and employing an end-to-end supervised learning model. Subsequently, researchers have investigated the effectiveness of fine-tuning pre-trained code models to perform the API learning task [33, 34]. While these studies demonstrate that code models achieve better performance on API learning, they suffer from four main drawbacks: (1) code models are only fine-tuned with a few APIs and cannot generalize to APIs in the wild; (2) fine-tuning code models to synthesize API sequences lacks interpretability for capturing API knowledge during pre-training; (3) the API learning task is based on NL inputs that only determine the semantic understanding of API sequence usage; and (4) evaluation of API learning is solely based on the BLEU score [35], which measures the similarity between synthesized API sequences and references and fails to reflect synthesis correctness. 
## III INK: An evaluation framework of API Name Knowledge ### _Motivation_ Previous research in natural language processing has utilized cloze sentences for token prediction as a means of interpreting the knowledge encoded by pre-trained language models. Building upon this work, we examine probing API name knowledge in CodeBERT-MLM (a variant of CodeBERT pre-trained solely on masked language modeling), with cloze-style sentences serving as pop quizzes, as depicted in Figure 2. We use <tensorflow.compat.v2.boolean_mask> as an example and transform it into a cloze-style pop quiz, as shown in Figure 2. In this study, we define the API module levels as each hierarchical level within the fully qualified name, separated by a period. There are four module levels in the API call statement: (1) <tensorflow> representing the top module level, (2) <compat> as the second module level, (3) <v2> as the third module level, and (4) <boolean_mask> as the last call level. For each level, we request CodeBERT-MLM to fill in the blank via **first** token prediction, as determined by its tokenizer. As our results demonstrate, CodeBERT-MLM correctly predicts the masked token on the first attempt, except for the third level of <v2>. We contend that code models, such as CodeBERT-MLM, can learn API names during function pre-training. Fig. 2: API fully qualified names such as <tensorflow.compat.v2.boolean_mask> can be formalized into cloze-style tests from two perspectives, API calls and API imports. The example shows the top-\(k\) predictions of the CodeBERT-MLM model for each test. Note that the dialogues are for illustration only. Given that <tensorflow.compat.v2.boolean_mask> can be reconstituted as an API import statement, we then investigate how well CodeBERT-MLM understands API import statements. 
As demonstrated in Figure 2, we transform the API import statement into four cloze templates by masking the first token of each module level, similar to those of the API call. From the results illustrated in Figure 2, we discover that predicting some API modules on the first attempt is challenging for CodeBERT-MLM, unlike the case of API call probing. This behavior suggests that code models possess varying degrees of knowledge in API import and API call statements. Having identified several potential patterns from our preliminary knowledge probing study, which provides clues to API name knowledge, we find it necessary to further explore this phenomenon through quantitative analysis and systematic evaluation. Motivated by the aforementioned observations, this paper investigates whether pre-trained code models learn API names and to what extent they store API name knowledge, by conducting knowledge probing as the pop quiz. Specifically, we analyze two perspectives of API names, i.e., API call and API import, within the purview of code models. ### _API-to-Pop-Quiz Conversion_ We consider three main types of transformation for evaluating API name knowledge: **API calls**, **API imports** and **API import aliases**. As aforementioned in Section III-A, cloze-style pop quizzes are structured based on each call level split by the delimiters of ".", "from" and "import". To benchmark models fairly, we construct pop quizzes by unifying the entire vocabulary of each model. For all the evaluations, we follow previous work on knowledge probing in language models [24] and choose to evaluate the prediction of a single masked token in the pop quiz. We provide the detailed design of each process as follows. #### Iii-B1 **Evaluation Design on API Call** We treat each API call as a modular pattern on the basis of each module level. 
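The two quiz shapes illustrated in Figure 2 can be sketched in plain Python. This is an illustrative reconstruction, not the paper's implementation: the stand-in "tokenizer" splits a level on underscores, whereas the actual quizzes mask subword tokens produced by each model's own tokenizer.

```python
def make_call_quiz(api_name, level, position=-1, mask="[MASK]"):
    """Mask one token of one module level of an API call statement,
    e.g. <A.B.C> -> <A.B'[MASK].C>. position=0 masks the level's first
    token, -1 its last (underscore-split stand-in tokenization)."""
    levels = api_name.split(".")
    tokens = levels[level].split("_")
    answer = tokens[position]
    tokens[position] = mask
    levels[level] = "_".join(tokens)
    return ".".join(levels), answer

def make_import_quiz(api_name, mask="[MASK]"):
    """Reconstitute a fully qualified name as a 'from ... import ...'
    statement with the final call level masked."""
    *modules, call = api_name.split(".")
    return f"from {'.'.join(modules)} import {mask}", call

print(make_call_quiz("tensorflow.compat.v2.boolean_mask", level=3))
# ('tensorflow.compat.v2.boolean_[MASK]', 'mask')
print(make_import_quiz("tensorflow.compat.v2.boolean_mask"))
# ('from tensorflow.compat.v2 import [MASK]', 'boolean_mask')
```

Masking the first versus the last token of a level corresponds to the two masking positions evaluated throughout the paper.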
To formalize the pop quiz construction on API calls, an example API call <A.B.C> and a code model \(\mathcal{M}\) are given to demonstrate the workflow. The model \(\mathcal{M}\) firstly tokenizes the API as follows, \[\mathcal{M}(\texttt{<A.B.C>})\rightarrow\] \[\left\{\{t_{1}^{A},t_{2}^{A},\ldots,t_{N_{a}}^{A}\},t^{dot},\{t_{1}^{B},t_{2}^{B},\ldots,t_{N_{b}}^{B}\},t^{dot},\right.\] \[\left.\{t_{1}^{C},t_{2}^{C},\ldots,t_{N_{c}}^{C}\}\right\}\] where each \(t\) represents the token produced by the model \(\mathcal{M}\), and \(N\) represents the length of tokens in each level. For each level, the tokens are grouped by \(\{\ldots\}\). When converting the tokenized API to the pop quiz, we mask a specific token by replacing \(t\) with a "[MASK]" in each level. To visualize the pop quiz input, we mask the last token in the second module level of <A.B.C> as follows: \[\texttt{<A.B.C>}\rightarrow\texttt{<A.B^{\prime}[MASK].C>}\rightarrow\texttt{<A.B^{\prime}\_\_.C>}\] where \(B^{\prime}\) is the concatenation of \(\{t_{1}^{B},\ldots,t_{N_{b}-1}^{B}\}\). We prompt the model \(\mathcal{M}\) to fill in the blank of <A.B'\_\_.C> via mask prediction. #### Iii-B2 **Evaluation Design on API Import** We explore the evaluation design on the API import statement of "from...import...". Similarly, we consider the example of <from A.B import C>. 
Using the model \(\mathcal{M}\) to tokenize the API import, we can devise the following tokens: \[\mathcal{M}(\texttt{<from A.B import C>})\rightarrow\] \[\left\{t^{from},\{t_{1}^{A},t_{2}^{A},\ldots,t_{N_{a}}^{A}\},t^{dot},\right.\] \[\left.\{t_{1}^{B},t_{2}^{B},\ldots,t_{N_{b}}^{B}\},t^{import},\{t_{1}^{C},t_{2}^{C},\ldots,t_{N_{c}}^{C}\}\right\}\] We visualize an example of an API import quiz, where the first token in the bottom level of <from A.B import C> is masked: \[\texttt{<from A.B import C>}\rightarrow\texttt{<from A.B import [MASK]C^{\prime}>}\rightarrow\texttt{<from A.B import \_C^{\prime}>}\] where \(C^{\prime}\) is the concatenation of \(\{t_{2}^{C},\ldots,t_{N_{c}}^{C}\}\). We probe the model \(\mathcal{M}\) to fill in the blank of <from A.B import \_C'> via mask prediction. Fig. 3: An overview of the INK framework. #### Iii-B3 **Evaluation Design on API Import Alias** We note that import aliases are supported in some programming languages, such as Python. For example, "import...as..." and "from...import...as..." are the typical import alias syntax. Therefore, we further examine pre-trained code models' understanding of the aliases of API calls after package and library imports. We illustrate the design choice via the example of <import A as K \n K.B.C>, where <K> is the alias and K.B.C is the API call statement. 
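A minimal sketch of constructing such an alias pop quiz follows. The helper is hypothetical: it masks a whole underscore-delimited token of the call's final level, whereas the actual quizzes mask a model-specific subword token.

```python
def make_alias_quiz(api_name, alias, mask="[MASK]"):
    r"""Build an alias quiz: import the top module under an alias, then
    mask the last token of the final level of the aliased call, e.g.
    <import A as K \n K.B.C> -> <import A as K \n K.B.C'[MASK]>."""
    top, *rest = api_name.split(".")
    tokens = rest[-1].split("_")  # underscore-split stand-in tokenizer
    answer = tokens[-1]
    tokens[-1] = mask
    call = ".".join([alias] + rest[:-1] + ["_".join(tokens)])
    return f"import {top} as {alias}\n{call}", answer

quiz, answer = make_alias_quiz("numpy.random.default_rng", "np")
print(quiz)    # import numpy as np
               # np.random.default_[MASK]
print(answer)  # rng
```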
After being tokenized by the model \(\mathcal{M}\), the example is formalized as follows: \[\mathcal{M}(\texttt{<import A as K \n K.B.C>})\rightarrow\] \[\left\{t^{import},\{t_{1}^{A},t_{2}^{A},\dots,t_{N_{a}}^{A}\},t^{as},\{t_{1}^{K},t_{2}^{K},\dots,t_{N_{k}}^{K}\},t^{newline},\right.\] \[\left.\{t_{1}^{K},t_{2}^{K},\dots,t_{N_{k}}^{K}\},t^{dot},\right.\] \[\left.\{t_{1}^{B},t_{2}^{B},\dots,t_{N_{b}}^{B}\},t^{dot},\{t_{1}^{C},t_{2}^{C},\dots,t_{N_{c}}^{C}\}\right\}\] We then convert the example to the following API alias pop quiz with the masked last token of <C>: \[\texttt{<import A as K \n K.B.C>}\rightarrow\texttt{<import A as K \n K.B.C^{\prime}[MASK]>}\rightarrow\texttt{<import A as K \n K.B.C^{\prime}\_\_>}\] where \(C^{\prime}\) is the concatenation of \(\{t_{1}^{C},\dots,t_{N_{c}-1}^{C}\}\), and we probe the model \(\mathcal{M}\) to fill in the blank via mask prediction. We also include GPT-3.5-turbo for the assessment, though it is not specifically trained with mask prediction. We instruct GPT-3.5-turbo to predict the top-20 masked tokens for each test on 1,000 randomly selected samples of API calls and imports, respectively. To guide GPT-3.5-turbo to correctly perform the prediction task, we prompt it with the instruction "Predict top-20 answers of \(<\)mask\(>\) part of the following Python API fully qualified name, pay attention to the connected characters right after \(<\)mask\(>\). Note the number of masked characters is at least one. Print each of 20 answers per line with index." The description of each model is summarized in Table II. ### _Evaluation Metric_ We present an evaluation methodology based on rank-based metrics in the context of API name prediction. Our approach involves computing results per test sample and means across pop quizzes, utilizing the mean precision at k (\(P@k\)) metric. Specifically, \(P@k\) is computed as 1 if the target token is ranked among the top-k results, and 0 otherwise. ## V Results ### _RQ1: How well do code models understand API calls and imports?_ Our evaluation assesses the capability of pre-trained code models to encode knowledge of Python API names for both API calls and imports. 
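The \(P@k\) metric described in the evaluation setup is a direct reading of its definition; a minimal sketch (function names and example data are ours):

```python
def precision_at_k(predictions, target, k):
    """P@k for a single quiz: 1 if the target token appears among the
    top-k ranked predictions, else 0."""
    return 1 if target in predictions[:k] else 0

def mean_precision_at_k(results, k):
    """Mean P@k across pop quizzes; `results` is a list of
    (ranked_predictions, target) pairs."""
    return sum(precision_at_k(p, t, k) for p, t in results) / len(results)

# Three hypothetical quizzes with ranked model predictions.
results = [(["mask", "fill", "array"], "mask"),
           (["zeros", "ones", "mask"], "mask"),
           (["load", "save", "dump"], "stack")]
print(mean_precision_at_k(results, 1))  # 0.333...
print(mean_precision_at_k(results, 3))  # 0.666...
```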
We computed \(P@k\) scores for each masking strategy and present the results in Table III. In addition, we provide a few examples in Table IV. Firstly, we observe that the relative performances of different code models remain consistent when we vary the value of \(k\) in the \(P@k\) metric. Secondly, as \(k\) increases, the improvement in performance for each model becomes less significant. These observations provide strong evidence for the effectiveness of the PyINK benchmark. When comparing the model variants, our analysis reveals that CodeBERT-MLM-Python and GraphCodeBERT-MLM demonstrate superior performance on API calls and imports compared to other models. However, their overall precision of 30% measured by \(P@1\) falls short of perfection, indicating a lack of knowledge about API names. While we expected GPT-3.5-turbo to have a better understanding of API names, it performs slightly worse than CodeBERT-MLM. Additionally, our comparison shows that PLBART variants perform much worse on understanding Python API name knowledge than BERT-like models, which can be explained by the pre-training objectives of PLBART. PLBART-Large consistently outperforms the other variants, indicating that model size may be an important factor in the amount of stored API name knowledge. However, this finding should be interpreted in light of the scaling law of mixed-modal language models [38], which suggests that larger models are likely to achieve better performance on downstream tasks, such as code generation. Finally, we find that pre-training data can influence the understanding of API names to some extent, as shown by the performance gap between PLBART-Base and PLBART-CSNet. Our results indicate that fine-tuning on code generation tasks can improve the performance of pre-trained models, while text generation tasks may negatively impact them. 
### _RQ2: Do code models understand API import aliases?_ To assess the code models on the knowledge of API import aliases, we pair the 17,971 API import alias quizzes with adversarial examples designed to test the models' robustness. To construct the adversarial set, we randomly selected 10 distinct aliases that are used in other modules and replaced the original aliases in the quizzes with these new aliases. For example, "import numpy as np \(\backslash\)n np.load_(" will be transformed to "import numpy as pmd \(\backslash\)n pmd.load_(" via the replacement of "np". In the end, we collected 179,710 adversarial quizzes. We report \(P@k\) results of ten models in Table V and illustrate examples in Table VI.
Table II: Summary of each evaluated model, listing its pre-training dataset, pre-training objective, parameter count, and fine-tuning status.
Based on the comparison, CodeBERT-MLM-Python consistently outperforms GraphCodeBERT-MLM, achieving higher \(P@1\) scores of up to 24.17% for alias scenarios compared to 18.91%. CodeBERT variants also show better overall performance with \(P@50\) scores ranging from 59.88% to 68.37%, while GraphCodeBERT-MLM ranges from 59.09% to 62.17%. Our initial finding suggests that code models have a weaker understanding of API aliases compared to API calls and imports, as shown in Table III. This indicates that current code models encode little knowledge of import aliases. Based on the performance of the GPT-3.5-turbo model on 1,000 randomly sampled quizzes, we can infer that it has a greater capability to understand API import aliases. 
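The adversarial-alias construction can be sketched as follows. The helpers and alias pool are hypothetical, and a naive string replacement is used, which suffices as long as the alias does not occur as a substring of other identifiers in the quiz.

```python
import random

def adversarial_quizzes(quiz, alias, alias_pool, n=10, seed=0):
    """Replace the original import alias in a quiz with n distinct aliases
    used by other modules, yielding adversarial variants. NOTE: the naive
    string replace assumes the alias never occurs inside other tokens."""
    rng = random.Random(seed)
    candidates = [a for a in alias_pool if a != alias]
    return [quiz.replace(alias, new) for new in rng.sample(candidates, n)]

# Hypothetical pool of aliases drawn from "other modules"; the paper uses
# 10 distinct replacement aliases per quiz.
pool = ["pd", "plt", "tf", "sns", "nx", "sp", "cv", "sk", "torch", "scipy", "pmd"]
quiz = "import numpy as np \n np.load_("
for variant in adversarial_quizzes(quiz, "np", pool, n=2):
    print(variant)
```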
When comparing the results of the original API import alias quizzes with those of the adversarial aliases, we found only minor discrepancies, indicating that these code models are robust in understanding API import aliases. We further analyze the distribution of the API import aliases and find that an API is paired with 1.16 aliases on average, and 8% of APIs have more than 1 alias. We hypothesize that these code models are able to learn the compositional patterns of these APIs via different aliases, and thus manage to generalize to adversarial import aliases. ### _RQ3: Will natural language context affect the API name knowledge probing results?_ Our study examines the impact of natural language (NL) context on a code model's ability to comprehend API names. To build a dataset of API-related NL context, we utilize the NL queries designed for Python API sequence usages in the work of [33]. For example, <os.path.isfile> is paired with "file directory check". We selected queries whose API sequences contain PyINK APIs and transformed them into API pop quizzes. As some of the NL queries are long and infeasible for code models to process, we only choose the 10 shortest ones among these satisfying NL queries, and filter out cases where the combined length exceeds 512 tokens, as determined by the CodeBERT tokenizer. In the end, we collected 56,645 pop quizzes for API calls, and 56,644 for API imports. We compute the average \(P@k\) on each API name pop quiz with the group of NL queries prepended. We compare them with the overall results without NL context in Figure 4. Our results show that incorporating natural language queries led to a 2% improvement in probing API call name knowledge, although we contend that this may be due to their non-API-focused design, as they were initially created for API sequences rather than individual API calls [33]. 
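The NL-context setup can be sketched as follows. This is a hypothetical reimplementation: whitespace token counts stand in for the CodeBERT tokenizer, and the query texts are illustrative.

```python
def contextual_quizzes(queries, quiz, max_tokens=512, keep=10,
                       count_tokens=lambda s: len(s.split())):
    """Prepend each of the `keep` shortest NL queries to the quiz and drop
    any combined input longer than `max_tokens` (whitespace token counts
    stand in for a real subword tokenizer here)."""
    shortest = sorted(queries, key=count_tokens)[:keep]
    combined = [f"{q} {quiz}" for q in shortest]
    return [c for c in combined if count_tokens(c) <= max_tokens]

# Hypothetical NL queries; the paper pairs e.g. <os.path.isfile> with
# "file directory check".
queries = ["file directory check", "check if a path points to a file",
           "test file existence"]
for item in contextual_quizzes(queries, "os.path.[MASK]"):
    print(item)
```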
In addition, the relative performances among code models do not change before and after adding NL context, suggesting PyINK is robust for API name evaluation. We anticipate that incorporating API-focused natural language context will yield more substantial gains. ### _RQ4: How well can code models memorize and generalize on API names?_ We evaluate whether code models demonstrate a deeper understanding of the names of **seen** APIs during pre-training than the **unseen** ones by conducting experiments on CodeBERT-MLM, GraphCodeBERT-MLM, and PLBART-CSNet, which were pre-trained on the train set of CSNet. To create our PyINK-Mem version, we take all APIs appearing during training as the **seen** split and the remaining APIs as the **unseen** split. We filter out all the APIs belonging to the **seen** libraries in the **unseen** split. The PyINK-Mem **seen** split contained 281,945 quizzes for API calls and 281,945 quizzes for API imports. The **unseen** split had 7,640 API call quizzes and 7,640 API import quizzes. We note that the selected models do not memorize any structures of the open-source repositories, due to the function-level pre-training objective. In Table VII, we measure the model performance via \(P@k\) up to \(P@50\). Our inspection of the results on API call pop quizzes suggests there are only slight differences between the **seen** and **unseen** sets, indicating the strong generalization ability of these code models to new APIs. Among the three models we evaluated, CodeBERT-MLM demonstrates the most robust performance, while GraphCodeBERT-MLM shows a greater ability to memorize API names during pre-training. Surprisingly, we found that there were 5,468 and 1,288 distinct ground-truth tokens in the **seen** and **unseen** splits for API calls, respectively, and 1,257 tokens (97.59% of the **unseen** split) overlapped. This indicates that the API namespace designs share unexpected commonalities. 
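The overlap statistic reported above reduces to simple set arithmetic; a toy sketch (illustrative tokens, not the paper's data, which reports a 1,257-token overlap amounting to 97.59% of the unseen split's distinct tokens):

```python
def overlap_stats(seen_tokens, unseen_tokens):
    """Distinct ground-truth token counts per split, their overlap, and
    the overlap as a fraction of the unseen split."""
    seen, unseen = set(seen_tokens), set(unseen_tokens)
    shared = seen & unseen
    return len(seen), len(unseen), len(shared), len(shared) / len(unseen)

# Toy seen/unseen ground-truth tokens.
seen = ["mask", "load", "array", "zeros", "stack", "save"]
unseen = ["mask", "load", "open", "array"]
print(overlap_stats(seen, unseen))  # (6, 4, 3, 0.75)
```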
**Findings of RQ4:** Code models demonstrate impressive generalization abilities in predicting the names of programming functions for new domains and reasonable memorization of APIs from the training data. ## VI Discussion This section details the implications and importance of our experiments, and discusses threats to their validity.
Table VI: Example tests with ground-truth answers and the top predictions of CodeBERT-MLM-Python and GraphCodeBERT-MLM.
### _The implication of the evaluation on PyINK_ Our study evaluates API name knowledge in code models on the PyINK benchmark, encompassing a wide range of Python APIs. Our analysis reveals that while these APIs share similar design patterns regarding fully qualified naming, they also exhibit diverse characteristics. This observation underscores the challenges associated with modeling and comprehending the name information pertaining to these APIs. Our experiments on PyINK provide qualitative insight into the API name knowledge in the pre-trained code models. However, it is difficult to infer the specific quantitative amount, given that the models are built on different architectures. 
To address this, we suggest that comparing models from the same family would be more reasonable. Based on a detailed evaluation of PyINK, we find that code models can store some API name knowledge while learning from source code, but the knowledge is often insufficient. Although these models achieve reasonable precision within the top-50 predictions, they may struggle to answer pop quizzes correctly within the first few attempts. One consistent finding across various experiments is that code models may not treat API calls and imports similarly, even though these two are derived from the same API fully qualified name. We also empirically demonstrate that the code model's ability to capture API name knowledge can be influenced by several factors, including pre-training strategies, model parameters, pre-training data, and fine-tuning. Our results from RQ3 indicate that natural language queries can assist code models in locating specific API name information to some extent, underscoring the importance of context in constructing the approach. Furthermore, we emphasize that API name knowledge is distinct from most knowledge in natural language, in that code models can generalize to unseen API names. In comparison, language models consistently perform worse when generalizing to new domains [39, 40]. ### _Strengths of INK evaluation framework_ The INK framework has two key strengths. Firstly, it can diagnose code models' understanding of API names. While we found that discrete prompts may not be sufficient for instructing models like GPT-3.5 to complete pop quizzes, we recommend using few-shot examples in prompts to improve performance. For publicly available generative code models such as CodeGen [41] and SantaCoder [42] that may not be able to perform API name understanding tasks directly, we can use prompt tuning strategies like P-tuning [43] to effectively evaluate their performance. 
Secondly, INK's cloze-style pop quiz on API name understanding can be a valuable pre-training and fine-tuning task for code models. By learning to predict API names, code models can develop better representations of APIs and can be more easily adapted to various downstream tasks involving APIs, such as code summarization on API invocation and invoking APIs during code completion. ### _The importance of interpreting API name knowledge_ Acquiring knowledge of API names is essential for software development, as APIs play a critical role in automating various functionalities. However, code models used in current benchmarks do not adequately consider API usage when assessing performance. Our research shows that code models often struggle to preserve essential API name knowledge, leading to the need for knowledge-enhanced training paradigms. Inspired by recent advancements in knowledge-enhanced pre-trained models in natural language processing and computer vision, we propose exploring approaches that integrate API knowledge graphs into code models. Learning from research on API knowledge graphs [44, 45, 46] holds great promise for advancing the field. By incorporating these approaches into code models, we can improve the accuracy and effectiveness of automated software development. ### _Threats to Validity_ _Choice of Cloze-style Pop Quiz Design._ The employed pop quiz design involves predicting the partial or full name of each API module level. It is possible to prompt code models to predict the tokens in between when an API level involves multiple tokens. However, masking the first and last tokens of each module level is a suitable approach for assessing the extent of API name knowledge retained by code models, validating our pop quiz choice. While some may criticize our pop quizzes for targeting single-token prediction in the missing slot instead of multiple tokens, this design choice aligns with prior works in probing language models [24]. 
Introducing multi-token prediction could lead to more variability in decoding strategies among code models and make it difficult to finalize the sequence combined from all predicted tokens during evaluation. Although we acknowledge this potential threat to validity, we leave it to future work. _Choice of Evaluation Data._ The choice of evaluation data may affect the experimental results of the INK evaluation. While we used a widely-used corpus, CSNet, which covers a substantial number of Python APIs, it is important to acknowledge that there are additional resources, such as Stack Overflow1, that may contain more Python APIs. Moreover, CSNet was proposed in 2019, and new APIs may have been developed since then. We contend that our evaluation of PyINK using CSNet is statistically significant, but we also acknowledge the limitations of this corpus. Furthermore, code models may exhibit different behaviors when evaluated with APIs in other programming languages such as Java and C. To address this threat to validity, we can enhance the completeness of our evaluation by incorporating more programming languages on which these code models are trained. By evaluating the code models on a wider range of programming languages, we can better ensure their robustness and generalizability to real-world programming tasks. Footnote 1: [https://stackoverflow.com/](https://stackoverflow.com/) ## VII Conclusion In this paper, we have explored the interpretability of code models for source code (CodeBERT, GraphCodeBERT and PLBART). We conduct a thorough API name knowledge analysis based on a large-scale benchmark, PyINK, from the following four aspects, aiming to give an interpretation of code models. Firstly, we determine the API name knowledge stored by code models from two perspectives, API call and API import. Secondly, we investigate whether code models can robustly understand API import aliases. 
Thirdly, we revisit the settings in deep API learning and assess whether providing additional natural language context can help code models retrieve more precise API name knowledge. Fourthly, we examine the memorization and generalization of code models on API names. The analysis in this paper has revealed several interesting findings that can inspire future studies on code representation learning and the interpretation of knowledge encoded by code models.
2305.19737
Prevalence of non-stationarity in quasi-periodic pulsations (QPPs) associated with M- and X-class solar flares
Quasi-periodic pulsations (QPPs) are frequently observed in solar and stellar flare emission, with recent studies suggesting that an increasing instantaneous period is a common characteristic of QPPs. Determining the prevalence of non-stationarity in QPPs contributes to a better understanding of which mechanisms are responsible in QPP generation. We obtain the rate of period evolution from QPPs in 98 M- and X-class flares from Solar Cycle 24 with average periods between 8-130s and investigate the prevalence of QPP non-stationarity. We also investigate whether the presence of a Coronal Mass Ejection (CME) impacts the period evolution of QPPs. We analyse soft X-ray lightcurves obtained from GOES' X-Ray Sensor (XRS) and assess the dominant periods in the impulsive and decay phases of the flares using the Fast Fourier Transform. We relate the rate of period evolution to flare duration, peak flare energy, and average QPP period. We find evidence of non-stationarity in 81% of the flares assessed, with most QPPs exhibiting a period evolution of less than 10s between the impulsive and decay phases, of which 66% exhibited an apparent period growth and 14% showed an apparent period shrinkage. We find a positive correlation between the absolute magnitude of period evolution and the duration of the flare and no correlation between the period evolution of the QPPs and flare energy or CME presence. Furthermore, we conclude that non-stationarity is common in solar QPPs and must be accounted for in flare analysis.
Tishtrya Mehta, Anne-Marie Broomhall, Laura Hayes
2023-05-31T11:03:02Z
http://arxiv.org/abs/2305.19737v1
Prevalence of non-stationarity in quasi-periodic pulsations (QPPs) associated with M- and X-class solar flares

###### Abstract

Quasi-periodic pulsations (QPPs) are frequently observed in solar and stellar flare emission, with recent studies suggesting that an increasing instantaneous period is a common characteristic of QPPs. Determining the prevalence of non-stationarity in QPPs contributes to a better understanding of which mechanism(s) is (are) responsible for QPP generation. We obtain the rate of period evolution from QPPs in 98 M- and X-class flares from Solar Cycle 24 with average periods between 8-130 s and investigate the prevalence of QPP non-stationarity. We also investigate whether the presence of a Coronal Mass Ejection (CME) impacts the period evolution of QPPs. We analyse soft X-ray lightcurves obtained from GOES' X-Ray Sensor (XRS) and assess the dominant periods in the impulsive and decay phases of the flares using the Fast Fourier Transform. We relate the rate of period evolution to flare duration, peak flare energy, and average QPP period. We find evidence of non-stationarity in 81% of the flares assessed, with most QPPs exhibiting a period evolution of \(\leq\)10 s between the impulsive and decay phases, of which 66% exhibited an apparent period growth and 14% showed an apparent period shrinkage. We find a positive correlation between the absolute magnitude of period evolution and the duration of the flare and no correlation between the period evolution of the QPPs and flare energy or CME presence. Furthermore, we conclude that non-stationarity is common in solar QPPs and must be accounted for in flare analysis.

keywords: Sun: flares, Sun: oscillations, Sun: particle emission, Sun: X-rays, Sun: coronal mass ejections (CMEs)

## 1 Introduction

The emission from a solar flare often demonstrates fluctuations in intensity as a function of time.
These fluctuations are known as Quasi-Periodic Pulsations (QPPs) and are characterised as repetitive bursts with similar time-scales that can range from seconds to several tens of seconds (Nakariakov and Melnikov, 2009; Van Doorsselaere et al., 2016; Kupriyanova et al., 2020). QPPs are identified across the entire electromagnetic spectrum of flare emissions, meaning that they are typically a multi-wavelength phenomenon (e.g. see Clarke et al., 2021). While non-thermal hard X-ray and microwave observations clearly demonstrate the most prominent pulsations during a flare, measurements from the past Solar Cycle with Sun-as-a-star soft X-ray (SXR) and extreme ultraviolet (EUV) observations have shown that small-amplitude QPPs are a very common feature of solar flares (Simoes et al., 2015; Dominique et al., 2018; Hayes et al., 2020). The study of solar flare emission fluctuations extends beyond our solar system as stellar flare QPPs have been extensively observed (Zhilyaev et al., 2000; Pugh et al., 2016; Broomhall et al., 2019). These QPPs observed in stellar flares are largely similar in characteristics to those observed in solar QPPs, which strengthens the case for a solar-stellar analogy for QPPs (see Zimovets et al., 2021, for an overview on recent advances in observations of stellar QPPs). Therefore a better understanding of the mechanism(s) driving QPPs in solar flares is likely to lead to advances in stellar QPPs. The question as to what causes these repetitive flare emissions has been the topic of significant discussion (McLaughlin et al., 2018), with over fourteen different mechanisms suggested to date (see Kupriyanova et al., 2020; Zimovets et al., 2021, and references therein for an overview on generation mechanisms). The proposed generation mechanisms can be sorted into three groups: 1. Mechanisms that modulate the direct release of plasma emissions as the result of MHD oscillations; 2.
Mechanisms where MHD waves modulate the efficiency of energy release; 3. Mechanisms based on spontaneous quasi-periodic energy release. Despite the growing number of mechanisms proposed to underpin the generation of QPPs, we are not yet in a position to confidently identify which mechanism is responsible and it seems likely that there are multiple mechanisms at play in generating QPPs. There is an expanding catalogue of QPPs which exhibit non-stationary properties, with the phase, period, and amplitude varying in time (see Nakariakov et al., 2019, for a review). For example, period drifts have been identified in several flares (Kupriyanova et al., 2010; Simoes et al., 2015; Kolotkov et al., 2018), with it being common to find the decay phase periods longer than the associated impulsive phase periods (e.g. Hayes et al., 2016, 2020). Notably, in some cases, QPPs can be observed to extend late into the decay phase of solar flares and illustrate systematic increases in periods (Dennis et al., 2017; Hayes et al., 2019). There is a growing need to understand how the periods evolve over flares, whether period drifts are a common feature of flare QPPs, and whether the period drifts are systematic based on flare class, duration, or whether they are eruptive or not. We also need to address the prevalence of non-stationarity in solar QPPs, as the majority of detection methods used currently rely on a periodogram-based approach. As discussed in Broomhall et al. (2019), periodogram-based approaches tend to be less successful when detecting a non-stationary QPP. It is likely that we are missing, or at best, poorly characterising, the presence and behaviour of many QPPs by assuming their dominant periods are stationary. In quantifying the proportion of QPPs that exhibit non-stationarity we can better discern which analysis methods are the most appropriate to use when searching for and categorising QPPs.
In this work we explore the nature of QPP period drifts by investigating whether non-stationarity is an inherent feature of QPPs. To achieve this we build upon the work of Hayes et al. (2020) and we present a comparison of the dominant periods (the periodicity that corresponds to the largest peak relative to the confidence level in a power spectrum) in the impulsive phase of the flare (characterised as the time from the start of the flare to the time corresponding to flare maximum) and the decay phase (after the flare peak) in QPPs from M- and X-class flares from Solar Cycle 24. By examining the prevalence of QPPs that show evidence of non-stationarity we can potentially classify the different types of QPPs present in solar flare emission, and help constrain which mechanisms can drive QPPs.

## 2 Observations and Analysis Methods

### Data

To select a list of flares for which to perform this study, we utilise a list of M- and X-class GOES flares from 1st February 2011 to 31st December 2018 (i.e. Solar Cycle 24) that demonstrated strong evidence of QPP signatures in their emission from the study of Hayes et al. (2020). This list consists of 205 flare events that showed enhanced Fourier power in the periodograms of the GOES-XRS 1-8 Å channel observations. We further analyse this list of flares by focusing on the same 1-8 Å channel from the GOES-15 satellite, which has a cadence of 2.047 s, and focus on analysing the impulsive and decay phases of the flares independently to identify features of non-stationarity and period drift. To determine the duration of the impulsive phase, we use the flare start and peak times defined within the GOES flare catalogue produced by the National Oceanic and Atmospheric Administration (NOAA). NOAA define the flare start time as the first minute in a sequence of four minutes wherein there is a steep monotonic increase in the 1-8 Å channel and the final flux value is greater than the first by a factor of 1.4.
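The NOAA start-time rule quoted above is mechanical enough to sketch in code. The helper below is illustrative (the name and interface are not from the paper) and assumes the flux series has already been resampled to one sample per minute:

```python
import numpy as np

def flare_start_index(flux):
    """Locate a flare start following the NOAA definition quoted above:
    the first minute in a sequence of four minutes with a steep monotonic
    increase in the 1-8 Angstrom flux, where the final value exceeds the
    first by a factor of 1.4.  Assumes one sample per minute; the
    function name and interface are illustrative, not from the paper."""
    flux = np.asarray(flux, dtype=float)
    for i in range(len(flux) - 3):
        window = flux[i:i + 4]
        if np.all(np.diff(window) > 0) and window[3] > 1.4 * window[0]:
            return i  # index of the first minute of the rise
    return None
```

For a quiet baseline followed by a steep rise, the function returns the index where the four-minute monotonic run begins; it returns `None` if no qualifying run exists.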
The flare peak time is the time at which the flare's soft X-ray emission reaches its flare peak energy, which is its maximal value as measured in the 1-8 Å channel. For our analysis, we limit the time window of the decay phase to the same duration as the impulsive phase. We use this method of choosing the end times rather than using the end times defined within the GOES flare catalogue. This is because of our implementation of criterion (_ii_) (discussed in Section 2.2) which requires 5 or more full cycles in each phase of the flare. This means that for a flare with impulsive/decay phases of unequal length, each phase has a different upper limit on the maximal periodicity that can be obtained. This discrepancy in the upper limit threatens to artificially induce artefacts in the data. Therefore for consistency we limit the time window of the decay phase to the same duration as the impulsive phase, as can be seen in Fig. 1. However for most events the end times we chose and those defined by the GOES catalogue were similar. To examine whether the presence of a Coronal Mass Ejection (CME) correlates with the appearance or magnitude of a period evolution of the QPPs, we use the publicly available SOHO/LASCO CME catalogue to determine which flares had associated CMEs.

### Method

We separate each flare into its impulsive and decay phases and perform a Fast Fourier Transform (FFT) on each phase, testing whether a periodic signature is present above a 95% confidence level. We obtain the confidence levels by making use of the technique outlined in Pugh et al. (2017), which is based upon the work in Vaughan (2005). This method involves fitting the power spectrum with a broken power law, which accounts for both the presence of red and white noise in the signal and avoids the problems that can arise in assessing the significance of an identified periodic signature when detrending data. Using this fitting we determined the 95% confidence level.
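The windowing and dominant-period steps above can be sketched as follows. This is a simplified stand-in: the full method additionally fits the spectrum with a broken power law and keeps only peaks above the 95% confidence level (Pugh et al. 2017), which is omitted here, and the helper names are assumptions rather than code from the paper:

```python
import numpy as np

def split_phases(t, flux, t_start, t_peak):
    """Split a lightcurve into impulsive (start to peak) and decay
    windows, with the decay window truncated to the duration of the
    impulsive phase as described above.  Times in seconds."""
    t = np.asarray(t)
    flux = np.asarray(flux)
    dur = t_peak - t_start
    imp = (t >= t_start) & (t < t_peak)
    dec = (t >= t_peak) & (t < t_peak + dur)
    return flux[imp], flux[dec]

def dominant_period(flux, cadence=2.047):
    """Period (s) of the largest peak in the FFT power spectrum of a
    mean-subtracted signal.  The confidence-level test against a broken
    power law fit is intentionally omitted from this sketch."""
    flux = np.asarray(flux, dtype=float) - np.mean(flux)
    power = np.abs(np.fft.rfft(flux)) ** 2
    freqs = np.fft.rfftfreq(flux.size, d=cadence)
    k = 1 + np.argmax(power[1:])  # skip the zero-frequency bin
    return 1.0 / freqs[k]
```

Applied to a synthetic 50 s oscillation sampled at the GOES-15 cadence of 2.047 s, `dominant_period` recovers the input period to within the frequency resolution of the window.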
Any peaks in the power spectra above these confidence levels were deemed to be statistically significant. We make use of this method as it was determined to be highly effective in robustly detecting the period of QPPs in a hare-and-hounds exercise (see Table 5 in Broomhall et al., 2019). However we note that periodogram-based methods do fail in the detection of non-stationary QPPs (as discussed in Section 5.4 of Broomhall et al., 2019), whereas Empirical Mode Decomposition (EMD) and other methods that allow for varying time scales were more effective in detecting these QPPs. We chose not to use EMD as it struggles with non-detrended data and can be a user-intensive process. Instead we opted to use the Fourier-based method on a windowed signal. This constrained our study to periodicities that are relatively stationary within their shortened durations. This is a clear limitation in our work as we are unlikely to detect periodicities that evolve rapidly in either flare phase due to spectral leakage in the resulting power spectra. In theory, we may be able to detect some of the more rapidly evolving periodicities in the data using shorter or overlapping windows, should they exist; however preliminary studies showed that reducing the duration of the signals resulted in fewer overall detections, which we attribute to the decreased number of oscillatory cycles in the data. The flare database that this study uses originates from a periodogram-based approach (Hayes et al., 2020), and so we find it likely that the FFT will produce statistically meaningful results in both phases. This technique allows for a statistically sound analysis that can be applied to a large sample of flares. We note that recent literature suggests that the significance of peaks in periodograms can be overestimated for non-stationary QPPs if segments are poorly selected. We follow a suggested mitigation strategy put forward in Hübner et al.
(2022) by splitting the flare event into two phases and only assessing events in which there is similar statistically significant QPP-like behaviour in both segments, as outlined below. After performing a FFT on both phases of a given flare and obtaining the dominant periods, we discard the data if it does not fulfil the following criteria: _(i)_ the periods obtained for both phases must be statistically significant above a 95% confidence level, _(ii)_ the periods for both phases must be less than one tenth of the full duration of the flare, _(iii)_ the periods for both phases must be greater than four times the cadence of the data (i.e. both periods must be greater than 8.19 s), and _(iv)_ the impulsive phase period must not be greater or smaller than the decay phase period by more than a factor of eight. Criterion _(ii)_ aims at targeting QPPs with at least five full oscillatory cycles in both the impulsive and decay phases. We also restrict our periods to be greater than four times the cadence of the dataset (criterion _(iii)_). This is because we believe detections of periods smaller than this are unreliable when detected by GOES alone and must be accompanied by other data sources with better time resolution. Finally, we believe that a change in period by a factor larger than eight (criterion _(iv)_) implies that the QPP in the impulsive phase does not correspond to the QPP in the decay phase. This could, for example, be caused by two periodicities present in the signal but one not reaching the 95% confidence level due to a change in the signal-to-noise ratio. It is important to state that the above criteria not being met for a given flare event does not necessarily imply that no QPPs were present. Rather there may have been QPPs that were not statistically significant in both phases, or one whose period evolution was outside of the criteria we put forward.
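Criteria (ii)-(iv) above amount to a simple filter over a pair of candidate periods. A minimal sketch (an illustrative reading of the criteria, not code from the paper; criterion (i), significance above the 95% level, is assumed to have been checked on the spectra already):

```python
def passes_criteria(p_imp, p_dec, flare_duration, cadence=2.047):
    """Apply selection criteria (ii)-(iv) to a pair of candidate QPP
    periods (all quantities in seconds)."""
    # (ii): both periods under one tenth of the full flare duration
    if p_imp >= flare_duration / 10 or p_dec >= flare_duration / 10:
        return False
    # (iii): both periods over four times the cadence (8.19 s for GOES-15)
    if p_imp <= 4 * cadence or p_dec <= 4 * cadence:
        return False
    # (iv): periods within a factor of eight of each other
    return max(p_imp, p_dec) / min(p_imp, p_dec) <= 8
```

For example, a 20 s impulsive-phase period paired with a 30 s decay-phase period in a 1000 s flare passes, while a 5 s period fails criterion (iii).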
However we restrict our study to these criteria in the interest of reliability and consistency of results. This resulted in 98 flares which fulfilled all the criteria, which are discussed in Section 3. We define the term _period drift_ to measure the change in period from the impulsive phase to that in the decay phase, equal to \(\mathrm{Period}_{Decay}-\mathrm{Period}_{Impulsive}\). A positive period drift implies an increase in dominant period from the impulsive phase to the decay phase and vice versa. We emphasise that there may be multiple processes present in generating the QPPs and a positive period drift does not imply the growth in period of a singular QPP process; for example, such an effect could similarly be produced by a process producing shorter period QPPs decaying in amplitude in tandem with a secondary longer period process growing in amplitude. This would result in a growth in dominant period across the two phases, i.e. a positive period drift. We determine the average period of the flare by taking the mean of the dominant periods in the impulsive and decay phases. As we are examining the prevalence of non-stationarity in QPPs we avoid taking an FFT of the entire duration of the flare to obtain the average period, as a non-stationary signal that has significant period evolution is not well suited to the FFT, which assumes a stationary input. It is possible that a non-stationary signal which evolves over several frequencies will show evidence of spectral leakage in its associated power spectrum, leading to any dominant peaks being smeared out and presenting no statistically significant peaks. This is naturally still an issue to be considered when assessing only the impulsive or decay phase, and any quickly evolving periodicity is likely to be obscured in the same manner, which may lead to a number of false negatives in our results when statistically significant periods are not found in our analysis.
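The period drift and average period defined above, together with the standard quadrature propagation of their uncertainties (Hughes & Hase 2010, Sec. 4.2.1) and the dimensionless rate of period drift used later in the text (drift over half the flare duration), can be sketched as (the function name is an assumption):

```python
import math

def drift_statistics(p_imp, sig_imp, p_dec, sig_dec, flare_duration):
    """Period drift, average period, their propagated uncertainties,
    and the unitless rate of period drift.  All inputs in seconds;
    the two period errors are assumed independent."""
    drift = p_dec - p_imp
    sig_drift = math.hypot(sig_imp, sig_dec)      # difference: add in quadrature
    avg = 0.5 * (p_imp + p_dec)
    sig_avg = 0.5 * math.hypot(sig_imp, sig_dec)  # mean: half the quadrature sum
    rate = drift / (flare_duration / 2.0)
    return drift, sig_drift, avg, sig_avg, rate
```

For instance, periods of 40 ± 3 s and 50 ± 4 s in a 1000 s flare give a drift of 10 ± 5 s, an average period of 45 ± 2.5 s, and a rate of 0.02.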
However by splitting the flare into sections we should still be able to observe some periods with sufficiently slow evolution and still pick up on their long-term non-stationarity. We determine the errors on the periods from the impulsive and decay phases by use of the standard approach, and propagate these errors to obtain the errors on the period drift and the average period (see Section 4.2.1 in Hughes and Hase 2010 for a detailed discussion on error propagation). Fig. 1 shows the 1-8 Å lightcurve for Flare 40, where the duration of the flare has been symmetrically split into the impulsive phase until flare maximum, and the decay phase. Fig. 2 shows the Fourier spectra of Flare 40's impulsive and decay phases, which show significant periods of \(43.3\) s and \(54.9\) s respectively, corresponding to a period drift of \(11.6\) s.

## 3 Results

We examine 205 M- and X-class solar flares over Solar Cycle 24, resulting in 98 flares that show statistically significant periods in both the impulsive and decay phases of the flare, with both periods greater than four times the cadence of the dataset, less than one tenth of the full duration of the flare, and separated in period by no more than a factor of eight. We consider a period drift to be statistically significant if its absolute magnitude is greater than 4.09 s, which is twice the cadence of the data. This is a cautious approach as we see that the errors on periods are generally smaller than the cadence. Of these 98 flares, 19 (equivalent to 19%) showed no significant period drift. Of the remaining 79 QPPs, 65 (66% of the sample) exhibited a positive period drift where the dominant period appears to increase from the impulsive to the decay phase. 14 flares (14%) exhibited a negative period drift where the dominant period appears to shrink between the phases. Fig.
3 shows the relationship between the impulsive and decay phase periods of the 98 flares examined. It can be seen that the majority of results appear above the 1:1 ratio line shown in solid black, which indicates more QPPs have a larger decay phase period than impulsive phase period. For the QPPs showing an apparent period growth, the decay phase periods are loosely correlated to the impulsive phase periods by a factor of approximately 1.4, although there is significant scatter for events with decay phase periods greater than 40 s. This correlation agrees well with the factor of \(\sim\)1.6 that was found in a similar analysis, shown in Fig. 10 of Hayes et al. (2020), which shows the difference in periods detected during the impulsive and decay phases of 28 flaring events (20 of which overlap with the study presented in this paper). We note that the authors found that 26 of these events (92%) showed a larger decay phase period than impulsive phase period and that their factor is based on the fitting of all 28 events, not just those that show period growth. For the 65 QPPs exhibiting positive period drift, the median period drift is \(13^{+10}_{-9}\) s, where the errors correspond to the periods in the upper and lower 25\({}^{\rm th}\) percentiles. Similarly the median negative period drift for the 14 flaring events is \(-10^{+3}_{-24}\) s. We examine whether the presence of a CME associated with the flare impacts the distribution of period drifts in QPPs. Of the 98 QPPs, 69 were associated with a CME and 29 were not. Fig. 4 shows the histogram of period drifts in QPPs from flares associated with CMEs (red) and those from flares not associated with CMEs (black).

Figure 1: Profile of Flare 40 in GOES-XRS 1–8 Å, where the impulsive phase is shaded in red and the decay phase is unshaded. The analysed impulsive and decay phases are equal in duration and are delineated by flare maximum, which occurs at approximately 11:10 UT.
The distributions of the two sets are reasonably similar, with median period drifts of \(10^{+12}_{-9}\) s for the CME-associated flares and \(5^{+4}_{-6}\) s for the non-CME-associated flares. The maximal and minimal period drifts across both groups are also similar, with the CME-associated group having maximal and minimal period drifts of 98 and -126 s, and the non-CME-associated group 121 and -76 s. Fig. 5 shows the relationship between absolute period drift and average QPP period. Positive period drifts are shown in blue, and the absolute values of negative period drifts are shown in orange. QPPs associated with a CME are shown with a triangle and non-CME-associated events are marked with a circle. The meanings of the colours and symbols used in Fig. 5 are consistent for the remainder of this paper. A positive correlation, with a Pearson correlation coefficient of 0.76, can be seen between the average period of the QPPs and the magnitude of the period drift. However we emphasise that this artificial correlation is largely induced by the selection criterion (iv) of the flares. Maximal flare energy, which is taken to be the maximal emission as measured in the 1-8 Å channel, and QPP period drift are seen to have no correlation in Fig. 6. As expected, the flares not associated with CMEs are more commonly found at lower energies but this distinction has no significant effect on the magnitude or direction of the period drifts observed. Fig. 7 shows a positive correlation between the absolute value of the period drift of the QPPs and the duration of the flare, with a Pearson correlation coefficient of 0.82. This relationship can likely be attributed to the fact that longer duration flares allow more time for any non-stationary QPP periods to evolve, which leads to greater magnitude period drifts, in addition to the artificial correlation between average period and absolute period drift seen in Fig. 5. There is no noticeable difference between the relationship of flare duration to period drift magnitude for positive or negative period drifts. The period drift of all QPPs in the 98 flares may be visualised in Fig. 8 (or explored in Table 1 found in the Appendix). The periods of the QPPs are given on the horizontal axis, with bullet points indicating the period in the impulsive phase, and arrow heads indicating the period in the decay phase. Therefore arrows pointing right and coloured red indicate a positive period drift. Conversely blue arrows, pointing left, indicate a negative period drift. The period drift from a given flare is plotted against the corresponding flare's duration. The inset axes show an enlarged region of the plot for flares with durations less than 2500 s.

Figure 2: Fourier spectra of Flare 40. _Top_: Fourier spectrum of the impulsive phase. _Lower_: Fourier spectrum of the decay phase. Fits of the spectra by broken power laws are shown by solid red lines, and the 95% confidence levels are indicated with dashed red lines. Statistically significant peaks (indicated by vertical orange lines) can be seen corresponding to periods of 43.3 s in the impulsive phase and 54.9 s in the decay phase.

Figure 3: QPP impulsive phase periods against decay phase periods. A 1:1 ratio line (which indicates no period drift) is shown as a solid black line. The impulsive phase periods are between 8 – 75 s, and are approximately similar across all flares, whereas the decay phase periods have a larger spread between 8 – 110 s. The line of best fit for QPP periods that grew between the impulsive and decay phases is shown as a dashed blue line, and the line of best fit for periods that shrunk is shown as a dot-dashed blue line. This figure uses new data to recreate Fig. 10 from Hayes et al. (2020).

Figure 4: Histogram of period drifts of QPPs, separated by CME association. The QPPs seen in flares associated with CMEs are given in red, and those not associated with a CME are shown in black.
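The correlation coefficients and best-fit lines reported above (e.g. r = 0.82 for flare duration against absolute period drift) follow the standard Pearson and least-squares definitions. A generic sketch, not the paper's plotting code:

```python
import numpy as np

def correlation_and_fit(x, y):
    """Pearson correlation coefficient and least-squares linear fit
    (slope, intercept) for two paired samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope, intercept = np.polyfit(x, y, 1)
    return r, slope, intercept
```

On perfectly linear data the routine returns r = 1 and recovers the line exactly; on the flare sample it would yield the quoted coefficients and the dashed fit lines in Figs. 5 and 7.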
Flares with longer durations naturally allow for more time to evolve, leading to larger magnitude period drifts as discussed previously. The majority of results are clustered for flare durations less than 2500 s (\(\sim\) 40 minutes), with impulsive and decay phase periods of 40 s or less. We control for the duration of the flares and now examine the rate at which the QPP periods evolve. The rate of period drift is defined as the period drift divided by half the duration of the flare, and is therefore a unitless quantity. Fig. 9 shows the distribution of the magnitude of the rate of period drift against average QPP period as a scatter plot (Top) and histogram (Lower). As can be seen, the rates of period drift have considerable scatter, although the absolute rate of period drift appears to cluster around \(\sim\)0.01 for average periods greater than 40 s, an effect that cannot be attributed to the selection criteria. Due to the selection criteria discussed in the methods section, the maximal possible absolute rate of period drift for the data used in this study is 1.4. The maximal rate of positive period drift seen in these results is 0.06 and the maximal rate of negative period drift is -0.1, although the majority of the rates of period drift are between 0.02 and 0.03. There is no apparent correlation between the presence of a CME and the rate at which the QPP in the associated flare evolves. We also find that there is no correlation between the rate of period change and the flare energy, which suggests that QPP periods evolve at a rate independent of the peak flare energy. We also see the rate of period change to be uncorrelated with flare duration. This can be seen in Figs. A1, A2 in the Appendix.

Figure 5: Average QPP period plotted against the absolute magnitude of the QPP period drift. Positive period drifts, indicating a growth in dominant period, are shown in blue, and negative period drifts are shown in orange. QPPs from flares associated with CMEs are indicated by a triangle marker whereas those not associated with CMEs are shown with bullet points. The Pearson correlation coefficient is 0.76, indicating a positive correlation. A linear fit of the data is shown as a black dashed line.

Figure 6: Peak flare energy as measured in _GOES_ 1 – 8 Å plotted against QPP period drift, with no correlation. The meanings of colours and symbols are as given in Fig. 5.

Figure 7: Flare duration plotted against the absolute magnitude of the QPP period drift. The Pearson correlation coefficient is 0.82, indicating a positive correlation. A linear fit of the data is shown as a black dashed line. The meanings of colours and symbols are as given in Fig. 5.

Figure 8: Arrows show the evolution of statistically significant periods in the impulsive and decay phases of 98 flares, with the arrow pointing from the impulsive phase (indicated with a bullet point) to the decay phase (arrow head). A period growth, i.e. a positive period drift, is shown in red and a negative period drift is given in blue. The period drift in the QPP is plotted against the flare's duration, both given in seconds.

## 4 Discussion

Firstly, we remind the reader of the biases and limitations of our study. All of the flaring events we examined had evidence of QPPs in the first place, detected by Fourier analysis. This biases the dataset towards QPPs that were stationary or slowly-evolving in periodicity, meaning that the results in this paper are likely to underestimate the population of QPPs undergoing rapid period evolution. We have chosen to split the flare into two phases, a choice which is ultimately arbitrary and done for convenience. This again biases the data and forces QPPs to be represented as stationary within an individual phase. It also neglects the possibility of QPPs which exist in e.g.
only the impulsive or decay phase, or a shorter duration, which may be driven by entirely different generation mechanisms to the QPPs examined here. A more comprehensive study should look at QPP period evolution as a continuous process. It may be that any apparent period evolution is non-linear and follows some different schema. By repeating this analysis with some method that has time resolution, such as a continuous wavelet-transform (CWT) or Empirical Mode Decomposition (EMD) we may be able to uncover valuable information about the time evolution of the apparent period drifts. This may also be useful in discerning the generation mechanism(s) that are active in the appearance of these QPPs. As discussed earlier, a reader may be misled by these results into thinking that a single process is occurring in which the period is growing or shrinking. Instead it is possible that several periodicities exist at once, each generated by a separate QPP mechanism. A limitation of this work is that we only extract the period associated with the dominant peak from the FFT spectrum, ignoring additional potentially statistically significant peaks. In this paper, we associate the dominant periods in the FFT spectrum of each phase to produce a period drift, however this may not always be the most appropriate way to examine the change in instantaneous period of a QPP. For example it is possible for a given stationary periodicity to be present throughout the duration of the flare, and appear as the dominant peak in the FFT spectrum of the impulsive phase but as a secondary peak in the FFT spectrum of the decay phase due to an emergence of a secondary periodic process with greater amplitude. This may produce the appearance of a large magnitude period drift when both processes may in fact be stationary. 
However for the majority of the events assessed here (77/98, 79%), the FFT spectra of the impulsive and decay phases either resulted in dominant periods that were similar in magnitude (suggesting the direct evolution of a singular process) or produced only one peak in each phase that fulfilled the criteria discussed in Section 2.2 and appeared above the 95% confidence level. Therefore for these results the risk of drawing incorrect conclusions due to erroneously associated periodicities is low. We have shown that the majority (81%) of flaring events which have evidence of QPPs in both the impulsive and decay phases exhibit non-stationary behaviour. Although this sample is not strictly representative of the behaviour of QPPs en masse, due to the aforementioned biases in the data, the results discussed here are a strong indicator that we must consider non-stationarity to be a common property of QPPs and account for it in our methodology. If we search for QPPs by utilising methods that assume a stationary signal, such as the FFT, we risk false-negative results where the non-stationarity of QPPs may cause spectral leakage. We also risk poorly categorising the behaviour of QPPs by assigning a single value for the QPP period. This is important because different QPP mechanisms allow for the presence of non-stationarity in different ways and we must not omit valuable data by treating the QPP periods as a fixed value if we are to determine what causes QPPs. We also note the disparity in the proportion of flaring events showing a positive period drift (66%) compared to those showing a negative period drift (14%). This suggests an apparent growth in QPP period is more common than an apparent shrinkage, as previously reported in single-event studies (e.g. Hayes et al., 2016; Dennis et al., 2017; Hayes et al., 2019), and in smaller statistical studies (Simoes et al., 2015; Hayes et al., 2020).
We also note that most of the period drift that we observe is of small magnitude, most commonly between \(\pm 10\) s. The rates at which the QPPs evolved in period exist over the same ranges and in roughly the same populations for both growing and shrinking QPP periods, without any dependence on QPP average period or maximum flare energy. We note that the presence of CMEs or peak flare energy seems to have no effect on whether the QPP periods grow or shrink, or on the magnitudes of the period drifts. We see that longer duration flares are correlated with greater magnitude period drifts. It is possible that other properties, such as CME speed or the magnetic configuration of the Active Region, could play a role in determining if and how the QPP periods evolve.

Figure 9: _Top:_ Scatter plot of the absolute magnitude of the rate of period drift, plotted against the average period of the QPP. _Lower:_ Histogram of the rate of period drift. The meanings of colours and symbols in the Top panel are as given in Fig. 5.

## 5 Conclusions

There is clear evidence that non-stationarity is a common phenomenon in QPPs observed in M- and X-class solar flares, with period growth appearing more common than period shrinkage. We must consider this when investigating flaring events for QPPs and be wary about how we assign values to QPP periodicities. It appears that most QPPs that show non-stationarity evolve in period at similar rates. It is unlikely that the presence of CMEs alone, or the peak flare energy, impacts the presence or magnitude of QPP period evolution. As seen in Table 1 of Zimovets et al. (2021), there are many generation mechanisms (from all of the previously mentioned groupings) that have the potential to produce QPPs with non-stationary properties.
In building a catalogue of QPPs that exhibit non-stationarity (see Table B1 in the appendix), future work may determine commonalities, such as the magnetic configuration of the flare site, which could be used to narrow down which mechanisms are responsible for driving non-stationary behaviour. Further work with spatial resolution of the flare site may be valuable in investigating the cause of QPP period evolution.

## Acknowledgements

L.A.H is supported by an ESA Research Fellowship. A-MB acknowledges support from the Science and Technology Facilities Council (STFC) consolidated grant ST/T000252/1. The CME catalogue is generated and maintained at the CDAW Data Center by NASA and The Catholic University of America in cooperation with the Naval Research Laboratory. SOHO is a project of international cooperation between ESA and NASA. This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project 527: Bridging New X-ray Observations and Advanced Models of Flare Variability: A Key to Understanding the Fundamentals of Flare Energy Release. This research made use of sunpy (SunPy Community et al., 2020), matplotlib (Hunter, 2007), numpy (Oliphant, 2007), pandas (Wes McKinney, 2010), and scipy (Virtanen et al., 2020).

## Data Availability

The data used here are all publicly available. The GOES-XRS data is available online from NOAA (ngdc.noaa.gov/stp/satellite/goes/index.html) and the SOHO/LASCO CME catalogue can be found at cdaw.gsfc.nasa.gov/OME_list/. The procedures described in Section 2.2 can be found in the following repository: github.com/chloopugh/QPP-confidence-levels.
2310.20472
Direct detection of dark matter: a critical review
The nature of the dark matter in the Universe is one of the hardest unsolved problems in modern physics. Indeed, on one hand, the overwhelming indirect evidence from astrophysics seems to leave no doubt about its existence; on the other hand, direct search experiments, especially those conducted with low background detectors in underground laboratories all over the world seem to deliver only null results, with a few debated exceptions. Furthermore, the lack of predicted candidates at the LHC energy scale has made this dichotomy even more puzzling. We will recall the most important phases of this novel branch of experimental astro-particle physics, analyzing the interconnections among the main projects involved in this challenging quest, and we will draw conclusions slightly different from how the problem is commonly understood.
Marcin Misiaszek, Nicola Rossi
2023-10-31T14:05:58Z
http://arxiv.org/abs/2310.20472v1
# Direct Detection of Dark Matter: A Critical Review

###### Abstract

The nature of the dark matter in the Universe is one of the hardest unsolved problems in modern physics. Indeed, on one hand, the overwhelming indirect evidence from astrophysics seems to leave no doubt about its existence; on the other hand, direct search experiments, especially those conducted with low background detectors in underground laboratories all over the world, seem to deliver only null results, with a few debated exceptions. Furthermore, the lack of predicted candidates at the LHC energy scale has made this dichotomy even more puzzling. We will recall the most important phases of this novel branch of experimental astro-particle physics, analyzing the interconnections among the main projects involved in this challenging quest, and we will draw conclusions slightly different from how the problem is commonly understood.

## Introduction

The missing mass problem of the Universe [1] is commonly framed within the standard cosmological model with cold dark matter (\(\Lambda\)CDM) [2]. This scenario can be easily actualized with the existence of heavy and collisionless particles, _e.g._ massive particles with mass comparable to or larger than typical atomic nuclei (tens of GeV/c\({}^{2}\) or more). If massless or much lighter, the dark matter should have some interaction properties, or some coherent aggregation, sufficient to simulate non-relativistic gravitational clustering, capable of speeding up the structure formation, as observed, after the matter-radiation decoupling epoch in the early Universe [3]. Alternative theories, such as _Modified Newtonian Dynamics_ (MOND) [4], super-fluid dark matter [5], or even a full General Relativity approach [6], have been showing waves of interest in the scientific literature.
It is worth mentioning, even though it is not the main topic of this article, that some empirical evidence, such as the Tully-Fisher relation [10] and Renzo's rule [11] for galaxies, is hardly explainable by standard cold dark matter galactic halos, and these issues, very well addressed by MOND theories, sometimes seem completely ignored when considering any particle explanation of the missing mass of the Universe. Along with this, many issues related to the \(\Lambda\)CDM model are still unsolved, such as the _missing satellites_ problem, the _cusp-core_ problem and the _too-big-to-fail_ problem (see _e.g._ [1] and Refs. therein). Nevertheless, the particle hypothesis for dark matter, even if strongly weakened by the astonishing absence of super-symmetry (SUSY) at the Higgs scale at the LHC accelerator [12], still attracts constant interest in the scientific community, mostly for its experimental feasibility. In other words, the costs and technology for direct dark matter searches in underground laboratories are still so affordable that the biggest part of the experimental community believes it is still worth trying, as a main goal in scientific programs for the forthcoming decades.

Since the early 90s, when the hypothesis of dark matter began to be taken seriously by scientists, many candidates have been proposed, with mass ranging from axions with masses starting from \(10^{-22}\) eV/c\({}^{2}\) to primordial black holes with masses up to 5 M\({}_{\odot}\approx(10^{67}\) eV/c\({}^{2})\) [1]. This wide range of 89 orders of magnitude is filled almost homogeneously with many variants of the proposed models, and it is basically limited by astrophysical constraints related to the observed macro-structures.
Within this range, it is worth mentioning the most important candidates in order of increasing mass: axions, with sub-eV/c\({}^{2}\) mass, detectable in haloscopes or low background detectors [1]; sterile neutrinos [14], with mass of the order of 1 keV/c\({}^{2}\), evergreen candidates for many presumed anomalies, never detected; weakly interacting massive particles (WIMPs) [15, 16], with mass in the range 1-\(10^{3}\) GeV/c\({}^{2}\), predicted by the SUSY extension of the particle Standard Model (SM) and considered the top reference model almost up to the first null results from the LHC [17], with some monster extensions up to the Grand Unification scale (GUT) of \(10^{15}\) GeV/c\({}^{2}\), often called WIMPzillas [18], yet never detected; and finally, dark objects (MACHOs) with the size of a planet (\(\sim 10^{24}\) kg) or so [19], and primordial black holes, with huge mass, up to 5 \(M_{\odot}\) [21]. It is also worth mentioning the Mirror Matter model [22], conceptually different from WIMPs, but predicting the existence of dark matter candidates with mass comparable to that of visible chemical elements (1-100 GeV/c\({}^{2}\)).

Another problem is the cross section scale between visible and dark matter. Even if one assumes that the typical cross section of the weak interaction (\(\sigma\sim 10^{-44}\) cm\({}^{2}\), or so) is already within experimental reach, the range of possible interaction cross sections is actually really large. If one takes the squared Planck length as a lower bound, \(\sigma\sim 10^{-66}\) cm\({}^{2}\), there are 20 possible orders of magnitude, even though not all of them are actually testable. All the models discussed above basically agree on the _invisible_ (electrically neutral) nature of the dark matter candidate: astrophysical observations constrain its possible electric charge and self-interaction, and require temporal stability with a lifetime comparable to the age of the Universe [1].
Indirect dark matter searches, such as hidden channels in accelerators or annihilation/decay into visible diffused particle backgrounds in the Galaxy, have also yielded no results [1]. Table 1 summarizes the known physical properties of dark matter particle candidates. The Table is essentially empty, except for some naive properties inferred from indirect astrophysical observations. The absence of experimental evidence of dark matter in either direct or indirect searches has sometimes induced the literature to hide or rename some of the historical candidates, often with fancy names such as "axion-like" (ALPs) or "WIMP-like" [13], just because the originally proposed theory seemed not to hold any more in upgraded experimental contexts. Furthermore, there are often unconscious biases, like the belief that, if the dark matter is not found in the range of the WIMPs, it must be really important to search for some _light_ WIMP-like dark matter particle with mass below 1 GeV/c\({}^{2}\), down to a few tens of MeV/c\({}^{2}\). This kind of suggestion is of course much weaker if one considers that the theoretical prior on plausible models, from \(10^{-22}\) eV/c\({}^{2}\) to 5 M\({}_{\odot}\), is currently basically uniform.

The discussion about the details and the status of the direct dark matter search will proceed as follows. In Sec. 1, the reason why, according to the authors, direct dark matter detection is so popular, and increasingly supported and financed, is outlined; in Sec. 2, the basic ideas about the direct detection of dark matter are reviewed; in Sec. 3, important and often under-valued aspects of direct dark matter analysis are highlighted; in Secs. 4 and 5, the cases of NaI-based and noble gas (xenon and argon) detectors, respectively, are reviewed at length. Finally, in Sec. 6, the evolution of the dark matter search in the new millennium is critically reviewed and analyzed.
## 1 A simple theory

The direct dark matter search has received increasing consensus in recent decades also thanks to its simplicity, and hence to the (apparent) solidity of its foundations, invoking principles invented in the context of theology, such as _Occam's razor_. The possibility of such a detection is enclosed in simple equations, understandable at the level of an average high school student. Three basic formulas, concerning the dark matter hypothesis and its possible detection, are hidden among a list of seemingly simple equations.

\begin{table} \begin{tabular}{|c|c|} \hline Property & Value \\ \hline Composition & – \\ Statistics & – \\ Family & – \\ Generation & – \\ Interaction & – \\ Symbol & \(\chi\) \\ Antiparticle & – \\ Mass & \(10^{-22}\) eV \(\div\)5 M\({}_{\odot}\) \\ Half-life & \(\gtrsim 10^{10}\) y \\ Electric charge & \(\lesssim 10^{-7}\) e [at 1 GeV] \\ Self interaction & \(<0.5\) cm\({}^{2}\) [at 1 GeV] \\ Magnetic Moment & – \\ Spin & – \\ Weak isospin & LH: –, RH:– \\ Weak hypercharge & LH: –, RH:– \\ Others & – \\ \hline \end{tabular} \end{table} Table 1: Properties of the dark matter particle candidate to date. The choice of the symbol \(\chi\) is not universal, but it is assumed in the present article as a reference for a generic dark matter candidate. From the top: composition, Bose or Fermi statistics, particle generation or family, fundamental interaction, symbol, antiparticle, mass, half-life, electric charge, self interaction, magnetic moment, spin, weak isospin, weak hypercharge and other characteristics. Symbol “–” stands for _not available_.

Figure 1: Schematic view of the expected Keplerian fall. The behavior of the rotation curve \(v(r)\) (top) is shown both inside and outside the galaxy bulge, with its cross section displayed at the bottom.
Here is the list:

\[\nexists A:\aleph_{0}<\#A<2^{\aleph_{0}} \tag{1}\]
\[v=\sqrt{GM/r} \tag{2}\]
\[\vec{\nabla}\cdot\vec{\mathcal{F}}=0,\quad\vec{\nabla}\times\vec{\mathcal{F}}=i\frac{\partial\vec{\mathcal{F}}}{\partial t} \tag{3}\]
\[\mathcal{H}u=-\frac{\partial v}{\partial t},\quad\mathcal{H}v=\frac{\partial u}{\partial t} \tag{4}\]
\[E_{A}=\frac{4M_{A}M_{\chi}}{(M_{A}+M_{\chi})^{2}}E_{\chi} \tag{5}\]
\[x^{3}+x=1 \tag{6}\]
\[\mathcal{R}=n\sigma\Phi \tag{7}\]
\[\int_{\Omega}\mathrm{d}\omega=\int_{\partial\Omega}\omega \tag{8}\]
\[(i\hbar\gamma^{\mu}\partial_{\mu}-mc)\psi=0 \tag{9}\]
\[e^{i\pi}+1=0 \tag{10}\]

An expert reader will have immediately recognized that the relevant three equations are Eq. (2), Eq. (5) and Eq. (7). In the following Subsections, an explanation of each of the three Equations is discussed1.

Footnote 1: For the sake of completeness, the explanation of why the rest of the listed equations look simple, but are not, is hereby reported. The expression (1), both simple and deeply complicated, represents in symbols the Continuum Hypothesis, still partially the subject of discussion among modern mathematicians. The Eqs. in (3) represent the Maxwell equations in vacuum, obtained by taking the electric and the magnetic field as the real and the imaginary part of \(\vec{\mathcal{F}}\). Eq. (4) displays the Schrödinger equation, written this time with two real functions \(u\) and \(v\)! Equation (6) is really simple, but it has “complex” solutions, definitely not easy to find if one ignores some algebraic technicalities. Eq. (8), usually called the Stokes theorem within the \(k\)-form formalism, is probably the most beautiful theorem in advanced calculus, as it encloses the Fundamental Theorem of Calculus, the 3D Green and Stokes theorems, their 4D version in Riemann space-time and so on. Equation (9) is the famous Dirac equation, whose correct interpretation in Quantum Field Theory predicts the existence of antiparticles for Fermions. Finally, Eq.
(10), the Euler identity, often advertised by Richard Feynman for its beauty, is a complex number identity far more tricky than its apparent simplicity, as it involves the top five numbers in mathematics.

### Existence - Eq. (2)

A spiral galaxy contains about two thirds of its mass (\(M\)) in the galactic core (or bulge) [23, 24]. Assuming for the latter a spherical shape with spherically symmetric density \(\rho(r)\), the Gauss theorem predicts, outside the bulge, a gravitational pull \(\propto M/r^{2}\), where \(r\) is the distance from the center of the sphere and \(M\) is the enclosed galactic mass (see Fig. 1). More precisely, the profile of the visible matter depends on the galaxy type, but its specific realization does not invalidate this general argument. This force provides a centripetal acceleration \(\propto v^{2}/r\) for all objects gravitating around the galaxy center. Here one has implicitly assumed the Newtonian weak field approximation of General Relativity at galactic scales and speeds (this point has been recently debated, see [6], already anticipated above). The model described so far immediately implies that the velocity of stars as a function of \(r\) should follow the so-called "Keplerian fall" described by Eq. (2). But this is not what is observed [25]: rotation curves always lie largely above the expected behavior derived from the independent quantification of the visible galactic mass, from the mass-to-luminosity ratio of stars and from non-luminous (gaseous or solid) mass. Sometimes it is somewhat inaccurately said that observed rotation curves are _flat_. Those curves actually exceed the Keplerian fall, but with a family of universal curves that depends on the visible size of the galaxy, upon which the ratio between visible and dark matter also depends [23, 24].
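The expected Keplerian fall can be made concrete with a minimal numerical sketch of Eq. (2). All numbers below are illustrative assumptions (a point-like bulge of \(10^{10}\) M\({}_{\odot}\)), not fits to any real galaxy:

```python
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # solar mass, kg
M_BULGE = 1.0e10 * M_SUN   # illustrative point-like bulge mass (hypothetical)
KPC = 3.086e19             # one kiloparsec in metres

def v_keplerian(r_m, mass=M_BULGE):
    """Circular velocity (km/s) outside a spherical mass, Eq. (2): v = sqrt(GM/r)."""
    return math.sqrt(G * mass / r_m) / 1e3

# Outside the bulge the curve should fall as 1/sqrt(r):
for r_kpc in (2, 5, 10, 20):
    print(f"r = {r_kpc:2d} kpc -> v = {v_keplerian(r_kpc * KPC):5.1f} km/s")
```

Doubling the radius should reduce the velocity by a factor \(\sqrt{2}\); observed rotation curves do not show this decline, which is the core of the argument.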
The dark matter distributed in a spherically symmetric halo, present in the interstellar space inside each galaxy, fills this discrepancy between predictions and observations. There is considerable debate about the radial profile of the spherically symmetric matter distribution in the hypothetical dark matter halo, depending on whether it is inferred from numerical simulations of collisionless particle clustering or from the analysis of observed rotation curves. The majority of the proposed profiles can be described by the generic formula:

\[\rho_{s}(r)=\frac{\rho_{0}}{\sum_{i=0}^{3}a_{i}\left(\frac{r}{r_{s}}\right)^{i}}, \tag{11}\]

where \(\rho_{0}\) is a normalization constant and \(r_{s}\) is a characteristic size scale. According to the choice of the dimensionless coefficients \(a_{i}\), the profile can be more "cuspy" (as in the Navarro-Frenk-White model [7]) or more "cored" (as in the pseudo-isothermal or Burkert models [8]); or something different, but anyway traceable by approximation to Eq. (11), as in the Einasto profile family [9]. To conclude this Section, it is worth mentioning that, interestingly, the absence of the Keplerian fall has recently been questioned, at least in some interpretations of accurate observations of the Milky Way by GAIA [26, 27]. The correctness and the implications of such results have yet to be validated and fully understood.

### Kinematics - Eq. (5)

A typical dark matter candidate \(\chi\) of mass, say, \(M_{\chi}=50\) GeV/c\({}^{2}\), hitting a target (visible) nucleus \(A\) with mass of about \(M_{A}=50\) GeV/c\({}^{2}\) in a laboratory, has a typical velocity \(v_{\chi}\) in the galaxy of the order of a few hundred km/s (\(v_{\chi}\ll c\)). As a consequence, the kinematics of the \(\chi\)-\(A\) collision is no different from that of two billiard balls with masses \(M_{\chi}\) and \(M_{A}\) hitting each other (in a non-relativistic regime, with \(\beta=v_{\chi}/c\sim 10^{-3}\)).
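The billiard-ball energy transfer of Eq. (5) can be checked numerically for these figures. A minimal sketch, assuming a head-on collision and an illustrative speed of 230 km/s:

```python
C = 2.998e5  # speed of light, km/s

def recoil_energy_kev(m_chi_gev, m_a_gev, v_chi_kms):
    """Head-on recoil energy (keV) from Eq. (5):
    E_A = 4 M_A M_chi / (M_A + M_chi)^2 * E_chi, with E_chi = (1/2) M_chi beta^2."""
    beta = v_chi_kms / C
    e_chi_kev = 0.5 * m_chi_gev * 1e6 * beta**2           # 1 GeV = 1e6 keV
    fraction = 4 * m_a_gev * m_chi_gev / (m_a_gev + m_chi_gev)**2
    return fraction * e_chi_kev

# A 50 GeV/c^2 candidate on a 50 GeV/c^2 nucleus at a typical galactic speed:
print(f"E_A = {recoil_energy_kev(50, 50, 230):.1f} keV")
```

With equal masses the kinematic fraction is 1 and the head-on recoil comes out around 15 keV; averaging over scattering angles brings it down towards the \(\lesssim 10\) keV scale quoted in the text.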
Classical energy and momentum conservation leads to Eq. (5) in the case of a head-on collision; otherwise the reduction by the cosine of the scattering angle has to be included. The kinetic energy of the target recoil \(E_{A}\) is a _kinematic_ fraction of the dark matter kinetic energy \(E_{\chi}\). Based on the numbers provided above, calculations show that the expected recoil energy in a detector on Earth is of the order of \(\lesssim 10\) keV, and thus extremely difficult to detect, but not impossible with present technologies.

### Rate - Eq. (7)

Lastly, Eq. (7) represents the interaction rate \(\mathcal{R}\) expected in a given detector on Earth, exposed to the dark matter wind caused by the Solar System's relative motion inside the Milky Way: the rate is given by the target density \(n\) times the \(\chi\)-\(A\) cross section per nucleon \(\sigma\), typically assumed to be spin-independent (SI), times the dark matter flux \(\Phi\). In detail, assuming a Maxwell distribution of the dark matter particle velocities inside the gravitational potential "box" of the Galaxy, and given the experimental features of the detector, Eq. (7) has to be integrated over the velocity distribution, taking into account the detector energy threshold. The result then has to be convolved with the experimental resolution, normalized to the energy quenching for nuclear recoils, and finally scaled according to the detector acceptance [28]. In the SI case, the full formula can be summarized as

\[\frac{d\mathcal{R}}{dE}=\frac{\rho_{\chi}A^{2}\sigma F^{2}(E)}{2M_{\chi}\mu^{2}}\int\limits_{v_{\rm min}}^{v_{\rm esc}}\frac{f(v,v_{0})}{v}\mathrm{d}v\!\otimes\!\mathcal{G}(E) \tag{12}\]

where \(\sigma\) is the cross section per nucleon of the target \(A\), \(F(E)\) is the nuclear form factor, and \(\mu\) is the reduced mass of the target and the dark matter particle.
Assuming the standard (galactic) halo model (SHM) [29, 30], \(\rho_{\chi}=0.3\) GeV/cm\({}^{3}\) represents the local dark matter density in the Solar System, \(v_{\rm min}\) is the minimum detectable velocity (given the experimental energy threshold), \(v_{0}=220\) km/s is the circular rotation velocity, and \(v_{\rm esc}=544\) km/s is the Milky Way escape velocity, taken as the cutoff for the Maxwellian velocity distribution. The Sun's velocity is \(v_{\odot}=232\) km/s. Those parameters have been slightly updated recently, but such small variations go beyond the basic purpose of this article. It is worth noticing that the notion of the SHM could be criticized as non-realistic, and one can imagine a vast zoology of monster halos with local density anisotropies and streams: this can change the interpretation of results a little (as the rate normalization can change), but cannot create specific detection anomalies out of nothing. Finally, the convolution, \(\otimes\), with the energy dependent function \(\mathcal{G}(E)\) symbolically includes the experimental features, such as resolution, nuclear quenching and acceptance in the region of interest. Equation (12) can be integrated over the experimental energy window and made explicit as \(\sigma_{\rm SI}(M_{\chi})\). An experimental limit, _e.g._ the absence of events at 90% CL, typically looks like an asymmetric hyperbola branch, as depicted in Fig. 2 (greenish region): the branch on the left represents the experimental threshold wall, while the branch on the right corresponds to the loss of sensitivity due to the reduced number density of targets for heavier dark matter masses. An experiment typically has maximal sensitivity for \(M_{\chi}\simeq M_{A}\), corresponding to the minimum of the green curve.
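The role of the velocity integral in Eq. (12) can be sketched numerically. A minimal example with the SHM parameters above (midpoint integration of a truncated Maxwellian in the galactic frame; the Earth's motion through the halo is neglected, so only the qualitative trend is meaningful):

```python
import math

V0, V_ESC = 220.0, 544.0   # km/s, SHM parameters quoted in the text

def f_speed(v):
    """Un-normalised truncated Maxwellian speed distribution (galactic frame)."""
    return v**2 * math.exp(-(v / V0)**2)

def eta(v_min, steps=2000):
    """Mean inverse speed, the integral of f(v)/v in Eq. (12), normalised to 1."""
    dv = V_ESC / steps
    vs = [(i + 0.5) * dv for i in range(steps)]      # midpoint grid up to v_esc
    norm = sum(f_speed(v) * dv for v in vs)
    return sum(f_speed(v) / v * dv for v in vs if v > v_min) / norm

# Raising the detector threshold (larger v_min) suppresses the expected rate:
for vmin in (0, 200, 400):
    print(f"v_min = {vmin:3d} km/s -> eta = {eta(vmin):.5f} (km/s)^-1")
```

The integral, and hence the rate, falls steeply as the threshold rises and vanishes for \(v_{\rm min}>v_{\rm esc}\): this is the origin of the left-hand "threshold wall" of the exclusion curve.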
The violet region in the Figure corresponds to the so-called _neutrino floor_, _i.e._ the region in which neutrinos coming from the Sun, the atmosphere and the diffuse supernova background give nuclear recoils from coherent scattering similar to the expected interaction of a dark matter particle of a given mass. This actual experimental limitation can be overcome only by future experiments exploiting the "directionality" of the dark matter wind, _i.e._ its preferred direction along the galactic plane, due to the relative motion of the Sun with respect to the galactic center, against the neutrino background, assumed to be uniform and isotropic. This particular aspect will be further discussed in Sec. 6.

## 2 Detectors

A grand piano is constructed from durable and raw materials. Indeed, it is basically made of hard substances, such as wood, cast iron, steel and felt. A single string holds a tension of about 80 kg, and many of its 88 keys carry 2 or 3 strings, for a total tension of about 15,000 kilogram-force. However, only after a wise assembly and a proper (fine) tuning can this terrific musical instrument produce a very soft and profound sound, which one can appreciate for example in the famous Chopin Nocturnes. Similarly, a particle detector is made of hard and raw stuff as well: from metals, crystals or liquids and electronics, one can get answers to fundamental questions in particle physics and cosmology, but this process is not that straightforward, and a poor knowledge of the detector's functioning can deceive the smart experimentalist, and the brilliant theoretician who wants to jump too quickly to conclusions.

Figure 3: Example of a simple detector made of a scintillating crystals optically coupled to a PMT. The scintillation signal is converted into a series of photo-electrons producing an electrical pulse related to the time distribution of the scintillation light emission.

Consider a simple example. A typical particle detector is made of a scintillating target (solid or
liquid) coupled with a light sensor, for example a photo-multiplier tube (PMT) [31]. When an ionizing particle hits the target, a given number of electromagnetic field quanta are excited, and part of them (depending on the detection efficiency) collapse to form what is known as a "photo-electron" on the light detector. The corresponding electric pulse output is then shaped and amplified, eventually converted into a binary number, and finally stored on a computer hard drive. The corresponding data are retrieved and analyzed by numerical algorithms or passed to some artificial intelligence black box, see Fig. 3.

Figure 2: The greenish region represents the typical excluded region of a dark matter experiment with null result in the SI \(\sigma\)–\(M_{\chi}\) plot. The violet region represents the so-called neutrino floor. Finally, the red curve, with a cusp, represents a limit in the case of an annual modulation analysis, see Sec. 3.

A real detector is, of course, sensitive to internal and external radioactive backgrounds, as well as to hypothetical dark matter particles, whose foreseen amount plays a crucial role in the goodness of the proposed experimental setup. Furthermore, electronic (non-physical) noise, intrinsic and/or picked up from the environment, can mimic, to some extent, the signal produced by particle interactions. As a matter of fact, only a very deep knowledge of all the effects described above can enable a good investigation. And sometimes, as happens in most experiments, part of those effects are not known _a priori_ and can be addressed properly only after many years of calibrations, analysis and hardware improvements.
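The light-to-photo-electron conversion described above can be sketched with a back-of-the-envelope estimate. All numbers are hypothetical illustrations (a NaI-like light yield, and assumed collection and quantum efficiencies), not the parameters of any real detector:

```python
# Illustrative numbers (all hypothetical):
LIGHT_YIELD = 40.0   # scintillation photons per keV deposited (assumed)
COLLECTION = 0.5     # fraction of photons reaching the photocathode (assumed)
QE = 0.3             # PMT quantum efficiency (assumed)

def mean_photoelectrons(energy_kev):
    """Mean number of photo-electrons expected for a given deposited energy."""
    return energy_kev * LIGHT_YIELD * COLLECTION * QE

# A recoil-scale deposit of ~10 keV:
print(f"{mean_photoelectrons(10):.0f} photo-electrons")
```

A recoil of a few keV therefore yields only tens of photo-electrons, a counting regime where Poisson fluctuations and single-photo-electron noise matter, which is why the detector characterization discussed next is so critical.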
Even a seemingly simple device like a PMT can produce a wide variety of noise pulses that might not be distinguishable by a basic classifier, and it requires a deep characterization, sometimes using novel techniques based on multidimensional mathematical algorithms such as multivariate likelihood ratios or support vector machines, or even nonlinear methods based on boosted decision trees or multi-layer perceptrons, and machine learning in general. Finally, an optimal detector has properties capable of discriminating not only between signal and noise, but also between the physical characteristics of the primary interaction, such as between electron and nuclear recoils (ER-NR, hereafter), which is especially useful for addressing the nature of a possible dark matter candidate. In the example mentioned above, the discrimination could be enabled by the time distribution of the scintillation light. Typically, a classifier, _i.e._ a parameter defined through _e.g._ a likelihood ratio or an artificial neural network, shows a characteristic distribution depending in general on the particle energy and exhibiting a partial overlap (inefficiency) in the region of interest. Usually, after a deep training with known sources, an acceptance region, in which the dark matter candidate is expected, is defined (see Fig. 4 as an example). If a statistically significant group of events emerges over the expected background, one can reasonably claim evidence of new physics, which should later be confirmed by other equally sensitive experiments or complementary techniques.

Dark matter detectors typically exploit various phenomena to develop classifiers. An ionizing particle hitting a material can, in general, produce scintillation, ionization or phonon excitation. Many detectors, depending on the state and temperature of the target, can utilize one or more of these signals, offering broader capabilities in particle discrimination [32].
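A toy version of such a classifier can illustrate the trade-off between acceptance and leakage. All numbers are hypothetical: two Gaussian populations (ER and NR) with an acceptance cut at the NR median, as in Fig. 4:

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Purely illustrative classifier populations (all numbers hypothetical):
MU_ER, SIG_ER = 0.0, 0.25   # electron recoils peak at 0
MU_NR, SIG_NR = 1.0, 0.25   # nuclear recoils peak at 1

# Acceptance cut at the NR median: keep events with classifier value > cut
cut = MU_NR
nr_acceptance = 1 - gauss_cdf(cut, MU_NR, SIG_NR)   # 50% by construction
er_leakage = 1 - gauss_cdf(cut, MU_ER, SIG_ER)      # ER events leaking past the cut

print(f"NR acceptance: {nr_acceptance:.2f}, ER leakage: {er_leakage:.2e}")
```

Even with well-separated populations, a small but non-zero fraction of ER background leaks into the NR acceptance region; in a real analysis this leakage, together with the expected physical background, sets the interpretation of any outliers.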
The high standard of low background detection requires the choice of highly radio-pure target materials, operated in shielded and, possibly, actively vetoed detectors, located in underground laboratories, far enough from atmospheric muons and cosmogenic-induced radiation. These materials are usually not available on the market and require a long and accurate R&D program, sometimes with no predictable outcome.

Figure 4: Classifier distribution for NR (gray) from calibration and ER (blue) from a physics data collection. The acceptance region (red box) for NR candidates can be defined e.g. as the median of the NR distribution. The observed three events in the NR band are to be interpreted according to event/noise understanding and the expected physical background.

Lastly, an important experimental aspect to consider is nuclear quenching. A NR indeed releases only a portion of its actual kinetic energy due to non-radiative excitation. For that reason, the observable energy is less than the one released by an equivalent ER. This relative ratio is important for the final interpretation of the experimental result; results are therefore usually presented in electron-equivalent energy (_e.g._ keVee), and the quenching factor is given independently by dedicated calibrations with neutrons. Besides the counting of outliers in the expected acceptance region, direct dark matter detection in underground laboratories can also be pursued through the detection of an annual modulation. This signal arises from the relative motion of the Earth around the Sun with respect to the center of the Galaxy. In addition, tracking detectors sensitive to the directionality of the dark matter can be employed. With these basic concepts in mind, one can now move to the present experimental situation.

## 3 Analysis

Imagine that one gets a few (unexpected) points in the square box of Fig. 4. Can one claim a dark matter discovery based on that?
A wise response might be that the number and the distribution (_e.g._ in energy and space) of the expected background has to be declared in advance. For example, if an experiment observes 4 events out of 2 (expected), one may argue that a Poissonian fluctuation of 2 can likely give 4 in a reasonable number of cases. But if one gets 8 out of 2, the story becomes more intriguing, as a fluctuation of 2 can hardly return 8.

### Which statistics?

The way in which these naive words "likely" or "hardly" are converted into quantitative parameters is not settled by a universally accepted procedure, and there are multiple approaches based on different statistical interpretations, often with heated debates, like the ones between Frequentist (_e.g._ Feldman-Cousins [34]) and Bayesian [35] statisticians. Of course this is a fundamental and longstanding controversial debate that cannot be settled here, and sometimes the solution is not unique, but depends on the specific situation. Therefore, for an experimentalist it is very often more convenient to quote results (whether a measurement or a limit) in more than one approach, just to delegate possible controversial matters to others. The real challenge might be how to properly combine results coming from different approaches. However, it is also true that as long as results are just limits the process itself is not really harmful, and the nuances in results stemming from different philosophies are basically hidden by the line width in the \(\sigma\)-\(M_{\chi}\) exclusion plot. For real positive results, the problem could be more delicate.

### How many \(\sigma\)'s?

If some statistical procedure is assumed, it becomes important to establish how many sigmas (actually, the \(p\)-value over the background fluctuation) are necessary to claim a discovery.
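The "likely" and "hardly" of the Poisson example above can be made quantitative with tail probabilities; a minimal sketch:

```python
import math

def poisson_tail(n_obs, mu):
    """P(N >= n_obs) for a Poisson-distributed background with mean mu."""
    p_below = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs))
    return 1 - p_below

# Expected background of 2 events:
print(f"P(N >= 4 | mu = 2) = {poisson_tail(4, 2):.3f}")    # 'likely'
print(f"P(N >= 8 | mu = 2) = {poisson_tail(8, 2):.5f}")    # 'hardly'
```

Observing 4 or more events over an expected background of 2 happens about 14% of the time, while 8 or more occurs with probability of order \(10^{-3}\): the first outcome is unremarkable, the second begins to be intriguing.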
Among physicists, there is a common (questionable) practice of associating a naive meaning to a certain number of sigmas, such as: _mild indication_ (\(1\sigma\)), _indication_ (\(2\sigma\)), _evidence_ (\(3\sigma\)), _never quote that, it's bad luck!_ (\(4\sigma\)) and _discovery_ (\(5\sigma\)). The real problem here is that all of them are only indicators: forgotten systematic uncertainties and mistakes are always possible and may unexpectedly arise, and unfortunately there are plenty of such examples in the literature. Besides these folkloristic topics, the path to claiming a scientific discovery is much harder, as explained in the following Subsection.

### No background

Given that background fluctuations can deceive experimentalists, it would ideally be preferable not to have any background at all. In other words, when planning a detector, it is ideal for the expected background events to be significantly fewer than the expected signal, to avoid being misled by fluctuations. This has created the fashion of high-sounding names such as _background-free_ experiments, or even _zero-background_ experiments. What is the difference between "Cola Light" and "Cola Zero"? In reality, there is no difference: both contain chemical sweeteners instead of sugar, but in marketing names really matter. Some people might avoid a drink labeled as "Light", but would choose one labeled "Zero", as if facing the zero with no fear were not for everyone. The same concept applies to the background in dark matter experiments: it is primarily marketing. The honest name should be "low background", where the adjective "low" should come with some objective quantifier. In a popular, and absolutely interesting, review on neutrinoless double beta decay, a footnote reads [36]: "For a truly zero background experiment, one event is enough to claim a discovery.
In a similar fashion, encountering a unicorn is enough to claim its existence, provided that we have a template of a unicorn to which to compare the observed candidate." When the authors asked GPT-4 [37] to comment on this statement, its response was notably insightful. Here is its reply: "A single observation of an event can suggest that something exists, but in a scientific context, such conclusions are usually considered weak evidence. Scientists value repeatability and reliability of data, meaning that multiple, consistent, and independent observations are necessary to establish existence. In the case of a unicorn sighting, a single observation would likely not be considered sufficient proof of their existence. Scientists would need to examine the evidence, conduct additional observations, and demonstrate that these observations are indeed consistent and repeatable. For rare or extraordinary phenomena like unicorns, scientists may be more cautious and require even more evidence to accept their existence. In summary, a single observation can lead to the belief that something exists, but scientists typically rely on repeatable, reliable, and independent data. A lone unicorn sighting would likely be insufficient, and further research and observations would be needed to confirm their existence." If one ignores this crisp reasoning, one can be easily fooled and forget the basic principles of good science. Indeed, continuing with the metaphor, the unicorn is quite an unfortunate example, not only because of our prior about its existence, but also because there are plenty of alicorns (unicorn horns) in museums, dating from the Middle Ages [38], sometimes actually made of bone. Yes, but a lot of people, even today as in the Middle Ages, are unaware of the existence of narwhals. To conclude, what one sees in a detector is not a unicorn anyway, if by unicorn one means something completely weird and different from ordinary stuff.
In a real particle detector, what one records is nothing else than a pure electronic signal, digitized and analyzed as a discrete waveform, which resembles either physical events and/or noise artifacts more often than expected.

### Annual modulation

A "model-independent" approach, which in principle can ignore the nature of the \(A\)-\(\chi\) interaction, involves detecting an annual modulation signal in a shielded detector over a prolonged (multi-annual) exposure period. In this case, the ER-NR discrimination is not necessary, as one is interested only in the typical signature of such a signal. The Earth, indeed, revolves around the Sun at 30 km/s while the Solar System, tilted by \(60^{\circ}\) with respect to the galactic plane, moves altogether around the center of the Galaxy at about 232 km/s, as depicted in Fig. 5. As a result of this simple geometry, the expected signal is \[\mathcal{R}(t)=\mathcal{R}_{0}(t)+\mathcal{S}_{m}\cos(\omega(t-t_{0})), \tag{13}\] where \(\mathcal{R}_{0}(t)=\mathcal{S}_{0}+\mathcal{B}(t)\) is the total trend given by the sum of a constant part \(\mathcal{S}_{0}\), due to the _unmodulated_ dark matter component, and the detector background \(\mathcal{B}(t)\), which in principle can depend upon time, as _e.g._ in the presence of decaying radioactive contamination or time-varying cosmogenic background; \(\mathcal{S}_{m}\) is the modulation amplitude of the (expected) dark matter signal, basically given by the relative speed of the Earth with respect to the local dark matter velocity distribution in the Galaxy, and therefore of the order of \(30/232\times\cos(60^{\circ})\simeq 6.5\%\) of \(\mathcal{S}_{0}\); \(\omega\) is the annual angular velocity, \(2\pi/(365\,\mathrm{d})\simeq 0.0172\) rad d\({}^{-1}\); finally, \(t_{0}\) is the time (phase) corresponding to the maximum rate, _i.e._ to the date on which the Earth's and the Sun's velocities point in the same direction (June 2nd).
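Eq. 13 can be sketched numerically; the parameter values below are illustrative placeholders, not measured quantities:

```python
import math

# illustrative numbers in cpd/kg/keV (not measured values); t in days
S0, SM = 0.15, 0.01              # unmodulated component and modulation amplitude
W, T0 = 2 * math.pi / 365, 152   # annual angular velocity (rad/d); phase (June 2nd)

def expected_rate(t, background=1.0):
    """Eq. 13 with a constant detector background B(t) = background."""
    return background + S0 + SM * math.cos(W * (t - T0))

# the rate peaks on day 152 (June 2nd), half a year away from the minimum
rates = [expected_rate(t) for t in range(365)]
print(rates.index(max(rates)))  # -> 152
```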
If one builds up a radio-pure and shielded setup, acquires data for some years and performs a temporal regression analysis in which \(\mathcal{S}_{m}\), \(\omega\) and \(t_{0}\) are free parameters returned simultaneously in the expected ranges, one can in principle claim that such a signal is compatible with the presence of a diffuse dark matter gas in the interstellar space at the Earth's distance from the Galactic center. Indeed, Freese et al. [39] clearly state: "We argue that a modulation can itself be used as primary means of detecting WIMPs [more in general dark matter (A.N.)] (rather than as a confirmation) even in presence of a large background." This sentence is correct _for common sense_ but actually wrong both _theoretically_ and _practically_. First of all, it corresponds to the material implication "**IF** the dark matter exists **THEN** a modulation is visible", _i.e._ \[\textsc{dark matter}\rightarrow\textsc{modulation}, \tag{14}\] and so, looking at its truth table, the fact that one can see a modulation even if dark matter does not exist is still a valid possibility: one can argue, for example, that a time-varying background related to some possible seasonal (or seasonal-induced) signal is still possible [40, 41]; moreover, a time-varying \(\mathcal{B}(t)\) background, if not properly accounted for, can bias the final results [42], as discussed later. Then a robust modulation analysis has to show the consistency of all terms of Eq. 13, no one excluded, at the very least.

Figure 5: Diagram of the Earth and Sun velocity vectors with respect to the galactic plane.

Figure 6: Two limit cases of distortion: June 2nd (orange), with maximum relative velocity of the Earth with respect to the dark matter halo, and the opposite case on December 2nd (cyan). Finally, the red lines mark a possible experimental integration interval.
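The logical point about Eq. 14 can be made explicit with a trivial truth-table check:

```python
def implies(p, q):
    """Material implication p -> q."""
    return (not p) or q

# Eq. 14 reads: dark_matter -> modulation. The implication also holds
# when a modulation is seen but dark matter does not exist:
print(implies(False, True))   # -> True
# hence observing a modulation does not, by itself, establish dark matter;
# only dark matter without a visible modulation is excluded:
print(implies(True, False))   # -> False
```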
As a further example, it is also worth mentioning that the literature is full of apparent violations of the exponential law with annual modulation components in radioactive decays, opportunely criticized, see [43] and Refs. therein. Finally, in the case of annual modulation the limit in Fig. 2 has a cusp around the minimum (red curve). This is an artifact of integrating each time bin over a limited energy window. It is possible, indeed, that the distribution of the target recoil spectrum is distorted between June 2nd and December 2nd in such a way that the shape changes but the total area does not: in this case there is a specific dark matter mass for which the sensitivity is lost, see Fig. 6. The same accurate analysis also leads to the so-called _phase inversion_ phenomenon, happening in the very low energy region, which should be present in the experimental data but is sometimes not visible because the experimental threshold is too high with respect to the inversion point, which also depends on the specific dark matter mass.

### Blind analysis

The blind analysis, consisting for example in closing, completely or in part, the red box of Fig. 4, is a good practice, adopted by other disciplines such as medicine, to avoid biases in the analysis, especially in low-rate critical conditions. A collaboration that decides to apply this practice usually closes the box for the physics data taking [44, 45], training the event reconstruction and the selection criteria only on a subset of data, possibly not used in the final analysis. The collaboration then decides to freeze the data-set, and eventually to open the box. The scenario is significantly different when the "un-blinding" is done in public with journalists.

Figure 7: Genealogy of the NaI-based detectors. Besides COSINUS, which is planning to deploy cryogenic crystals with ER-NR discrimination, all the others are using the model-independent approach exploiting the annual modulation expected signal.
With a few exceptions [46], this is usually done behind closed doors instead, and nobody knows what happens inside. In this case one can only rely on the honesty and professionalism of colleagues.

### Data sharing

It is worth citing what Wikipedia [47] says about the important item of "data sharing": "When additional information is needed before a study can be reproduced, the author of the study might be asked to provide it. They might provide it, or if the author refuses to share data, appeals can be made to the journal editors who published the study or to the institution which funded the research." Raw data are usually not understandable for non-experts. However, all physically reconstructed events and procedures must be available. The dark matter community should, sooner or later, converge towards an open data policy, especially if large amounts of funding are going to be spent on reproducing some experiments.

## 4 The NaI case

Sodium iodide crystals doped with thallium, NaI(Tl), are largely used as particle detectors exploiting their scintillation properties. The light produced when an ionizing particle hits the crystal is usually detected by PMTs, based on a very well known and consolidated technology, which nowadays can be easily manufactured with highly radio-pure materials [44]. The possibility of using these crystals for dark matter direct detection was first explored by the DAMA collaboration (see [48] and Refs. therein for a detailed review). DAMA, since its smaller version, has been detecting a modulation signal compatible in period and phase with the one expected from the presence of the dark matter halo in the Milky Way, rejecting the absence of modulation at 12.9\(\sigma\) (20 annual cycles) in the 2-6 keVee energy interval.
These interesting results, already released by its first and less massive version at the turn of the new millennium, became a media event and pushed, to some extent, the direct dark matter search in the first decade of the 2000s, as will be detailed in Sec. 6. The signal detected by DAMA is significantly incompatible, in the framework of the SI interaction with the SHM, with the absence of a corresponding signature in detectors with higher sensitivity, such as the ones based on noble gases that will be discussed in Sec. 5, and with other detectors exploiting a big variety of different techniques, briefly reviewed in Sec. 5.4. In other words and in practical terms, the DAMA signal is so intense that the same WIMP-like interactions should be visible in a cup of liquid xenon. Instead, xenon-based detectors, as discussed later, are currently operating multi-ton targets with no positive results. The remote explanation that the sodium and iodine nuclei have some "special feature", not met by other target nuclei, has anyway strongly motivated the scientific community to reproduce the DAMA experiment using similar NaI crystals independently, as will be discussed later. Figure 7 shows approximately the genealogy of experiments born after DAMA to accomplish this goal. At the moment, COSINE-100 [49] (originated by the merging of KIMS-NaI [50] and DM-Ice [51]) and ANAIS-112 [52] are the only two experiments already taking data for dark matter search. SABRE [53] and PICOLON [54] are taking data in an R&D stage, basically to prove the detection principle and quantify the intrinsic contamination. COSINUS [55] is the only detector using NaI crystals with a bolometric technique, thus exploiting both scintillation and temperature signals for ER-NR discrimination. After some preliminary results on small crystal samples [56], the COSINUS collaboration is building a bigger, complete setup. In the following, the major experiments in this genealogy are discussed.
Finally, the interpretation of the NaI results in terms of the SI solution on the \(\sigma\)-\(M_{\chi}\) parameter space could depend on the quenching factors for the Na and I nuclei separately, which could in principle depend on the specific crystal. There is a lot of debate on this issue and many controversial measurements of those important parameters [57].

### The DAMA experiment

The impressive radiopurity of the DAMA crystals, as low as one count per day per kg per keV (cpd/kg/keV), with a threshold of \(1\div 2\) keVee, made it possible to reach a sensitivity on the \(A\)-\(\chi\) cross section better than \(10^{-42}\) cm\({}^{2}\) for the model-independent annual modulation in the energy region \(\lesssim 10\) keV. However, the typical NaI(Tl) light yield of the order of \(\sim 10\) PE/keV does not allow for a statistically significant ER-NR discrimination. The DAMA collaboration had been operating a 100 kg detector in the early phase called DAMA/NaI. The target was replaced with 250 kg of high-purity NaI crystals in the subsequent Phase-I and Phase-II. In the latter, the energy threshold was lowered from 2 keV to 1 keV. The DAMA collaboration is currently operating an empowered Phase-II with a threshold as low as 0.5 keV, with the aim of adding a further extremely low energy point to the analysis. This region is extremely important as it could show the phase inversion described above, not yet present in the recoil spectrum published so far by the DAMA collaboration. The modulation measured by DAMA is \(\mathcal{S}_{m}\simeq 0.01\) cpd/kg/keVee in the 2-6 keVee energy interval, extracted through a time fit of the residual single-hit (non-coincident) rate as reported in Fig. 8 of Ref. [48].
If one considers that this quantity represents only 6.5% of the unmodulated dark matter component, one can naively estimate, by scaling, that in the same energy window \(\mathcal{S}_{0}\simeq 0.15\) cpd/kg/keVee, smaller than the total rate, _i.e._ \(\lesssim 1\) cpd/kg/keV. As from Fig. 8, the modulation reported by DAMA is extracted from the experimental residual rate of the single scintillation hits, _i.e._ \(\mathcal{S}(t)-\hat{\mathcal{S}}(t)\), where \(\hat{\mathcal{S}}(t)\) is the _detrend_ function. The detrend function used by DAMA is a piecewise function made with the average annual cycles of the total rate for each crystal. It is worth noticing that this method applies without bias if and only if the total single-hit rate is constant; in all other cases, in which there is an explicit dependence of the total rate upon time, amplitude and phase are biased, as described in [42]. Basic signal processing theory, indeed, warns that injecting a periodic manipulation into a time series makes the injected frequency itself appear in the final periodogram. The total and explicit rate \(\mathcal{S}(t)\) is never reported by the collaboration (see always [48] and Refs. therein), even if, from the same publications, it is evident that this rate is different in the two experimental phases, is time dependent because of the presence of decaying contaminants, and is presumably affected by many discontinuities because of hardware operations.

Figure 8: Experimental residual rate of the single-hit scintillation events measured by DAMA/LIBRA Phase-I and DAMA/LIBRA Phase-II in the (2-6) keV energy interval as a function of time. The superimposed curve is Eq. 13, with the period fixed to one year, the phase fixed to 152 days (June 2nd) and the amplitude equal to the central value obtained by the best fit on the data [48].
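The bias induced by a piecewise annual-cycle detrend on a non-constant rate can be illustrated with a minimal simulation (all numbers are hypothetical): a purely decaying background, with no modulation injected, acquires a spurious annual component once detrended cycle by cycle.

```python
import math

T = 365  # days per annual cycle

def decaying_background(t, b0=2.0, tau=3000.0):
    """Non-modulated, exponentially decaying rate (hypothetical numbers)."""
    return b0 * math.exp(-t / tau)

# daily rate over 5 annual cycles, no modulation injected
rate = [decaying_background(t) for t in range(5 * T)]

# DAMA-like piecewise detrend: subtract the mean of each annual cycle
residual = []
for c in range(5):
    cycle = rate[c * T:(c + 1) * T]
    mean = sum(cycle) / T
    residual.extend(x - mean for x in cycle)

# least-squares projection of the residual onto the annual frequency
w = 2 * math.pi / T
a = 2 / len(residual) * sum(r * math.cos(w * t) for t, r in enumerate(residual))
b = 2 / len(residual) * sum(r * math.sin(w * t) for t, r in enumerate(residual))
amp = math.hypot(a, b)
# a clearly nonzero annual amplitude appears, although none was injected
print(f"spurious modulation amplitude: {amp:.3f}")
```

Within each cycle, the detrended residual of a decaying rate is approximately a linear ramp repeating every year, whose Fourier component at the annual frequency is nonzero; a time fit would then return a fake modulation with a well-defined phase.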
One is not saying that the DAMA signal is completely an artifact of an incorrect analysis; one is only suggesting that the result in phase and amplitude could be biased, and consequently that the interpretation in terms of dark matter may not be exactly correct.

### Reproducing DAMA

At present, SABRE is the only R&D project that was able to manufacture high-purity crystals with a counting rate close to 1 cpd/kg/keV, comparable with DAMA [60], and it plans to deploy two different detectors in the two Earth hemispheres, to factor out all possible systematics due to unaccounted-for seasonal effects. The PICOLON R&D also plans to deploy a target of 250 kg of NaI crystals in the forthcoming years. Finally, the COSINUS collaboration has proved that the combined scintillation and temperature signals in the bolometric usage of NaI can provide the ER-NR separation, and it is building the first complete setup for a counting experiment with a relatively small crystal. Furthermore, it plans, after the first data taking, to increase the target and exploit the annual modulation analysis as well. In summary, the NaI case is not solved yet. The experiments trying to reproduce DAMA have not yet definitively clarified this debated result; therefore the funding agencies should strongly support all the attempts to independently clarify the question as soon as possible, with substantial resources and manpower. If there is a physical explanation for the DAMA signal, this explanation would in any case be highly interesting.

## 5 Noble gases

At present, noble gas based experimental setups are the most promising dark matter detectors in terms of sensitivity in a wide dark matter mass range, from 1 GeV/c\({}^{2}\) to 1 TeV/c\({}^{2}\) [44, 45, 61, 62, 63], and recently even for masses lower than 1 GeV/c\({}^{2}\) [65, 44].
Liquid targets made of these special elements can reach a high level of radiopurity and, thanks to their scalability in terms of target mass, represent a realistic technology and a candidate for the ultimate experiment in dark matter search, capable of reaching the optimal sensitivity, as will be discussed in Sec. 6. Among the known noble elements, argon and xenon are the sole gases permitting a feasible realization in terms of reliable technology and sustainable costs. Both gases can be exploited in single-phase (liquid) detectors or double-phase (liquid and gas) time projection chambers (TPCs). Table 2 summarizes the main properties of the two noble elements, which make the two technologies complementary in terms of _pros_ and _cons_. All details will be discussed in the two following subsections, dedicated to xenon and argon, respectively. Figure 9 sketches the genealogy of both technologies, showing weak (dashed) and strong (solid) relationships in terms of collaborators and/or merging of the corresponding experimental groups, while the gray blocks represent future projects. Basically, the two technologies have a common origin in large noble gas neutrino detectors, such as the ICARUS project [67, 68]. Two distinct branches originated from some preliminary R&D projects: the argon and xenon communities, even if this nomenclature is not formally shared among all collaborators. The genealogy includes neither the DAMA/Xe project [69], a small single-phase scintillation detector not upgraded by the collaboration, nor XMASS [70], another single-phase xenon-based detector that has anyway set a fair limit on WIMP-like dark matter. Some of the experimental aspects of the two approaches will be discussed below. Noble gases can be used in single-phase (liquid) detectors or in double-phase (liquid and gas) TPCs. In the first case, the sole scintillation signal (S1) can be used for position reconstruction and pulse-shape discrimination, where possible.
In the second case, for each primary interaction in the liquid target, two signals are exploited: scintillation in the liquid (S1) and electro-luminescence in the top gas pocket, due to ionization electrons accelerated and extracted by strong electric fields (S2). Exploiting both S1 and S2 can help in volume fiducialization, background rejection and particle identification when S1 alone, as in the case of xenon, is not capable of performing the ER-NR separation. A single-phase detector, indeed, is easier from a technical point of view, but in general does not allow for a high performance in event reconstruction. S1 and S2 are usually correlated through the ion recombination process; therefore, to improve the performance, the energy estimator is defined as a linear combination of the two, whose parameters are calculated from accurate energy calibrations. A scheme of the noble gas detectors in single and double phase is depicted in Fig. 10. Recently, the possibility of using an ionization-only signal (S2-only) has been exploited by both argon and xenon detectors, allowing one to set a limit, after an accurate low-energy background modeling, and considering the shape of Eq. (12) for the expected dark matter signature. These technologies have confirmed a competitive performance also for the detection of light dark matter with mass below 1 GeV [65, 66].

Figure 9: Genealogy of the noble gas detectors. Starting from a common origin, the xenon (green) and argon (red) communities split into two main branches. The dashed lines point to some weak connection between different projects, while the solid lines represent the natural evolution of different detector scales. Finally, the gray diagrams represent future projects. The xenon community is divided into three main projects: ZEPLIN, XENON, and PandaX. The first two will possibly evolve into a common final project. The argon community, which started with three projects (ArDM, WArP and DarkSide), is currently reunited in the common project DarkSide-20k.
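In xenon TPCs, the linear combination mentioned above usually takes the Doke-style form \(E=W\,(S1/g_{1}+S2/g_{2})\), with \(W\simeq 13.7\) eV per quantum and gains \(g_{1}\), \(g_{2}\) obtained from calibration; a minimal sketch with purely illustrative gain values:

```python
W_KEV = 13.7e-3  # keV per quantum (photon or electron) in liquid xenon

def combined_energy(s1_pe, s2_pe, g1=0.12, g2=20.0):
    """Doke-style combined energy estimator. The gains g1 (PE per photon)
    and g2 (PE per extracted electron) are illustrative placeholders;
    real values come from energy calibrations."""
    n_photons = s1_pe / g1
    n_electrons = s2_pe / g2
    return W_KEV * (n_photons + n_electrons)

print(f"{combined_energy(12, 2000):.2f} keV")  # -> 2.74 keV
```

The anti-correlated recombination fluctuations between light and charge largely cancel in the sum, which is why the combined estimator has a better energy resolution than S1 or S2 alone.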
### Xenon

Xenon has a high atomic number, and thus an optimal self-shielding from external background. Xenon-based detectors can search for both spin-dependent and spin-independent dark matter interactions. Even if the nuclear form factor is not that favorable [28], the xenon-based detectors can be a factor 5-7 smaller than argon-based detectors, mostly because of the factor \(A^{2}\) in the cross section. Because of the relatively higher boiling point (165 K), xenon does not present the technical issues typical of electronic components at very low cryogenic temperatures: PMTs and the electronics chain usually work smoothly with high performances. Even though the scintillation wavelength is in the near-UV region (178 nm), the corresponding light can be easily detected by commercial photo-cathodes. It does not require wavelength shifting, resulting in a high performance in event reconstruction. Since the two decay components of the scintillation light are very close to each other, S1 alone is not able to discriminate between ER and NR, so the ratio S2/S1 is used instead. Typically, the discrimination power of this classifier is about one in 300. For this reason, xenon-based detectors require a high accuracy in background control through deep purification systems and a high accuracy in material screening and selection. If not, there is a real risk of saturating the detector sensitivity in a very short exposure and creating puzzling results. Investing in S1-only discrimination with high-sampling-rate digitizers, of the order of 1 GSa/s, could anyway be worthwhile: even a mild preference in the NR-ER discrimination could be enough to improve the separation when combined with S2/S1 and other observables in a multivariate approach, but at present no progress has been made in this direction.
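The naive \(A^{2}\) gain of xenon over argon quoted above can be checked directly with the atomic weights listed in Table 2; nuclear form factor suppression at finite momentum transfer reduces this factor of about 11 to the quoted 5-7:

```python
# atomic weights of xenon and argon (as listed in Table 2)
A_XE, A_AR = 131.9, 39.9

# coherent SI enhancement ratio, ignoring form factors
ratio = (A_XE / A_AR) ** 2
print(round(ratio, 1))  # -> 10.9
```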
In xenon detectors it is easy to remove volatile radioactive contaminants such as \({}^{85}\)Kr, but it is more difficult to remove radon, which has a comparable atomic mass, even if a lot of progress has recently been made in distillation techniques [44]. Figure 12 reports the most updated comparison between the leading dark matter detectors LZ, XENONnT (5.9 ton) and PandaX-4t (3.7 ton), with targets of the same order, as reported in [44]: LZ, originated from the merging of the groups ZEPLIN and LUX, is a double-phase TPC, in operation since 2022 with a target of 5.5 ton [63]. XENONnT, coming from a long preceding history of versions with increasing mass (XENON10, XENON100 and XENON1T), has shown an unprecedented low background level [71] and is currently taking data with a 5.9 ton target. PandaX-4t is an independent collaboration that, after different preceding versions, is currently operating a 3.7 ton version [62]. From the experience of the experiments described above, a gradual difficulty in the scaling process is evident: a working prototype can prove the detection principle, but cannot anticipate the increasing technical complications coming from the increase of the target mass and the detector volume. Only a step-by-step scaling, with intermediate stages, can guarantee a solid progression of the project and a success against a highly probable failure. The current generation of projects can reach a sensitivity very close (even though a logarithmic decade above) to the neutrino floor.

Figure 10: Scheme of the noble gas signals S1 and S2 in double-phase TPCs. A collision of an ionizing particle with the noble gas (dashed brown) produces both scintillation light and ions. In the presence of an electric field, part of the ionization electrons does not recombine (recombination would produce additional S1 light) but is extracted and produces the secondary signal (S2). As a consequence, S1 and S2 are partially correlated.
All collaborations are planning future projects with targets bigger by an order of magnitude (30-50 ton), to reach a sensitivity that will basically touch the neutrino floor, as ultimate experiments in this research field. In particular, PandaX is moving independently towards a multi-ton detector, while LZ and XENON (with a possible middle-scale DARWIN [72]) are discussing a possible joint venture in a project called XLZD [73].

### Argon

Argon is lighter than xenon, therefore the self-shielding is less effective. Its boiling point is 87 K, with corresponding technical difficulties such as the ones observed in the PMT electronics at this temperature [74] (even if this argument is a kind of myth: the failure of a specific PMT batch does not mean that a possible R&D with the PMT producer would not have solved the observed issues). The scintillation light at 128 nm hardly matches the photo-cathode sensitivity; for this reason a wavelength shifter (such as TPB, emitting at 420 nm [74]) is typically used, with a corresponding degradation of the event reconstruction due to the extra diffusion of light. Nevertheless, argon shows an excellent ER-NR discrimination using S1 only: a very large separation (three orders of magnitude) between the fast and slow scintillation components permits one to discriminate ER from NR at the level of one in ten billion [61]. It is worth mentioning that the argon nucleus has spin equal to zero, therefore the spin-dependent search cannot be performed, and the non-relativistic expansion of all possible relativistic operators can be done only for a reduced subset [75]. Contrary to xenon, argon coming from the atmosphere is highly contaminated with the long-lived beta emitter \({}^{39}\)Ar. For this reason, part of the argon community has moved towards the usage of deep underground argon, in which this contaminant is reduced by a factor of better than 1000 [76]. For the reasons described above, the argon-based detectors have a different story.
Two independent lines, indeed, have emerged over the years: single-phase detectors such as DEAP-3600 [61] and double-phase TPCs such as DarkSide-50 (and marginally ArDM [78]). DEAP, operating 3.6 tons of atmospheric argon, has set the strongest limit to date for this technology, showing its intrinsic limitations. The double-phase liquid argon TPC has a more complex history. Before DarkSide-50 with atmospheric argon (A-Ar) [74], a smaller prototype of WArP had the only result for this technology, using 2.3 liters of A-Ar [78]. This limit was further improved by the first data with underground ultrapure argon (U-Ar), distilled from deep Earth mantle CO\({}_{2}\) [76]. After this result, DEAP-3600 currently holds the best limit for argon-based detectors (in single phase). DarkSide-50, DEAP-3600 and ArDM are now joining a common project called GADMC (Global Argon Dark Matter Community) [79]. The first instance of this joint venture is DarkSide-20k, a giant TPC containing about 50 ton of U-Ar, currently under construction [80]. Furthermore, the GADMC is also planning a futuristic version with 300 tons of active target mass, called ARGO-300. DarkSide-20k presents a lot of challenging aspects and many technological novelties compared with the existing TPCs for dark matter: first, the use of SiPM-based photo-detector modules instead of standard PMTs to overcome the issue of the cold electronics, with quite a few critical issues discussed in the literature [81, 82]; second, the extraction and distillation of more than 50 ton of U-Ar, and the preservation of its radio-purity; third, the realization of a very big TPC with many challenges for high voltage, purity and event pile-up handling; fourth, the use of multi-ton acrylic vessels; finally, an A-Ar veto with acrylic doped with gadolinium.
Even in the case of DarkSide-20k, it is not clear why the funding agency panels have not supported intermediate-scale detectors (_e.g._ at the 1-ton scale) with the intermediate physics goal of exploring the light dark matter masses, pushing instead for something bigger, just to fill the gap in a phantom competition with the xenon-based detectors. This choice is anyway questionable: even if the argon technology was left behind the xenon one, the unknown behavior in terms of radiopurity of multi-ton xenon-based detectors is worrisome enough to justify an alternative, even smaller but solid, with the high background rejection capability featured by argon. In conclusion, as a matter of fact, the xenon-based technology has always been considered full of critical issues, but in the real world it continues to be the most advanced branch of the noble gas based detectors. Liquid argon, on the contrary, is struggling to keep up, and future stages are not completely clear.

### Solar neutrinos

Before moving ahead, it is worth mentioning that the multi-ton noble gas detectors have some promising by-product purposes. The very low achievable background, large target mass and high scintillation light yield can make them optimal detectors for precision measurements of the solar neutrino fluxes coming from the proton-proton chain and the CNO cycle [83], the two processes responsible for the hydrogen fusion in the Sun. The current precision measurements, produced by detectors such as GALLEX/GNO [84], Super-Kamiokande [85], SNO [86] and especially Borexino [87, 88, 89, 90, 91], have helped to better understand solar physics and neutrino oscillations.
However, further improvements in precision can address plenty of other open problems, such as the solar metallicity abundance [90], the tension on the solar \(\Delta m^{2}\) with reactor experiments [1], the precise constraint of the total solar luminosity in the low-energy spectrum as a probe of extra sources of energy in the Sun (such as, indeed, dark matter decay), the search for solar axions [92], and non-standard neutrino interactions as a smoking gun of new physics beyond the SM [93]. In other words, trying to build large dark matter detectors with some multi-purpose possibility, such as solar neutrino and also neutrinoless double beta decay detection, could in principle be reasonably acceptable.

\begin{table} \begin{tabular}{|c|c|c|} \hline Property & Argon & Xenon \\ \hline \(Z\) & 18 & 54 \\ \(A_{r}\) & 39.9 & 131.9 \\ \(\rho\) & 1.4 g/cm\({}^{3}\) & 3 g/cm\({}^{3}\) \\ \(T_{B}\) & 87 K & 165 K \\ \(\lambda\) & 128 nm & 178 nm \\ \(\tau_{\rm fast}\) & 6 ns & 4 ns \\ \(\tau_{\rm slow}\) & 1600 ns & 25 ns \\ LY & 40 PE/keV & 46 PE/keV \\ ER–NR Classifier & S1(t) & S2/S1 \\ \hline \end{tabular} \end{table} Table 2: Comparison between argon and xenon in dark matter detectors. From the top: atomic number, atomic weight, density, boiling point, scintillation wavelength, singlet decay, triplet decay, light yield, ER–NR classifier.

Figure 11: _Left:_ single-phase detector in which the light emitted in the scintillation process is collected by PMTs instrumented all around the detector. _Right:_ double-phase TPC. A primary scintillation signal (S1) is produced in the liquid. The drift electric field between the cathode (K) and the grid (G) moves the ionization electrons upwards, finally extracted by another electric field between the grid and the anode (A). The accelerated electrons in gas produce a second and stronger light signal (S2).
### Others

Besides NaI-, xenon- and argon-based detectors, there are plenty of other experiments and R&Ds using a big variety of target nuclei and techniques. Some of those, which are less sensitive to WIMP-like particles (or sensitive only to light masses), are not discussed for the purposes of the present article. It is interesting to mention some of them, which are playing an important role in the direct dark matter search: CRESST using a CaWO\({}_{4}\) target [94], CDMSLite and SuperCDMS [95] using germanium, DAMIC using silicon [96], PICO-60 using C\({}_{3}\)F\({}_{8}\) [97], and NEWS-G using neon [98]. If one may think that DAMA, made of sodium (light nucleus, \(A\simeq 23\)) and iodine (heavy nucleus, \(A\simeq 127\)), cannot be compared to xenon (heavy nucleus, \(A\simeq 131\)) because the origin of the annual modulation comes from the interaction with sodium rather than iodine, one should also think about what happens with the big variety of other atoms, used by other experiments, with null results. And they are made of carbon (\(A\simeq 12\)), oxygen (\(A\simeq 16\)), fluorine (\(A\simeq 19\)), calcium (\(A\simeq 40\)), argon (\(A\simeq 40\)), tungsten (\(A\simeq 183\)), germanium (\(A\simeq 73\)), silicon (\(A\simeq 28\)) and neon (\(A\simeq 20\)). Results do not easily reconcile even in the case of spin-dependent (SD) interactions, as _e.g._ xenon contains approximately the same percentages of isotopes with nuclear spin 0, 1/2 and 3/2. Considering the full list of nuclei tested so far in the framework of the SI (and also SD) interaction with the SHM, it is extremely hard to believe that sodium plays such a special role among all the elements in the periodic table.

## 6 Evolution of results

A good way to explore the evolution of the results on the direct detection of dark matter is to read the dark matter reviews reported by the Particle Data Group (PDG) [99].
Downloading old versions from 1996 to the latest update (2022) (see [100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 1] and Refs. therein), one can see that the Review has dedicated a larger and larger number of pages to the subject, passing from about 4 to more than 30 in about 25 years. This fact, of course, is not only related to the increasing interest in dark matter, but also to the overall increase of the space dedicated to physics reviews in the PDG. A correct comparison should be somehow normalized. The first SI \(\sigma\)-\(M_{\chi}\) plot appeared only in 2010 and it was updated in 2011 [107]. In this paradigm, one can immediately see the strong tension of the two DAMA solutions (as interpreted in the SI framework by [113]), made of two islands (one for the sodium solution and one for the iodine solution), with the other experiments, such as XENON-100, EDELWEISS and CDMS-Si. Along with DAMA, another small-scale detector based on germanium and called CoGeNT [114] showed a similar solution close to the DAMA sodium region. In the same plots, the region appears in which possible SUSY candidates can potentially be discovered, as hidden channels, at the LHC. In 2012 nothing really changed, but in the 2013 update [108] (see Fig. 13), persisting also in 2014 [109], new islands appeared, very close to DAMA: both CDMS-Si and CRESST had an excess over the predicted background.
Figure 12: Comparison of the three leading xenon-based dark matter detectors, as reported in [44], which is the publication referred to as “this work”.
All these positive results appeared singularly close to the Higgs boson discovery in July 2012 at the LHC [115, 116]. At the same time, the absence of evidence of SUSY candidates started to be digested by the scientific community, but this was still not widely consolidated [117]. In 2015, CRESST-II observed no excess, and then the previous islands due to CaWO\({}_{4}\) were removed in the updated PDG [109]. 
In 2016 nothing really changed, except for some improvements on previous limits by many experiments [110]. In the 2017 update [110], following null results from further upgraded versions of CoGeNT, the corresponding island was removed, and new limits, such as PICO-60 and DEAP-3600, were added. In the 2018 update [111], coinciding with important releases from ANAIS-112 and COSINE-100, the DAMA islands were removed from the PDG. Notice that the DAMA SI islands are almost never presented in the official DAMA publications, and recently survive mainly in the publications of the NaI-based experiments trying to reproduce DAMA. Hereafter, up to the last update in 2022 [1], the PDG SI \(\sigma\)-\(M_{\chi}\) plot reports only upgraded limits (see Fig. 14), closer and closer to the neutrino floor, for both low and high masses. Recently, since the statistical fluctuations of the neutrino floor can become important in the present and next generations of dark matter experiments, the name has been changed to _neutrino fog_ or _mist_, considering the real impact of how this expected background grows with the experimental exposure [118]. The common practice of focusing on the SI \(\sigma\)-\(M_{\chi}\) plot has received some criticism. One may think that comparing all experimental results in the same SI \(\sigma\)-\(M_{\chi}\) plot is not a comprehensive and accurate way to address the dark matter problem, and is only a generally subjective, limited and imprecise exercise. One can reasonably accept this criticism, but it still remains unexplained how a unique positive result (from DAMA) can be compatible, independently of the model, with tens of other null results from experiments of comparable or larger sensitivity, even using similar nuclei, as discussed above. Those experiments are not detecting any positive signal anyway, regardless of whether nature has chosen an SI interaction, or whatever else, for visible and dark matter particles. 
As has happened many times in the history of physics, the research is going through a hard period in which knowledge is stuck and experiments are becoming more and more challenging and expensive.
Figure 13: Cross section as a function of the WIMP-like mass in the SI framework, as reported in PDG 2013. In the same plot, limits (solid curves) and positive results (islands with a given CL) are reported for various experiments, see text.
Figure 14: Cross section as a function of the WIMP-like mass in the SI framework, as reported in PDG 2022. Only limits (solid curves) are reported by various experiments, see text.
The whole scientific community is moving by inertia, after the thrust of a strong theory that is now left behind and becomes every day fuzzier and fainter. Moreover, this motion proceeds so smoothly that someone could not even realize how he ended up in it. Furthermore, the belief is so strong that people are already thinking about a future made not only of bigger and bigger detectors capable of reaching the neutrino floor, but also of a huge detector capable of driving into the neutrino fog by exploiting the directionality of the dark matter. The latter in particular looks very futuristic, given the present status of results. From recent history, it is clear that there was some original enthusiasm, supported by a very appealing theory such as SUSY, and close to the Higgs discovery there was a cluster of experimental "excesses", led by the DAMA result, which has probably amplified the expectation that dark matter particles could really be detectable. It is anyway commonly accepted that reaching at least the neutrino floor is a kind of "moral duty" for the scientific community, before completely trashing WIMPs and all WIMP-like paradigms all at once, in favor of other particle or non-particle solutions to the problem of the missing mass of the Universe. 
Finally, to conclude with another interesting sociological case, one can recall when XENON1T published a presumed excess above the threshold, as reported in Fig. 15 [120], later not confirmed by XENONnT with a lower overall background [71]. The collaboration was also warning that such a spectrum could even come from an unaccounted-for tritium beta spectrum. Regardless of whether or not some trained eyes see these two outlier points above the event threshold as an excess, it is curious that these two points have received about 600 citations to date, mostly from theorists and phenomenologists. What can one do as a particle physicist? The choice of spending some more time on this kind of search will surely pay off, as advances in knowledge and technological implications are granted by basic research. Anyway, one should not exaggerate, because if the main purpose of an experiment is its secondary goals, then one could propose to search for whatever non-falsifiable theory, and this would turn science into a practical paradox. ### A Drake equation The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy [121]. Of course, this equation is more useful for "understanding" rather than "quantifying", as its parameters are affected by large uncertainties and sometimes supported by naive arguments. One can imagine following the same approach for the probability \(\mathcal{P}_{\chi}\) that dark matter is made of WIMP-like (or light WIMP) particles and can be detected by experiments on the Earth. A possible ansatz, containing the main terms, is: \[\mathcal{P}_{\chi}=f_{\rm e}\cdot f_{\rm s}\cdot f_{\rm th}\cdot f_{\rm exp}\cdot f_{\rm det}\cdot f_{\Omega} \tag{15}\] where: \(f_{\rm e}=\) fraction of the full mass interval available to dark matter candidates of this kind. 
Figure 15: Excess of events above the threshold reported by XENON1T, compared with their background model (red).
As there is no actual reason to prefer one candidate over another, one can estimate this ratio as 5/89, where 5 corresponds in logarithmic scale to the 0.1-10000 GeV/c\({}^{2}\) interval over the full range, from axions to massive primordial black holes. \(f_{\rm s}=\) fraction of the possible cross section range available for a WIMP-like interaction. If one considers the full range, from the already reached (\(\log(\sigma)=-46\)) down to the squared Planck mass (\(\log(\sigma)=-66\)), i.e. 20 orders of magnitude, and that this kind of candidate cannot be probed much below the neutrino floor (\(\log(\sigma)=-50\)) for experimental reasons, this factor has to be taken as 4/20. \(f_{\rm th}=\) probability that the missing mass of the Universe is explained by particles rather than by some acceleration anomaly, _i.e._ the failure of extrapolating Newton's law (General Relativity) from the Solar System scale (\(10^{8}\) km) to the galactic scale (kpc), or something else. There is no real reason why this fraction should not be 1/2, or even lower (1/3). \(f_{\rm exp}=\) probability that only one experiment (DAMA) has detected dark matter out of about 10 with comparable sensitivity, that is about 1/10. \(f_{\rm det}=\) probability that the dark sector can exchange information with the visible sector through some other (weak) interaction besides gravity, unlike in some versions of the so-called Mirror Matter models [122], where gravity is essentially the only portal. This probability can be set to 1/2, as there is no real reason why the dark matter should share the same interaction properties as the visible matter. \(f_{\Omega}=\) catastrophic probability that the Big Bang cosmology is all wrong. Nobody will ever admit that, but saying that the Big Bang cosmology is 100% correct would look weird as well. 
Besides static or quasi-static physical cosmological models and the like, one could consider alternative and radical views of the Universe, such as the Simulation hypothesis [123] or the Mathematical Universe hypothesis [124]. In this case, one would reasonably accept a 50% chance from a philosophical point of view, even though one may also think that dark matter could anyway be included in this kind of Simulation. Furthermore, this probability could be correlated with \(f_{\rm th}\), if a modification of gravity can explain the missing mass. Anyway, assuming one for this factor is part of the game, even if quite disturbing. With those basic terms, one gets about 1:3560 for \({\cal P}_{\chi}\). If the equation is empowered with other terms, each of them being \(\lesssim 1\), the updated probability will likely be less than this first estimate. Now it is time to bet. ### Mala tempora Until the early 80's of the past century, new particles came out of accelerators every day, and it was relatively easy to understand the hidden logic among particles and interactions in the framework of quantum field theory. Theorists got carried away and fantasized about many extensions of the SM, which, although constituting an excellent description of the observed phenomena, left and still leaves indications of a more complete high-energy theory. And SUSY, with its by-product the "WIMP miracle" [125], was the most awaited guest at the party. The LHC, close to the discovery of the Higgs boson, has shown itself not to be really suitable as a discovery machine for new physics beyond the SM. It is therefore probable that the planned high-luminosity stage [126] will end up in controversial _anomalies_ and _tensions_, which will only complete an already existing long list. 
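For reference, the Drake-like estimate of \(\mathcal{P}_{\chi}\) given earlier can be reproduced in a few lines; this is simply the product of the factor values quoted in the text (a sketch, not part of the original analysis):

```python
from fractions import Fraction

# Factors of the Drake-like equation for P_chi, with the values argued in the text.
f_e   = Fraction(5, 89)   # explored mass decades over the full candidate range
f_s   = Fraction(4, 20)   # accessible cross-section decades, out of 20
f_th  = Fraction(1, 2)    # particle explanation vs. acceleration anomaly
f_exp = Fraction(1, 10)   # one positive result (DAMA) out of ~10 comparable experiments
f_det = Fraction(1, 2)    # dark sector talks to the visible one beyond gravity
f_Om  = Fraction(1, 1)    # Big Bang cosmology taken as correct

P_chi = f_e * f_s * f_th * f_exp * f_det * f_Om
print(P_chi, "i.e. about 1:%d" % round(1 / float(P_chi)))
```

With exact fractions the product reduces to exactly 1/3560, matching the 1:3560 quoted in the text.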
It would probably have been better to shut down the LHC and speed up the construction of the Future Circular Collider, also known as FCC, with a center-of-mass collision energy of 100 TeV, that is, almost one order of magnitude higher than the LHC. To be honest, one should also admit that, in principle, there is no indication that possible new physics emerges just at 100 TeV rather than at 1 PeV or more. Therefore, it is anyway like sailing in the open sea without knowing if and where the next land will be. If one can speak of a possible "crisis in modern particle physics", now is precisely that moment. The only (weak) hope, in light of the phrase "mala tempora currunt", is the memory that in similar situations in the history of physics, a significant revolution often followed a deep crisis. The only serious risk is that, in the absence of concrete scientific objectives, experimental collaborations may become uncontrollable, inefficient organizations, whose main goals can be something different from scientific research. ### Falsifiable and Scientific Karl Popper in his book _The Logic of Scientific Discovery_ (1934) suggested that a statement, a hypothesis, or a theory, to be considered scientific, should be "falsifiable", _i.e._ capable of being logically contradicted by an empirical test. The material implication "**IF** it is scientific **THEN** it is falsifiable", _i.e._, \[\textsc{scientific}\rightarrow\textsc{falsifiable}, \tag{16}\] leaves wide room for theories that are falsifiable but not scientific. Proposing a massive particle detectable in underground experiments is for sure falsifiable, but not necessarily scientific, especially if there is no theory behind it, and no clear motivation why it should be worth searching for. Reading the material implication in the opposite direction (a basic logical fallacy) has sometimes created a lot of confusion, not only for the dark matter case, but also for numerous extensions of the SM, based on aesthetic arguments instead of real necessity. 
This misunderstanding becomes even more threatening when the properties of a given dark matter candidate are updated after the initial dark matter candidate is not found in the place in which it was proposed [127]. This is the case of WIMP particles emerging from SUSY. The Minimal Supersymmetric Standard Model (MSSM), introduced to accommodate the problem of the hierarchy of the Higgs mass, should have broken at the Higgs mass scale (\(\sim 100\) GeV) and should have predicted the existence of stable massive dark matter candidates. The absence of a SUSY particle discovery at the LHC has pushed theorists to abandon the "naturalness" concerns [128], add other parameters and mechanisms, and increase the SUSY breaking scale, creating a big family of X-MSSM models, where X stands for the acronym of the case, see [129] and Refs. therein. The PDG has recently removed those families of allowed regions in the SI dark matter parameter space [1], which were already halved by xenon-based dark matter detectors. Furthermore, saying that there is still a 50%, or so, unexplored region is, for what has been discussed, definitely pointless. What one needs is a change of paradigm, and this is very well summarized by F. Nesti _et al._[131]: "In detail, we advocate for a paradigm according to which, after abandoning the failing \(\mathsf{\Lambda}\)CDM scenario, we must be poised to search for scenarios without requiring that: (a) they naturally come from (known) "first principles" (b) they obey to the Occam razor idea (c) they have the bonus to lead us towards the solution of presently open big issues of fundamental Physics. On the other side, the proper search shall: (i) give precedence to observations and the experiment results wherever they may lead (ii) consider the possibility that the Physics behind the Dark Matter phenomenon be disconnected from the Physics we know and does not comply with the usual canons of beauty". This strategy is quite reasonable. 
However, the discovery of some elusive particle solution in underground laboratories, not supported by any theoretical framework, would be a big deal from an epistemological point of view, but anyway better than nothing. ### The emperor is naked! The dark matter in the shape of WIMP, WIMP-like or light WIMP particles is widely accepted as true, or professed as being plausible, due to an unwillingness of the general scientific community to criticize it or be seen as going against the mainstream opinion. The _Emperor's New Clothes_ fairy tale by Hans Christian Andersen is a metaphor about logical fallacies, which reads in this case: no one believes in such a dark matter, but everyone believes that everyone else believes in it, until some child comes out of the crowd and shouts: "the emperor is naked"! But, in this case, such a child, if ever any, has yet to be born. ### A way out Even though direct searches have not solved the dark matter problem, there are a few interesting things to do. First, we might need to rethink our understanding of gravity. MOND, or similar, is one idea, essentially saying that our current gravitational laws might not apply on larger scales. Alternatively, we might discover new particles that interact even more weakly than previously thought, making them incredibly hard to detect. Another possibility is that we are missing something in our theoretical framework. Maybe our understanding of particle physics needs an upgrade. Some theories propose new types of particles, like sterile neutrinos or axions, which could be potential dark matter candidates. These particles would be elusive and might interact with normal matter in ways that are not fully understood yet. An interesting change of paradigm is proposed here [132]. On the observational side, upcoming experiments, like the James Webb Space Telescope [133] and Euclid [134], might provide more insights into the cosmic web and the distribution of matter in the Universe. 
High-energy cosmic-ray observatories and gravitational wave detectors could also bring new clues. In essence, solving the dark matter puzzle might require a combination of refining our theories, developing more sensitive detection methods, and pushing the boundaries of our observational capabilities. ## Conclusions We do not want to spoil the party, so we will conclude by reiterating the conviction that the current direct search for dark matter in the region of WIMP-like and light dark matter is absolutely valid, as the cost/benefit is still affordable and justifiable. About the next generation of detectors, the same cannot be said with the same certainty. However, we want to point out that this quest is currently moved by inertia after the strong thrust impressed a few years ago by a solid theoretical framework that is now becoming farther and fainter. It happened many times in the history of physics that some puzzle was solved because there was a clear indication where to search for its solution. Now it looks as though this is no longer the case for dark matter. The missing mass of the Universe, explained as a heavy particle forming a halo all around galaxies, is a simple theory, so simple and easy as to attract many scientists who may not be so willing to dwell on complicated calculations and ideas. But it also persists, because we are led to think that easier solutions are always the ones chosen by nature. The fact that the presence of the missing mass of the Universe is supported by so many irrefutable pieces of evidence makes the direct search for dark matter a kind of moral duty that we seem to pursue at all costs. But this idea is pointless, and probably dangerous. In this report, we have seen that the leading role in the dark matter search has been played first by the DAMA annual modulation result, followed by other results singularly close to the discovery of the Higgs boson. 
This apparent convergence slowly disappeared in the subsequent years, and was overcome by the liquid noble gas detectors, which continue to show null results while increasing their sensitivity with bigger and bigger targets. Given the very small chance of detecting such dark matter particle candidates, as inferred by a procedure similar to the Drake equation, we can conclude that the present stage of the quest, without getting sidetracked by science fiction-like projects, is anyway necessary to keep thinking about the Universe. ## Acknowledgements First of all, we would like to thank V. Fano and his team from the University of Urbino: the origin of this work can somehow be traced back to the interaction with him. Many pieces of information for Figs. 7 and 9 come from colleagues, as they are not officially reported in collaboration articles. We would like to thank in particular, G. Di Carlo, L. Grandi, A. Ianni, N. Di Marco, K. Pelczar and C. Vignoli for useful discussions about the dark matter problem and for helping with the reconstruction of the experiment genealogies. Finally, we would like to thank G. Ranucci, F. Nesti, A. Di Giovanni and R. Biondi, for comments, suggestions and proofreading. ## List of acronyms
LHC = Large Hadron Collider
\(\Lambda\)CDM = Lambda Cold Dark Matter
MOND = Modified Newtonian Dynamics
SUSY = Supersymmetry
WIMP = Weakly Interacting Massive Particle
GUT = Grand Unified Theory
MACHO = Massive Astrophysical Compact Halo Object
ALP = Axion-Like Particle
SI = Spin Independent
SD = Spin Dependent
SM = Standard Model
SHM = Standard Halo Model
CL = Confidence Level
PMT = Photo-Multiplier Tube
ER = Electron Recoil
NR = Nuclear Recoil
R&D = Research and Development
SiPM = Silicon Photo-Multiplier
TPC = Time Projection Chamber
S1 = Primary scintillation light in TPCs
S2 = Secondary scintillation light in TPCs
UV = Ultra Violet
FCC = Future Circular Collider
MSSM = Minimal Supersymmetric Standard Model
2309.11914
Survival causal rule ensemble method considering the main effect for estimating heterogeneous treatment effects
With an increasing focus on precision medicine in medical research, numerous studies have been conducted in recent years to clarify the relationship between treatment effects and patient characteristics. The treatment effects for patients with different characteristics are always heterogeneous, and various heterogeneous treatment effect machine learning estimation methods have been proposed owing to their flexibility and high prediction accuracy. However, most machine learning methods rely on black-box models, preventing direct interpretation of the relationship between patient characteristics and treatment effects. Moreover, most of these studies have focused on continuous or binary outcomes, although survival outcomes are also important in medical research. To address these challenges, we propose a heterogeneous treatment effect estimation method for survival data based on RuleFit, an interpretable machine learning method. Numerical simulation results confirmed that the prediction performance of the proposed method was comparable to that of existing methods. We also applied a dataset from an HIV study, the AIDS Clinical Trials Group Protocol 175 dataset, to illustrate the interpretability of the proposed method using real data. Consequently, the proposed method established an interpretable model with sufficient prediction accuracy.
Ke Wan, Kensuke Tanioka, Toshio Shimokawa
2023-09-21T09:23:33Z
http://arxiv.org/abs/2309.11914v1
Survival causal rule ensemble method considering the main effect for estimating heterogeneous treatment effects ###### Abstract With an increasing focus on precision medicine in medical research, numerous studies have been conducted in recent years to clarify the relationship between treatment effects and patient characteristics. The treatment effects for patients with different characteristics are always heterogeneous, and various heterogeneous treatment effect machine learning estimation methods have been proposed owing to their flexibility and high prediction accuracy. However, most machine learning methods rely on black-box models, preventing direct interpretation of the relationship between patient characteristics and treatment effects. Moreover, most of these studies have focused on continuous or binary outcomes, although survival outcomes are also important in medical research. To address these challenges, we propose a heterogeneous treatment effect estimation method for survival data based on RuleFit, an interpretable machine learning method. Numerical simulation results confirmed that the prediction performance of the proposed method was comparable to that of existing methods. We also applied a dataset from an HIV study, the AIDS Clinical Trials Group Protocol 175 dataset, to illustrate the interpretability of the proposed method using real data. Consequently, the proposed method established an interpretable model with sufficient prediction accuracy. Heterogeneous treatment effect Interpretability Randomized control trial Rule ensemble Survival analysis ## 1 Introduction Randomized controlled trials (RCTs) are widely regarded as the gold standard for evaluating treatment effectiveness in evidence-based medicine. In RCTs, the average treatment effect (ATE) can be easily estimated to provide valuable evidence for treatment effectiveness. 
Although the ATE from RCTs always represents the average effect for specific subject groups, it ignores the heterogeneity of the treatment effect for subjects with various characteristics. Thus, ATEs are unable to provide sufficient information to make optimal treatment decisions for each subject. Therefore, in recent years, estimation of treatment effects across patients with various characteristics has become extremely important [1, 2, 3, 4, 5, 6, 7, 8]. However, most of these studies have primarily focused on continuous or binary outcomes; nonetheless, survival outcomes are also crucial in medical research. Therefore, in this study, we focused on estimating heterogeneous treatment effects (HTEs) on survival outcomes. Machine learning methods are usually used for existing survival HTE estimation because of their high prediction accuracy [4; 7; 8]. However, their reliance on black-box models poses a limitation because they lack explicit insight into the relationships between HTEs and covariates. This lack of interpretability hampers the understanding and trustworthiness of the results, particularly for medical researchers and professionals [9; 10]. To overcome this limitation, we focused on RuleFit, a method known for its interpretability and prediction accuracy, which are similar to those of tree ensemble methods [11]. Although RuleFit was initially developed for continuous outcomes, it has been extended to handle survival outcomes based on Cox proportional hazards models [12]. To distinguish it from the original RuleFit, we refer to this approach as the "survival RuleFit." However, the survival RuleFit can only construct models to interpret the relationships between outcomes and covariates. Therefore, in this study, we propose a survival HTE estimation method based on the survival RuleFit that can construct an interpretable model while maintaining a prediction accuracy similar to that of previous machine learning methods. 
To estimate the HTEs using survival RuleFit, we can directly apply survival RuleFit using the meta-algorithms T-learner and S-learner, which provide an HTE estimation framework for existing machine learning methods [13]. However, it is important to note that both approaches have drawbacks. First, within the T-learner framework, survival RuleFit models are constructed for the treatment and control groups, and then, HTE is estimated using the difference between the estimates of the treatment and control group models. The primary drawback of this approach is its inability to guarantee that the models for the treatment and control groups share the same base functions. Such differences in base functions could result in differences in the estimates between the treatment and control groups, which are not attributable to the treatment effect. This discrepancy can cause bias in the estimation of the HTE. To address this drawback, Powers et al. (2018)[14] proposed a new framework called "shared-basis conditional mean regression" by restricting T-learners and creating models with the same base functions in the treatment and control groups. This framework removes bias caused by differences in the model base functions. However, the survival RuleFit was proposed based on the Cox proportional hazard model. Although the shared-basis conditional mean regression framework can restrict the treatment and control group models to sharing the same base functions, differences in the baseline hazard functions may still introduce bias in the HTE estimation. Furthermore, these approaches ignore the possibility of main effect differences between the treatment and control group models, which may also lead to bias in HTE estimation. Danel et al. (2022)[15] highlighted the importance of correctly specifying the main effect when estimating HTEs using the Cox proportional hazards model. 
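To make the T-learner recipe above concrete, here is a toy numerical sketch in which simple exponential (constant-hazard) models with no covariates stand in for the two survival RuleFit fits — an illustration of the general framework only, not the method of this paper. The HTE estimate is the difference of the two fitted survival curves:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_rate(times, events):
    """Constant-hazard MLE: number of observed events over total time at risk."""
    return events.sum() / times.sum()

# Toy two-arm trial with exponential survival times and no censoring:
# the treatment halves the hazard (0.5 vs. 1.0).
n = 2000
t_treat = rng.exponential(1 / 0.5, n)   # treatment arm, true hazard 0.5
t_ctrl = rng.exponential(1 / 1.0, n)    # control arm, true hazard 1.0
d = np.ones(n)                          # event indicators (all events observed)

lam1 = exp_rate(t_treat, d)             # model fitted on the treatment arm
lam0 = exp_rate(t_ctrl, d)              # model fitted on the control arm

def hte(t):
    # Difference of the two fitted survival curves: S(t|z=1) - S(t|z=0)
    return np.exp(-lam1 * t) - np.exp(-lam0 * t)

print(f"lambda_1 = {lam1:.3f}, lambda_0 = {lam0:.3f}, HTE(t=1) = {hte(1.0):.3f}")
```

With the hazards above, the true difference at \(t=1\) is \(e^{-0.5}-e^{-1}\approx 0.24\); the fitted curves recover it closely. The bias discussed in the text arises precisely when the two separately fitted models differ in their base functions (or baseline hazards) for reasons unrelated to the treatment.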
Second, within the S-learner framework, a survival RuleFit model is constructed using the treatment indicator as a covariate, and then, the HTE is estimated from the difference between the treatment and control group estimates. This approach creates a single model that can automatically model the main and treatment effects. Therefore, the S-learner strategy ensures that the treatment and control group models share the same baseline hazard function and main effects. However, we cannot guarantee that the models of the treatment and control groups have the same base functions. Further details are provided in Section 2. To address these drawbacks, the proposed method combines the strategies of S-learner and shared-basis conditional mean regression. The S-learner strategy enables the created model to incorporate both main and treatment effects. Consequently, by utilizing the S-learners strategy in the survival RuleFit rule generation algorithm, we can construct a set of rules, including main effect rules, which do not include interactions between treatment indicators and covariates, and treatment effect rules, which do include interactions between treatment indicators and covariates. Similar to the algorithm of survival RuleFit, the main and treatment effect rules were fitted to a sparse linear model to estimate the relationship between the outcomes and the rules. Our proposed method builds a single model consisting of three parts: relationships between the outcome and main effect rules, relationships between the outcome and treatment rules for the control groups, and relationships between the outcome and treatment rules for the treatment groups. To ensure that the model would be interpretable for the HTE and to avoid bias caused by model differences in base functions, we created a model in the shared-basis conditional mean regression framework using group lasso. 
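The separation of generated rules into main effect rules and treatment effect rules can be illustrated with a toy string-based classifier; the rule strings and the treatment indicator name `z` are illustrative assumptions, not the paper's implementation:

```python
def split_rules(rules, treatment="z"):
    """Separate rules whose conditions involve the treatment indicator
    (treatment effect rules) from those that do not (main effect rules)."""
    main, treat = [], []
    for rule in rules:
        conditions = [c.strip() for c in rule.split("&")]
        has_treatment = any(c.split()[0] == treatment for c in conditions)
        (treat if has_treatment else main).append(rule)
    return main, treat

rules = ["x1 <= 0.5", "z > 0.5 & x2 > 1.3", "x1 > 0.5 & x3 <= -0.2"]
main_rules, treat_rules = split_rules(rules)
print("main effect rules:", main_rules)
print("treatment effect rules:", treat_rules)
```

In this sketch, a rule counts as a treatment effect rule as soon as one of its conjunctive conditions tests the treatment indicator, mirroring the distinction drawn in the text.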
The remainder of this paper is organized as follows: Section 2 introduces related work on HTE estimation and illustrates the main purpose of our work. Section 3 explains the original RuleFit method and the proposed method in detail. In Section 4, we describe several simulation studies to compare the prediction performance of the proposed method with those of previous HTE estimation methods. In Section 5, we apply the proposed method to real data and explain how it works. Section 6 summarizes the study and discusses its results. ## 2 Previous methods and motivation for proposed method In this section, we start by introducing the definition of HTE for survival outcomes. Next, we introduce the RuleFit method for survival outcomes. Finally, we illustrate how the RuleFit method is utilized to estimate HTE for survival outcomes in existing frameworks and discuss their drawbacks. ### HTEs for survival data To formalize HTEs for survival data, we referred to the definitions used in previous studies[7; 16; 8]. Consider a dataset \(\{(t_{i}=\min(t_{i}^{*},c_{i}),\delta_{i},z_{i},\mathbf{x}_{i})\}_{i=1}^{N}\), where \(t_{i}\) is the observed survival/censoring time for individual \(i\), \(t_{i}^{*}\) is the true survival time for individual \(i\), and \(c_{i}\) denotes the censoring time for individual \(i\). \(\delta_{i}\in\{0,1\}\) is the censoring indicator: if individual \(i\) experienced an event \(\delta_{i}=1\), otherwise \(\delta_{i}=0\). The treatment indicator \(z_{i}\in\{0,1\}\) assigns individual \(i\) to the treatment group if \(z_{i}=1\) and to the control group if \(z_{i}=0\). Furthermore, \(\mathbf{x}_{i}=(x_{i1},x_{i2},\cdots,x_{ip})^{T}\) is the \(p\)-variable covariate vector for individual \(i\). 
The conditional survival function at time \(t\) for individual \(i\) is defined as: \[S(t|\mathbf{x}_{i},z_{i}):=P(T_{i}>t|\mathbf{X}_{i}=\mathbf{x}_{i},Z_{i}=z_{i}),\] where \(T_{i}\), \(\mathbf{X}_{i}\), and \(Z_{i}\) are random variables representing the observed survival/censoring time, covariates, and treatment indicator, respectively. Within the framework of potential outcomes [17], the HTE for the survival outcome can be defined as the difference in survival rates between the treatment and control groups conditional on the covariates \(\mathbf{x}_{i}\), as follows: \[\Delta(t|\mathbf{x}_{i}) =P(T_{i}>t|\mathbf{X}_{i}=\mathbf{x}_{i},Z_{i}=1)-P(T_{i}>t|\mathbf{X}_{i}=\mathbf{x}_{i},Z_{i}=0)\] \[=S(t|\mathbf{x}_{i},z_{i}=1)-S(t|\mathbf{x}_{i},z_{i}=0). \tag{1}\]

### RuleFit for survival outcome

Given the covariate vector \(\mathbf{x}_{i}=(x_{i1},x_{i2},\cdots,x_{ip})^{T}\in\mathbb{R}^{p}\), the survival RuleFit model is defined as: \[h_{\mathrm{RuleFit}}(t|\mathbf{x}_{i})=h_{0}(t)\exp\left(\sum_{k=1}^{K}\theta_{k}r_{k}(\mathbf{x}_{i})+\sum_{j=1}^{p}\theta_{j}^{*}l_{j}(x_{ij})\right), \tag{2}\] where \(h_{0}(t)\) is the baseline hazard function, \(r_{k}(\mathbf{x}_{i})\) are the rule terms, and \(l_{j}(x_{ij})\) are the linear terms. The rule terms \(r_{k}:\mathbb{R}^{p}\mapsto\mathbb{R}\) are conjunctions of indicator functions, while the linear terms \(l_{j}:\mathbb{R}\mapsto\mathbb{R}\) are "winsorized" versions of the covariates. The coefficients \(\theta_{k}\in\mathbb{R}\,(k=1,\cdots,K)\) and \(\theta_{j}^{*}\in\mathbb{R}\,(j=1,\cdots,p)\) are the rule term and linear term coefficients, respectively. The rule terms \(r_{k}\) and linear terms \(l_{j}\) are explained in detail below.

**Rule terms.** The \(k\)-th rule term is defined as: \[r_{k}(\mathbf{x}_{i})=\prod_{j=1}^{p}I(x_{ij}\in S_{jk}),\] where \(I(\cdot)\) is an indicator function that returns 1 if the condition within parentheses is true and 0 if it is false.
\(S_{j}\) is the set of all possible values of \(x_{j}\), and the subset \(S_{jk}\subset S_{j}\) is defined by the interval: \[S_{jk}=[x_{jk}^{-},x_{jk}^{+}),\] where \(x_{jk}^{-}\) and \(x_{jk}^{+}\) represent the lower and upper bounds on \(x_{j}\) defined by the \(k\)-th rule term.

**Linear terms.** Friedman and Popescu (2008)[11] noted that including base functions with different structures can increase prediction accuracy, so linear terms are added to the RuleFit model. However, directly adding linear terms can reduce the model's robustness against outliers compared with a model composed solely of rule terms. To address this issue, the linear terms are given a "winsorized" form: \[l_{j}(x_{ij})=\min(\delta_{j}^{+},\max(\delta_{j}^{-},x_{ij})), \tag{3}\] where \(\delta_{j}^{+}\) and \(\delta_{j}^{-}\) are the outlier thresholds defined by the \((1-q)\)-quantile and \(q\)-quantile of \(x_{j}\), respectively, with a recommended value of \(q=0.025\). To give the rule and linear terms an equal chance of being selected, each linear term is normalized as follows: \[l_{j}(x_{ij})\gets 0.4\cdot l_{j}(x_{ij})/std(l_{j}(x_{ij})),\] where \(std(\cdot)\) denotes the standard deviation and 0.4 is the average standard deviation of the rule terms under the assumption that the support of each rule term \(r_{k}(\mathbf{x}_{i})\) on the training data, \[\rho_{k}=\frac{1}{N}\sum_{i=1}^{N}r_{k}(\mathbf{x}_{i}), \tag{4}\] is distributed uniformly on \(U(0,1)\).

### Drawbacks of HTE estimation using the existing survival RuleFit model

According to the definition of the survival HTE in Eq. 1, the survival HTE can be derived from the difference in the conditional survival function estimates at time \(t\) between the treatment and control groups. Therefore, we can fit survival RuleFit models to the treatment and control groups and calculate the HTE from the difference between their estimates.
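To make these definitions concrete, the rule terms and winsorized linear terms can be sketched in a few lines of numpy; the interval bounds and data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rule_term(x, bounds):
    """Rule term r_k(x): product of interval indicators I(x_j in [lo_j, hi_j)).
    `bounds` maps a covariate index j to its interval (lo, hi)."""
    return int(all(lo <= x[j] < hi for j, (lo, hi) in bounds.items()))

def winsorized_linear_term(xj, q=0.025):
    """'Winsorized' linear term l_j (Eq. 3): clip x_j at its
    q- and (1-q)-quantiles, here estimated from the data itself."""
    lo, hi = np.quantile(xj, [q, 1.0 - q])
    return np.clip(xj, lo, hi)

x = np.array([0.5, -1.0, 2.0])
# Hypothetical rule: I(x_0 in [0, 1)) * I(x_2 in [1.5, inf))
r = rule_term(x, {0: (0.0, 1.0), 2: (1.5, np.inf)})  # evaluates to 1 here
```

The subsequent \(0.4/std\) normalization of Eq. 3 would be applied to the winsorized column before model fitting.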
There are two common frameworks for addressing such tasks: the T-learner and S-learner of Kunzel et al. (2019)[13]. Both provide a simple framework for easily extending existing regression approaches to HTE estimation. Next, we briefly introduce the estimation of HTE using survival RuleFit based on the T-learner and S-learner, and discuss their drawbacks for our purposes.

**T-learner for survival RuleFit.** Two survival RuleFit models are fitted, one to the control group and one to the treatment group. The conditional mean survival function for the control group can be estimated as: \[S^{(0)}(t|\mathbf{x}_{i})=\exp\left(-\int_{0}^{t}h_{\mathrm{RuleFit}}^{(0)}(u|\mathbf{x}_{i})du\right)=\exp\left(-\int_{0}^{t}h_{0}^{(0)}(u)\exp\left(\sum_{k=1}^{K}\hat{\gamma}_{k}^{(0)}r_{k}^{(0)}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\gamma}_{j}^{*(0)}l_{j}(x_{ij})\right)du\right),\] and that for the treatment group can be estimated as: \[S^{(1)}(t|\mathbf{x}_{i})=\exp\left(-\int_{0}^{t}h_{\mathrm{RuleFit}}^{(1)}(u|\mathbf{x}_{i})du\right)=\exp\left(-\int_{0}^{t}h_{0}^{(1)}(u)\exp\left(\sum_{k=1}^{K}\hat{\gamma}_{k}^{(1)}r_{k}^{(1)}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\gamma}_{j}^{*(1)}l_{j}(x_{ij})\right)du\right),\] where \(h_{0}^{(0)}(t)\) and \(h_{0}^{(1)}(t)\) are the baseline hazard functions for the control and treatment groups, respectively; \(r_{k}^{(0)}(\mathbf{x}_{i})\) and \(r_{k}^{(1)}(\mathbf{x}_{i})\) are the corresponding rule terms; \(\hat{\gamma}_{k}^{(0)}\) and \(\hat{\gamma}_{k}^{(1)}\) are the estimated coefficients of the rule terms; and \(\hat{\gamma}_{j}^{*(0)}\) and \(\hat{\gamma}_{j}^{*(1)}\) are the estimated coefficients of the linear terms.
Then, the HTE can be estimated as: \[\hat{\Delta}(t|\mathbf{x}_{i})=S^{(1)}(t|\mathbf{x}_{i})-S^{(0)}(t|\mathbf{x}_{i}).\] The main drawback of this approach is that the fitted models for the treatment and control groups can consist of different base functions. Powers et al. (2018)[14] pointed out that such a difference can lead to a discrepancy between the treatment and control group estimates that is not solely attributable to the effect of the treatment, and this discrepancy can bias the estimation of the HTE. Their framework, built on the T-learner and named "shared-basis conditional mean regression," addresses this issue by adding a restriction that ensures the treatment and control group models share the same base functions. This reduces the bias arising from differences in the base functions of the two models. It also ensures the comparability of rule terms between the treatment and control group models and allows the estimation of the HTE for each rule. Therefore, by applying survival RuleFit within the shared-basis conditional mean regression framework, the estimated HTE becomes interpretable in terms of its base functions. However, both the T-learner and shared-basis conditional mean regression construct separate models for the treatment and control groups without constraining the main effects to be the same. This may lead to inaccuracies in HTE estimation owing to differences in the main effects.

**S-learner for survival RuleFit.** The survival RuleFit model is fitted to the data with the treatment indicator included as a covariate. The resulting survival RuleFit model therefore includes interactions between the treatment indicator and the covariates.
The estimate of the conditional mean survival function is denoted as: \[S(t|\mathbf{x}_{i},z_{i})=\exp\left(-\int_{0}^{t}h_{\mathrm{RuleFit}}(u|\mathbf{x}_{i},z_{i})du\right)=\exp\left(-\int_{0}^{t}h_{0}(u)\exp\left(\sum_{k=1}^{K}\hat{\gamma}_{k}r_{k}(\mathbf{x}_{i},z_{i})+\sum_{j=1}^{p}\hat{\gamma}_{j}^{*}l_{j}(x_{ij})\right)du\right),\] and the HTE is estimated as: \[\hat{\Delta}(t|\mathbf{x}_{i})=S(t|\mathbf{x}_{i},z_{i}=1)-S(t|\mathbf{x}_{i},z_{i}=0).\] Compared with the T-learner framework, the S-learner creates a single model that includes the interactions between the treatment and covariates. Therefore, the S-learner strategy ensures that the treatment and control groups share the same main effects. However, survival RuleFit creates rules using tree-based learners, and the interaction rules between the treatment and covariates take the form "\(I(z=1)I(x<c)\)" or "\(I(z=0)I(x<c)\)." Consequently, for a subgroup defined by \(x<c\), such a rule captures the effect in either the treatment group (\(z=1\)) or the control group (\(z=0\)), while the other group is ignored. Therefore, the HTE for the subgroup "\(x<c\)" cannot be estimated, making the relationship between "\(x<c\)" and the HTE uninterpretable. Thus, applying survival RuleFit in the S-learner framework cannot produce an interpretable model for the HTE, losing the primary advantage of survival RuleFit.

## 3 Proposed method

In the previous section, we discussed the drawbacks of applying survival RuleFit for HTE estimation. First, within the framework of shared-basis conditional mean regression, an interpretable HTE estimation model can be constructed. However, this approach may still suffer from potential inaccuracies owing to differences in the main effects between the models for the treatment and control groups.
Second, within the S-learner framework, the models for the treatment and control groups share the same main effects, avoiding inaccuracies arising from differences in main effects. However, the estimated HTE cannot be interpreted under this approach. To address these issues, we combine these approaches and propose a novel method for survival HTE estimation. In this section, we first introduce our proposed model and then explain how to estimate its parameters.

### Model of the proposed method

We propose a method for survival HTE estimation based on the survival RuleFit model. Unlike the survival RuleFit model in Eq. 2, our proposed method models the main effect, the effect for the treatment group, and the effect for the control group simultaneously. Given the dataset \(\{(t_{i},\delta_{i},z_{i},\mathbf{x}_{i})\}_{i=1}^{N}\), the model of the proposed method is as follows: \[h(t|\mathbf{x}_{i},z_{i}) =h_{0}(t)\exp\Biggl{[}\sum_{k=1}^{K^{\dagger}}\theta_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\theta_{j}^{*}l_{j}(x_{ij})\] \[+I(z_{i}=1)\Biggl{\{}\sum_{k=1}^{K^{\ddagger}}\alpha_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\alpha_{j}^{*}l_{j}(x_{ij})\Biggr{\}}\] \[+I(z_{i}=0)\Biggl{\{}\sum_{k=1}^{K^{\ddagger}}\beta_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\beta_{j}^{*}l_{j}(x_{ij})\Biggr{\}}\Biggr{]}, \tag{5}\] where \(r_{k}^{\dagger}(\mathbf{x}_{i})\) is the rule term for the main effect and \(r_{k}^{\ddagger}(\mathbf{x}_{i})\) is the rule term for the treatment effect. \(\theta_{k}\in\mathbb{R}\,(k=1,2,\cdots,K^{\dagger})\), \(\alpha_{k},\beta_{k}\in\mathbb{R}\,(k=1,2,\cdots,K^{\ddagger})\), and \(\theta_{j}^{*},\alpha_{j}^{*},\beta_{j}^{*}\in\mathbb{R}\,(j=1,2,\cdots,p)\) are the coefficients of the rule terms and linear terms, respectively. Furthermore, to ensure that the models for the treatment and control groups have the same structure and to maintain model interpretability, the proposed model is built under the shared-basis conditional mean regression framework.
Therefore, a constraint on the coefficients of each treatment effect rule term, \[\begin{cases}\hat{\alpha}_{k}=0\wedge\hat{\beta}_{k}=0\\ \hat{\alpha}_{k}\neq 0\wedge\hat{\beta}_{k}\neq 0\end{cases},\] and a constraint on the coefficients of each linear term, \[\begin{cases}\hat{\alpha}_{j}^{*}=0\wedge\hat{\beta}_{j}^{*}=0\\ \hat{\alpha}_{j}^{*}\neq 0\wedge\hat{\beta}_{j}^{*}\neq 0\end{cases},\] are imposed on the model: for each term, either both coefficients are zero or both are nonzero. Under these constraints, the conditional hazard function for the control group is estimated as: \[\hat{h}(t|\mathbf{x}_{i},z_{i}=0)=h_{0}(t)\exp\left\{\left(\sum_{k=1}^{K^{\dagger}}\hat{\theta}_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\theta}_{j}^{*}l_{j}(x_{ij})\right)+\left(\sum_{k=1}^{K^{\ddagger}}\hat{\beta}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\beta}_{j}^{*}l_{j}(x_{ij})\right)\right\}.\] The conditional hazard function for the treatment group is estimated as: \[\hat{h}(t|\mathbf{x}_{i},z_{i}=1)=h_{0}(t)\exp\left\{\left(\sum_{k=1}^{K^{\dagger}}\hat{\theta}_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\theta}_{j}^{*}l_{j}(x_{ij})\right)+\left(\sum_{k=1}^{K^{\ddagger}}\hat{\alpha}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\alpha}_{j}^{*}l_{j}(x_{ij})\right)\right\}.\] Therefore, the HTE can be estimated as: \[\hat{\Delta}(t|\mathbf{x}_{i}) =S(t|\mathbf{x}_{i},z_{i}=1)-S(t|\mathbf{x}_{i},z_{i}=0)\] \[=\exp\left(-\int_{0}^{t}\hat{h}(u|\mathbf{x}_{i},z_{i}=1)du\right)-\exp\left(-\int_{0}^{t}\hat{h}(u|\mathbf{x}_{i},z_{i}=0)du\right). \tag{6}\] Furthermore, the proposed method (Eq. 5) is also based on the Cox proportional hazards model.
Thus, the difference in survival rates between the treatment and control groups can also be interpreted through the hazard ratio, and the HTE estimates can be interpreted based on the estimated hazard ratio as \[\frac{\hat{h}(t|\mathbf{x}_{i},z_{i}=1)}{\hat{h}(t|\mathbf{x}_{i},z_{i}=0)} =\frac{h_{0}(t)\exp\left\{\left(\sum_{k=1}^{K^{\dagger}}\hat{\theta}_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\theta}_{j}^{*}l_{j}(x_{ij})\right)+\left(\sum_{k=1}^{K^{\ddagger}}\hat{\alpha}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\alpha}_{j}^{*}l_{j}(x_{ij})\right)\right\}}{h_{0}(t)\exp\left\{\left(\sum_{k=1}^{K^{\dagger}}\hat{\theta}_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\theta}_{j}^{*}l_{j}(x_{ij})\right)+\left(\sum_{k=1}^{K^{\ddagger}}\hat{\beta}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\beta}_{j}^{*}l_{j}(x_{ij})\right)\right\}}\] \[=\exp\left\{\left(\sum_{k=1}^{K^{\ddagger}}\hat{\alpha}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\alpha}_{j}^{*}l_{j}(x_{ij})\right)-\left(\sum_{k=1}^{K^{\ddagger}}\hat{\beta}_{k}r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\hat{\beta}_{j}^{*}l_{j}(x_{ij})\right)\right\}\] \[=\exp\left\{\sum_{k=1}^{K^{\ddagger}}\left(\hat{\alpha}_{k}-\hat{\beta}_{k}\right)r_{k}^{\ddagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\left(\hat{\alpha}_{j}^{*}-\hat{\beta}_{j}^{*}\right)l_{j}(x_{ij})\right\}.\] Consequently, the base functions \(\{r_{k}^{\ddagger}(\mathbf{x}_{i})\}_{k=1}^{K^{\ddagger}}\) and \(\{l_{j}(x_{ij})\}_{j=1}^{p}\) and the corresponding differences in the coefficients \(\{(\hat{\alpha}_{k}-\hat{\beta}_{k})\}_{k=1}^{K^{\ddagger}}\) and \(\{(\hat{\alpha}_{j}^{*}-\hat{\beta}_{j}^{*})\}_{j=1}^{p}\) can be used to interpret the estimated HTE. The next section provides a detailed explanation of the parameter estimation process.

### Algorithm of proposed method

To construct the proposed model, we developed an algorithm based on the survival RuleFit algorithm.
The proposed algorithm consists of three steps: 1) rule generation, 2) rule division, and 3) rule ensemble. We provide a brief overview of the proposed algorithm and illustrate its differences from survival RuleFit; a detailed description of the algorithm is provided in the following subsections. **Step 1: Rule generation**: The purpose of this step is to generate candidate rule terms for the proposed method. To support the model's interpretation of the survival HTE, these rules should be related to the treatment effect. In this step, we follow the same algorithm as survival RuleFit, but treat the treatment indicator as a covariate during model construction. In this manner, we automatically obtain rules involving the treatment indicator, which are considered to be related to the survival HTE. **Step 2: Rule division**: The purpose of this step is to divide the set of rule terms generated in the rule generation step into two subsets: one for the main effect and the other for the treatment effect. This step is the main difference between survival RuleFit and the proposed method. The resulting sets of main effect rules and treatment effect rules are then used to model the main and treatment effects, respectively, in the subsequent step. **Step 3: Rule ensemble**: The purpose of this step is to select base functions and estimate the coefficients of the final model. In survival RuleFit, this is achieved by fitting the rules and linear terms to a sparse linear model using the lasso, which simplifies the model, improves interpretability, and prevents overfitting. Unlike survival RuleFit, however, the proposed method must select each treatment effect rule for both the treatment and control groups simultaneously, so that their models have the same rule terms and the estimated HTE remains interpretable.
This is achieved by grouping each treatment effect rule across the two groups and using the group lasso for base function selection and coefficient estimation.

#### 3.2.1 Rule generation

In this step, candidate rules for the proposed model are constructed. Given the dataset \(\{(t_{i},\delta_{i},z_{i},\mathbf{x}_{i})\}_{i=1}^{N}\), generalized boosted models (GBMs)[18] are fitted to the dataset to construct rules related to the treatment effect. The treatment indicator \(z_{i}\) is regarded as a covariate in model building. The GBM model \(F_{M}(\mathbf{x}_{i},z_{i})\) with \(M\) tree base learners is built in the form \(F_{M}(\mathbf{x}_{i},z_{i})=\sum_{m=1}^{M}f_{m}(\mathbf{x}_{i},z_{i})\), where \(M\) is the number of base learners; the \(m\)-th base learner \(f_{m}(\mathbf{x}_{i},z_{i})=\sum_{d=1}^{D_{m}}\gamma_{d}I((\mathbf{x}_{i},z_{i})\in R_{d})\) is a decision tree consisting of disjoint partitioned regions \(R_{1},R_{2},\cdots,R_{D_{m}}\); \(\gamma_{d}\) is the weight of the corresponding region \(R_{d}\); and \(D_{m}\) is the number of regions. Following these steps, we obtain a set of tree base learners \(\{f_{m}(\mathbf{x}_{i},z_{i})\}_{m=1}^{M}\). Then, to create the candidate rules for the proposed method, all tree base learners \(\{f_{m}(\mathbf{x}_{i},z_{i})\}_{m=1}^{M}\) are decomposed into a set of rules \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K}\) with \(K=\sum_{m=1}^{M}2(D_{m}-1)\) rules, in the same manner as in the survival RuleFit [12]. The algorithm for generating rules is presented in Algorithm 1 and explained in detail here; the lines in this paragraph correspond to those in Algorithm 1. Line 1 specifies the input for rule generation, including the dataset and several hyperparameters for the GBM. From lines 3 to 9, we update the parameters in a manner similar to that of the GBM algorithm. Line 5 randomly determines the depth of the base learner.
This process is crucial to survival RuleFit and makes it possible to obtain rules with various depths. In the common GBM algorithm, the depth of each base learner is small and fixed. Therefore, the maximum depth of the created rules is also small, and a linear combination of such rules cannot capture high-order interactions. Thus, in the survival RuleFit algorithm, the depth of each base learner is randomly determined. However, to avoid affecting the performance of the GBM, the depth of most base learners remains small, and only a few of them have a large depth. To achieve this, the number of terminal nodes of each base learner is set to \(D_{m}=2+\mathrm{floor}(u)\), with \(u\) drawn from the exponential distribution \(\mathrm{exponential}(1/(\bar{L}-2))\), where \(\mathrm{floor}(u)\) represents the largest integer less than or equal to \(u\). From lines 10 to 12, we decompose the tree base learners into rules, as shown in Fig. 1, and output the candidate rule set. ``` 1: Input: Dataset \(\{t_{i},\delta_{i},z_{i},\mathbf{x}_{i}\}_{i=1}^{N}\), number of tree base learners \(M\), mean depth of tree base learners \(\bar{L}\), shrinkage rate \(v\), and the training sample fraction for each tree base learner \(\eta\) 2: Initialize the model \(F_{0}(\mathbf{x}_{i},z_{i})=0\) 3: for \(m=1\) to \(M\) do 4: For \(i=1,2,\cdots,N\), compute the gradient for \(F_{m-1}(\mathbf{x}_{i},z_{i})\) as: \[r_{i}^{(m)}=\delta_{i}-\sum_{i^{*}=1}^{N}\delta_{i^{*}}\frac{I(t_{i}>t_{i^{*}})\mathrm{exp}(F_{m-1}(\mathbf{x}_{i},z_{i}))}{\sum_{i^{**}=1}^{N}I(t_{i^{**}}>t_{i^{*}})\mathrm{exp}(F_{m-1}(\mathbf{x}_{i^{**}},z_{i^{**}}))}\] 5: Determine the number of terminal nodes for the tree base learner, \(D_{m}=2+\mathrm{floor}(u)\), where \(u\sim\mathrm{exponential}(1/(\bar{L}-2))\) 6: Fit a regression tree \(f_{m}(\mathbf{x}_{i},z_{i})\) to the gradient \(r^{(m)}\), giving the terminal regions \(R_{d},d=1,2,\cdots D_{m}\) 7: For \(d=1,2,\cdots,D_{m}\), estimate the value of region \(R_{d}\) as
\[\hat{\gamma}_{d}=\operatorname*{arg\,min}_{\gamma_{d}}\sum_{i\in H:(\mathbf{x}_{i},z_{i})\in R_{d}}(r_{i}^{(m)}-\gamma_{d})^{2},\] where \(H\subset\{1,2,\cdots,N\}\) is a sample set randomly drawn from the data with \(|H|=\lfloor\eta N\rfloor\) 8: Update \(F_{m}(\mathbf{x}_{i},z_{i})=F_{m-1}(\mathbf{x}_{i},z_{i})+v\cdot\sum_{d=1}^{D_{m}}\hat{\gamma}_{d}I((\mathbf{x}_{i},z_{i})\in R_{d})=F_{m-1}(\mathbf{x}_{i},z_{i})+vf_{m}(\mathbf{x}_{i},z_{i})\) 9: end for 10: For \(m=1,2,\cdots,M\), traverse the tree \(f_{m}(\mathbf{x}_{i},z_{i})\) to create the rule set \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K^{(m)}}\), where \(K^{(m)}=2(D_{m}-1)\) 11: Aggregate every rule set \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K^{(1)}},\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K^{(2)}},\cdots,\text{and}\left\{r_{k}(\mathbf{x}_{i},z_{i})\right\}_{k=1}^{K^{(M)}}\) into \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K}\), where \(K=\sum_{m=1}^{M}K^{(m)}\) 12: Output \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K}\) ``` **Algorithm 1** Rule generation

#### 3.2.2 Rule division

In this step, the created rules \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K}\) are divided into rules for the main effect \(\{r_{k}^{\dagger}(\mathbf{x}_{i})\}_{k=1}^{K^{\dagger}}\) and rules for the treatment effect \(\{r_{k}^{\ddagger}(\mathbf{x}_{i})\}_{k=1}^{K^{\ddagger}}\). In the rule generation step, a GBM was created with the treatment indicator as a covariate. Consequently, the tree base learners of the GBM model can automatically detect interaction effects between the treatment and other covariates, allowing us to divide the rules into main and treatment effect rules. We present a simple example to briefly illustrate this process in Fig. 1.
The \(k\)-th rule in the rule set \(\{r_{k}(\mathbf{x}_{i},z_{i})\}_{k=1}^{K}\) can be expressed as \[r_{k}(\mathbf{x}_{i},z_{i})=\left\{\begin{array}{cl}I(z_{i}\in S_{k}^{(z)})r_{k}^{\dagger}(\mathbf{x}_{i}),&\text{if }S_{k}^{(z)}=\{0,1\}\\ I(z_{i}\in S_{k}^{(z)})r_{k}^{\ddagger}(\mathbf{x}_{i}),&\text{if }S_{k}^{(z)}=\{0\}\text{ or }S_{k}^{(z)}=\{1\}\end{array},\right.\] where \(S_{k}^{(z)}\) is the set of available values of the treatment indicator. Therefore, there is an interaction effect between the treatment indicator and \(r_{k}^{\ddagger}(\mathbf{x}_{i})\), but no interaction effect between the treatment indicator and \(r_{k}^{\dagger}(\mathbf{x}_{i})\). In this sense, we define \(r_{k}^{\ddagger}(\mathbf{x}_{i})\) as a treatment effect rule and \(r_{k}^{\dagger}(\mathbf{x}_{i})\) as a main effect rule. In this step, we thus produce the set of rules for the treatment effect \(\{r_{k}^{\ddagger}(\mathbf{x}_{i})\}_{k=1}^{K^{\ddagger}}\) and that for the main effect \(\{r_{k}^{\dagger}(\mathbf{x}_{i})\}_{k=1}^{K^{\dagger}}\), where \(K=K^{\dagger}+K^{\ddagger}\). The algorithm for rule division is presented in Algorithm 2 and explained in detail here; the lines in this paragraph correspond to those in Algorithm 2. Line 1 specifies the input for the rule division process, which is the set of rules created in the rule generation step. From lines 3 to 10, we divide the rules into a set of main effect rules and a set of treatment effect rules based on whether any condition involves the treatment indicator.

#### 3.2.3 Rule ensemble

In this step, the base functions (rule and linear terms) for the proposed method are selected, and the coefficients of the proposed method are estimated. First, we add the "winsorized" versions of the linear terms as base functions, in the same manner as in survival RuleFit. We then select the optimal model and estimate the coefficients simultaneously using a group lasso.
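Before detailing the group lasso, the rule-division criterion of the previous step (a rule is a treatment effect rule exactly when it conditions on the treatment indicator \(z\)) can be sketched as follows; the dict representation of rules and the example rules are assumptions for illustration, not the paper's implementation:

```python
def divide_rules(rules):
    """Split rules into main effect rules (no condition on z) and
    treatment effect rules (a condition on the treatment indicator z).
    Each rule is a dict {variable: condition}; for a treatment effect
    rule, the factor on z is dropped to obtain r_k^ddagger(x)."""
    main_rules, treatment_rules = [], []
    for rule in rules:
        if "z" in rule:  # S_k^(z) = {0} or {1}: interaction with treatment
            treatment_rules.append({v: c for v, c in rule.items() if v != "z"})
        else:            # S_k^(z) = {0, 1}: z unrestricted -> main effect
            main_rules.append(rule)
    return main_rules, treatment_rules

# Illustrative rules decomposed from hypothetical tree base learners
rules = [
    {"x1": ("<", 0.5)},                   # I(x1 < 0.5)            -> main effect
    {"z": ("==", 1), "x2": (">=", 1.0)},  # I(z = 1) I(x2 >= 1.0)  -> treatment effect
    {"z": ("==", 0), "x3": ("<", -1.0)},  # I(z = 0) I(x3 < -1.0)  -> treatment effect
]
main_rules, treatment_rules = divide_rules(rules)
```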
To illustrate how the group lasso works in the proposed method, we first introduce the group lasso.

**Group lasso.** The group lasso [19] is a regularization method that performs variable selection over predefined groups of variables in regression models. Given data \(\{(y_{i},\mathbf{g}_{i1},\mathbf{g}_{i2},\cdots,\mathbf{g}_{iL})\}_{i=1}^{N}\) consisting of \(L\) groups of variables, where the grouped variable \(\mathbf{g}_{\ell}\in\mathbb{R}^{p_{\ell}}\,(\ell=1,2,\cdots,L)\) and \(p_{\ell}\) is the number of variables in group \(\ell\), the estimated intercept \(\hat{\theta}_{0}\in\mathbb{R}\) and coefficients of the grouped variables \(\hat{\mathbf{\theta}}_{\ell}\in\mathbb{R}^{p_{\ell}}\,(\ell=1,2,\cdots,L)\) are defined as: \[(\hat{\theta}_{0},\{\hat{\mathbf{\theta}}_{\ell}\}_{\ell=1}^{L})=\operatorname*{arg\,min}_{\theta_{0},\{\mathbf{\theta}_{\ell}\}_{\ell=1}^{L}}\mathcal{L}(\theta_{0},\{\mathbf{\theta}_{\ell}\}_{\ell=1}^{L})+\sum_{\ell=1}^{L}\lambda\sqrt{p_{\ell}}||\mathbf{\theta}_{\ell}||_{2},\] where \(||\mathbf{\theta}_{\ell}||_{2}\) is the Euclidean norm of \(\mathbf{\theta}_{\ell}\), \(\lambda\in\mathbb{R}^{+}\) is the tuning parameter, and \(\mathcal{L}(\theta_{0},\{\mathbf{\theta}_{\ell}\}_{\ell=1}^{L})\) is the loss function; for example, \[\mathcal{L}(\theta_{0},\{\mathbf{\theta}_{\ell}\}_{\ell=1}^{L})=\frac{1}{2}\sum_{i=1}^{N}(y_{i}-\theta_{0}-\sum_{\ell=1}^{L}\mathbf{g}_{i\ell}{}^{T}\mathbf{\theta}_{\ell})^{2}\] in a linear regression model.

**Application of the proposed model.** Here, we apply the group lasso to estimate the coefficients for the main effect, \(\theta_{k}\) and \(\theta_{j}^{*}\); for the treatment group, \(\alpha_{k}\) and \(\alpha_{j}^{*}\); and for the control group, \(\beta_{k}\) and \(\beta_{j}^{*}\).
As mentioned above, to maintain the interpretability of the proposed model, we grouped the treatment effect rules for the treatment group (\(z_{i}=1\)) and control group (\(z_{i}=0\)) as: \[\mathbf{u}_{ik} =(z_{i}r_{k}^{\ddagger}(\mathbf{x}_{i}),(1-z_{i})r_{k}^{\ddagger}(\mathbf{x}_{i}))^{T},k=1,2,\cdots,K^{\ddagger}\;\mathrm{and}\] \[\mathbf{u}_{ij}^{*} =(z_{i}l_{j}(x_{ij}),(1-z_{i})l_{j}(x_{ij}))^{T},j=1,2,\cdots,p;\;i=1,2,\cdots,N,\] respectively. Additionally, we denote the coefficients of the grouped treatment effect rules as: \[\mathbf{\theta}_{k} =(\alpha_{k},\beta_{k})^{T},k=1,2,\cdots,K^{\ddagger}\;\mathrm{and}\] \[\mathbf{\theta}_{j}^{**} =(\alpha_{j}^{*},\beta_{j}^{*})^{T},j=1,2,\cdots,p.\] The proposed model (Eq. 5) can be rewritten as: \[h(t|\mathbf{x}_{i},z_{i})=h_{0}(t)\exp\biggl{[}\sum_{k=1}^{K^{\dagger}}\theta_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\theta_{j}^{*}l_{j}(x_{ij})+\biggl{\{}\sum_{k=1}^{K^{\ddagger}}\mathbf{\theta}_{k}{}^{T}\mathbf{u}_{ik}+\sum_{j=1}^{p}\mathbf{\theta}_{j}^{**T}\mathbf{u}_{ij}^{*}\biggr{\}}\biggr{]}\] and, using the group lasso, the coefficients are estimated by minimizing the penalized negative log partial likelihood: \[\Bigl{(}\{\hat{\theta}_{k}\}_{k=1}^{K^{\dagger}},\{\hat{\theta}_{j}^{*}\}_{j=1}^{p},\{\hat{\mathbf{\theta}}_{k}\}_{k=1}^{K^{\ddagger}},\{\hat{\mathbf{\theta}}_{j}^{**}\}_{j=1}^{p}\Bigr{)} \tag{7}\] \[=\operatorname*{arg\,min}_{\{\theta_{k}\}_{k=1}^{K^{\dagger}},\{\theta^{*}_{j}\}_{j=1}^{p},\{\mathbf{\theta}_{k}\}_{k=1}^{K^{\ddagger}},\{\mathbf{\theta}_{j}^{**}\}_{j=1}^{p}}-\frac{2}{N}\biggl{\{}\sum_{i=1}^{N}\delta_{i}\biggl{(}\sum_{k=1}^{K^{\dagger}}\theta_{k}r_{k}^{\dagger}(\mathbf{x}_{i})+\sum_{j=1}^{p}\theta_{j}^{*}l_{j}(x_{ij})+\sum_{k=1}^{K^{\ddagger}}\mathbf{\theta}_{k}{}^{T}\mathbf{u}_{ik}+\sum_{j=1}^{p}\mathbf{\theta}_{j}^{**T}\mathbf{u}_{ij}^{*}\biggr{)}\] \[-\sum_{i=1}^{N}\delta_{i}\log\biggl{(}\sum_{m\in R_{(i)}}\exp\biggl{(}\sum_{k=1}^{K^{\dagger}}\theta_{k}r_{k}^{\dagger}(\mathbf{x}_{m})+\sum_{j=1}^{p}
\theta_{j}^{*}l_{j}(x_{mj})+\sum_{k=1}^{K^{\ddagger}}\mathbf{\theta}_{k}{}^{T}\mathbf{u}_{mk}+\sum_{j=1}^{p}\mathbf{\theta}_{j}^{**T}\mathbf{u}_{mj}^{*}\biggr{)}\biggr{)}\biggr{\}}\] \[+\lambda\left(\sum_{k=1}^{K^{\dagger}}||\theta_{k}||_{2}+\sum_{j=1}^{p}||\theta_{j}^{*}||_{2}+\sum_{k=1}^{K^{\ddagger}}\sqrt{2}||\mathbf{\theta}_{k}||_{2}+\sum_{j=1}^{p}\sqrt{2}||\mathbf{\theta}_{j}^{**}||_{2}\right), \tag{8}\] where \(R_{(i)}\) is the set of at-risk subjects at time \(t_{i}\). The HTE can then be estimated using Eq. 6 based on the estimated coefficients.

### Interpretation Tools

To allow the results of RuleFit to be more easily understood, Friedman and Popescu (2008)[11] provided several interpretation tools, including the base function importance and variable importance. The base function importance evaluates the contribution of each base function to the outcome, whereas the variable importance evaluates the contribution of each variable. In this study, we modified the original interpretation tools and applied them to help interpret the application results. We introduce these concepts in detail below.

**Base function importance.** The base function importance comprises the importance of the rule terms and of the linear terms. A high base function importance value indicates that the corresponding base function is closely related to the HTE, whereas a low value indicates that it contributes little to the HTE. Here, we modified the base function importance of the original method and define the importance of the rule and linear terms for our proposed method as: \[I_{k} =|\hat{\alpha}_{k}-\hat{\beta}_{k}|\cdot\sqrt{\rho_{k}(1-\rho_{k})}\quad\mathrm{and} \tag{9}\] \[I_{j} =|\hat{\alpha}_{j}^{*}-\hat{\beta}_{j}^{*}|\cdot|l_{j}(x_{j})-\bar{l}_{j}|, \tag{10}\] respectively, where \(\rho_{k}\) is the support of the rule, as shown in Eq. 4.
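Given fitted coefficients, the rule importance of Eq. 9 reduces to a one-line computation; the coefficient and support values below are hypothetical:

```python
import numpy as np

# Hypothetical fitted coefficients for three treatment effect rules
alpha = np.array([0.8, 0.0, -0.5])   # treatment-group coefficients (alpha_k)
beta  = np.array([0.2, 0.0,  0.5])   # control-group coefficients (beta_k)
rho   = np.array([0.5, 0.1,  0.9])   # rule supports from Eq. 4 (rho_k)

# Rule importance (Eq. 9): |alpha_k - beta_k| * sqrt(rho_k * (1 - rho_k))
importance = np.abs(alpha - beta) * np.sqrt(rho * (1.0 - rho))
```

Note that a rule whose two coefficients coincide (here the second rule) gets zero importance: it contributes to the fit but not to the HTE, which depends only on the coefficient differences.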
**Variable importance.** Variable importance is a general approach for interpreting black-box machine learning methods, as it identifies the variables most strongly related to the outcomes. Although the proposed method is interpretable, variable importance makes our results easier to understand. Thus, following the variable importance of the original RuleFit, we define it as: \[I_{j}^{*}(\mathbf{x})=I_{j}(\mathbf{x})+\sum_{x_{j}\in r_{k}}\frac{I_{k}(\mathbf{x})}{m_{k}}, \tag{11}\] where the first term \(I_{j}(\mathbf{x})\) is the importance of the \(j\)-th linear term, and the second term is the sum of the importances of the rules containing \(x_{j}\) (\(x_{j}\in r_{k}\)), each divided by \(m_{k}\), the total number of variables used to define rule \(r_{k}\).

## 4 Simulation study

Several artificial datasets were used to evaluate the performance of the proposed method under various conditions. We provide a detailed description of the design of the artificial data for each simulation and present the application results of each method for each artificial dataset.

### Simulation design

For each simulation study, a dataset \(\{(t_{i},\delta_{i},z_{i},\mathbf{x}_{i})\}_{i=1}^{N}\) with \(N=1000\) samples was generated for both the training and test sets. For each individual \(i\), \(t_{i}\) is the observed survival time, \(\delta_{i}\) is the censoring indicator, \(z_{i}\) is the treatment indicator, and \(\mathbf{x}_{i}=(x_{i1},x_{i2},\cdots,x_{ip})\) is a vector of explanatory variables. The number of covariates was set to \(p=15\). The simulation design is described in detail below.
**Explanatory variables.** The explanatory variables \(\mathbf{x}_{i}=(x_{i1},x_{i2},\cdots,x_{ip})\) consist of both continuous and binary values: the odd-numbered covariates are continuous and distributed according to the standard normal distribution \(N(0,1)\), and the even-numbered covariates are binary and distributed according to the Bernoulli distribution \(B(0.5)\).

**Treatment indicator.** The treatment indicator \(z_{i}\) was generated from the Bernoulli distribution \(B(0.5)\), because we assumed a randomized controlled trial design in this study.

**Observed survival time and censoring indicators.** To obtain the observed survival times and censoring indicators, the true survival and censoring times must be generated first. In the simulation, we referenced the data generation model in Powers et al. (2018)[14] and generated the true survival times from the Cox proportional hazards model as follows: \[h(t|\mathbf{x}_{i},z_{i})=h_{0}(t)\exp(\mu(\mathbf{x}_{i})+(z_{i}-0.5)\tau(\mathbf{x}_{i})), \tag{12}\] where \(\mu(\mathbf{x}_{i})\) is the main effect function and \(\tau(\mathbf{x}_{i})\) is the treatment effect function. To evaluate the performance of each method under various conditions, a linear function (Eq. 13, Eq. 16), a stepwise function (Eq. 14, Eq. 17), and a nonlinear function (Eq. 15, Eq. 18) were used for \(\mu(\mathbf{x}_{i})\) and \(\tau(\mathbf{x}_{i})\). The three functions for the main effect were denoted as: \[\mathrm{M1}: \mu(\mathbf{x}_{i})=0.5x_{i1}+0.5x_{i3}+0.5x_{i5}+0.5x_{i2}+0.5x_{i4}-x_{i6}, \tag{13}\] \[\mathrm{M2}: \mu(\mathbf{x}_{i})=I(x_{i1}>-1)-I(x_{i3}>0)+I(x_{i5}>1)+0.5x_{i2}x_{i4}-1.25x_{i6}, \tag{14}\] \[\mathrm{M3}: \mu(\mathbf{x}_{i})=-1.25\sin(x_{i1}x_{i3})+2.25/(1+\exp(-x_{i5}))-1.5x_{i2}x_{i4}x_{i6}-1.
\tag{15}\] The three functions for the treatment effect were denoted as \[\mathrm{T1}: \tau(\mathbf{x}_{i})=-x_{i5}-1.5|x_{i7}+x_{i9}|+1.5x_{i6}-x_{i8}-x_{i 10}, \tag{16}\] \[\mathrm{T2}: \tau(\mathbf{x}_{i})=-2I(x_{i5}>-1)I(x_{i7}>0)-2I(x_{i7}>0)I(x_{i9}>1 )-2.5x_{i6}-x_{i8}+1.5x_{i10},\] (17) \[\mathrm{T3}: \tau(\mathbf{x}_{i})=-1.75\sin(x_{i5}x_{i7})+3(x_{i5}/(1+\exp(-x_{i6} x_{i9})))-2x_{i8}x_{i9}x_{i10}-2. \tag{18}\] These functions provide nine scenarios for true survival time generation based on various combinations. Additionally, we set the baseline hazard function to \(h_{0}(t)=2t\). Using the relationship between the survival function and hazard function \(-\log(S(t))=\int_{0}^{t}h(s)ds\), Eq.12 can be transformed into \[t=\left\{\frac{-\log(S(t))}{\exp(\mu(\mathbf{x}_{i})+(z_{i}-0.5)\tau(\mathbf{x}_{i}))} \right\}^{1/2}.\] The set survival rate \(S(t)=u\sim U(0,1)\) and true survival time for the \(i\)-th individual can be calculated as: \[t_{i}^{*}=f^{*}(\mathbf{x}_{i},u_{i},z_{i})=\left\{\frac{-\log(u_{i})}{\exp(\mu( \mathbf{x}_{i})+(z_{i}-0.5)\tau(\mathbf{x}_{i}))}\right\}^{1/2}. \tag{19}\] For the censoring time, we assumed the maximum following time \(t=3\) and generated the censoring time using the following functions: \[f^{c}(\mathbf{x}_{i})=1.1\exp(1-\sin(x_{i1}x_{i3})+3(z_{i}-0.5)x_{i8}+\varepsilon), \quad\varepsilon\sim N(0,1).\] The censoring was defined as \(c=\min(f^{c}(\mathbf{x}_{i}),3)\). 
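As an illustration, one simulated dataset (here under scenario M1 for the main effect and T1 for the treatment effect, with baseline hazard \(h_{0}(t)=2t\)) can be generated via the inverse transform above. This is a minimal reconstruction of the described design, not the authors' code; function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def mu_M1(x):   # main-effect function M1 (Eq. 13); x[:, 0] is x_{i1}, etc.
    return 0.5 * (x[:, 0] + x[:, 2] + x[:, 4] + x[:, 1] + x[:, 3]) - x[:, 5]

def tau_T1(x):  # treatment-effect function T1 (Eq. 16)
    return (-x[:, 4] - 1.5 * np.abs(x[:, 6] + x[:, 8])
            + 1.5 * x[:, 5] - x[:, 7] - x[:, 9])

def simulate(n=1000, p=15):
    x = np.empty((n, p))
    x[:, 0::2] = rng.standard_normal((n, (p + 1) // 2))  # odd-numbered covariates ~ N(0,1)
    x[:, 1::2] = rng.binomial(1, 0.5, (n, p // 2))       # even-numbered covariates ~ B(0.5)
    z = rng.binomial(1, 0.5, n)                          # treatment indicator
    eta = mu_M1(x) + (z - 0.5) * tau_T1(x)               # log relative hazard
    u = rng.uniform(size=n)
    t_true = np.sqrt(-np.log(u) / np.exp(eta))           # inverse transform (Eq. 19)
    eps = rng.standard_normal(n)
    c = np.minimum(1.1 * np.exp(1 - np.sin(x[:, 0] * x[:, 2])
                                + 3 * (z - 0.5) * x[:, 7] + eps), 3)  # censoring time
    t_obs = np.minimum(t_true, c)                        # observed survival time
    delta = (t_true < c).astype(int)                     # censoring indicator
    return t_obs, delta, z, x
```

Since \(h_{0}(t)=2t\) gives the cumulative baseline hazard \(H_{0}(t)=t^{2}\), solving \(S(t)=u\) yields the square-root form used for `t_true`.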
Therefore, under these settings, the observed survival time \(t_{i}\) and censoring indicator \(\delta_{i}\) can be expressed as \[t_{i}=\min(t_{i}^{*},c_{i})\] \[\delta_{i}=I(t_{i}^{*}<c_{i}).\] **True HTE.** To evaluate the performance of each method, we defined the true HTE for individual \(i\) as the survival rate difference between the control and treatment groups at \(t_{0}=2\) (the 90th percentile of the observed survival times) conditional on covariates \(\mathbf{x}_{i}\), denoted as: \[\Delta(t_{0}|\mathbf{x}_{i}) =S(t_{0}|\mathbf{x}_{i},z_{i}=1)-S(t_{0}|\mathbf{x}_{i},z_{i}=0)\] \[=P(T_{i}>t_{0}|\mathbf{x}_{i},z_{i}=1)-P(T_{i}>t_{0}|\mathbf{x}_{i},z_{i}=0),\] where \(T_{i}\) is a random variable for the true survival time generated from Eq. 19. We followed the true HTE creation process proposed by Cui et al. (2023)[8]. For each individual \(i\), we fixed its covariates \(\mathbf{x}_{i}\), sampled the true survival time 100,000 times, and calculated the true HTE as: \[\Delta(t_{0}|\mathbf{x}_{i})=\frac{1}{N^{*}}\sum_{n^{*}=1}^{N^{*}}I\left(f^{*}(\mathbf{x}_{i},u_{n^{*}},z_{i}=1)>t_{0}\right)-\frac{1}{N^{*}}\sum_{n^{*}=1}^{N^{*}}I\left(f^{*}(\mathbf{x}_{i},u_{n^{*}},z_{i}=0)>t_{0}\right),\] where \(N^{*}=100000\) and \(t_{0}=2\). **Performance evaluation.** We compared the performance of our proposed method with several existing survival HTE estimation methods: random survival forest using S-learner (rsfs)[13], enriched random survival forest (rsft)[20], virtual twins (rsfvt)[2], and causal survival forest (csf)[8]. To evaluate the performance of each method, we used four metrics: the **root mean squared error (RMSE)**, **absolute relative bias (AbsRbias)**, **Spearman's rank correlation**, and **correct classification rate**.
First, we evaluated the prediction accuracy using **RMSE** and **AbsRbias**, which are computed over the uncensored subjects (\(\delta_{i}=1\)) and defined as follows: \[\mathbf{RMSE} =\sqrt{\frac{1}{\sum_{i=1}^{N}\delta_{i}}\sum_{i=1}^{N}\delta_{i}\left(\Delta(t_{0}|\mathbf{x}_{i})-\hat{\Delta}(t_{0}|\mathbf{x}_{i})\right)^{2}}\] \[\mathbf{AbsRbias} =\left|\frac{1}{\sum_{i=1}^{N}\delta_{i}}\sum_{i=1}^{N}\delta_{i}\left(\frac{\Delta(t_{0}|\mathbf{x}_{i})-\hat{\Delta}(t_{0}|\mathbf{x}_{i})}{\Delta(t_{0}|\mathbf{x}_{i})}\right)\right|,\] where \(\Delta(t_{0}|\mathbf{x}_{i})\) is the true HTE and \(\hat{\Delta}(t_{0}|\mathbf{x}_{i})\) is the estimated HTE. Second, **Spearman's rank correlation** between the true HTE \(\Delta(t_{0}|\mathbf{x}_{i})\) and the estimated HTE \(\hat{\Delta}(t_{0}|\mathbf{x}_{i})\) was used to evaluate how well each method captures the ordering of treatment effects across individuals. Finally, we used the **correct classification rate** to evaluate how accurately the estimated HTE reflected the true efficacy of the treatment. It was defined as: \[\mathbf{Correct\ classification\ rate}=\frac{1}{N}\sum_{i=1}^{N}I(sign(\Delta(t_{0}|\mathbf{x}_{i}))=sign(\hat{\Delta}(t_{0}|\mathbf{x}_{i}))),\] where the \(sign()\) function returns 1 if its argument is positive, -1 if it is negative, and 0 if it is zero. Therefore, a high correct classification rate indicates that the estimated HTE more accurately reflects the true efficacy. The simulation was performed in the R 4.1.2 programming environment, and all previous methods were implemented using existing R packages. Random survival forest using S-learner, enriched random survival forest, and virtual twins were implemented using the randomForestSRC package[21] with the default hyperparameters, and causal survival forest was implemented using the grf package[22] with the default hyperparameters.
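The four evaluation metrics can be sketched as below, assuming arrays of true and estimated HTEs and the censoring indicator. This is an illustrative reconstruction (Spearman's correlation is computed as the Pearson correlation of rank vectors; tie handling is omitted for brevity), not the authors' code.

```python
import numpy as np

def evaluate(true_hte, est_hte, delta):
    d = delta.astype(bool)                 # restrict RMSE/AbsRbias to uncensored subjects
    diff = true_hte[d] - est_hte[d]
    rmse = np.sqrt(np.mean(diff ** 2))
    abs_rbias = np.abs(np.mean(diff / true_hte[d]))
    # Spearman's rank correlation = Pearson correlation of the rank vectors
    rank = lambda v: np.argsort(np.argsort(v))
    rho = np.corrcoef(rank(true_hte), rank(est_hte))[0, 1]
    # Correct classification rate: agreement between estimated and true HTE signs
    ccr = np.mean(np.sign(true_hte) == np.sign(est_hte))
    return {"RMSE": rmse, "AbsRbias": abs_rbias, "Spearman": rho, "CCR": ccr}
```

A perfect estimator (estimated HTE equal to the true HTE) attains RMSE 0, AbsRbias 0, Spearman correlation 1, and correct classification rate 1.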
The hyperparameters of the proposed method were set as follows: the maximum number of trees was \(M=500\), the mean depth of each tree-based learner was \(\bar{L}=2\), the learning rate was \(V=0.01\), and the size of the sample used to train the base learner in each boosting step was \(\eta=\min(N/2,100+6\sqrt{N})\), where \(N\) is the size of the entire training sample. ### Simulation results Here, we present the simulation results for the nine scenarios for each method, as illustrated in Fig. 2. The proposed method (PROP) outperformed previous methods in all simulation settings. The S-learner using a random survival forest (SRC1) and the causal survival forest (CSF) tended to underperform the other methods on these simulation datasets. Fig. 2 A) presents the RMSE for each method in detail. The proposed method consistently exhibited lower RMSE values than the other methods in all scenarios. In particular, the S-learner using a random survival forest (SRC1) generally underperformed. Fig. 2 B) displays the relative bias. For clarity, we present the absolute value of the relative bias for each method. When the treatment effects are derived from linear or stepwise functions, the proposed method usually displays a similar or lower relative bias than the other methods. When treatment effects are generated from nonlinear functions, the results of the proposed method are not superior to those of previous methods, but they are comparable. Both the RMSE and relative bias results indicate the superior performance of our proposed method over previous methods, suggesting that it can estimate the HTE more accurately. Fig. 2 C) shows Spearman's rank correlation for each method. In every scenario, the proposed method demonstrated higher correlations than the other methods, indicating that it was capable of capturing the effectiveness of the treatment for each individual.
Fig. 2 D) presents the correct classification rate based on the estimated HTE for each method. This metric evaluates the diagnostic accuracy of the estimated HTE: a high correct classification rate implies that the estimated HTE can diagnose treatment efficacy accurately. On this metric as well, the proposed method outperformed the other methods. ## 5 Real data application In this section, we describe how the proposed method applies to a real-world dataset. First, we present a direct interpretation of the results obtained based on the rules and conduct a simple evaluation to assess whether the proposed method correctly interpreted the actual results. Second, we employ a general interpretation approach commonly used in machine learning methods, variable importance, to interpret the results. Third, we demonstrate how the estimated HTE can be used to interpret treatment effectiveness. We applied our method to a dataset from the AIDS Clinical Trials Group Protocol 175 (ACTG 175)[23], a randomized clinical trial involving HIV-1-infected adults with CD4 cell counts between 200 and 500 cells/\(mm^{3}\). The trial included four treatment groups: zidovudine monotherapy, zidovudine plus didanosine combination therapy, zidovudine plus zalcitabine combination therapy, and didanosine monotherapy. The ACTG 175 dataset is available in the R package speff2trial[24] and comprises the data of 1762 subjects. In this study, we focused on binary treatment and used a subset of the ACTG 175 dataset that included only two treatment groups: zidovudine monotherapy and zidovudine plus didanosine combination therapy. For simplicity, we refer to zidovudine monotherapy as monotherapy and zidovudine plus didanosine combination therapy as combination therapy. We referred to the 419 subjects who received monotherapy as the control group and the 436 subjects who received combination therapy as the treatment group.
The primary endpoint of the ACTG 175 study was the number of days until at least one of the following three events occurred: (i) a decline in the CD4 cell count of at least 50, (ii) progression to AIDS, or (iii) death. We selected 12 covariates for our analysis as proposed by Tsiatis et al. (2008)[25]. These included five continuous variables: baseline CD4 cell count (cd40; cells/\(mm^{3}\)), baseline CD8 cell count (cd80; cells/\(mm^{3}\)), age (years), weight (wtkg; kg), and Karnofsky score (karnof; on a scale of 0-100). We also included seven binary variables: hemophilia (hemo; 0 = no, 1 = yes), homosexual activity (homo; 0 = no, 1 = yes), race (0 = white, 1 = other), sex (0 = female, 1 = male), history of intravenous drug use (drugs; 0 = no, 1 = yes), history of antiretroviral therapy (str2; 0 = naive, 1 = experienced), and symptomatic indicator (symptoms; 0 = asymptomatic, 1 = symptomatic). The HTE was defined as the difference in three-year (365.25 \(\times\) 3 days) survival rates between the treatment and control groups. Therefore, the larger the HTE, the greater the benefit of combination therapy compared to monotherapy. Additionally, as mentioned previously, our proposed method has four hyperparameters: the maximum number of trees, mean depth of each tree-based learner, learning rate, and size of the sample used to train the base learner. We set the hyperparameters to be the same as those used in the simulations. ### Results of proposed method application First, we present the application results for the proposed method and interpret them based on the created rules. The application of the proposed method yielded 16 rules, as shown in Table 1. For each created rule, we provide information about its importance, hazard ratio, and support to help with interpretation. The details of each column are as follows. **Importance**: This column shows the base function importance for each rule (Eq. 10).
A high base function importance value indicates that the rule is closely related to the HTE. **Hazard ratio**: This column shows the hazard ratio between the treatment and control groups for each rule. Because our proposed method is based on Cox proportional hazards models, the hazard ratio between the treatment and control groups for each rule is also related to the difference in the survival rate. Thus, we can use the hazard ratio value to interpret the HTE for each rule. A hazard ratio greater than 1 indicates that the HTE is negative and that the control group benefits more than the treatment group. If the hazard ratio is between 0 and 1, it indicates that the HTE is positive and the treatment group benefits more than the control group. **Support**: This column shows the support for each rule (Eq. 4). Support indicates the size of the subgroup defined by the rule. To make the results more interpretable and insightful, we selected certain rules from Table 1 based on two criteria. First, the support values for Rules 1, 5, 10, and 12 were less than 0.1, meaning that the participants in these subgroups constituted less than 10% of the sample. Consequently, these subgroups may have been too small to provide generalizable results. Second, to obtain more informative results, we focused on the rules that were more important to the HTE. Hence, we finally selected six rules with support values greater than 0.1 and base function importance values exceeding the average value of 26.54. The hazard ratios for these rules fell between 0 and 1, indicating that the subjects within these subgroups benefited more from combination therapy than from monotherapy. The hazard ratios for the first four rules were similar and smaller than those for the last two rules. This suggests that subjects who followed the last two rules benefited less from combination therapy than did those who followed the first four rules.
To elucidate this further: * The rules "cd40 \(<\) 266.5 & age \(<\) 39.5," "cd40 \(>=\) 298.5 & age \(>=\) 41.5," and "cd40 \(<\) 268 & age \(<\) 39" suggest that combination therapy is more beneficial than monotherapy for subjects with a baseline CD4 cell count of at least 298.5 who are aged 41.5 years or more, and for those with a baseline CD4 cell count below 266.5 who are aged under 39.5 years. Notably, the subgroups defined by the rules "cd40 \(<\) 266.5 & age \(<\) 39.5" and "cd40 \(<\) 268 & age \(<\) 39" were nearly overlapping. Consequently, the estimated HTE for subjects who follow these rules would be much greater than that for subjects who follow the other rules. In other words, the subjects who follow these rules can benefit more from combination therapy than other subjects. * The rule "homo \(<\) 0.5 & race \(<\) 0.5 & cd40 \(<\) 359" suggests that Caucasians who do not engage in homosexual activity and have baseline CD4 cell counts below 359 benefit more from combination therapy than from monotherapy. * The rule "symptom \(<\) 0.5 & cd80 \(>=\) 790.5 & age \(>=\) 28.5" indicates that asymptomatic subjects, with baseline CD8 cell counts above 790.5 and aged 28.5 years or more, are likely to derive more benefit from combination therapy than from monotherapy. * The rule "wtkg \(<\) 97.26 & cd80 \(>=\) 814 & cd40 \(>=\) 335" indicates that subjects weighing less than 97.26 kg, with baseline CD8 cell counts of at least 814 and baseline CD4 cell counts of at least 335, also tend to benefit more from combination therapy than from monotherapy. Additionally, we provide a simple evaluation to assess whether the results of the proposed method correctly interpret the actual results. We used the Kaplan-Meier method to calculate the actual HTE for each subgroup, as shown in Fig. 3. For all the subgroups defined by the selected rules, the effect of combination therapy was superior to that of monotherapy.
The HTEs for the subgroups "cd40 \(>=\) 298.5 & age \(>=\) 41.5" and "homo \(<\) 0.5 & race \(<\) 0.5 & cd40 \(<\) 359" were similar, but greater than the HTEs for the subgroups "symptom \(<\) 0.5 & cd80 \(>=\) 790.5 & age \(>=\) 28.5" and "wtkg \(<\) 97.26 & cd80 \(>=\) 814 & cd40 \(>=\) 335." As for the subgroups "cd40 \(<\) 266.5 & age \(<\) 39.5" and "cd40 \(<\) 268 & age \(<\) 39," their HTE estimates were higher than those of other subjects. Thus, the interpretations based on the estimated hazard ratio and the actual HTE for each rule showed similar trends. This consistency between the two interpretations shows that applying the proposed method can directly identify the subgroups that might derive greater benefits from combination therapy. Second, we demonstrate the interpretation based on the variable importance of the proposed method, as shown in Fig. 4. This result indicates that the baseline CD4 cell count, baseline CD8 cell count, weight, and age are much more important than most other variables. The Karnofsky score, homosexual activity, race, history of antiretroviral therapy, and symptomatic indicator contributed less to the HTE. Hemophilia and history of intravenous drug use did not contribute to the HTE. Third, we demonstrate how the estimated HTE is utilized to assess the effectiveness of the treatment. We randomly selected 10 subjects from the dataset and estimated their HTE, as shown in Table 2. The estimated HTEs for Subjects 3, 5, and 7 were 0.44, 0.73, and 0.67, respectively, which were much higher than those for the other subjects. This finding suggests that these subjects would benefit more from combination therapy than from monotherapy. By contrast, the estimated HTE values for Subjects 6 and 8 were 0.00 and 0.01, respectively, indicating that these subjects were unlikely to derive any obvious benefit from combination therapy compared to monotherapy. In addition, we provide an overall interpretation based on the estimated HTE.
To achieve this, we ranked the participants based on their estimated HTE and divided them into three equal-sized groups: the low, moderate, and high HTE groups. We subsequently generated a Kaplan-Meier plot for each of these groups, as depicted in Fig. 5. These plots suggest that subjects in the low HTE group did not derive much benefit from combination therapy compared to monotherapy. By contrast, patients in the high HTE group showed substantial benefits from combination therapy. Therefore, the estimated HTE can be a valuable metric for distinguishing between subjects who are likely to benefit greatly from combination therapy and those who may not experience any pronounced advantages. ## 6 Conclusion In this study, we introduce an innovative approach to estimate survival HTE based on the survival RuleFit method. The primary strength of the proposed method is its ability to formulate interpretable models, facilitating effortless interpretation of the relationships between the HTE and covariates through rule-based terms. Moreover, our approach combines the concepts of S-learner and shared-basis conditional mean regression. This combination serves to avoid bias in HTE estimation caused by differences in the base functions and main effects between the treatment and control group models. Numerous simulations were conducted to assess the predictive performance of the proposed method under diverse conditions. The simulation results indicate that the proposed method outperforms existing approaches in terms of RMSE, Spearman's rank correlation, and correct classification rate. Regarding the absolute relative bias, the proposed method is still comparable to the best-performing existing method, CSF. Thus, the simulation results indicate that our proposed method has adequate prediction performance compared to existing methods. The effectiveness of the proposed method is further supported by its application to real data from the HIV study ACTG 175.
First, the results of applying the proposed method were presented. Subsequently, some of the most important and general rules were selected to interpret the application results. The interpretation based on the estimated model showed trends similar to the actual results, demonstrating the correctness of the interpretation. Furthermore, we introduced a method to interpret the estimated HTE and compared it to the actual HTE. The results show that the trend of the estimated HTE is similar to that of the actual HTE, indicating that the estimated HTE of the proposed method is correct. In summary, the simulation results indicate that the prediction performance of the proposed method is comparable to that of previous methods, whereas the real data application shows that the proposed method can correctly interpret actual results. However, this study focused on an RCT dataset; application to observational studies remains to be considered. #### Conflict of interest The authors declare no conflicts of interest.
2306.17538
Beyond Active Engagement: The Significance of Lurkers in a Polarized Twitter Debate
The emergence of new public forums in the shape of online social media has introduced unprecedented challenges to public discourse, including polarization, misinformation, and the emergence of echo chambers. While existing research has extensively studied the behavior of active users within echo chambers, little attention has been given to the hidden audience, also known as lurkers, who passively consume content without actively engaging. This study aims to estimate the share of the hidden audience and investigate their interplay with the echo chamber effect. Using Twitter as a case study, we analyze a polarized political debate to understand the engagement patterns and factors influencing the hidden audience's presence. Our findings reveal a relevant fraction of users that consume content without active interaction, which underscores the importance of considering their presence in online debates. Notably, our results indicate that the engagement of the hidden audience is primarily influenced by factors such as the reliability of media sources mentioned in tweets rather than the ideological stance of the user that produced the content. These findings highlight the need for a comprehensive understanding of the hidden audience's role in online debates and how they may influence public opinion.
Anees Baqir, Yijing Chen, Fernando Diaz-Diaz, Sercan Kiyak, Thomas Louf, Virginia Morini, Valentina Pansanella, Maddalena Torricelli, Alessandro Galeazzi
2023-06-30T10:50:38Z
http://arxiv.org/abs/2306.17538v1
# Beyond Active Engagement: The Significance of Lurkers in a Polarized Twitter Debate ###### Abstract The emergence of new public forums in the shape of online social media has introduced unprecedented challenges to public discourse, including polarization, misinformation, and the emergence of echo chambers. While existing research has extensively studied the behavior of active users within echo chambers, little attention has been given to the hidden audience, also known as lurkers, who passively consume content without actively engaging. This study aims to estimate the share of the hidden audience and investigate their interplay with the echo chamber effect. Using Twitter as a case study, we analyze a polarized political debate to understand the engagement patterns and factors influencing the hidden audience's presence. Our findings reveal a relevant fraction of users that consume content without active interaction, which underscores the importance of considering their presence in online debates. Notably, our results indicate that the engagement of the hidden audience is primarily influenced by factors such as the reliability of media sources mentioned in tweets rather than the ideological stance of the user that produced the content. These findings highlight the need for a comprehensive understanding of the hidden audience's role in online debates and how they may influence public opinion. _Keywords:_ polarization, echo chamber, hidden audience, engagement, Twitter ## 1 Introduction The advent of the digital age has ushered in an era of unprecedented and instantaneous communication among members of society. While these technological advancements promised faster and wider access to information, their influence on the spread of information has turned out to be more nuanced. Indeed, they have also fostered several pervasive issues, such as polarization, misinformation, and the emergence of echo chambers that could influence public opinion and negatively impact society [1, 32, 21].
While these divergences could already be observed during the 20th century [32], the introduction of social media networks may have increased the ideological divide among opposing factions [22]. This radicalization of opinions has been shown to be a clear obstacle to dialogue, consensus, and policy-making [1, 32], and has even been considered "harmful to democracy and society" [34] and a security risk for the UN [37]. Polarized debate is also a fertile environment for the spread of misinformation that may harm society at different levels [12]. Falsehoods and unsubstantiated claims have been shown to spread widely in social media [39, 40, 28], and they may erode trust in reliable sources [30]. One of the most insidious consequences of the digital age is the emergence of echo chambers [11], which have been found in various domains, including blogs [24], forums [20], and prominent social networks [11] like Facebook and Twitter [16, 14]. While not intrinsically harmful, echo chambers may reinforce individuals' existing beliefs and perspectives, creating segregated environments where alternative viewpoints are suppressed and dissenting voices are silenced [35]. Moreover, the echo-chamber effect also exacerbates polarization and misinformation [36], trapping individuals within their own ideological bubbles and limiting the exploration of diverse perspectives. In light of these pressing challenges, a surge of academic research has emerged over the last decade to understand the underlying mechanisms and real extent of echo chambers. Scholars have dedicated substantial efforts to characterizing them systematically within online social networks [10, 11, 17, 31, 9] and have developed indices to gauge their presence and strength [23, 18, 27]. Furthermore, various models have been proposed to elucidate the mechanisms driving the emergence of echo chambers [6, 5, 18].
While these models consistently emphasize the role of homophily as the primary catalyst for the echo chamber effect, a diverse range of contributing factors has also been proposed. Such factors include limited attention spans [10], selective exposure [29], confirmation bias [33], the silencing effect [35], and even the influence of feed algorithms [7]. Researchers have also explored methods to mitigate the impact of echo chambers, like introducing counter-biases within the feed algorithm [38]. Nonetheless, it is important to note that some researchers contend that the influence of echo chambers may be overstated [19, 4, 8, 15], thereby fostering ongoing debates surrounding the magnitude of their effects. All the empirical studies mentioned above have one point in common: they focus on active users, meaning those who directly took action to interact with the content they were shown. On Twitter, for instance, those include liking a tweet, replying to it, or reposting it - also known as retweeting. But this might only be the tip of the iceberg: some users may actually belong to an echo chamber without actively participating in it. These users are known as _lurkers_. Although precise measurements of their relative prominence are not generally available, the current estimates place them as the majority of users on social networks, as they range from \(75\,\%\) to \(90\,\%\)[25, 2]. Ignoring the presence of lurkers can lead to inaccurate estimations of echo chamber sizes and their potential impact on public debate. Consequently, an accurate estimation of their share is a pressing issue. In this work, we investigate the prominence of lurkers within the social network Twitter. To do so, we rely on the recently introduced metric called impression counts. An impression represents content appearing on a user's screen, reflecting visual engagement frequency. Notably, impressions quantify appearances, not unique viewers. 
Accordingly, this metric can be employed to estimate the share of the hidden audience, as well as the user engagement generated by different types of content. Additionally, we use this metric to explore whether the lurkers' share is influenced by factors like the ideological leaning of the content producer or the reliability and political bias of the sources used, thereby gaining insights into the lurkers' engagement patterns. The remainder of this work is organized as follows: Section 2 describes the datasets we collect, alongside the methods and the analyses conducted on them. Section 3 presents the main findings, and Section 4 summarizes the strategy, highlights the results, and suggests future directions. ## 2 Materials and methods ### Data collection We exploit two different datasets. The first is used to investigate the interaction between ideological stances and user engagement, with a specific focus on assessing the share of the hidden audience in different communities of a polarized discussion. Accordingly, we collect through the official Twitter API all tweets related to the debate on whether countries should provide military support to Ukraine in the ongoing 2023 war. This dataset consists of more than 17 million tweets posted by more than 5.2 million users between November 22nd, 2022 and March 1st, 2023. The data collection process involved employing a search query that included the following terms: "military aid", "military support", "tanks", "abrams", "leopard", "challenger", "jet", "aircraft", "munitions", "HIMARS", "rockets" and "missile". We use a second dataset to assess the political leaning and reliability of the news outlets employed in this debate. This dataset is created starting from Media Bias/Fact Check (MBFC), an independent fact-checking organization that classifies news outlets based on their reliability and political bias - originally used in [11].
The dataset described above contains 2190 different news outlets, with their domain names, political leanings, and reliability ratings. The dataset was last updated in June 2019. These news outlets have been labeled according to their political leaning, ranging from "extreme left" to "extreme right." Additionally, some media sources are classified as "questionable" or "conspiracy-pseudoscience" if they have a tendency to publish misinformation or false content and endorse conspiracy theories. To ensure a comprehensive analysis, we manually recorded the classification of these media outlets based on the information provided by MBFC, resulting in the inclusion of 468 outlets in addition to the existing pool of 1722 news outlets that already possess clear political labels. To calculate the individual leaning of users, each label is converted into a numerical value: -1 for Extreme Left, -0.66 for Left, -0.33 for Left-Center, 0 for Least Biased, 0.33 for Right-Center, 0.66 for Right, and +1 for Extreme Right; the political leaning of a user is then calculated as the average of the scores of all URLs it shared. ### Data filtering Considering our specific focus on engagement and the hidden audience, we filter the dataset to retain only the tweets with a valid impression count. Firstly, we filter the dataset by date, retaining only the tweets posted after the introduction of the impression count metric (December 15th, 2022), since this metric is not available for earlier tweets. Secondly, we restrict the dataset to tweets in English to avoid confounding factors (such as geography) that may affect the detection of users' ideological stances. Lastly, in the analysis of the hidden audience, we exclusively included original tweets.
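The label-to-score mapping and per-user averaging described above can be sketched as follows; the function and variable names are ours, for illustration only.

```python
# MBFC political-leaning labels mapped to the numerical scores used in the text
LABEL_SCORE = {
    "extreme left": -1.0, "left": -0.66, "left-center": -0.33,
    "least biased": 0.0, "right-center": 0.33, "right": 0.66,
    "extreme right": 1.0,
}

def user_leaning(shared_outlet_labels):
    """Average score over the MBFC-rated outlets a user shared; None if none are rated."""
    scores = [LABEL_SCORE[lbl] for lbl in shared_outlet_labels if lbl in LABEL_SCORE]
    return sum(scores) / len(scores) if scores else None
```

For example, a user who shared one "left", one "left-center", and one "least biased" outlet obtains a leaning of about -0.33; outlets labeled only "questionable" or "conspiracy-pseudoscience" carry no leaning score and are skipped in the average.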
We disregarded other forms of content, such as quotes, replies, and retweets, to accurately gauge a user's ability to engage their audience, as the impressions on replies can be influenced by the original tweet and may not serve as a reliable proxy for measuring engagement. ### Interaction network Using the dataset described in Section 2.2, we build the retweet interaction network. This methodology aligns with prevailing practices in Twitter analysis research [22, 21, 13], as retweets are regarded as endorsements of content. On the other hand, quotes and replies are disregarded since they are less likely to signify endorsement and are often used for expressing opposing viewpoints or engaging in polemics [22]. Using the English retweets dataset, we build a network by assigning a node to each unique user in the dataset - this includes users who either authored an original English tweet or retweeted an English tweet containing the specified keywords. We create a directed edge from node A to node B if user A retweeted a post authored by user B, and the weight of the edge is determined by the count of unique retweets between the two users, reflecting the strength of their interaction. The final interaction network counts 2.5 million nodes and 7.1 million edges. ### Latent Ideology Estimation To estimate the ideological stance of users in the debate, we start from the latent ideology algorithm proposed in [3, 4]. Following previous studies in this field [22, 21], we consider retweets instead of follower/following relationships as interactions, since retweets have been found to be good indicators of content endorsement [21, 22]. The latent ideology algorithm requires the extraction of a subset of influencer nodes, which critically affects the ideology estimation results. The method by which this extraction is performed is the main topic of the following subsection.
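The weighted edge construction for the retweet network described above can be sketched in plain Python; the input format (retweeter, author, tweet_id) and the reading of "unique retweets" as one count per retweeted tweet per user are our assumptions.

```python
from collections import Counter

def build_retweet_edges(retweets):
    """Return edge weights {(retweeter, author): w}, where w is the number of
    unique tweets of the author that the retweeter has retweeted."""
    unique_retweets = {(rtw, author, tid) for rtw, author, tid in retweets}
    return Counter((rtw, author) for rtw, author, tid in unique_retweets)
```

Deduplicating on (retweeter, author, tweet_id) ensures a tweet retweeted twice by the same user contributes only once to the edge weight.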
Once the influencer set is known, we apply the Correspondence Analysis algorithm [26], which follows three steps: (i) Construction of the interaction matrix \(A\), (ii) normalization of the matrix, and (iii) singular value decomposition. For the first step, we construct a matrix \(A\), whose elements \(A_{ij}\) represent the number of retweets user \(i\) directs toward influencer \(j\). Once \(A\) is known, we normalize it as follows. First, we divide by the total number of retweets, obtaining: \[P=\frac{A}{\sum_{ij}A_{ij}}. \tag{1}\] Then, we define the following quantities: \[\begin{cases}\textbf{r}=P\textbf{1},\\ \textbf{c}=\textbf{1}^{T}P,\\ D_{r}=\text{diag}(\textbf{r}),\\ D_{c}=\text{diag}(\textbf{c}),\end{cases} \tag{2}\] and we perform the following normalization operation: \[S=D_{r}^{-1/2}(P-\textbf{rc})D_{c}^{-1/2} \tag{3}\] For the third step, we perform a singular value decomposition of the form \(S=U\Sigma V^{T}\), where \(U,V\) are orthogonal matrices and \(\Sigma\) is a diagonal matrix containing the singular values of \(S\). Given the polarized nature of the networks under investigation, we can approximate the system by taking the subspace associated with the first singular value of the decomposition. Thus, we take the latent ideology of user \(i\) to be the \(i\)-th entry of the first column of the orthogonal matrix \(U\), while the median ideology of their retweeters represents the latent ideology of an influencer. ### Influencers Selection As mentioned above, to apply the ideology-scoring algorithm we first need to extract a set of influencers from the retweet network. The influencer group encompasses several subgroups: (i) Russian and Ukrainian politicians, (ii) official accounts from information media sources such as journals and TV channels, and (iii) political activists. Users in the retweet network are ranked according to their in-degree, i.e., the number of unique users who have retweeted them.
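The three correspondence-analysis steps above (Eqs. 1-3 followed by the SVD) can be sketched in a few lines of numpy. This is a generic sketch, not the authors' code; it assumes every user has at least one retweet and every influencer at least one retweeter, so that no row or column mass is zero:

```python
import numpy as np

def latent_ideology(A):
    """Latent ideology via correspondence analysis of a user-by-influencer
    retweet matrix A (Eqs. 1-3 and the SVD described above)."""
    P = A / A.sum()                                      # Eq. (1)
    r, c = P.sum(axis=1), P.sum(axis=0)                  # r = P1, c = 1^T P
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # Eq. (3), elementwise
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    users = U[:, 0]                                      # user i's ideology
    # Influencer j's ideology: median ideology of the users who retweeted them.
    infl = np.array([np.median(users[A[:, j] > 0]) for j in range(A.shape[1])])
    return users, infl
```

On a toy polarized matrix in which two users only retweet influencer 0 and two others only retweet influencer 1, the first singular dimension separates the two groups by sign.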
This enables us to start from a manually selected set of users pertaining to the three aforementioned categories with some of the highest in-degrees. This set then serves as a seed as we further select similar accounts using the "Who to follow" recommendations made by Twitter on their accounts' pages. We then refine the selection by excluding users with an in-degree lower than 100 and those whose content is unrelated to the Ukrainian conflict. These criteria yield a comprehensive set of 204 influencers. ### Estimation of the hidden audience proportion The estimation of the hidden audience proportion leverages tweet-level metrics, including the number of likes, replies, retweets, quote tweets and, crucially, the number of impressions. We define _Active Engagement (AE)_ as the ratio of the number of active actions taken - namely, liking, replying, retweeting, and quote retweeting - to the number of impressions received by a given tweet, a given user, or a given domain, depending on the comparison unit of interest; the proportion of the hidden audience is then the complement of this ratio. \[AE=\frac{\text{\# of actions}}{\text{\# of impressions}} \tag{4}\] The measure of active engagement, coupled with users' popularity level, latent ideology, and the credibility and ideology of shared links, can give us crucial insights into how the share of the hidden audience may vary along these dimensions. ## 3 Results ### Polarization in the debate around military support to Ukraine on Twitter The ongoing debate on whether other countries should provide military assistance to Ukraine during its conflict with Russia has generated significant attention from influential figures such as politicians, journalists, and committed citizens. As discussed above, the formation of echo chambers, where users predominantly interact with like-minded peers, is a common phenomenon observed in such controversial debates within social networks.
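The Active Engagement ratio of Eq. (4) amounts to a one-line computation; the sketch below (function name is ours) also guards against tweets with no recorded impressions:

```python
def active_engagement(likes, replies, retweets, quotes, impressions):
    """Active Engagement (Eq. 4): visible actions per impression.

    The hidden-audience share is roughly the complement of this ratio.
    Returns None when no impressions are recorded (AE undefined).
    """
    if impressions == 0:
        return None
    return (likes + replies + retweets + quotes) / impressions
```

For example, a tweet with 1000 impressions and 10 total actions has AE = 0.01, i.e., roughly 99% of its audience is hidden.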
Examining the presence of echo chambers around this polarizing topic is our first step in analyzing the dependence of the hidden audience share on ideological stance. We estimate users' stances by computing ideology scores based on the influencers they have retweeted. Our results demonstrate a highly polarized discussion, with individuals advocating for military aid to Ukraine disproportionately engaging with like-minded users, and those against military intervention engaging mainly with others holding similar views, as shown in Fig. 1(a). The emergence of two distinct user clusters, in which users primarily retweet ideologically congruent influencers, indicates a fairly limited volume of cross-ideology influence. Moreover, the latent ideology analysis shows a clear bimodal distribution of users' and influencers' opinions, as shown in Fig. 1(b), further highlighting the presence of echo chambers in this debate (see also Fig. S1). ### Unveiling the hidden audience in the echo chambers Having identified two opposing echo chambers, we now turn to the characterization of the hidden audience of this debate and how it may be distributed between these two groups. **Across individual users.** We first show the proportion of hidden audience across individual users in Fig. 2(a). Overall, we see that the majority of the audience in such a polarized discussion is "hidden", as the average value of AE peaks at around 1% and is particularly low for quote tweets (see Table 1). Out of the four actions identified in our data, the most prevalent action after viewing a tweet is liking (bottom left), the AE of which has the weakest correlation with the number of followers a user has. Such a weakly-correlated pattern extends to the AE of retweeting (top left), with a slightly higher level of average AE.
On the other hand, when we look at actions that require textual input and may entail conversational interactions among users, such as replies (top right) and quote retweets (bottom right), there are more pronounced negative correlations between the AE level and the number of followers a user has (both log-scaled), and this visual impression is statistically confirmed by their Pearson's r values. \begin{table} \begin{tabular}{l l l} \hline \hline action & average AE (\%) & Pearson’s r \\ \hline retweet & 0.2909 & -0.3469 \\ reply & 0.2479 & -0.5649 \\ like & 1.1154 & -0.2250 \\ quote & 0.0612 & -0.5690 \\ \hline \hline \end{tabular} \end{table} Table 1: **Summary statistics for the active engagement by kind of action taken.** We compute the average AE for all tweets (with and without actions), as well as Pearson’s r for tweets with actions (with log-scaled values) to show its correlation with user popularity. Figure 1: **Retweet Network and Ideology Distribution of users and influencers.** Panel (a): influencer and user retweet networks for nodes with a degree greater than 100, with edges colored based on the nodes’ ideological stances. Panel (b): histogram of users’ and influencers’ ideology scores (top) and distributions of top influencers’ retweeters’ ideology obtained with the latent ideology algorithm (bottom). Negative values represent pro-military aid alignment, while positive values correspond to military aid opponents. Bar colors in the top panel of (b) represent the density of influencers (pink) and users (green). Consistently with the color palette in panel (a), the area below retweeter distributions in the bottom panel of (b) is shaded in salmon if the influencer was inferred to be a supporter of Ukrainian aid, and in black if the influencer is against providing weapons to Ukraine. **Across user groups.** Next, we turn our attention to the interplay between users' opinions and the hidden audience. Thus, we utilize the inferred ideology of individual users and compare AE levels between two opposing groups (i.e., UA aid supporters and opponents, see Fig. 2(b)). Across the four actions, there are some subtle differences in these distributions, such as a slightly higher AE for opponents' retweeting, replying, and quoting. That being said, we do not observe a qualitative difference overall: these two groups, though opposed along the ideological spectrum, do not appear to behave differently in terms of their engagement patterns. **Across domains.** One possible factor influencing the hidden audience is the sources used in the tweets. Here we focus on news sources shared in the debate surrounding our topic of interest with respect to their reliability and political leaning, as shown in Figs. 3 and 4. Results show that the number of sharers does not have a strong influence on AE, as we also observe across individual users. Similarly, the order of magnitude of AE varies across the different types of actions upon tweets sharing domain(s): the majority of likes maintain a relatively higher AE level around \(10^{-3}\sim 10^{-2}\), while the AE of retweets and replies concentrates in a lower range of \(10^{-4}\sim 10^{-3}\), with the AE of quotes, which require more effort and direct interaction, being the lowest at around \(10^{-4}\). Additionally, when analyzing the domains that predominantly report misleading information, we observe a consistently higher level of AE across the four types of actions. This suggests that unreliable domains do not necessarily attract a larger number of unique sharers, but they do trigger more visible user-content interactions with smaller proportions of passive observers. Furthermore, among all reliability groups, the extreme-right domains have the highest AE level overall (Fig.
4), indicating that tweets sharing far-right domains engage a more active audience pool with a relatively smaller proportion of passive lurkers. As the next step, we combine the political leaning of domains shared by users and their detected positioning in the debate. The ideology distribution of groups of users depending on the leaning of the domains they share is shown in Fig. 5. Results highlight that support for the supply of weapons to Ukraine tends to drop the more extreme the political leaning. However, there is still an important distinction between extreme left and extreme right: the former does have a wider distribution, but the majority of its users are still in favor of military aid, while the latter opposes such aid in its majority. Figure 2: **User-level hidden audience in the Twitter discussion on military aid to Ukraine.** (a) Bivariate probability density of the number of followers and the active engagement with respect to retweets (top left), replies (top right), likes (bottom left), and quotes (bottom right). The active engagement generally decreases with the number of followers of the original poster. (b) Boxplots of the active engagement for the same actions as in panel (a), grouped by users’ ideologies, i.e., UA aid supporters (pink) and opponents (grey). Figure 3: **Analysis of the hidden audience at the domain level.** Each scatter plot represents the relation between the number of unique sharers and the fraction of impressions that are followed by an action (either retweet, reply, like, or quote). Each cross represents a domain, and its color indicates its reporting quality. Figure 4: **Influence of political ideology on the share of the hidden audience.** Each subplot shows box plots representing the distribution of the fraction of impressions that provoke an action (retweet, reply, like, or quote). For each subplot, we group domains according to their political leaning into seven groups, ranging from extreme right to extreme left. ## 4 Discussion This study presents a novel approach to investigating the interplay between echo chambers, misinformation, and the share of users that consume content without visible interactions. The distinctive aspect of this approach lies in estimating the prominence of passive users, commonly referred to as lurkers, who refrain from actively engaging with tweets through actions such as liking, quoting, replying, or retweeting. Our work exploits the impression count, the newly introduced tweet-level metric from the Twitter API, to estimate the proportion of the hidden audience that would otherwise be disregarded in analysis, offering a novel angle for the analysis of polarization and echo chambers on Twitter, where previous analyses only took into account users with visible actions. To address the presence of lurkers within the echo chamber, we compare the number of actions and impressions in the original tweets related to the Russo-Ukrainian conflict. This analysis allows us to determine the proportion of lurkers relative to active users and investigate their dependency on various factors. Our findings reveal that Twitter actions constitute a smaller portion compared to the total impressions, indicating that passive users account for a significant share of consumers. Furthermore, this share is even greater for actions that require active engagement, such as quoting. Notably, the main driver that influences the share of passive consumers is the presence of far-right and misinformation-spreading news sources. These contents exhibited the highest ratio of actions per impression, suggesting that active engagement depends on the type of content more than on the ideological stance of the producers. Although this study provides an initial understanding of the impact of passive users, it also raises several unresolved questions that warrant further investigation.
Firstly, exploring whether lurkers are present in other debates of interest and on other social networks, such as Facebook or Reddit, would be valuable; however, these platforms do not yet offer an equivalent impression count metric. Secondly, the temporal variability of the action-per-impression rate remains unclear, necessitating a detailed analysis of lurker behavior over time. Thirdly, which types of media (e.g., URLs, videos, images) elicit the highest level of active user engagement is yet to be determined. Lastly, developing a minimal mechanistic model capable of reproducing the observed data would be instrumental in comprehending the underlying mechanisms driving content engagement. Furthermore, extending our exploration of lurkers to other debates and networks is crucial to gain a comprehensive understanding of their role in shaping online discourse. In summary, this research takes a step forward by exploring the presence of passive users, who constitute a substantial share of social network users. Additionally, our findings underscore the heightened activity of users within domains with low factual reporting or far-right ideologies. By acknowledging the significance of lurkers and their relationship to echo chambers, polarization and misinformation, this study contributes to a comprehensive understanding of social network dynamics specifically related to the amplification of polarization on social media. Furthermore, it opens new avenues for future research and interventions aimed at addressing these challenges. ## Acknowledgments This work is the output of the Complexity72h workshop, held at the IFISC in Palma, Spain, 26-30 June 2023. [https://www.complexity72h.com](https://www.complexity72h.com) Figure 5: **Distributions of users by ideology segmented by political leaning of shared domains.** The distributions are shown for regular users and influencers who have shared domains of a given class at least twice.
2306.00044
How to Construct Perfect and Worse-than-Coin-Flip Spoofing Countermeasures: A Word of Warning on Shortcut Learning
Shortcut learning, or `Clever Hans effect` refers to situations where a learning agent (e.g., deep neural networks) learns spurious correlations present in data, resulting in biased models. We focus on finding shortcuts in deep learning based spoofing countermeasures (CMs) that predict whether a given utterance is spoofed or not. While prior work has addressed specific data artifacts, such as silence, no general normative framework has been explored for analyzing shortcut learning in CMs. In this study, we propose a generic approach to identifying shortcuts by introducing systematic interventions on the training and test sides, including the boundary cases of `near-perfect` and `worse than coin flip` (label flip). By using three different models, ranging from classic to state-of-the-art, we demonstrate the presence of shortcut learning in five simulated conditions. We analyze the results using a regression model to understand how biases affect the class-conditional score statistics.
Hye-jin Shim, Rosa González Hautamäki, Md Sahidullah, Tomi Kinnunen
2023-05-31T15:58:37Z
http://arxiv.org/abs/2306.00044v1
# How to Construct Perfect and Worse-than-Coin-Flip Spoofing Countermeasures: A Word of Warning on Shortcut Learning ###### Abstract Shortcut learning, or 'Clever Hans effect' refers to situations where a learning agent (e.g., deep neural networks) learns spurious correlations present in data, resulting in biased models. We focus on finding shortcuts in deep learning based spoofing countermeasures (CMs) that predict whether a given utterance is spoofed or not. While prior work has addressed specific data artifacts, such as silence, no general normative framework has been explored for analyzing shortcut learning in CMs. In this study, we propose a generic approach to identifying shortcuts by introducing systematic interventions on the training and test sides, including the boundary cases of 'near-perfect' and 'worse than coin flip' (label flip). By using three different models, ranging from classic to state-of-the-art, we demonstrate the presence of shortcut learning in five simulated conditions. We analyze the results using a regression model to understand how biases affect the class-conditional score statistics. Hye-jin Shim\({}^{1}\), Rosa Gonzalez Hautamaki\({}^{2}\), Md Sahidullah\({}^{3}\), Tomi Kinnunen\({}^{1}\)\({}^{1}\)University of Eastern Finland, Finland \({}^{2}\)University of Oulu, Finland \({}^{3}\)TCG CREST, India [email protected], [email protected], [email protected], [email protected] **Index Terms**: dataset bias, shortcut learning, Clever Hans, anti-spoofing, ASVspoof ## 1 Introduction The study of deep learning models has increased along with their widespread adoption in applications processing large amounts of data [1, 2, 3, 4, 5]. However, unexpected model behavior or system outcomes can be incurred when simply enlarging the scale of datasets and achieving high numerical accuracy without thorough examination of the data and models. Accordingly, warnings have been raised about the potential risks associated with _biased_ datasets and models [6, 7, 8].
This kind of biased model behavior based on spurious correlations is referred to as _Clever Hans effect_[9] or _shortcut learning_[10]. Several studies have been conducted to understand how models work and uncovered potential biases in the data [11, 12, 13, 14, 15, 16, 17]. For example, the models in the image domain are prone to focus on the background, rather than the object as a shortcut to predict class labels [18, 19, 20]. The situation becomes more complex when using a deep-learning _black-box_ model, which is difficult to interpret. _Explainable AI_ (XAI) [21] is one of the ways to interpret model behavior and build human-understandable models [22, 23, 24]. In this study, we propose a novel framework to discover shortcut learning in binary classifiers, treated as black-boxes. The proposed framework, detailed in Section 2, formulates a generic approach for introducing systematic biases into an existing database. Purposefully introduced interventions in _asymmetric_ ways lead the model to respond to the provided shortcuts. Those interventions are applied both in the training and test sides, including the boundary cases of 'near-perfect' and 'worse than coin flip' cases (label flip). Since the parameters of the intervention process are known, another key element of the proposed approach is to use these parameters as inputs in a regression model (linear mixed effects modeling or LME) in the classifier score domain. Importantly, the inclusion of LME allows us to 'go beyond the EER' - to learn how the biases impact the class-conditional score statistics. The focus of our case study is on audio anti-spoofing, which determines whether the utterance is from a real human (bona fide) or spoofing attacks (e.g. voice conversion, text-to-speech). Several studies have addressed data bias in audio anti-spoofing [25, 26, 27, 28, 29, 30]. The early studies in [25, 26, 27] have investigated the distribution of waveform samples as a shortcut in spoofing countermeasures (CM). 
More recently, concerns have also been raised about the validity of the ASVspoof 2019 and ASVspoof 2021 datasets regarding the proportion of silence [28, 31]. While _silence_ is the only such artifact examined in spoofing detection so far, our research explores several suspected sources of bias. Our proposed shortcut learning analysis framework is applicable to arbitrary types of data interventions and black-box classifiers. As a proof of concept, therefore, our experimental part includes five different types of interventions and three different spoofing countermeasures (CMs) of varied complexity. In particular, experiments are conducted with two conventional methods, namely, the Gaussian mixture model (GMM) and the light convolutional network (LCNN), as well as state-of-the-art audio anti-spoofing using the integrated spectro-temporal graph attention network (AASIST). \begin{table} \begin{tabular}{|l||c|c|c|c|} \hline \begin{tabular}{c} **Configuration** \\ (indicator) \\ \end{tabular} & \multicolumn{2}{c|}{Train} & \multicolumn{2}{c|}{Test} \\ \cline{2-5} & Spf & Bona & Spf & Bona \\ \hline **O** (0 0 0 0) & 0 & 0 & 0 & 0 \\ \hline **A** (0 1 0 1) & 0 & 1 & 0 & 1 \\ **B** (1 0 1 0) & 1 & 0 & 1 & 0 \\ \hline **C** (0 1 1 0) & 0 & 1 & 1 & 0 \\ **D** (1 0 0 1) & 1 & 0 & 0 & 1 \\ \hline \end{tabular} \end{table} Table 1: Intervention configurations: each indicator bit gives the intervention probability \(P(f_{j})\in\{0,1\}\) applied to the spoof (Spf) and bona fide (Bona) classes on the training and test sides. ## 2 Methodology To simulate and analyze data bias in a controlled way, we address binary classification (detection) where the system under study potentially learns unintended associations between class labels and extrinsic interventions. We consider carefully selected sets of systematic interventions designed to create asymmetry between the two classes, as presented in Table 1. This section details the proposed methodology.
### Constructing and Evaluating Binary Classifiers Let \(\mathscr{D}\triangleq\{(x_{i},y_{i}^{\text{ch}}):i=1,\ldots,N\}\) denote a labeled dataset of \(N\) objects \(x_{i}\in\mathscr{X}\) and their ground-truth class labels \(y_{i}^{\text{ch}}\in\mathscr{Y}\triangleq\{0,1\}\). The \(N\) instances are considered as independent, identically-distributed draws from an unknown data distribution \(P(X,Y)\). In this study, each \(x_{i}\) is a speech utterance (a digital waveform) and \(y_{i}^{\text{ch}}\) indicates whether \(x_{i}\) is a bona fide (\(y_{i}^{\text{ch}}=1\)) or a spoofed (\(y_{i}^{\text{ch}}=0\)) waveform. The dataset \(\mathscr{D}\) consists of disjoint training and evaluation subsets denoted by \(\mathscr{D}_{\text{in}}\) and \(\mathscr{D}_{\text{exa}}\), respectively1. We introduce another binary variable \(y_{i}^{\text{m}}\in\mathscr{Y}\) that indicates whether \(x_{i}\) belongs to the training (\(y_{i}^{\text{m}}=1\)) or the evaluation (\(y_{i}^{\text{m}}=0\)) subset. The two labels, \(\mathbf{y}_{i}\triangleq(y_{i}^{\text{ch}},y_{i}^{\text{m}})\in\{0,1\}\times\{0,1\}\) specify both the class label (bona fide or spoof) and the use case (training or evaluation) of \(x_{i}\) in the dataset. These labels remain fixed (as-given) in the original dataset. Footnote 1: Additionally, when training models, we have a _development set_. Since in our modeling this is always treated the same way as the training set, we do not explicitly write notations about development data. By using a selected class of predictive models \(g:\mathscr{X}\rightarrow\mathscr{Y}\) and training loss \(\mathcal{L}:\mathscr{X}\times\mathscr{Y}\rightarrow\mathbb{R}_{+}\), a system developer trains a model by minimizing \(\mathcal{L}\) on the training data. The trained model is then executed on the evaluation set to make predictions of class labels. In practice, the model produces a _score_\(s_{i}\in\mathbb{R}\) for each instance \(x_{i}\in\mathscr{D}_{\text{exa}}\).
The score can then be converted to a predicted hard decision by comparing \(s_{i}\) to a threshold \(\tau\). The scores paired up with their ground-truth labels \((s_{i},y_{i}^{\text{ch}})\) are then used to compute miss and false alarm rates, each being a function of \(\tau\). From the miss and false alarm rate functions, the evaluator summarizes performance using a suitable figure of merit - here, the equal error rate (EER). Besides EER, which measures discrimination, we also do direct modeling of the detection scores through linear mixed effects modeling as will be detailed below in Section 2.4. ### Introducing Controlled (but Random) Interventions While the ground-truth labels \(\mathbf{y}_{i}\) will remain as-given in the original data \(\mathscr{D}\), we introduce random (but controlled) interventions to the audio files \(x_{i}\). Our aim is to understand their impact on learning, predictions, and performance evaluation of models. Consider an arbitrary file \(x_{i}\). It gets randomly perturbed as \[x_{i}^{\prime}=f_{j}(x_{i};z_{ij}), \tag{1}\] with probability \(P(f_{j})\triangleq\mathbb{P}\big{(}\text{Apply }f_{j}\text{ to }x_{i}\big{)}\). Here \(f_{j}\) is a predefined, deterministic function of its inputs that takes different forms depending on the type of intervention (detailed in Subsection 3.2), indexed by the subscript \(j\). The conditioning variable \(z_{ij}\sim_{\text{i.i.d.}}P(Z;\mathbf{\theta}_{j})\), in turn, contains parameters of a random intervention applied to \(x_{i}\). Using additive white noise as an example, \(z_{ij}\) might indicate the signal-to-noise ratio (SNR) selected for degrading waveform \(x_{i}\). To sum up, the intervention process is specified by two sets of control parameters: (1) the probability of applying intervention, \(P(f_{j})\); and (2) the distribution of the intervention control variable, \(P(Z;\mathbf{\theta}_{j})\). The former controls _how many of the files in a given dataset get perturbed_. 
For a dataset of \(N\) files, we apply \(f_{j}\) to randomly selected \(N_{\text{pert}}=\lfloor P(f_{j})\cdot N\rfloor\) files, the remainder \(N-N_{\text{pert}}\) being retained as in the original dataset. For the randomly selected files that undergo intervention, we sample another variable \(z_{ij}\sim P(Z;\mathbf{\theta}_{j})\) independent of \(x_{i}\) which controls the intervention applied to \(x_{i}\). ### Choosing Intervention Hyperparameters For reasons of space and avoidance of combinatorial explosion, in this initial study, we make a number of simplifications about the interventions. First, while the selected types of interventions are typical in the audio domain, we focus on one type at a time; we neither mix different interventions within an experiment nor consider combinations of different interventions applied sequentially. Second, \(P(Z;\mathbf{\theta}_{j})\) takes the form of either a continuous uniform or a Dirac distribution. In the former case, \(P(Z;\mathbf{\theta}_{j})\) is specified by its minimum and maximum values, whereas the latter implies a deterministic choice of \(z_{ij}\). An example of the former is white noise with a randomly-selected SNR, and an example of the latter is \(\mu\)-law encoding. For the intervention probability we consider the two extremes, \(P(f_{j})\in\{0,1\}\), i.e. we either perturb _all_ files (\(P(f_{j})=1\)) or _none_ (\(P(f_{j})=0\)). Importantly, however, the intervention probability varies depending on the specific subset of data, as shown in Table 1. Introducing these _systematic effects_ at the level of data subsets allows us to analyze a classifier's behavior under the biased data set-up. From here on, we refer to the different 4-bit strings shown in the rows of Table 1 as _intervention configurations_, denoted by \(\mathbf{c}=(c_{1},c_{2},c_{3},c_{4})\in\{0,1\}^{4}\).
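The sampling procedure described above (choose \(\lfloor P(f_j)\cdot N\rfloor\) files at random, then draw a uniform control variable \(z\) for each chosen file) can be sketched as follows; this is an illustrative implementation, not the authors' code:

```python
import random

def apply_interventions(files, f_j, p_apply, z_min, z_max, seed=0):
    """Apply intervention f_j to floor(p_apply * N) randomly chosen files.

    For each perturbed file, z ~ Uniform(z_min, z_max) is drawn independently
    (z_min == z_max mimics the Dirac case, e.g. mu-law encoding); the
    remaining files are returned unchanged.
    """
    rng = random.Random(seed)
    n_pert = int(p_apply * len(files))                 # N_pert = floor(P(f_j) * N)
    chosen = set(rng.sample(range(len(files)), n_pert))
    return [f_j(x, rng.uniform(z_min, z_max)) if i in chosen else x
            for i, x in enumerate(files)]
```

With \(P(f_j)\in\{0,1\}\), as used in the paper, this reduces to perturbing either all files in a subset or none of them.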
Note that while the configuration remains fixed for a given training-evaluation experiment, the audio interventions are random at the level of _individual audio files_ (except for \(\mu\)-law encoding). ### Mixed Effects Modeling of Biased CM Scores Suppose that a model \(g\) has been trained and scored on a dataset \(\mathscr{D}\) corresponding to a number of different configurations. Since we have full knowledge of the intervention parameters, our aim is to define an _explanatory model_ that links the detection scores to these parameters. Although observing changes in EER under different interventions provides a quick overall trend, it does not provide details of how the scores are impacted per class. Hence, direct regression modeling of the CM detection scores as a function of intervention parameters helps to 'go beyond the EER' and obtain explanations for the impact of interventions. To this end, our selected methodology consists of _linear mixed effects_ (LME) modeling [32] of CM scores. We fit the following model separately for each CM and intervention type: \[s_{i}=\underbrace{\mu+d\,y_{i}^{\text{ch}}+\beta^{\text{bona}}\Delta_{i}^{\text{bona}}+\beta^{\text{spf}}\Delta_{i}^{\text{spf}}}_{\text{fixed effect}}+\underbrace{\varepsilon_{i}}_{\text{random effect}} \tag{2}\] Here, \(\mu\) is the global mean of the scores, \(d\) is a class discrimination parameter, \(y_{i}^{\text{ch}}\) the class label, \((\beta^{\text{bona}},\beta^{\text{spf}})\) two regression coefficients, and \(\varepsilon_{i}\sim\mathcal{N}(0,\sigma_{\varepsilon}^{2})\) is a random effect that models variation across the trials. The model parameters, obtained by fitting (2) to CM score data, are \((\mu,d,\beta^{\text{bona}},\beta^{\text{spf}},\sigma_{\varepsilon}^{2})\). Most relevant to the analysis of biased scores are the two variables \(\Delta_{i}^{\text{bona}}\) and \(\Delta_{i}^{\text{spf}}\).
The first one, \(\Delta_{i}^{\text{bona}}\), is the absolute difference between the intervention probability of the test trial and the intervention probability of the bona fide training set. Likewise, \(\Delta_{i}^{\text{spf}}\) is the absolute difference between the intervention probability of the test trial and the intervention probability of the spoof training set. The value \(0\) indicates an equivalent treatment of test and training audio, while \(1\) indicates different treatments. A concrete example may be helpful in clarifying (2). For configuration **A** in Table 1, \(\Delta_{i}^{\text{bona}}=0\) and \(\Delta_{i}^{\text{spf}}=1\) for all bona fide trials; and \(\Delta_{i}^{\text{bona}}=1\) and \(\Delta_{i}^{\text{spf}}=0\) for all spoof trials. The two class-conditional models obtained from (2) are \[\begin{split} s_{i}&=\mu+\beta^{\text{bona}}+\varepsilon_{i}\hskip 30.0pt\text{(spoof trials, $y_{i}^{\text{ch}}=0$)}\\ s_{i}&=\mu+d+\beta^{\text{spf}}+\varepsilon_{i}\quad\text{(bona fide trials, $y_{i}^{\text{ch}}=1$)}\end{split} \tag{3}\] Since \(\varepsilon_{i}\) is normal, both of these conditional score distributions are normal as well, with shared variance \(\sigma_{\varepsilon}^{2}\). The difference between the bona fide and spoof class means (which relates to discrimination) is \(d+(\beta^{\text{spf}}-\beta^{\text{bona}})\). The expression in the parentheses vanishes when the two classes are treated the same way (original configuration **O**). Whenever the difference of \(\beta^{\text{spf}}\) and \(\beta^{\text{bona}}\) is positive, the separation of the two distributions improves, leading to a 'decreased' EER relative to **O**. Likewise, a negative difference yields an 'increase' in EER. The use of quotes is intentional, as the \(\beta\)-coefficients relate to the systematic biases that we introduced to the data. Similar model interpretations are easy to obtain for the remaining configurations; see the summary in Table 2.
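The per-trial covariates and the class-conditional means implied by the regression model can be computed mechanically from the indicator bits in Table 1. The sketch below (names are ours) encodes each configuration as (train spoof, train bona, test spoof, test bona):

```python
# Configurations from Table 1: (train spoof, train bona, test spoof, test bona).
CONFIGS = {"O": (0, 0, 0, 0), "A": (0, 1, 0, 1), "B": (1, 0, 1, 0),
           "C": (0, 1, 1, 0), "D": (1, 0, 0, 1)}

def deltas(config, is_bonafide):
    """(Delta_bona, Delta_spf) for a test trial of the given class:
    absolute differences between the trial's intervention probability
    and those of the bona fide / spoof training sets."""
    tr_spf, tr_bona, te_spf, te_bona = CONFIGS[config]
    p_test = te_bona if is_bonafide else te_spf
    return abs(p_test - tr_bona), abs(p_test - tr_spf)

def predicted_mean(config, is_bonafide, mu, d, b_bona, b_spf):
    """Class-conditional score mean implied by the fixed effects of Eq. (2)."""
    d_bona, d_spf = deltas(config, is_bonafide)
    return mu + d * is_bonafide + b_bona * d_bona + b_spf * d_spf
```

For configuration A this reproduces the worked example in the text: bona fide trials get \((\Delta^{\text{bona}},\Delta^{\text{spf}})=(0,1)\) and mean \(\mu+d+\beta^{\text{spf}}\), spoof trials get \((1,0)\) and mean \(\mu+\beta^{\text{bona}}\), matching Eq. (3).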
As one might have expected, the models for configurations **A** and **B** are the same; likewise, the models for the two label-flip configurations **C** and **D** are the same. Considering the difference of the class-conditional means, the only difference between the two sets of biased models is in the sign of \((\beta^{\text{spf}}-\beta^{\text{bona}})\) or \((\beta^{\text{bona}}-\beta^{\text{spf}})\). In our result analysis, we use \(\beta^{*}\) to refer to \(\beta^{\text{bona}}\) or \(\beta^{\text{spf}}\), as they only differ in sign. ## 3 Experimental setup ### Dataset We use the ASVspoof 2019 logical access (LA) dataset [5] for all the experiments. It consists of speech synthesis and voice conversion samples distributed across training, development, and evaluation subsets. The two former include six types of spoofing attacks, while the last contains thirteen attacks. ### Selected interventions We consider five different types of dataset interventions. They are motivated either by general considerations of the desirable robustness of countermeasures, or by reported findings on ASVspoof dataset biases. **MP3 compression** lowers the quality of the audio with an MP3 encoder. We process audio files with the LAME encoder, with bit rates randomly selected between 16 kbps and 256 kbps. The speech files are then decoded again. **Additive white noise** degrades the speech files of the corpus with a random signal-to-noise-ratio value chosen from \([0,30]\) dB. **Loudness normalization** applies a constant gain to match a specific loudness in _loudness units relative to full scale_ (LUFS), an implementation of ITU-R BS.1770-4 [33]. The minimum and maximum loudness values are -31 and -13 LUFS, and the target is randomly selected for each sample in our work. **Non-speech zeroing** sets a desired proportion of detected non-speech frames to a constant value (blank zeros). 
We use an energy-based approach [34, Section 5.1] with 25 ms, non-overlapping frames to obtain speech/non-speech labels. This intervention is motivated by reports [28, 29, 30] on systematic differences in non-speech segments across bona fide and spoof segments in the ASVspoof 2019 data. **\(\boldsymbol{\mu}\)-law encoding** first applies \(\mu\)-law compression with 255 quantization levels. It then performs \(\mu\)-law expansion to derive the speech signal affected by quantization error. Note that, unlike for the other interventions, we do not apply random intervention parameters at the sample level for \(\mu\)-law encoding. We save the perturbed files into 16-bit .flac, following the original file format of ASVspoof 2019. While the above perturbations are commonly used as _data augmentations_ to improve the generalization of DNNs, here they are used as interventions to purposefully create biased data. This allows us to gauge the extent of shortcut learning taking place in the selected models. While the interventions are applied in asymmetric ways, as described in Table 1, the numbers of training files, test files, and evaluation trials remain as in the original data. ### Countermeasures We consider three different CM models. The first one uses linear frequency cepstral coefficient (LFCC) features with a Gaussian mixture model (GMM) [35]. The second one uses LFCCs with a light convolutional neural network (LCNN) [36, 37]. These two systems are used as baselines for the ASVspoof challenge series2. Our third CM is AASIST [38], one of the state-of-the-art systems. It operates directly on the raw waveform input and utilizes graph and graph pooling modules. In this paper, we use a light variant of AASIST with 85K parameters for all experiments. 
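For concreteness, two of the interventions above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' exact implementation: only the parameter ranges (SNR in \([0,30]\) dB, 255 \(\mu\)-law quantization levels) follow the text, and the uniform quantization of the companded signal is an assumption.

```python
import numpy as np

def add_white_noise(x, snr_db, rng):
    """Degrade signal x with white noise at a target SNR (in dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

def mu_law_roundtrip(x, mu=255):
    """Compress with mu-law, quantize to 255 levels, then expand back."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)   # compress to [-1, 1]
    yq = np.round((y + 1) / 2 * (mu - 1)) / (mu - 1) * 2 - 1   # 255 uniform levels
    return np.sign(yq) * ((1 + mu) ** np.abs(yq) - 1) / mu     # expand

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)     # 1 s of a 440 Hz tone
snr_target = rng.uniform(0.0, 30.0)                            # random SNR per file
noisy = add_white_noise(clean, snr_target, rng)
decoded = mu_law_roundtrip(clean)                              # carries quantization error
```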
Footnote 2: [https://github.com/asvspoof-challenge/2021](https://github.com/asvspoof-challenge/2021) ### Mixed Effects Modeling of CM Scores For the analysis, all scores are normalized with the Z-score separately for each configuration and intervention. The scores are then modeled as defined in (2), where the regression coefficients are estimated using the _lme4 package_[32] for R. ## 4 Results ### Countermeasure performance Table 3 shows the comparative performance of the three countermeasure models for the original dataset as well as for the various interventions explained in Table 1. \begin{table} \begin{tabular}{l l l l} \hline Config. & \(y_{i}^{\text{cls}}\) & Model & _Difference_ \\ \hline **O** & 0 & \(s_{i}=\mu+\varepsilon_{i}\) & \(d\) \\ & 1 & \(s_{i}=\mu+d+\varepsilon_{i}\) & \\ \hline **A**, **B** & 0 & \(s_{i}=\mu+\beta^{\text{bona}}+\varepsilon_{i}\) & \(d+\beta^{\text{spf}}-\beta^{\text{bona}}\) \\ & 1 & \(s_{i}=\mu+d+\beta^{\text{spf}}+\varepsilon_{i}\) & \\ \hline **C**, **D** & 0 & \(s_{i}=\mu+\beta^{\text{spf}}+\varepsilon_{i}\) & \(d+\beta^{\text{bona}}-\beta^{\text{spf}}\) \\ & 1 & \(s_{i}=\mu+d+\beta^{\text{bona}}+\varepsilon_{i}\) & \\ \hline \end{tabular} \end{table} Table 2: Models per configuration and trial class \(y_{i}^{\text{cls}}\) (0: spoof, 1: bona fide), where Difference refers to the difference of conditional means: \(E[s_{i}|y_{i}=1]-E[s_{i}|y_{i}=0]\). For the original dataset (configuration **O**), the state-of-the-art DNN-based AASIST method shows the lowest EER, whereas the classical GMM classifier shows the worst performance. The configurations with the perturbation applied to the same class across training and test (i.e., **A** and **B**) substantially reduce the EER compared to the baseline. This is because the dataset bias acts as an additional cue for discrimination. In some cases, they even reach 0% EER, indicating perfect discrimination. 
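The EER values in Table 3 are computed from the two class-conditional score distributions. A minimal sketch of such a computation (a simple threshold sweep on synthetic scores, not necessarily the exact evaluation tool used for the reported numbers) is:

```python
import numpy as np

def compute_eer(bona, spoof):
    """Equal error rate: the operating point where the miss rate on bona fide
    trials equals the false alarm rate on spoof trials (higher score = more
    bona fide)."""
    thresholds = np.sort(np.concatenate([bona, spoof]))
    far = np.array([(spoof >= t).mean() for t in thresholds])  # spoof accepted
    frr = np.array([(bona < t).mean() for t in thresholds])    # bona fide rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
# Well-separated score distributions -> near-zero EER
separated = compute_eer(rng.normal(2, 0.5, 2000), rng.normal(-2, 0.5, 2000))
# Identical score distributions -> chance-level EER of about 0.5
chance = compute_eer(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000))
```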
On the other hand, completely opposite trends appear when the perturbation is reversed between the classes across training and test (i.e., **C** and **D**). In some cases, we observe more than 50% EER, indicating label flipping. In terms of intervention type, the models are most sensitive to MP3 compression and additive white noise, while loudness normalization has the least impact across the countermeasure models. An important observation is that, for the neural classifiers, adding the intervention to the spoof utterances in training has a greater impact than adding it to the bona fide utterances, as shown by the gap between **C** and **D**. We can reasonably conclude that bona fide speech is less susceptible to silence, which aligns with the findings of [28]. Additionally, the trend across different features and models is consistent, indicating a potential data bias. ### Mixed effects modeling of biased scores We employed a mixed-effects model, Eq. (2), to fit the standardized detection scores and determine the effect of configuration variation on bias across the trials. Each model corresponds to one intervention and its five configurations. Table 4 shows the parameters of each CM model for each intervention. The mean of the scores represents the spoofing trials as the reference data with no interventions. We found a substantial effect on \(\beta^{*}\) (referring to \(\beta^{\text{bona}}\) or \(\beta^{\text{spf}}\)) for all the models except for loudness normalization in AASIST-L and GMM. This does not mean that a significant effect cannot be found in the individual configurations. However, the models had high residual variances, describing random effects that could not be explained by our selected variables. This outcome was anticipated due to the experiments' detrimental impact on system performance. 
To analyze the effect of each configuration on the detection score, we use the full model parameters for each intervention to define models for each configuration, as presented in Table 2. As outlined in the same table, differences between trial types were defined by \(d\) and \(\beta^{*}\) to compare the configuration effects for spoof or bona fide training. Configurations **A** and **B** exhibited a larger difference between bona fide and spoof trials in the estimated scores compared to **O**, resulting in a lower EER. In contrast, configurations **C** and **D** had mismatched training and testing class interventions, leading to a higher EER. In terms of interventions, MP3 compression and additive white noise showed larger variation effects for \(\beta^{*}\), while loudness normalization produced smaller variation effects for the data. ## 5 Conclusions To uncover shortcut learning in binary classifiers, we propose a novel framework that introduces systematic bias through intentional interventions applied in an asymmetric way. Our goal is to probe the black-box model by explicitly providing shortcuts and observing how it reacts to the given conditions. We apply our proposed method to audio anti-spoofing as a case study. By fitting a mixed-effects model on countermeasure scores from diverse CM models, we demonstrate the effect of data bias on scores due to interventions. The results reveal that MP3 compression and additive white noise can act as shortcuts for audio anti-spoofing. Our findings indicate a direction for analyzing possible data biases in countermeasure evaluation. Solutions to mitigate those biases and a deeper analysis of those correlations remain future work. ## 6 Acknowledgements This work was partially supported by Academy of Finland (Decision No. 349605, project "SPEECHFAKES"). 
\begin{table} \begin{tabular}{l l c c c c} \hline \hline System & Intervention & \(\mu\) & d & \(\beta^{*}\) & \(\varepsilon_{i}\) \\ \hline \multirow{4}{*}{GMM} & MP3 & -0.029 & 0.287 & 0.513 & 0.781 \\ & Additive noise & -0.012 & 0.120 & 0.533 & 0.771 \\ & Loudness & -0.083 & 0.806 & 0.002 & 0.939 \\ & Non-speech & -0.025 & 0.243 & 0.341 & 0.901 \\ & \(\mu\)-law & -0.056 & 0.549 & 0.173 & 0.948 \\ \hline \multirow{4}{*}{LCNN} & MP3 & -0.041 & 0.402 & 0.588 & 0.707 \\ & Additive noise & -0.045 & 0.443 & 0.580 & 0.712 \\ & Loudness & -0.218 & 2.119 & 0.010 & 0.584 \\ & Non-speech & -0.143 & 1.385 & 0.237 & 0.777 \\ & \(\mu\)-law & -0.035 & 0.346 & 0.475 & 0.808 \\ \hline \multirow{4}{*}{AASIST-L} & MP3 & -0.064 & 0.623 & 0.379 & 0.849 \\ & Additive noise & -0.051 & 0.501 & 0.598 & 0.690 \\ \cline{1-1} & Loudness & -0.064 & 0.625 & 0.008 & 0.963 \\ \cline{1-1} & Non-speech & -0.221 & 2.141 & 0.089 & 0.569 \\ \cline{1-1} & \(\mu\)-law & -0.181 & 1.760 & 0.236 & 0.668 \\ \hline \hline \end{tabular} \end{table} Table 4: Model parameters for countermeasure scores with the tested configurations. \(\mu\) is the model intercept, \(d\) is the class discrimination, \(\beta^{*}\) refers to the biased training effect in the configurations, where \(\beta^{\text{spf}}=\beta^{*}\) and \(\beta^{\text{bona}}=-\beta^{*}\) and \(\varepsilon_{i}\) is the residual variance. 
\begin{table} \begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{Intervention} & \multirow{2}{*}{Config.} & \multicolumn{3}{c}{EER (in \%)} \\ \cline{3-5} & GMM & LCNN & AASIST \\ \hline - & **O** & 7.92 & 1.39 & 1.39 \\ \hline \multirow{4}{*}{MP3 compression} & **A** & 0.00 & 0.01 & 0.01 \\ & **B** & 0.00 & 0.01 & 0.01 \\ & **C** & 99.99 & 94.00 & 94.00 \\ & **D** & 97.85 & 99.93 & 99.93 \\ \hline \multirow{4}{*}{Additive white noise} & **A** & 0.00 & 0.00 & 0.00 \\ & **B** & 0.01 & 0.00 & 0.00 \\ & **C** & 99.98 & 99.98 & 99.98 \\ & **D** & 99.99 & 99.98 & 99.98 \\ \hline \multirow{4}{*}{Loudness normalization} & **A** & 7.61 & 1.60 & 1.60 \\ & **B** & 7.83 & 0.66 & 0.66 \\ & **C** & 8.44 & 20.66 & 20.66 \\ & **D** & 9.00 & 23.86 & 23.86 \\ \hline \multirow{4}{*}{Non-speech zeroing} & **A** & 2.40 & 1.06 & 1.06 \\ & **B** & 0.57 & 0.38 & 0.38 \\ \cline{1-1} & **C** & 81.67 & 3.52 & 3.52 \\ \cline{1-1} & **D** & 90.53 & 35.67 & 35.67 \\ \hline \multirow{4}{*}{\(\mu\)-law} & **A** & 0.41 & 0.80 & 0.80 \\ \cline{1-1} & **B** & 0.38 & 0.25 & 0.25 \\ \cline{1-1} & **C** & 78.79 & 48.54 & 48.54 \\ \cline{1-1} & **D** & 82.02 & 41.98 & 41.98 \\ \hline \hline \end{tabular} \end{table} Table 3: The results applying diverse interventions. Three different countermeasures are trained and tested based on the definition of biased configurations in Table 1.
2305.00514
Discriminative Co-Saliency and Background Mining Transformer for Co-Salient Object Detection
Most previous co-salient object detection works mainly focus on extracting co-salient cues via mining the consistency relations across images while ignoring explicit exploration of background regions. In this paper, we propose a Discriminative co-saliency and background Mining Transformer framework (DMT) based on several economical multi-grained correlation modules to explicitly mine both co-saliency and background information and effectively model their discrimination. Specifically, we first propose a region-to-region correlation module for introducing inter-image relations to pixel-wise segmentation features while maintaining computational efficiency. Then, we use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules. We also design a token-guided feature refinement module to enhance the discriminability of the segmentation features under the guidance of the learned tokens. We perform iterative mutual promotion for the segmentation feature extraction and token construction. Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method. The source code is available at: https://github.com/dragonlee258079/DMT.
Long Li, Junwei Han, Ni Zhang, Nian Liu, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer, Fahad Shahbaz Khan
2023-04-30T15:56:47Z
http://arxiv.org/abs/2305.00514v2
# Discriminative Co-Saliency and Background Mining Transformer for Co-Salient Object Detection ###### Abstract Most previous co-salient object detection works mainly focus on extracting co-salient cues via mining the consistency relations across images while ignoring **explicit** exploration of background regions. In this paper, we propose a Discriminative co-saliency and background Mining Transformer framework (DMT) based on several economical multi-grained correlation modules to **explicitly** mine both co-saliency and background information and effectively model their discrimination. Specifically, we first propose a region-to-region correlation module for introducing inter-image relations to pixel-wise segmentation features while maintaining computational efficiency. Then, we use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules. We also design a token-guided feature refinement module to enhance the discriminability of the segmentation features under the guidance of the learned tokens. We perform iterative mutual promotion for the segmentation feature extraction and token construction. Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method. The source code is available at: [https://github.com/dragonlee258079/DMT](https://github.com/dragonlee258079/DMT). 
## 1 Introduction Unlike standard Salient Object Detection (SOD), which detects salient objects in a single image, Co-Salient Object Detection (CoSOD) aims to segment the salient objects that commonly appear across a group of relevant images. Most previous CoSOD methods focus on mining the consistency relations across images to extract co-salient cues, while ignoring explicit exploration of background (BG) regions, which limits their ability to discriminate co-salient objects from distracting backgrounds. To address this, we build on the paradigm of a semantic segmentation transformer architecture, MaskFormer [5], which enables explicit co-saliency and BG modeling and the construction of multi-grained correlations. Using this architecture, we decompose the CoSOD modeling into two sub-paths, generating pixel-wise segmentation feature maps and extracting category information with pre-defined co-saliency and BG detection tokens. In the first sub-path, to efficiently and thoroughly mine the common cues within the image group, we propose a Region-to-Region correlation (R2R) module to model the inter-image relation and plug it into each decoder layer. In the second sub-path, we transform the pixel-wise features into a co-saliency token and a BG token for each image, abstracting pixel-wise cues into high-level tokens. 
As such, we achieve sophisticated relation modeling among the tokens and features while largely reducing the computational costs. Concretely, we propose an intra-image Contrast-induced Pixel-to-Token correlation (CtP2T) module to extract the two tokens by considering the contrast relation between co-saliency and BG. Since the co-saliency tokens from CtP2T are separately learned on each image, we further design a Co-saliency Token-to-Token (CoT2T) correlation module to model their common relation. After obtaining the tokens and pixel-wise features, the MaskFormer [5] architecture computes a dot product between them to obtain the final segmentation results. However, such a scheme only achieves unidirectional information propagation, conveying information from the feature maps to the tokens. We argue that the two learned tokens can also be used to improve the discriminability of the pixel-wise features, thus proposing our Token-Guided Feature Refinement (TGFR) module as a reverse information propagation path. Concretely, we first use the tokens as guidance to distill co-saliency and BG features from the pixel-wise feature maps, and then enhance the discriminability of the segmentation features between the two detection regions. In this way, the refined features become sensitive to both co-saliency and BG, reducing the effect of ambiguous distractors. Finally, as shown in Figure 1, our DMT iteratively deploys CtP2T and CoT2T to leverage the segmentation features for updating the tokens, and then adopts TGFR to refine the corresponding decoder feature with the updated tokens. As a result, the learning processes can be effectively promoted, thus obtaining more accurate CoSOD results. In summary, our major contributions are as follows: * We model CoSOD from the perspective of explicitly exploring both co-saliency and BG information and effectively modeling their discrimination. 
* We introduce several computationally economical multi-grained correlation modules, R2R, CtP2T, and CoT2T, for inter-image and intra-image relation modeling. * We propose a novel TGFR module that uses the learned tokens as guidance to refine the segmentation features, enhancing their discriminability between co-saliency and BG regions. * Experimental results demonstrate that our DMT model outperforms previous state-of-the-art results on three benchmark datasets. ## 2 Related Work ### Co-Salient Object Detection Recent CoSOD works [12, 16, 41, 34, 42, 27] have achieved promising performance and can be summarized under a unified paradigm: first aggregating all image features in the group to form a consensus representation and then distributing it back to each image feature. We refer to these two processes as _aggregation_ and _distribution_ for expression convenience. For example, [42] summed up all features for _aggregation_ and leveraged a gradient feedback mechanism for _distribution_. [16] formed the consensus cues with a group of enhanced intra-saliency vectors and conducted the _distribution_ via a dense correlation module. [12] generated a consensus attention map with an affinity module and multiplied it back to the individual image features. [41] encoded the consensus information with dynamic kernels and convolved the image features using these kernels as the _distribution_ process. [34] first obtained consensus seeds by processing pixel-to-pixel (P2P) affinity maps and then propagated the seeds using normalized convolution operations. However, most of them are limited in exploring BG regions, which hinders discriminative learning. Unlike them, we propose to simultaneously detect the co-saliency and BG regions and sufficiently explore their discriminative modeling. Besides, we utilize tokens under a transformer architecture for _aggregation_, and then use the learned tokens to conduct the _distribution_ process. 
### Transformer After Vaswani _et al_. [29] first proposed the transformer architecture for machine translation, many successful transformer applications emerged in the computer vision field. Some works [35, 28, 8] directly apply the transformer architecture for feature learning. Other works mainly focus on using transformers to extract specific semantic concepts, such as the category or instance information for object detection [32, 44, 3] and semantic segmentation [31, 5], and the saliency and contour information for salient object detection [23]. Concretely, they first use a backbone to extract image feature maps and then adopt transformers to collect semantic concept information and store it in pre-created tokens. This paper follows the second type of application and utilizes the transformer for simultaneous foreground (FG) and BG modeling. We further modify the transformer framework, tailoring it for the CoSOD task by introducing economical multi-grained correlations to model sophisticated relations. We also propose to leverage the semantic information encoded in the learned tokens as a guide to refine the features, thus improving their discriminability. ## 3 Proposed Method ### Overview Figure 1 illustrates our MaskFormer-style framework for simultaneously detecting co-salient objects and BG regions. It consists of two sub-paths, pixel-wise segmentation feature generation and detection token construction. We use R2R in the first sub-path to enhance the segmentation features with inter-image consistency. In the second sub-path, CtP2T and CoT2T are designed to effectively construct the co-saliency and BG tokens from segmentation features, capturing the binary detection patterns. Finally, we propose TGFR to use the detection tokens as guidance for refining the segmentation features. For ease of understanding, we first briefly describe the vanilla MaskFormer-style framework for simultaneously detecting co-saliency and BG in CoSOD. 
Then, we progressively introduce the improvements in our proposed DMT, including R2R, CtP2T, CoT2T, and TGFR. ### Vanilla MaskFormer-style Framework #### 3.2.1 Segmentation Feature Generation Given a set of \(N\) relevant images \(\{\mathbf{I}_{i}\}_{i=1}^{N}\), we follow the original MaskFormer framework [5] to adopt an FPN [19] for generating pixel-wise segmentation features. Specifically, we use VGG-16 [26] as the encoder and take \(\{\mathbf{I}_{i}\}_{i=1}^{N}\) as the input to obtain the highest-level features \(\mathbf{F}^{e}\in\mathbb{R}^{N\times H_{0}\times W_{0}\times C}\) from the last block. Then, based on \(\mathbf{F}^{e}\), the FPN decoder uses five decoder layers to progressively enlarge the feature resolution and obtain five decoder features \(\mathbf{F}^{d}_{j}\in\mathbb{R}^{N\times H_{j}\times W_{j}\times C},j\in\{1\cdots 5\}\). #### 3.2.2 Detection Token Construction Given the highest-level semantic feature \(\mathbf{F}^{e}_{i}\in\mathbb{R}^{H_{0}\times W_{0}\times C}\) of image \(\mathbf{I}_{i}\), we extract the detection tokens from it via a vanilla pixel-to-token correlation (P2T) module. First, we define two randomly initialized tokens for \(\mathbf{I}_{i}\), a co-saliency token \(\mathbf{T}^{c}_{i,0}\in\mathbb{R}^{1\times C}\) and a BG token \(\mathbf{T}^{b}_{i,0}\in\mathbb{R}^{1\times C}\), and denote their union as \(\mathbf{T}_{i,0}\in\mathbb{R}^{2\times C}\). We also flatten \(\mathbf{F}^{e}_{i}\) along the spatial dimension as \(\hat{\mathbf{F}}^{e}_{i}\in\mathbb{R}^{H_{0}W_{0}\times C}\). Then, we iteratively update the tokens five times. 
At each iteration \(j\in\{1,...,5\}\), we obtain \(\mathbf{T}_{i,j}\) by transforming the information from the feature \(\hat{\mathbf{F}}^{e}_{i}\) to tokens in (1) and modeling the relationship between the co-saliency and BG tokens in (2), formulated as \[\hat{\mathbf{T}}_{i,j}=\mathrm{Trans}(\mathbf{T}_{i,j-1},\hat{\mathbf{F}}^{e}_{i}), \tag{1}\] \[\mathbf{T}_{i,j}=\mathrm{Trans}(\hat{\mathbf{T}}_{i,j},\hat{\mathbf{T}}_{i,j}), \tag{2}\] where \(\mathrm{Trans}\) is a basic transformer operation following [29]: \[\mathrm{Trans}(\mathbf{X},\mathbf{Y})=\mathrm{rMLP}(\mathrm{rMHA}(\mathbf{X},\mathbf{Y})). \tag{3}\] It can transfer the information from \(\mathbf{Y}\in\mathbb{R}^{N_{y}\times C}\) to \(\mathbf{X}\in\mathbb{R}^{N_{x}\times C}\) under the guidance of their relation. \(\mathrm{rMHA}\) and \(\mathrm{rMLP}\) denote the residual multi-head attention [29] and residual multi-layer perceptron, respectively, formulated as \[\mathrm{rMLP}(\mathbf{X})=\mathbf{X}+\mathrm{MLP}(\mathrm{LN}(\mathbf{X})), \tag{4}\] \[\mathrm{rMHA}(\mathbf{X},\mathbf{Y})=\mathbf{X}+\mathrm{MHA}(\mathrm{LN}(\mathbf{X}),\mathrm{LN}(\mathbf{Y})), \tag{5}\] where \(\mathrm{LN}\) denotes the layer normalization [2] and \(\mathrm{MLP}\) is the multi-layer perceptron consisting of two fully connected layers with a GELU [15] activation function. Figure 1: **Overall flowchart of our proposed DMT CoSOD model.** Specifically, the framework consists of four components: R2R for segmentation feature generation, CtP2T and CoT2T for detection token construction, and TGFR for the segmentation feature refinement under the guidance of the tokens. 
\(\mathrm{MHA}\) is the multi-head attention that can be formulated as \[\mathrm{MHA}(\mathbf{X},\mathbf{Y}) =\mathrm{Cat}([\mathrm{Att}_{m}(\mathbf{X},\mathbf{Y})V_{m}(\mathbf{Y})]_{m=1}^{M}), \tag{6}\] \[\mathrm{Att}_{m}(\mathbf{X},\mathbf{Y}) =\mathrm{Softmax}\left(\frac{Q_{m}(\mathbf{X})K_{m}(\mathbf{Y})^{\top}}{ \sqrt{C/M}}\right), \tag{7}\] where \(M\) is the number of used attention heads. The result of each head (with the shape of \(N_{x}\!\times\!C/M\)) is obtained via the matrix multiplication between \(\mathrm{Att}_{m}(\mathbf{X},\mathbf{Y})\in\mathbb{R}^{N_{x}\times N_{y}}\) and \(V_{m}(\mathbf{Y})\in\mathbb{R}^{N_{y}\times C/M}\). \(\mathrm{Att}_{m}(\mathbf{X},\mathbf{Y})\) is the attention matrix calculated in (7). Here \(Q_{m}(\cdot)\), \(K_{m}(\cdot)\), and \(V_{m}(\cdot)\) are the query, key, and value embedding functions in the \(m\)th head, respectively, and project corresponding tensors from \(C\) channels to \(C/M\) channels. Finally, \(\mathrm{MHA}(\mathbf{X},\mathbf{Y})\in\mathbb{R}^{N_{x}\times C}\) can be obtained by concatenating (\(\mathrm{Cat}\)) the results of \(M\) heads along the channel dimension. #### 3.2.3 Prediction After performing the token construction five times on each image, we collect the final tokens of all images, _i.e._, \(\mathbf{T}_{5}^{c},\mathbf{T}_{5}^{b}\in\mathbb{R}^{N\times 1\times C}\). Then, we use the output of the first sub-path, _i.e._, the segmentation feature \(\mathbf{F}_{5}^{d}\), to generate the final predictions via the sigmoid matrix multiplication, formulated as \[\mathbf{P}^{c} =\mathcal{P}(\mathbf{T}_{5}^{c},\mathbf{F}_{5}^{d})=\mathrm{Sigmoid}(\mathbf{T}_{5}^{c}(\mathbf{F}_{5}^{d})^{\top}), \tag{8}\] \[\mathbf{P}^{b} =\mathcal{P}(\mathbf{T}_{5}^{b},\mathbf{F}_{5}^{d})=\mathrm{Sigmoid}(\mathbf{T}_{5}^{b}(\mathbf{F}_{5}^{d})^{\top}), \tag{9}\] where \(\mathbf{P}^{c},\mathbf{P}^{b}\in\mathbb{R}^{N\times 1\times H\times W}\) are the segmentation results of co-salient objects and BG regions, respectively. 
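The \(\mathrm{Trans}\) operation in (3)–(7) can be sketched in NumPy as follows. This is a minimal illustration with randomly initialized (untrained) projection weights standing in for the learned ones and biases omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
C, M = 64, 4  # channel dimension and number of heads

def layer_norm(X, eps=1e-5):
    return (X - X.mean(-1, keepdims=True)) / np.sqrt(X.var(-1, keepdims=True) + eps)

def softmax(X):
    e = np.exp(X - X.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def gelu(X):  # tanh approximation of GELU
    return 0.5 * X * (1 + np.tanh(np.sqrt(2 / np.pi) * (X + 0.044715 * X ** 3)))

# per-head query/key/value projections (C -> C/M), as in eq. (7)
Wq = rng.normal(0, 0.02, (M, C, C // M))
Wk = rng.normal(0, 0.02, (M, C, C // M))
Wv = rng.normal(0, 0.02, (M, C, C // M))
W1 = rng.normal(0, 0.02, (C, 4 * C))   # MLP: two FC layers with GELU between
W2 = rng.normal(0, 0.02, (4 * C, C))

def mha(X, Y):
    heads = []
    for m in range(M):
        att = softmax((X @ Wq[m]) @ (Y @ Wk[m]).T / np.sqrt(C / M))  # eq. (7)
        heads.append(att @ (Y @ Wv[m]))
    return np.concatenate(heads, axis=-1)  # eq. (6): (Nx, C)

def trans(X, Y):
    X = X + mha(layer_norm(X), layer_norm(Y))   # rMHA, eq. (5)
    return X + gelu(layer_norm(X) @ W1) @ W2    # rMLP, eq. (4)
```

For example, with two tokens \(\mathbf{T}\in\mathbb{R}^{2\times C}\) and a flattened feature map \(\hat{\mathbf{F}}\in\mathbb{R}^{H_{0}W_{0}\times C}\), `trans(T, F)` returns an updated \(2\times C\) token pair, matching the shape behavior of (1).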
### Our Improvements for DMT #### 3.3.1 Region-to-Region Correlation In the first sub-path, the original FPN individually processes each image and lacks the inter-image correlation modeling, which is crucial for CoSOD. However, straightforward P2P correlation is computationally prohibitive for large feature maps and multiple images. To this end, we consider modeling correlations among images in an economical way, thus proposing our R2R module, which uses region-level features instead of pixel-level features to compute correlations. Concretely, when given the features \(\mathbf{F}_{j}^{d}\in\mathbb{R}^{N\times H_{j}\times W_{j}\times C}\) of \(N\) relevant images from the \(j\)th decoder layer, we first adopt a transformation \(\mathrm{R}_{1}\) to divide the \(H_{j}\!\times\!W_{j}\) feature maps into \(K\!\times\!K\) local regions and use max-pooling to pick up the most representative feature for representing each local region. As a result, we can obtain the region-level query with shape \(\mathbb{R}^{N\times K\times K\times C}\). Then, we generate multi-scale region-level key and value via another transformation \(\mathrm{R}_{2}\), which consists of three adaptive max-pooling operations with the output spatial sizes of \(1\times 1\), \(3\times 3\), and \(6\times 6\), respectively. The three pooled features are finally flattened and concatenated to generate the key and value with shape \(\mathbb{R}^{N\times 46\times C}\), encoding multi-scale robust region information. Next, we perform the R2R inter-image correlation among the region-level query, key, and value via the transformer operation (3), thus obtaining the enhanced features with the region-wise correlation. Finally, we upsample the enhanced features to the original resolution \(H_{j}\!\times\!W_{j}\) via the nearest interpolation, denoted as \(\mathrm{R}_{1}^{-1}\). A residual connection is also used to add the original features. 
Thus, the region correlation results are diffused to the corresponding internal pixels in each local region. The whole process of R2R on \(\mathbf{F}_{j}^{d}\) is formulated as \[\mathbf{F}_{j}^{dr}=\mathbf{F}_{j}^{d}+\mathrm{R}_{1}^{-1}(\mathrm{Trans}(\mathrm{R}_{1 }(\mathbf{F}_{j}^{d}),\mathrm{R}_{2}(\mathbf{F}_{j}^{d}))). \tag{10}\] #### 3.3.2 Contrast-induced Pixel-to-Token Correlation In the second sub-path, the original P2T module uses a transformer operation in (2) to mine relations between the two types of tokens in a data-driven way, while ignoring explicit CoSOD cues, especially the crucial contrast modeling between co-saliency and BG regions. To enhance the discriminability between the tokens, we explicitly model the contrast relation with our proposed CtP2T module, which modifies the transformer layer in (1) while keeping the remaining parts the same as in P2T. Overall, we modify the multi-head attention (denoted as \(\mathrm{MHA}^{*}\)) and propose a contrast-induced channel attention (CCA) mechanism. The basic idea is to suppress the channels that are not contrastive enough in the generated co-saliency and BG tokens. The contrast is modeled as the opposite of the channel similarity between the two types of tokens, which can be calculated via channel correlation. For brevity's sake, we slightly abuse the notation and use \(\hat{\mathbf{T}},\mathbf{T}\in\mathbb{R}^{2\times C}\), and \(\mathbf{F}\in\mathbb{R}^{H_{0}W_{0}\times C}\) as shorthands for \(\hat{\mathbf{T}}_{i,j}\), \(\mathbf{T}_{i,j-1}\), and \(\hat{\mathbf{F}}_{i}^{e}\) in (1), respectively. Then, (1) can be modified for our CtP2T as below: \[\hat{\mathbf{T}} =\mathrm{Trans}^{*}(\mathbf{T},\mathbf{F}) \tag{11}\] \[=\mathrm{rMLP}(\mathbf{T}+\mathrm{CCA}(\mathrm{MHA}^{*}(\mathbf{T},\mathbf{F} ))).\] Next, we introduce \(\mathrm{MHA}^{*}\) and CCA as shown in Figure 2. The \(\mathrm{LN}\) operations are omitted for expression convenience. 
Modified Multi-Head Attention.To generate co-saliency and BG tokens that can be used for calculating their channel similarity, we make our \(\mathrm{MHA}^{*}\) able to generate tokens with multiple heads. Concretely, we first replace the original \(V_{m}\) in (6) with \(V_{m}^{*}\) that embeds \(\mathbf{F}\) to the identical channel number \(C\). Thus, the shape of each head's result becomes \(2\!\times\!C\) instead of \(2\!\times\!C/M\). Next, we stack the results of \(M\) heads to produce the output of \(\mathrm{MHA}^{*}\). The whole process can be formulated as \[\mathbf{T}_{M} =\mathrm{MHA}^{*}(\mathbf{T},\mathbf{F}) \tag{12}\] \[=\mathrm{Stack}([\mathrm{Att}_{m}(\mathbf{T},\mathbf{F})V_{m}^{*}(\mathbf{F})] _{m=1}^{M}).\] \(\mathbf{T}_{M}\in\mathbb{R}^{2\times M\times C}\) is composed of the co-saliency token and the BG token \(\mathbf{T}_{M}^{c},\mathbf{T}_{M}^{b}\in\mathbb{R}^{M\times C}\) with \(M\) heads. Next, we can compute the channel similarity based on them. Contrast-induced Channel Attention.Given the multi-head tokens \(\mathbf{T}_{M}^{c}\) and \(\mathbf{T}_{M}^{b}\), we generate channel attention \(\mathbf{W}\in\mathbb{R}^{2\times 1\times C}\) to suppress the token channels with strong _mutual_ similarities since they cannot clearly distinguish between co-saliency and BG. First, we compute a \(C\times C\)_channel similarity matrix_ between \(\mathbf{T}_{M}^{c}\) and \(\mathbf{T}_{M}^{b}\) via matrix multiplication. Then, the channel similarity of each token to the other token can be computed as the average along the channel dimension of the other token. The whole process can be denoted as \[\mathbf{S}^{c} =\mathrm{Avg}(\mathbf{T}_{M}^{c\top}\mathbf{T}_{M}^{b}), \tag{13}\] \[\mathbf{S}^{b} =\mathrm{Avg}(\mathbf{T}_{M}^{b\top}\mathbf{T}_{M}^{c}), \tag{14}\] where \(\mathbf{S}^{c},\mathbf{S}^{b}\in\mathbb{R}^{C\times 1}\), representing how similar each channel is to the channels of the other token. 
\(\mathrm{Avg}\) means calculating the average along the second dimension. Next, we multiply \(\mathbf{S}^{c}\) and \(\mathbf{S}^{b}\) by \(-1\) to turn the similarity measurements into the _contrast_ scores and then compute the channel attention \(\mathbf{W}\in\mathbb{R}^{2\times 1\times C}\) via \[\mathbf{W}=\mathrm{Sigmoid}\left(\alpha\begin{bmatrix}-\mathbf{S}^{c\top}\\ -\mathbf{S}^{b\top}\end{bmatrix}+\beta\right). \tag{15}\] Here we use a learnable linear projection with parameters \(\alpha,\beta\) on each channel of the stacked contrast scores to fit them for the sigmoid activation. Once \(\mathbf{W}\) is obtained, we apply element-wise multiplication between \(\mathbf{T}_{M}\) and \(\mathbf{W}\) to modulate the token channels based on their contrast, and then eliminate the multi-head dimension of the tokens by averaging along the head dimension, obtaining the modulated tokens: \[\mathrm{CCA}(\mathbf{T}_{M})=\mathrm{Avg}(\mathbf{W}\odot\mathbf{T}_{M})\in\mathbb{R}^{2 \times C}, \tag{16}\] where \(\odot\) means element-wise multiplication with broadcasting.

#### 3.3.3 Co-saliency Token-to-Token Correlation

CtP2T effectively explores the correlation between the two types of tokens within each image, but does not explicitly model the inter-image relation that captures token-wise group consistency, and is thus limited for consensus mining. Therefore, we use the co-saliency tokens from all images to model the consensus patterns via our CoT2T module. Specifically, we first define a group token \(\mathbf{G}\in\mathbb{R}^{1\times C}\) to represent the group-wise consensus information, which is randomly initialized at the first iteration step.
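For reference, the channel-similarity and gating computation of CCA ((13)–(16)) can be sketched as follows. This is a hedged sketch: in the paper \(\alpha,\beta\) are learnable per-channel parameters, while here they are passed as given numbers, and the function name is ours.

```python
import torch

def cca_sketch(t_m, alpha=1.0, beta=0.0):
    """Contrast-induced channel attention on multi-head tokens.

    t_m: (2, M, C) stacked co-saliency / BG tokens from MHA*.
    """
    t_c, t_b = t_m[0], t_m[1]                       # (M, C) each
    sim = t_c.t() @ t_b                             # (C, C) channel similarity matrix
    s_c = sim.mean(dim=1, keepdim=True)             # (C, 1), eq. (13)
    s_b = sim.t().mean(dim=1, keepdim=True)         # (C, 1), eq. (14)
    contrast = torch.stack([-s_c.t(), -s_b.t()])    # (2, 1, C): similarity -> contrast
    w = torch.sigmoid(alpha * contrast + beta)      # eq. (15), channel attention W
    return (w * t_m).mean(dim=1)                    # eq. (16): average out the heads
```

Channels whose co-saliency and BG responses are highly similar receive a low gate value, so the tokens keep mainly the channels that separate the two classes.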
At the \(j\)th iteration, given the last group token \(\mathbf{G}_{j-1}\) and the co-saliency tokens \(\tilde{\mathbf{T}}_{j}^{c}\in\mathbb{R}^{N\times C}\) from the CtP2T module, we aggregate the consensus information from all co-saliency tokens by using \(\tilde{\mathbf{T}}_{j}^{c}\) to update \(\mathbf{G}_{j-1}\), denoted as \[\mathbf{G}_{j}=\mathrm{Trans}(\mathbf{G}_{j-1},\tilde{\mathbf{T}}_{j}^{c}). \tag{17}\] Finally, we distribute the aggregated consensus cues back to \(\tilde{\mathbf{T}}_{j}^{c}\) and obtain the final co-saliency tokens \(\mathbf{T}_{j}^{c}\): \[\mathbf{T}_{j}^{c}=\mathrm{Trans}(\tilde{\mathbf{T}}_{j}^{c},\mathbf{G}_{j}). \tag{18}\]

#### 3.3.4 Token-guided Feature Refinement

The vanilla MaskFormer only transforms the information from the segmentation features to the tokens, hindering their complementary learning. To this end, we propose our TGFR module to improve the discriminability of the segmentation features via the detection cues of the tokens. As shown in Figure 3, TGFR consists of two processes, _i.e_. distillation and refusion. The distillation process distills the co-saliency and BG features from the segmentation feature under the guidance of the corresponding tokens. The refusion process fuses the distilled features back into the segmentation feature to enhance its discriminability.

Figure 2: **Diagram of \(\mathrm{MHA}^{\star}\) and \(\mathrm{CCA}\).** We first generate multi-head tokens \(\mathbf{T}_{M}^{c}\) and \(\mathbf{T}_{M}^{b}\) via \(\mathrm{MHA}^{\star}\). Then, we utilize matrix multiplication of the two tokens to generate the attention weights \(\mathbf{W}\) for modulating the token channels in \(\mathrm{CCA}\).

Figure 3: **Diagram of our proposed TGFR module.** Specifically, we first distill the co-saliency and BG features under the guidance of the two tokens. Then, we fuse them back to the original segmentation feature for discriminability enhancement.

Distillation.For image \(\mathbf{I}_{i}\) at the \(j\)th iteration, we have the final co-saliency token \(\mathbf{T}_{i,j}^{c}\in\mathbb{R}^{1\times C}\) generated from CoT2T, the final BG token \(\mathbf{T}_{i,j}^{b}\in\mathbb{R}^{1\times C}\) outputted by CtP2T, and the segmentation feature \(\mathbf{F}_{i,j}^{dr}\in\mathbb{R}^{H_{j}\times W_{j}\times C}\) enhanced by R2R. We first compute two attention maps \(\mathbf{A}_{i,j}^{c}\in\mathbb{R}^{H_{j}\times W_{j}\times 1}\) and \(\mathbf{A}_{i,j}^{b}\in\mathbb{R}^{H_{j}\times W_{j}\times 1}\) via performing the matrix multiplication between the segmentation feature and the tokens and then adopting a softmax normalization on the spatial dimension, formulated as \[\mathbf{A}_{i,j}^{c}=\mathrm{Softmax}(\mathbf{F}_{i,j}^{dr}(\mathbf{T}_{i,j}^{c})^{\top}/ \sqrt{C}), \tag{19}\] \[\mathbf{A}_{i,j}^{b}=\mathrm{Softmax}(\mathbf{F}_{i,j}^{dr}(\mathbf{T}_{i,j}^{b})^{\top}/ \sqrt{C}). \tag{20}\] Next, we adopt the computed attention maps to distill the detection features from the segmentation feature via matrix multiplication, denoted as \[\mathbf{D}_{i,j}^{c}=(\mathbf{A}_{i,j}^{c})^{\top}\mathbf{F}_{i,j}^{dr}, \tag{21}\] \[\mathbf{D}_{i,j}^{b}=(\mathbf{A}_{i,j}^{b})^{\top}\mathbf{F}_{i,j}^{dr}, \tag{22}\] where \(\mathbf{D}_{i,j}^{c},\mathbf{D}_{i,j}^{b}\in\mathbb{R}^{1\times C}\) are the distilled features for co-saliency and BG, respectively.

Refusion.After producing \(\mathbf{D}_{i,j}^{c}\) and \(\mathbf{D}_{i,j}^{b}\), we conduct the refusion process to fuse them back into \(\mathbf{F}_{i,j}^{dr}\) sequentially in a cascaded way, activating the co-saliency and BG regions in \(\mathbf{F}_{i,j}^{dr}\). In this way, we can effectively reduce ambiguous information and enhance feature discriminability.
The details can be formulated as \[\hat{\mathbf{F}}_{i,j}^{dt}=\mathrm{Conv}_{c}\,\big{(}\,\mathrm{Cat}([\mathbf{F}_{i,j}^ {dr},\,\mathrm{E}(\mathbf{D}_{i,j}^{c})])\big{)}, \tag{23}\] \[\mathbf{F}_{i,j}^{dt}=\mathrm{Conv}_{b}\,\big{(}\,\mathrm{Cat}([\hat{\mathbf{F}}_{i,j}^ {dt},\,\mathrm{E}(\mathbf{D}_{i,j}^{b})])\big{)}, \tag{24}\] where \(\mathrm{E}(*)\) replicates \(\mathbf{D}_{i,j}^{c}\) and \(\mathbf{D}_{i,j}^{b}\) along the spatial dimension to the same size as \(\mathbf{F}_{i,j}^{dr}\). Then, we progressively concatenate them with \(\mathbf{F}_{i,j}^{dr}\) and use a convolution layer to reduce the channel number to \(C\).

#### 3.3.5 Prediction and Loss Function

In the \(j\)th iteration, after obtaining the learned co-saliency and BG tokens, _i.e_. \(\mathbf{T}_{j}^{c},\mathbf{T}_{j}^{b}\in\mathbb{R}^{N\times 1\times C}\), from CoT2T and CtP2T, respectively, and the improved segmentation features \(\mathbf{F}_{j}^{dt}\in\mathbb{R}^{N\times H_{j}\times W_{j}\times C}\) from TGFR, we use the prediction function \(\mathcal{P}\) in (8) to generate the co-saliency and BG predictions, _i.e_. \(\mathbf{P}_{j}^{c}\) and \(\mathbf{P}_{j}^{b}\), as follows: \[\mathbf{P}_{j}^{c}=\mathcal{P}(\mathbf{T}_{j}^{c},\mathbf{F}_{j}^{dt}), \tag{25}\] \[\mathbf{P}_{j}^{b}=\mathcal{P}(\mathbf{T}_{j}^{b},\mathbf{F}_{j}^{dt}). \tag{26}\] We also supervise the learning of the group token \(\mathbf{G}_{j}\in\mathbb{R}^{1\times 1\times C}\) in CoT2T and the middle feature \(\hat{\mathbf{F}}_{j}^{dt}\in\mathbb{R}^{N\times H_{j}\times W_{j}\times C}\) in TGFR. Two predictions can be obtained from them, respectively: \[\mathbf{P}_{j}^{g}=\mathcal{P}(\mathrm{Repeat}(\mathbf{G}_{j}),\mathbf{F}_{j}^{dt}), \tag{27}\] \[\mathbf{P}_{j}^{dt}=\mathcal{P}(\mathbf{T}_{j}^{c},\hat{\mathbf{F}}_{j}^{dt}),\] where \(\mathrm{Repeat}\) repeats \(\mathbf{G}_{j}\)\(N\) times.
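Looking back at TGFR, the distillation steps (19)–(22) and the cascaded refusion steps (23)–(24) can be sketched in PyTorch for a single image. This is a hedged sketch: the class name and the use of 1×1 convolutions for \(\mathrm{Conv}_{c}\) and \(\mathrm{Conv}_{b}\) are our assumptions.

```python
import torch
import torch.nn as nn

class TGFRSketch(nn.Module):
    """Hedged sketch of token-guided feature refinement for one image."""
    def __init__(self, C):
        super().__init__()
        self.conv_c = nn.Conv2d(2 * C, C, kernel_size=1)    # Conv_c in (23)
        self.conv_b = nn.Conv2d(2 * C, C, kernel_size=1)    # Conv_b in (24)

    @staticmethod
    def distill(feat, token):
        """Eqs. (19)-(22): feat (H, W, C), token (1, C) -> distilled (1, C)."""
        H, W, C = feat.shape
        f = feat.reshape(H * W, C)
        a = torch.softmax(f @ token.t() / C ** 0.5, dim=0)  # spatial softmax
        return a.t() @ f

    def forward(self, feat, t_c, t_b):                      # feat: (H, W, C)
        H, W, C = feat.shape
        d_c, d_b = self.distill(feat, t_c), self.distill(feat, t_b)
        x = feat.permute(2, 0, 1).unsqueeze(0)              # (1, C, H, W)
        # refusion: replicate each distilled token spatially, concat, reduce to C
        for d, conv in ((d_c, self.conv_c), (d_b, self.conv_b)):
            e = d.t().reshape(1, C, 1, 1).expand(1, C, H, W)
            x = conv(torch.cat([x, e], dim=1))
        return x.squeeze(0).permute(1, 2, 0)                # back to (H, W, C)
```

The cascade mirrors the ablation finding below: the co-saliency feature is fused first, then the BG feature.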
Our total loss \(\mathcal{L}_{total}\) can be formulated as \[\mathcal{L}_{total}=\sum_{j=1}^{5}\Big{(}\mathcal{L}_{1}(\mathbf{P}_{j}^{c},\mathbf{M }_{j}^{c})+\mathcal{L}_{2}(\mathbf{P}_{j}^{c},\mathbf{M}_{j}^{c})+ \tag{28}\] \[\mathcal{L}_{2}(\mathbf{P}_{j}^{b},\mathbf{M}_{j}^{b})+\mathcal{L}_{2}(\mathbf{P}_{j}^{g}, \mathbf{M}_{j}^{c})+\mathcal{L}_{2}(\mathbf{P}_{j}^{dt},\mathbf{M}_{j}^{c})\Big{)},\] where \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are the IoU [16] and Binary Cross-Entropy (BCE) [7] losses, respectively. \(\mathbf{M}_{j}^{c}\) and \(\mathbf{M}_{j}^{b}\) denote the co-saliency and BG ground truths, respectively, with the spatial shapes aligned to the \(j\)th decoder layer.

## 4 Experiments

### Evaluation Datasets and Metrics

We follow [12, 34, 27] to evaluate our proposed model on three CoSOD benchmark datasets. CoSal2015 [36] and CoSOD3k [11] collect 50 groups with 2015 images and 160 groups with 3316 images, respectively. CoCA [42] is the most challenging dataset and contains 1295 images of 80 groups. We employ four widely-used metrics for quantitative evaluation, _i.e_. Structure-measure \(S_{m}\) [9], Enhanced-alignment measure \(E_{\xi}\) [10], Maximum F-measure (maxF) [1], and Mean Absolute Error (MAE) [6].

### Implementation Details

We follow [41] to use the COCO-9k [20] (9213 images of 65 groups) and the DUTS class [42] (8250 images of 291 groups) with the synthesis strategy [41] to construct our training set.
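Of the four metrics above, MAE and maxF admit compact definitions; a hedged NumPy sketch follows (the threshold sweep granularity is our choice, and \(S_m\) and \(E_\xi\) involve structural and alignment terms omitted here):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a saliency map and its ground truth in [0, 1]."""
    return float(np.abs(pred - gt).mean())

def max_f(pred, gt, beta2=0.3, steps=255):
    """Maximum F-measure over uniformly sampled binarization thresholds."""
    gt_pos = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, steps, endpoint=False):
        b = pred >= t                                   # binarize the prediction
        tp = np.logical_and(b, gt_pos).sum()
        prec = tp / (b.sum() + 1e-8)
        rec = tp / (gt_pos.sum() + 1e-8)
        f = (1 + beta2) * prec * rec / (beta2 * prec + rec + 1e-8)
        best = max(best, float(f))
    return best
```

The weight \(\beta^{2}=0.3\) follows the common SOD convention of emphasizing precision over recall.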
We follow [22] to perform data augmentation and adopt the Adam optimizer [17] with an initial learning rate of 0.0001, \(\beta_{1}=0.9\), and \(\beta_{2}=0.99\) to train our model for 80,000 iterations. The learning rate is divided by 10 at the \(60000^{\text{th}}\) iteration. We select at most eight images from each group as a mini-batch to train our network. The training and testing image size is set as \(256\times 256\). Our method is implemented using PyTorch [24].

\begin{table} \begin{tabular}{c|c|c|c|c|c|c c c c} \hline \hline \multicolumn{6}{c|}{Settings} & \multicolumn{4}{c}{CoCA [42]} \\ \hline Co & Bg & R2R & CtP2T & CoT2T & TGFR & \(S_{m}\uparrow\) & \(E_{\xi}\uparrow\) & maxF\(\uparrow\) & MAE \(\downarrow\) \\ \hline ✓ & ✓ & & & & & 0.6751 & 0.7683 & 0.5474 & 0.1383 \\ \hline ✓ & ✓ & ✓ & & & & 0.6945 & 0.7824 & 0.5815 & 0.1234 \\ \hline ✓ & ✓ & ✓ & ✓ & & & 0.7038 & 0.7868 & 0.5984 & 0.1230 \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & & 0.7140 & 0.7880 & 0.6003 & 0.1139 \\ \hline \hline ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **0.7246** & **0.8001** & **0.6190** & **0.1084** \\ \hline ✓ & & ✓ & & ✓ & ✓ & 0.7059 & 0.7920 & 0.5996 & 0.1259 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative results of different settings of our proposed model.** We show the results of progressively adding R2R, CtP2T, CoT2T, and TGFR on the baseline. “Co” and “Bg” mean explicitly modeling co-saliency and BG, respectively.

Figure 4: **Qualitative results of different settings of our proposed model.** We show the results of progressively adding the R2R, CtP2T, CoT2T, and TGFR on the baseline.

### Ablation Study

We conduct ablation studies on the challenging CoCA [42] dataset to verify the effectiveness of our proposed components. As shown in Table 1, we treat the vanilla MaskFormer-style framework as our baseline, shown in the first row, and progressively add our proposed R2R, CtP2T, CoT2T, and TGFR on it for effectiveness analysis.
**Effectiveness of R2R.** First, we plug R2R into each decoder layer to enhance the segmentation features. Table 1 shows that using R2R largely improves the model performance compared to the baseline, while using vanilla P2P causes an out-of-memory error. The results verify the necessity of using our R2R for inter-image correlation modeling.

**Effectiveness of CtP2T.** Next, we consider the contrast relation modeling between the co-saliency and BG tokens, thus replacing the original P2T module with our proposed CtP2T module. By using CtP2T, the model performance is further improved, indicating that CtP2T is beneficial for enhancing the discriminability between the two types of tokens. We also provide some visual samples in Figure 5. Since the channels of the tokens correspond to those of the values in \(\mathrm{MHA}^{*}\), we visualize some feature maps of \(V_{m}^{*}(\mathbf{F})\) for the channels with large or small channel attention weights in \(\mathbf{W}\). We can see that the channels with large channel attention (CA) can easily distinguish co-salient objects from distracting objects, while those with small CA usually confuse them. The results demonstrate that our generated channel attention is meaningful for accurate co-salient object detection.

**Effectiveness of CoT2T.** Furthermore, we supplement CoT2T to explore the inter-image correlations for all co-saliency tokens. CoT2T explicitly promotes consensus information propagation among all co-saliency tokens, thus obtaining obvious improvements.

**Effectiveness of TGFR.** Finally, we add TGFR to leverage the learned tokens for refining the segmentation features. Table 1 shows that adopting TGFR brings further performance gains, thus demonstrating its effectiveness. We also visualize some feature maps and predictions with and without TGFR in Figure 6.
It can be seen that using TGFR obtains more discriminative features for distinguishing co-salient objects from distractors, thus generating better segmentation results. To dive deeper into the effectiveness of TGFR, we report more experimental results in Table 2 for further analysis. First, we directly fuse the tokens and the segmentation features without performing the distillation process ("w/o Distillation"). We find that this model brings limited improvements compared to the "w/o TGFR" model. This is probably because a semantic gap might exist between the tokens and the features, which is detrimental to their fusion; it hence verifies the necessity of our distillation mechanism. Next, we supplement the distillation process and explore four strategies for the refusion process: individually refusing the distilled co-saliency ("w/ co") or BG features ("w/ bg") to the segmentation features, or refusing both with the co-saliency feature first ("w/ co&bg") or the BG feature first ("w/ bg&co"). We find that refusing both achieves better performance, thus verifying the necessity of leveraging both features for discrimination enhancement. We also find that first refusing the co-saliency feature and then integrating the BG feature obtains the best results. Thus, we adopt this strategy in our final TGFR design.

**Effectiveness of BG Exploration.** We remove all BG-related modules in our final model and only explore co-saliency regions, as shown in the last row of Table 1. In this setting, CtP2T cannot be used, and only the co-saliency feature is used in TGFR. We find that the performance significantly drops compared to our final model, thus verifying the necessity of explicit BG modeling.
\begin{table} \begin{tabular}{c|c c c c} \hline \hline \multirow{2}{*}{Settings} & \multicolumn{4}{c}{CoCA [42]} \\ & \(S_{m}\uparrow\) & \(E_{\xi}\uparrow\) & maxF\(\uparrow\) & MAE \(\downarrow\) \\ \hline w/o TGFR & 0.7140 & 0.7880 & 0.6003 & 0.1139 \\ \hline w/o Distillation & 0.7141 & 0.7921 & 0.6046 & 0.1114 \\ \hline w/ co & 0.7171 & 0.7965 & 0.6076 & 0.1144 \\ w/ bg & 0.7155 & 0.7935 & 0.6064 & 0.1112 \\ w/ bg\&co & 0.7197 & 0.7907 & 0.6092 & 0.1069 \\ \hline w/ co\&bg & **0.7246** & **0.8001** & **0.6190** & **0.1084** \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative results of different settings in TGFR.**

Figure 5: **Visual comparison among the channels with different channel attention weights in CtP2T.** We visualize some feature maps in \(V_{m}^{*}(\mathbf{F})\) for the channels with large and small channel attention (CA) in CtP2T. We visualize two channels for large and small CA, respectively.

Figure 6: **Visualization of some feature maps (Fea.) and predictions (Pred.) of the models with (w/) or without (w/o) using TGFR.**

**Qualitative Analysis**. As shown in Figure 4, we also provide some visual comparison samples for the four key components. We find that the baseline model is easily distracted by complex BG regions, while progressively introducing our four components gradually excludes these distractors and achieves more and more accurate results.

## 5 Comparison with State-of-the-Art Methods

We compare our model with seven other state-of-the-art methods, _i.e_. CSMG [39], GICD [42], ICNet [16], GCoNet [12], CADC [41], UFO [27], and DCFM [34]. We report the quantitative comparison results in Table 3. We can observe that our proposed DMT achieves the best performance on all three benchmark datasets. Especially, on CoSal2015 and CoSOD3k, our DMT model surpasses the second-best model by a large margin, _e.g_. 3.14% \(S_{m}\) and 4.07% maxF on CoSal2015 and 3.23% \(S_{m}\) and 3.08% maxF on CoSOD3k.
We also show some visual comparison results in Figure 7. We find that our method can precisely detect co-salient objects in complex scenarios, _e.g_. the existence of extraneous salient objects with appearances similar to the target objects, and target objects with small sizes. In contrast, the other models are heavily distracted in these challenging scenes.

## 6 Conclusions

In this paper, we propose DMT, a transformer-based CoSOD model for explicitly mining both co-saliency and BG information and effectively modeling their discrimination. Specifically, we propose several economical multi-grained correlation modules, _i.e_. R2R, CtP2T, and CoT2T, to model inter-image and intra-image relations. Besides, we propose a TGFR module to leverage the detection information for improving the discriminability of the segmentation features. Our design improves the MaskFormer framework by allowing the mutual promotion of its two sub-paths. Our model achieves a new state-of-the-art result.

Acknowledgments: This work was supported in part by the Key-Area Research and Development Program of Guangdong Province (No.2021B0101200001), the National Key R&D Program of China under Grant 2021B0101200001, and the National Science Foundation of China under Grant 62036011, U20B2065, 721A0001, 62136004.
\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{CoCA [42]} & \multicolumn{4}{c|}{CoSal2015 [36]} & \multicolumn{4}{c}{CoSOD3k [11]} \\ \cline{2-13} & \(S_{m}\uparrow\) & \(E_{\xi}\uparrow\) & maxF\(\uparrow\) & MAE\(\downarrow\) & \(S_{m}\uparrow\) & \(E_{\xi}\uparrow\) & maxF\(\uparrow\) & MAE\(\downarrow\) & \(S_{m}\uparrow\) & \(E_{\xi}\uparrow\) & maxF\(\uparrow\) & MAE\(\downarrow\) \\ \hline CSMG\({}_{\text{(CVPR2019)}}\)[39] & 0.6276 & 0.7324 & 0.4988 & 0.1273 & 0.7757 & 0.8436 & 0.7869 & 0.1309 & 0.7272 & 0.8208 & 0.7297 & 0.1480 \\ GICD\({}_{\text{(ECCV2020)}}\)[42] & 0.6579 & 0.7149 & 0.5126 & 0.1260 & 0.8437 & 0.8869 & 0.8441 & 0.0707 & 0.7967 & 0.8478 & 0.7698 & 0.0794 \\ ICNet\({}_{\text{(NeurIPS2020)}}\)[16] & 0.6541 & 0.7042 & 0.5133 & 0.1470 & 0.8571 & 0.9011 & 0.8583 & 0.0579 & 0.7942 & 0.8450 & 0.7623 & 0.0891 \\ GCoNet\({}_{\text{(CVPR2021)}}\)[12] & 0.6730 & 0.7598 & 0.5438 & 0.1050 & 0.8453 & 0.8879 & 0.8471 & 0.0681 & 0.8018 & 0.8601 & 0.7771 & 0.0712 \\ CADC\({}_{\text{(ICCV2021)}}\)[41] & 0.6800 & 0.7443 & 0.5487 & 0.1330 & **0.8666** & **0.9063** & **0.8645** & **0.0641** & 0.8150 & 0.8543 & 0.7781 & 0.0875 \\ UFO\({}_{\text{(arXiv2022)}}\)[27] & 0.6971 & 0.7802 & 0.5681 & **0.0939** & 0.8578 & 0.9057 & 0.8621 & 0.0648 & **0.8191** & 0.8694 & 0.7954 & 0.0735 \\ DCFM\({}_{\text{(CVPR2022)}}\)[34] & **0.7101** & **0.7826** & **0.5981** & **0.0845** & 0.8380 & 0.8929 & 0.8559 & 0.0672 & 0.8094 & **0.8742** & **0.8045** & **0.0674** \\ \hline DMT (Ours) & 0.7246 & 0.8001 & 0.6190 & 0.1084 & 0.8974 & 0.9362 & 0.9052 & 0.0454 & 0.8514 & 0.8950 & 0.8353 & 0.0633 \\ \hline \hline \end{tabular} \end{table} Table 3: **Quantitative comparison of our model with other state-of-the-art methods.** We conduct the comparison on three benchmark CoSOD datasets. **Red** and **blue** denote the best and the second-best results, respectively.
Figure 7: **Qualitative comparisons of our model with other state-of-the-art methods.**
2302.14288
Numerical Investigation of a Rotating Double Compression Ramp Intake
The intakes of air-breathing high-speed flying vehicles produce a large share of the thrust propulsion. Furthermore, the propulsion performance of these engines increases when the single-ramp intake is replaced with a multiple-ramps intake. Many scholars numerically and experimentally studied the high-speed engine performance over static single and multiple compression ramps. However, the transient behavior of the flow during the rotation of the double compression ramp from a single ramp is not fully investigated. The present paper aims to numerically investigate the transient shock reflection phenomenon over a rotating double wedge. The problem will start with a 3-Mach number inviscid flow over a single wedge. Then, a portion of the wedge will be rotated upstream at a quite low trailing Mach number to avoid the significant lag effect in the shock waves system. This idea could be applied in the supersonic intake or extensionally in the hypersonic intake of scramjets with a somehow complex mechanism. Further, the length of the rotating portion of the wedge will be changed three times to study its effect on the shock system. The results show a high gain in the pressure due to the rotation of the wedge. Moreover, the wave angles were larger at the low chord ratio value of $w_2/w_i= 0.25$ than at the high values of $w_2/w_i$ at the same second wedge rotating angle, $\theta_2$, resulting in a higher pressure distribution.
Lubna Margha, Ahmed A. Hamada, Othman Ahmed, Ahmed Eltaweel
2023-02-28T03:45:32Z
http://arxiv.org/abs/2302.14288v1
# Numerical Investigation of a Rotating Double Compression Ramp Intake

###### Abstract

The intakes of air-breathing high-speed flying vehicles produce a large share of the thrust propulsion. Furthermore, the propulsion performance of these engines increases when the single-ramp intake is replaced with a multi-ramp intake. Many scholars have numerically and experimentally studied high-speed engine performance over static single and multiple compression ramps. However, the transient behavior of the flow during the rotation of a double compression ramp out of a single ramp is not fully investigated. The present paper aims to numerically investigate the transient shock reflection phenomenon over a rotating double wedge. The problem starts with a Mach 3 inviscid flow over a single wedge. Then, a portion of the wedge is rotated upstream at a fairly low trailing-edge Mach number to avoid a significant lag effect in the shock wave system. This idea could be applied to supersonic intakes or, by extension, to the hypersonic intakes of scramjets, albeit with a more complex mechanism. Further, the length of the rotating portion of the wedge is changed three times to study its effect on the shock system. The results show a high gain in pressure due to the rotation of the wedge. Moreover, the wave angles were larger at the low chord ratio value of \(w_{2}/w_{i}=0.25\) than at the high values of \(w_{2}/w_{i}\) at the same second wedge rotating angle, \(\theta_{2}\), resulting in a higher pressure distribution.

Keywords: Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain.

## 1 Introduction

Many aerospace applications in the supersonic and hypersonic flow regimes, such as engine intakes, scramjets, isolator ducts, and adjacent rockets, incorporate shock wave interaction and reflection. That is why these phenomena have gained the interest of the scientific community.
Thus, scholars have investigated the transition and hysteresis between Regular Reflection (RR) and Mach Reflection (MR) [1, 2, 3, 4]. RR consists of two shock waves, the Incident (I) and Reflected (R) shocks, while MR occurs when the reflected shock is unable to turn the flow and a Mach Stem (MS) appears; MR is thus a three-shock configuration. An MS appears when the Mach number increases [5, 6], when the wedge angle changes [7, 8, 9], or when the flow is disturbed using laser energy deposition [10, 11, 12]. The appearance of a Mach stem affects the performance of engineering devices by changing the static and stagnation pressure distributions. These effects were observed in failures of aero-propulsion engines [13, 14]. A method for controlling the transition from regular reflection to Mach reflection is to dynamically change the inclination angle, \(\theta\), of the wedge, or to divide the wedge into two portions, each with a different angle \(\theta\). During the last couple of decades, moving/rotating a single wedge was used to control the transition between RR and MR. Felthun and Skews [15] rotated the wedge upstream about its leading edge. They found that the rotation rate of the wedge highly affects the transition wave angles. Numerical and experimental studies on a wedge rotating rapidly downstream were performed by Naidoo and Skews [16]. They found that the dynamic transition from MR to RR happened below the von Neumann criterion, whereas the transition from RR to MR occurred outside the steady-state theoretical limits. Goyal et al. [17] studied the transition from RR to MR by changing the Mach number and the pivot point within the same strong shock reflection domain. They concluded that the effect of the pivot point location on the transition phenomenon is minor, whereas it strongly affects the development of the MS and the motion of the reflection point.
Another innovative method of moving the wedge was proposed by Margha et al. [9]. They moved the trailing edge upstream with velocity \(V(t)\), preserving the wedge height and allowing the total length of the wedge to change with the wedge angle. This motion of the wedge was studied at \(M_{\infty}=3\) and different frequencies \(\kappa\). They found that the transition wave angle, \(\beta_{t}\), approaches the theoretical limit of Mach Reflection with subsonic downstream flows (MRs). This was found at a relatively high frequency \(\kappa=2\). In the last five decades, many investigations were conducted on the shock wave interaction over a double wedge. In mid-1975, Bertin and Hinkle [18] performed experimental tests that agreed with the theoretical results for the shock wave interaction patterns of the MR case. Ben-Dor and Rayvesky [19] studied the interaction of shock waves with the thermal layer within inviscid flows over both concave and convex double wedges. They concluded that increasing the thermal layer temperature increases the height of the triple point. Olejniczak et al. [20] studied the inviscid flow over a 2-dimensional one-sided double wedge configuration. They identified four shock wave interactions for that particular configuration. Li and Ben-Dor [21] formulated an analytical model of the inviscid flow over the wedge and quantitatively described the physical mechanism by which a sonic throat is created and hence the Mach stem height is determined. It was found from the analysis that, for the same Mach number, Mach stem heights are determined only by the geometrical setup. It was shown that the Mach stem height vanishes and the flow transforms from MR to RR exactly at the von Neumann transition condition. Li and Ben-Dor [22] performed analytical studies and experiments over static concave double wedges, investigating the transition process between RR and MR.
The analytical study was two-dimensional, while the experiments were affected by the three-dimensional nature of the wedge. They concluded that the three-dimensional effects were not dominant within the experimental setup, and that the transition angles in 3-dimensional steady flows were very close to those in 2-dimensional steady flows. Further, Ivanov et al. [6] analyzed the hysteresis phenomenon at the transition using 3-dimensional numerical and experimental configurations. Furthermore, Ben-Dor et al. [23] extended their work to study the double wedge for inviscid flow within the range \(5\leq M\leq 9\). They aimed to check the existence of the hysteresis phenomenon over the double-wedge geometry. In addition, they found that there are oscillations induced by the flow itself at different angles. Shoev et al. [24] studied the flow over a double wedge for two cases (low/high enthalpy). A coupling of the Navier-Stokes (NS) equations with the Direct Simulation Monte Carlo (DSMC) method was used to perform the simulations. For the low-enthalpy case, a good agreement with the previous literature was found. However, the high-enthalpy case showed only qualitative agreement with previous experimental studies, with differences in the quantitative results. The present paper dynamically studies the shock wave structure over a rotating double wedge at a free-stream Mach number, \(M_{\infty}\), of 3. The rotation starts from a single ramp case at 3 different positions to create the double compression ramp. The rotation is performed at a fairly low rate to avoid a significant lag in the shock wave system. During the transition, the effect of varying the rotating wedge length on the shock structure is investigated. This idea can be applied to supersonic intakes or, by extension, to the hypersonic intakes of scramjets, albeit with a more complex mechanism.
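As background for the wedge angles and wave angles used below, the classical \(\theta\)–\(\beta\)–\(M\) oblique-shock relation, \(\tan\theta=2\cot\beta\,\frac{M^{2}\sin^{2}\beta-1}{M^{2}(\gamma+\cos 2\beta)+2}\), can be inverted numerically for the weak-shock wave angle. The following is a hedged Python sketch (function names and the bracketing strategy are ours; \(\gamma=1.4\)):

```python
import math

def theta_from_beta(M, beta, gamma=1.4):
    """Flow deflection angle theta for a given wave angle beta (radians)."""
    s2 = math.sin(beta) ** 2
    return math.atan(2.0 / math.tan(beta) * (M * M * s2 - 1.0)
                     / (M * M * (gamma + math.cos(2.0 * beta)) + 2.0))

def weak_beta(M, theta, gamma=1.4, step=1e-3):
    """Weak-solution wave angle: bracket on the rising branch, then bisect."""
    hi = math.asin(1.0 / M) + 1e-9          # Mach angle, where theta -> 0
    while hi < math.pi / 2 and theta_from_beta(M, hi, gamma) < theta:
        hi += step                          # march up the weak branch
    lo = hi - step
    for _ in range(60):                     # bisection refinement
        mid = 0.5 * (lo + hi)
        if theta_from_beta(M, mid, gamma) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the initial wedge considered here (\(M_{\infty}=3\), \(\theta=19^{\circ}\)) this yields a weak incident-shock angle of roughly \(37^{\circ}\), consistent with standard oblique-shock tables.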
## 2 Computational Model

Figure 1 shows the initial single ramp and the rotating double ramp configurations in a supersonic flow with a free-stream Mach number \(M_{\infty}=3\). The computational inflow boundary is \(2H\) in the transverse direction and \(L_{t}\) in the streamwise direction. The initial condition is the steady-state solution over a single compression ramp with a deflection angle of \(\theta_{i}=19^{\circ}\) and chord \(w_{i}=1\). Then, a portion of the wedge rotates about the pivot point \(a\), which is the leading edge of the rotating second wedge. This splits the single ramp into double ramps: one is stationary at the initial wedge angle, \(\theta_{1}=19^{\circ}\), while the second ramp is impulsively rotated with a trailing-edge Mach number of \(M_{t}=0.05\). The dynamic shock structure was investigated at three different positions of point \(a\). The time-dependent second wedge angle, \(\theta_{2}(t)\), is changed from \(19^{\circ}\) to \(32^{\circ}\). Both the second ramp's length and height, \(L_{2}(t)\) and \(h_{2}(t)\), change with time, respectively. The three pivot positions are \(a=\frac{1}{4}\), \(\frac{1}{2}\), and \(\frac{3}{4}\) of the initial chord length, \(w_{i}\). Moreover, Table 1 shows the values of the geometric and flow parameters.
\begin{table} \begin{tabular}{c c} \hline \hline Initial wedge’s chord, \(w_{i}=w_{1}+w_{2}\) & \(1\,m\) \\ \hline Initial rotating wedge angle, \(\theta_{2}(0)\) & \(19^{\circ}\) \\ Stationary wedge angle, \(\theta_{1}\) & \(19^{\circ}\) \\ Final rotating wedge angle, \(\theta_{2}(t_{f})\) & \(32^{\circ}\) \\ Free-stream Mach number, \(M_{\infty}\) & 3 \\ Trailing edge Mach number, \(M_{t}\) & 0.05 \\ Total wedge length to the initial chord, \(\frac{L_{t}}{w_{i}}\) & 1.8 \\ Half domain height to the initial chord, \(\frac{H}{w_{i}}\) & 0.9 \\ Pivot positions to the initial chord, \(\frac{a}{w_{i}}\) & \(\frac{1}{4},\frac{1}{2},\frac{3}{4}\) \\ \hline \hline \end{tabular} \end{table} Table 1: System properties and parameters. Figure 1: The geometric schematic of the double rotating wedge. The shock-shock wave interaction and the shock-wave reflections of the RR and MR structures over a rotating double wedge are shown in Figure 2. The initial condition starts from a single wedge with a fixed small deflection angle of \(\theta_{i}=\theta_{1}=19^{\circ}\), at which an RR occurs. The RR shock-wave structure is shown in the lower half of the figure, while the MR shock-wave configuration is shown in the upper half. During the rotation of the second wedge, the \(1^{st}\) Incident Shock (\(IS_{1}\)) wave interacts with the \(2^{nd}\) Incident Shock (\(IS_{2}\)) at point \(I\). The shock-shock interaction wave angle is measured through the slope between points \(I\) and \(a\). Then, the two interacting shocks combine, forming one Combined Shock (\(CS\)). At a relatively small rotating-wedge angle, an RR configuration occurs when the \(CS\) hits the mid-plane of symmetry and reflects regularly at the point \(P\), generating a Reflected Shock wave (\(RS\)). At higher rotating angles of the second wedge, the transition from RR to MR occurs because the flow is no longer able to turn parallel to the mid-plane. 
Thus, a normal shock wave with a Mach Stem height (MS) is formed. In this case, the three shock waves meet at the triple point (\(P\)), where a Slip-Line (\(SL\)) appears. \(\beta_{p}\) is the combined shock-wave angle, measured from the slope between the reflection/triple point \(P\) and the shock-interaction point \(I\). At the second wedge's trailing edge, an Expansion Fan (\(EF\)) is generated and hits the \(RS\), deforming it slightly from a straight line.

### Governing Equations

The compressible Euler equations in Cartesian coordinates are used to compute the supersonic flow over the sharp single and double wedges and are expressed in conservative form as: \[\frac{\partial Q}{\partial t}+\frac{\partial F}{\partial x}+\frac{\partial G }{\partial y}=0, \tag{1}\] where \[Q=\begin{bmatrix}\rho\\ \rho u\\ \rho v\\ \rho e\end{bmatrix},\quad F=\begin{bmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ u(\rho e+p)\end{bmatrix},\quad G=\begin{bmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ v(\rho e+p)\end{bmatrix} \tag{2}\] The static pressure is obtained from \[p=(\gamma-1)\left(\rho e-\rho\frac{u^{2}+v^{2}}{2}\right) \tag{3}\] where \(u\) and \(v\) are the velocity components in the \(x\) and \(y\) directions, respectively, \(p\), \(\rho\) and \(e\) are the pressure, density, and internal energy of the flow field, respectively, and \(\gamma\) is the specific heat ratio of air.

### Computational Domain

The transient compressible-flow solver, _rhoCentralDyMFoam_, is used to simulate the flow over the double rotating wedge at \(M_{\infty}=3\). It is a density-based solver implemented in OpenFOAM®-v2006. The letters "_DyM_" in the solver name indicate that the solver supports dynamic-mesh applications. The solver's approach is based on the semi-discrete, non-staggered central-upwind schemes of Kurganov and Tadmor [25; 26]. Half of the computational domain was simulated due to the symmetry of the flow behavior and geometry. 
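The pressure relation in Eq. (3) also gives the conversion from the conservative vector \(Q=[\rho,\,\rho u,\,\rho v,\,\rho e]^{T}\) back to the primitive variables \((\rho,u,v,p)\). A minimal sketch (the function name is illustrative and not part of the solver):

```python
GAMMA = 1.4  # specific heat ratio of air, as in the paper

def conservative_to_primitive(Q):
    """Recover (rho, u, v, p) from Q = [rho, rho*u, rho*v, rho*e]."""
    rho, rho_u, rho_v, rho_e = Q
    u = rho_u / rho
    v = rho_v / rho
    # Eq. (3): p = (gamma - 1) * (rho*e - rho*(u^2 + v^2)/2)
    p = (GAMMA - 1.0) * (rho_e - 0.5 * rho * (u**2 + v**2))
    return rho, u, v, p
```

For example, \(Q=(1,\,2,\,0,\,4.5)\) recovers \(p=1\), since \(\rho e=p/(\gamma-1)+\rho(u^{2}+v^{2})/2=2.5+2\).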
Figure 3 shows the body-fitted structured mesh for the double wedge at the initial deflection wedge angle of \(\theta_{i}=19^{\circ}\). It also indicates the implemented initial and boundary conditions. The mesh-independence test on the static and moving wedge was conducted in our previous work [9] with different mesh sizes at a free-stream Mach number of 3. Table 2 compares mesh sizes by presenting the absolute percentage error of the transition wave angle at the reflection point, \(\beta_{t}\), and the Mach stem height, MS, at a wedge angle of \(\theta=27^{\circ}\). This study resulted in the selection of a mesh size of \(2624\times 720\) cells with an acceptable error percentage of the non-dimensional Mach stem height, MS. The minimum element sizes in the \(x\) and \(y\) directions were \(0.75\,mm\) and \(0.78\,mm\), respectively. During the computation, the time step was adjusted to maintain the Courant-Friedrichs-Lewy (CFL) number at 0.2. In addition, the verification details of the dynamic code inserted into the _rhoCentralDyMFoam_ solver were presented in our previous work [9]. It was verified against the analysis of Felthun and Skews [15] for a dynamic single rotating wedge problem with different rotation rates, \(M_{t}\), at a free-stream Mach number of 3, as shown in Figure 4.

Figure 3: Schematic of the computational domain, the boundary, and initial conditions. Figure 2: The RR and MR inflow shock configurations over a double rotating wedge.

## 3 Results and Discussion

The dynamic shock interaction and reflection, including the dynamic transition from RR to MR, were studied over a rotating double wedge. The study starts from the steady-state RR shock configuration on a single compression ramp inclined at \(\theta_{i}=19^{\circ}\). Then, the rapid rotation, with a trailing-edge Mach number of \(M_{t}=0.05\), began at three different portions (\(w_{2}/w_{i}=0.25\), 0.5, and 0.75) to create the rotating second ramp. 
Meanwhile, the first ramp was kept fixed at \(\theta=19^{\circ}\). Further, the results were compared with rotating the full length of the initial chord (single rotating ramp). The effect of changing the chord of the second ramp relative to the first one on the dynamic shock structure was investigated. The variation of the shock-shock interaction wave angle, \(\beta_{I}\), with the rotating second-ramp deflection angle, \(\theta_{2}\), at \(M_{t}=0.05\), is shown in Figure 5. The behavior of the three curves is almost the same, because the same rotation speed was used for all tested cases, which produces the same lag effect in the dynamic shock system. Thus, the only effective parameter here is the chord-length ratio. The effect of the different pivot-point locations appears in the different wedge and wave angles at which the two incident shocks start to interact. When the ratio of the second wedge to the initial one, \(w_{2}/w_{i}\), is large, the first interaction point is relatively close to the first wedge's apex, as in the cases of \(w_{2}/w_{i}=0.75\) and 0.5. Conversely, the second wedge angle at the first interaction point was large for the smaller ratio, \(w_{2}/w_{i}=0.25\). Further, the variation of the wedge angle at the first interaction point is not linear with the variation of the pivot-point location. Moreover, the unsteady wave angle at the reflection/triple point, \(\beta_{p}\), moves upstream with the increase in the rotating ramp angle, \(\theta_{2}\). This variation is compared with the whole-length rotating ramp (single rotating ramp), \(w_{2}/w_{i}=1.0\), as shown in Figure 6. All the simulations started with a steady-state wave angle of \(\beta_{p}=36.6^{\circ}\), corresponding to the initial wedge angle of \(\theta_{i}=19^{\circ}\). For the case of \(w_{2}/w_{i}=1.0\), the constant value of \(\beta_{p}\) from \(\theta_{2}=19^{\circ}\) to \(\theta_{2}=20^{\circ}\) represents the lag effect due to \(M_{t}=0.05\). 
Further, the figure shows that the value of \(\beta_{p}\) did not deviate from the initial value with increasing wedge angle until the start of the shock interaction for \(w_{2}/w_{i}=0.75\), 0.5, and 0.25. The combined shock-wave angle is smaller than the second incident wave angle due to the shock-shock interaction phenomenon, which can be observed by comparing the \(y\)-axes of Figures 5 and 6. The deviation in the wave angles for the cases of \(w_{2}/w_{i}=0.75\) and 0.5 from the whole rotating wedge, \(w_{2}/w_{i}=1.0\), is within half a degree to a degree at the start of the rotation (at \(\theta=20.5^{\circ}\) and \(\theta=21^{\circ}\), respectively). Furthermore, this deviation increases during the rotation process. For the case of \(w_{2}/w_{i}=0.25\), the interaction between the two incident shocks happened at a large second wedge angle of \(\theta_{2}=29^{\circ}\), because the pivot was far from the first wedge's apex. As a result, the second incident shock from the rotating wedge reflected off the mid-plane of symmetry before interacting with the first incident shock. This interaction caused a sudden jump in the wave angle of the combined shock (\(CS\)) from \(\beta_{p}=36.6^{\circ}\) to \(\beta_{p}=44.3^{\circ}\). \begin{table} \begin{tabular}{c c c c c c} \hline Mesh \# & Mesh Size & \(\beta_{p}\) (\({}^{\circ}\)) & \(\frac{\rm MS}{L(0)}\times 10^{-2}\) & \(|Error|\%\) of \(\beta_{p}\) & \(|Error|\%\) of MS \\ \hline 1 & 328 \(\times\) 90 & 41.39 & 5.29 & 1.22 & 46.4 \\ 2 & 656 \(\times\) 180 & 41.15 & 7.34 & 0.62 & 25.6 \\ 4 & 1312 \(\times\) 360 & 41.03 & 8.75 & 0.33 & 11.3 \\ 8 & 2624 \(\times\) 720 & 40.96 & 9.53 & 0.17 & 3.4 \\ 16 & 5248 \(\times\) 1440 & 40.89 & 9.87 & - & - \\ \hline \end{tabular} \end{table} Table 2: INDEPENDENT GRID STUDY: ABSOLUTE PERCENTAGE ERROR OF THE TRANSITION WAVE ANGLE AT THE REFLECTION POINT, \(\beta_{t}\), AND THE MACH STEM HEIGHT, MS, AT WEDGE ANGLE, \(\theta\)=\(27^{\circ}\) [9]. 
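As a point of reference for the dynamic wave angles discussed above, the steady theta-beta-M relation for an oblique shock can be evaluated numerically. A minimal sketch with \(\gamma=1.4\) (the bisection bracket for the weak branch is an assumption valid near \(M=3\)):

```python
import math

GAMMA = 1.4  # specific heat ratio of air

def theta_from_beta(beta, M):
    """Deflection angle theta (rad) from the theta-beta-M relation."""
    num = M**2 * math.sin(beta)**2 - 1.0
    den = M**2 * (GAMMA + math.cos(2.0 * beta)) + 2.0
    return math.atan(2.0 / math.tan(beta) * num / den)

def weak_shock_angle(theta_deg, M, tol=1e-10):
    """Weak-branch wave angle (deg) for a given deflection angle (deg).

    theta grows monotonically from 0 at the Mach angle up to the
    detachment maximum, so a simple bisection suffices.
    """
    theta = math.radians(theta_deg)
    lo = math.asin(1.0 / M) + 1e-9   # Mach angle, where theta -> 0
    hi = math.radians(65.0)          # near the detachment maximum for M = 3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theta_from_beta(mid, M) < theta:
            lo = mid
        else:
            hi = mid
    return math.degrees(0.5 * (lo + hi))
```

For \(\theta=19^{\circ}\) and \(M=3\) this returns \(\beta\approx 36.6^{\circ}\), consistent with the steady-state \(\beta_{p}\) reported for the initial wedge angle.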
Figure 4: VALIDATION WITH THE WORK OF FELTHUN AND SKEWS [15] BY MEASURING THE EFFECT OF TRAILING-EDGE MACH NUMBER ON TRANSITION ANGLES FROM REGULAR TO MACH REFLECTION [9]. Figure 5: THE VARIATION OF THE INTERACTION WAVE ANGLE OF THE ROTATING WEDGE, \(\beta_{I}^{\circ}\), WITH THE SECOND ROTATING WEDGE ANGLE \(\theta_{2}\). The sonic and detachment transition criteria were used to measure the dynamic transition from RR to MR for the different chord-length ratios, \(w_{2}/w_{i}\). The sonic criterion is met when the Mach number behind the reflection point reaches 1. After a while, the flow becomes unable to turn parallel to the symmetry plane; at this point, a Mach Stem (MS) is generated. This moment is called the detachment point. The values of the transition angles are summarized in Tables 3 and 4. The results showed that the difference in the transition angles between the two cases of \(w_{2}/w_{i}=0.75\) and 0.5 is within a degree. Table 3 shows that, for the case of \(w_{2}/w_{i}=0.25\), the sonic transition could not be detected. This is because of the sudden interaction between the two incident shocks at the symmetry plane, which caused the interaction point to become the detachment point, where the MS instantaneously developed, as shown in Figure 8 (j-l). Moreover, the dynamic shock structures of the four different cases of \(w_{2}/w_{i}\) are shown in Figure 8. Figure 7 shows the development of the non-dimensional Mach stem height, MS/\(w_{i}\), with the upstream rotation of the second ramp, \(\theta_{2}\), from the moment the MS starts to appear. The figure shows that decreasing the rotating-wedge chord delayed the growth of the MS height, compared with the whole rotating ramp. Further, the difference in the MS height between each wedge-portion case and the case of \(w_{2}/w_{i}=1.0\) increased during the upstream rotation. In the case of \(w_{2}/w_{i}=0.25\) at \(\theta_{2}=27^{\circ}\), there is no interaction between the two incident waves. 
Thus, the MS height was zero, i.e., the shock configuration was still an RR, as shown in the bar graph of Figure 9 (a). The reflection points of the two shocks on the symmetry plane appear as the double jump in the pressure ratio at \(x/w_{i}=1.215\) and \(1.30\), as shown in the left part of Figure 9 (a). Further, the strength of the dynamic shock wave from the rotating wedge was higher than that of the static shock wave, as indicated by the higher value of the pressure ratio. Thus, the detachment phenomenon happened at the moment of interaction. Additionally, the pressure-jump location along the symmetry plane due to the shocks was shifted to the right with the decrease of the wedge-chord ratio, as the apex point of the second ramp moved closer to the symmetry plane. At the same \(\theta_{2}\), the MS height increased with increasing rotating-wedge portion, as shown in the bar graphs of Figure 9. Although all cases were plotted at the same rotating-wedge angle, \(\theta_{2}=30^{\circ}\), the pressure distribution along the mid-plane of symmetry for the case of \(w_{2}/w_{i}=0.25\) was higher than in the other cases after \(x/w_{i}=1.25\), as shown in Figure 9 (b). This indicates that decreasing the rotating-chord ratio increases the combined shock strength (pressure distribution behind it). \begin{table} \begin{tabular}{c c c c c} \hline \(w_{2}/w_{i}\) & \(\theta_{t_{2}}\) & \(\beta_{t_{I}}\) & \(\beta_{t_{P}}\) & \(t_{t}\)(ms) \\ \hline 0.25 & 29.185\({}^{\circ}\) & 52.617\({}^{\circ}\) & 44.55\({}^{\circ}\) & 2.56 \\ 0.5 & 26.0\({}^{\circ}\) & 52.487\({}^{\circ}\) & 41.398\({}^{\circ}\) & 3.52 \\ 0.75 & 25.432\({}^{\circ}\) & 53.197\({}^{\circ}\) & 41.91\({}^{\circ}\) & 4.85 \\ \hline \end{tabular} \end{table} Table 4: DETACHMENT TRANSITION ANGLES AT DIFFERENT VALUES OF \(w_{2}/w_{i}\). Figure 6: THE VARIATION OF THE REFLECTION/TRIPLE WAVE ANGLE, \(\beta_{P}\), WITH THE SECOND ROTATING WEDGE ANGLE \(\theta_{2}\). 
\begin{table} \begin{tabular}{c c c c c} \hline \(w_{2}/w_{i}\) & \(\theta_{t_{2}}\) & \(\beta_{t_{I}}\) & \(\beta_{t_{P}}\) & \(t_{t}\)(ms) \\ \hline 0.25 & – & – & – & – \\ 0.5 & 25.047\({}^{\circ}\) & 51.548\({}^{\circ}\) & 40.362\({}^{\circ}\) & 3.04 \\ 0.75 & 24.371\({}^{\circ}\) & 52.254\({}^{\circ}\) & 40.625\({}^{\circ}\) & 4.05 \\ \hline \end{tabular} \end{table} Table 3: SONIC TRANSITION ANGLES AT DIFFERENT VALUES OF \(w_{2}/w_{i}\). Figure 7: THE VARIATION OF NON-DIMENSIONAL MACH STEM HEIGHT WITH THE SECOND WEDGE ANGLE, \(\theta_{2}^{\circ}\), AT DIFFERENT NON-DIMENSIONAL ROTATING WEDGE CHORDS \(w_{2}/w_{i}\) AND \(M_{t}=0.05\). Figure 8: NORMALIZED VELOCITY GRADIENT WHILE INCREASING THE WEDGE ANGLE FOR A SINGLE/DOUBLE ROTATING WEDGE AT DIFFERENT WEDGE ANGLES.

## 4 Conclusion

The current research work numerically investigates the effect of changing the rotating-wedge chord ratio on the phenomena of dynamic shock-shock interaction and the dynamic transition from RR to MR. The wedge is divided into two compression ramps, where the first one is kept fixed and the second one rotates with a trailing-edge Mach number of \(M_{t}=0.05\). The pivot point of the rotating wedge was placed at the locations \(a=0.25\), \(0.5\), and \(0.75\) of the initial chord. Further, the results were compared with the dynamics of the whole rotating wedge at the same \(M_{t}\). The study was carried out by measuring the dynamic wave angles, the Mach stem height, and the pressure distribution along the mid-plane of symmetry. The major conclusions of the current study are: * The start of the interaction between the two incident shocks was delayed with the decrease of the rotating-wedge chord ratio. * The delay in the shock-shock interaction strengthened the combined shock, which caused a sudden transition from RR to MR in the case of \(w_{2}/w_{i}=0.25\). 
* The dynamic shock systems of the cases \(w_{2}/w_{i}=0.75\) and \(w_{2}/w_{i}=0.5\) are relatively close to each other and to the whole-rotating-wedge case, unlike the dynamics of the case of \(w_{2}/w_{i}=0.25\). This was observed in the values of \(\beta_{t}\), \(\beta_{p}\), and MS/\(w_{i}\). * At the value of \(w_{2}/w_{i}=0.25\), the wave angles were higher than those at the larger values of \(w_{2}/w_{i}\) at the same \(\theta_{2}\), and the pressure distribution was higher, because the apex point of the rotating wedge was close to the symmetry plane. * The variation of the dynamic parameters was not linear with the variation of the wedge-chord ratio. Thus, there is a critical wedge-chord ratio at which the dynamic shock system becomes very aggressive. A more detailed investigation is required to find the value of this critical ratio. Future work will apply the mechanism proposed by Margha et al. [9] to move the double wedge without changing the throat area and study the dynamic shock-system structures and the gain in total pressure.
2304.00173
Lego-Features: Exporting modular encoder features for streaming and deliberation ASR
In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed as Lego-Features, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, we show that the Lego-Features are powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during two-pass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. Overall, they present a modular, powerful and cheap alternative to the standard encoder output, as well as the N-best hypotheses.
Rami Botros, Rohit Prabhavalkar, Johan Schalkwyk, Ciprian Chelba, Tara N. Sainath, Françoise Beaufays
2023-03-31T23:33:21Z
http://arxiv.org/abs/2304.00173v1
# Lego-Features: Exporting Modular Encoder Features for Streaming and Deliberation ASR

###### Abstract

In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed as _Lego-Features_, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, we show that the Lego-Features are powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during two-pass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. Overall, they present a modular, powerful and cheap alternative to the standard encoder output, as well as the N-best hypotheses.

Rami Botros, Rohit Prabhavalkar, Johan Schalkwyk, Ciprian Chelba, Tara N. Sainath, Francoise Beaufays Google LLC, USA [email protected]

**Index Terms**: modular, representations, zero-shot stitching

## 1 Introduction

E2E speech recognition models, which combine acoustic, pronunciation and language models from conventional systems [1] into one neural network, have become widely used, especially for on-device applications [2, 3, 4, 5, 6, 7]. 
Since they are much smaller than conventional models and their inference speed is often much faster [2, 3, 8, 9], they work well for various streaming applications. They typically use an encoder-decoder architecture [10]. Like most deep neural networks, the whole architecture is usually trained end to end. The encoder implicitly learns to serve the subsequent decoder layers, and thus, conversely, the decoder is thoroughly oriented towards inputs coming from the specific encoder that it has been trained with. Therefore, encoders and decoders from different models or training runs are generally not interchangeable without further E2E training. This tight coupling between both components stands in the way of a flexible, modular architecture. Speech encoders that have been trained on high-resource ASR data can serve as foundation models for other tasks like sentiment analysis [11] or low-resource translation [12], to name a few. However, this presents a challenge if a shared encoder representation is used for multiple downstream tasks: when the ASR encoder is retrained, all downstream models must be retrained as well. Hence, it would be more practical if each component could be developed and updated independently. To that end, we present a method for building modular speech encoder features, where different versions of the encoder can be plugged into the decoder in a zero-shot stitching manner without fine-tuning. Our method works by building on top of an existing base encoder, which is kept frozen. We adapt the Beam-Convolution scheme described in [13] to train streaming modular encoded representations, which we call Lego-Features. To produce them, the original (fixed) continuous encoded features pass through a few extra trainable "Exporter" layers, then through a CTC decoder, which is trained with an auxiliary CTC loss. Lego-Features are defined as the sorted top-\(K\) CTC logit indices at every frame; see Figure 1. 
The logits operate over a discrete space (here: the wordpiece vocabulary) and are grounded in the transcript text, which is why they tend to be modular. Overall, the traditional encoder features are forced through a tight discretizing bottleneck, which protects downstream models from coupling themselves to fine details in the encoded representation. Downstream consumers of Lego-Features need to first re-embed them, since they come in as sparse indices. [13, 14] have shown how this tight bottleneck still produces a powerful representation which is sufficiently informative for downstream ASR decoders. They also perform a "modularity test": The downstream decoder is kept constant, but gets input with a new version of the encoded representation, which is obtained by retraining the encoder from scratch using a different initialization. The switch is done in a zero-shot manner without any extra fine-tuning. Traditional continuous encoded features categorically fail the modularity test, bringing the downstream performance to nearly 100% WER, which is what motivates this new type of encoded representation. We build on the original works with a few novel contributions: 1) We find that training the modular encoder from scratch under the CTC loss is insufficient for producing the best performance. Instead, our recipe pre-trains some base encoder layers with the RNN-T loss and keeps them frozen. Next, we just train the extra Exporter layers with the auxiliary CTC loss. This solution is also practical since it enables researchers to cheaply export modular features without having to modify their original system. Thus, the quality, latency and efficiency of the base model are all maintained. 2) We adapt the design to a streaming setting for the first time. Unlike the original work [13, 14], our encoder layers' attention has limited left and right context windows, and the produced Lego-Features are successfully paired with a streaming-friendly RNN-T decoder. 
The streaming architecture still exhibits strong downstream ASR quality and passes the modularity test. By plugging the same fixed set of Lego-Features into causal as well as non-causal decoders, our work adds further evidence to their modularity and interoperability. 3) Rather than merely looking at the Lego-Features as an encoded representation, we also study them as an alternative to the N-best hypotheses within two-pass systems. We provide new comparisons against the N-best in terms of speed, accuracy and modularity. To this end, the Lego-Features are used as a first-pass output within the deliberation framework [15]. This achieves good post-deliberation WER performance, which is shown to be on par with a baseline that performs deliberation on 1st-pass RNN-T N-best hypotheses + audio features. The Lego-Features demonstrate success in the modularity test here as well. On the other hand, we find that the N-best hypothesis text does not pass the modularity test, i.e., a new N-best from a second model would confuse the deliberation decoder from the first, which is a novel observation. Moreover, the Lego-Features are cheaper to produce than the N-best, since they require no beam search or auto-regressive decoding, but are generated via a simple projection at every frame. Other works have attempted to present generic methods for zero-shot stitching between layers. In [16], this is achieved by learning representations relative to data-dependent anchors. In contrast, the method presented here does not need to choose anchor samples and leverages the existence of ground-truth speech transcripts instead. Another general approach, presented in [17], uses self-supervised objectives designed to encourage compatibility of different layer outputs. It is an open question whether the cited methods can deal with long sequences, whereas the CTC loss used here is a natural choice that works well with ASR and gives interpretable outputs. 
Further, some research has already experimented with deliberation on top of CTC outputs to save the cost of first-pass decoding [18, 19, 20]. This includes the Align-refine approach, which iteratively improves on the first-pass output. Those works tend to focus on optimizing the size and speed of the first-pass model, whereas our focus is mainly on modularity. Nevertheless, since we build on base encoder layers that have been pre-trained with the RNN-T loss, we find our CTC outputs to have high quality, which removes the need for the audio attention that is used in other deliberation models. Hence, this work also introduces some speed gains to deliberation, without using the iterative Align-refine approach. On the whole, with one simple representation, we get a compelling cheap, streaming-friendly, as well as modular, alternative to both the continuous encoding vector and the N-best hypotheses, without any loss in quality.

## 2 Modeling

Our framework is trained in three separate stages described below.

### Base Model

We start off from a pre-trained end-to-end system that follows the cascade architecture in [21]: The base encoder comprises 3 convolution layers, then 14 Conformer [22] blocks: 4 causal ones, followed by 5 blocks that process 180 milliseconds of right-context each, then 5 more causal ones. This base encoder is pre-trained using the RNN-T loss on the same training set. For the modularization steps below, the pre-trained RNN-T decoder layers will be discarded, and the base encoder is kept frozen. This recipe allows us to keep the existing pre-trained model unchanged while exporting modular features.

### Exporting Lego-Features

Figure 1 shows how the modular encoder is trained on top of a frozen base model. The Exporter layers comprise further Conformer blocks with 180ms look-ahead context. The CTC decoder [23] amounts to a single projection layer to compute the frame-level posterior over the output vocabulary. 
Our work uses wordpiece output tokens, but further research can explore using phonemes or graphemes instead. The depicted CTC loss is applied to those logits and is what trains the Exporter layers. Finally, the Lego-Features are computed by extracting the sorted top-\(K\) indices of the CTC logits, giving \(K\) integers at every frame. Note that this is performed on the logit vector directly, without requiring any actual decoding algorithm like beam search.

### Downstream Models

Figure 2 illustrates how downstream models generally consume the Lego-Features, which come in as sparse indices. The downstream consumer does not receive extra information about how the indices map to wordpiece tokens, and hence starts by embedding them. An Importer module, once again consisting of 180ms look-ahead Conformer blocks, prepares the embeddings for the downstream decoder. [13, 14] use 1D convolution + multi-headed attention in place of the Importer, but our early experiments show that Conformer blocks improve over this original stack. Note that the Lego-Features themselves are kept constant during downstream training. We experiment with two types of ASR decoders as examples for downstream tasks, which are used with the same fixed set of Lego-Features.

#### 2.3.1 Downstream RNN-T Decoder

The first downstream model uses an RNN-T decoder, which tends to serve real-time applications well, since it processes the input frames in a streaming fashion as they become available and starts outputting text tokens after a short delay [3, 24]. We adopt the same RNN-T decoder layer architecture from the base model (Section 2.1) but use it as a simulated downstream task, as the decoder in Figure 2, to see if the bottlenecked Lego-Features are as informative as the continuous base encoded tensor. 
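The top-\(K\) extraction is a simple per-frame operation on the logit matrix. A minimal NumPy sketch (the function name is ours, and ordering the \(K\) indices by descending logit value is an assumption about what "sorted" means here):

```python
import numpy as np

def lego_features(ctc_logits, k=12):
    """Extract Lego-Features from frame-level CTC logits.

    ctc_logits: [T, V] array of logits over the wordpiece vocabulary.
    Returns a [T, k] integer array: the top-k logit indices per frame,
    ordered by descending logit value. No beam search or autoregressive
    decoding is involved.
    """
    return np.argsort(-ctc_logits, axis=-1)[..., :k]
```

A downstream consumer would then re-embed these integer indices before the Importer layers, as in Figure 2.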
#### 2.3.2 Downstream LAS Decoder / Deliberation

As a second downstream ASR decoder in Figure 2, we experiment with a full-context Listen-Attend-and-Spell (LAS) decoder [25], which can achieve higher quality by attending to all input frames. A fitting baseline to this experiment is second-pass deliberation ASR [15]. Typically, a deliberation system generates first-pass hypotheses using a fast decoder, like RNN-T, then embeds its N-best hyps and attends to them with a second-pass full-context LAS decoder. We have therefore constructed a comparable deliberation baseline model, shown in Figure 3. This model is analogous to our full pipeline, i.e. Figures 1 & 2 put together, and is designed to have a similar total model size and encoder latency. It starts with the same frozen base encoder, then trains a first-pass RNN-T decoder to obtain the N-best hyps, which stand to be compared to the Lego-Features in terms of informativeness and modularity. Figure 3 also ends with an LAS decoder, except this one can optionally attend to the continuous encoder features as well, as is done in previous deliberation work [15]. Gradients do not flow back through the embedded N-best.

Figure 1: Modular Encoder. Lego-Features are exported from the frozen base encoder by training extra layers with an auxiliary CTC loss. Figure 2: Downstream models embed and process the fixed Lego-Features before passing them to a downstream decoder.

## 3 Experimental Settings

### CTC Logit Evaluation

An interesting aspect of the Lego-Features encoder is that one can evaluate its quality directly, before providing the features to any downstream tasks. This is done via a preliminary experiment where we directly decode from the full set of the CTC-trained logits (before the top-\(K\) operation in Figure 1) using beam search or greedy decoding. 
The decoding algorithm used for this evaluation is tangential to how the Lego-Features are produced, since those are only extracted as the top-\(K\) logit ranks without decoding actual transcripts. Yet this direct evaluation can inform us about the general quality of the CTC-trained logits, from which the Lego-Features are produced.

### WER and Modularity Test

The downstream ASR decoders trained on the Lego-Features (Section 2.3) are then evaluated, and a modularity test is performed. The aim of the test is to check if two different versions of the encoded features are interchangeable. We test that by keeping the downstream model fixed but feeding it with a new version of the encoded features, which we get from another training run. The second training is done from scratch with a new initialization. We compare the WER performance of the decode before and after the switch, denoted as "Normal \(\rightarrow\) Mod. Test WER" in our tables. For the Lego-Features, we retrain the encoder in Figure 1, where the base frozen encoder is also replaced with a second version from a retrained base. As a baseline, we also test the modularity of the base model itself, where we simply train the base encoder + decoder a second time end-to-end and get the retrained encoder from there.

### Architectural Details

Our base architecture follows [21]: All Conformer layers [22] are 512-dim, use 8-headed self-attention and a convolution kernel size of 15. We train on a 128-D log-mel feature frontend with a 16-D one-hot domain-id vector appended to it, see [26]. Our models work with 4,096 word pieces [27]. The RNN-T decoder comprises a prediction network and a joint network with a single 640-dim FF layer. The embedding prediction network [28] uses an embedding dimension of 320 and has 9M parameters. For the deliberation decoder, we use a 2-layer LSTM similar to [15], where each layer has 1536 hidden units followed by a 384-dim projection. We do not use external LMs. 
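The greedy decoding used in the CTC logit evaluation is standard best-path decoding: take the argmax token per frame, collapse consecutive repeats, and drop blanks. A minimal sketch (assuming blank id 0, which is our convention here, not necessarily the paper's):

```python
import numpy as np

def ctc_greedy_decode(logits, blank_id=0):
    """Best-path CTC decoding from [T, V] frame-level logits."""
    best = np.argmax(logits, axis=-1)  # per-frame argmax token
    out, prev = [], None
    for t in best:
        if t != prev and t != blank_id:  # collapse repeats, drop blanks
            out.append(int(t))
        prev = t
    return out
```

Beam search instead tracks multiple label prefixes per frame; the comparison in Section 4.1 shows the two perform similarly for these logits.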
### Datasets As discussed in [29], all E2E models are trained on multidomain audio-text pairs [26]. All datasets obtain their labels in a semi-supervised fashion, using larger teacher models trained on in-domain data to provide pseudo labels [30, 31]. Data was handled in accordance with Google AI principles [32]. To further increase data diversity, multi-condition training (MTR) [33], random data downsampling to 8kHz [34] and SpecAug [35] are also used. Noisy data is generated at signal-noise-ratio (SNR) from 0 to 30 dB, with an average SNR of 12 dB, and with T60 times ranging from 0 to 900ms, averaging 500ms. Noise segments are sampled from YouTube and daily life noisy environmental recordings. Both 8 kHz and 16 kHz versions of the data are generated, each with equal probability, to make the model robust to varying sample rates. The _Voice-Search_ test set has 10K Voice Search utterances with an average length of 5.5 seconds. They are anonymized, hand-transcribed, and are representative of Google's Voice Search traffic. ## 4 Experimental Results ### Preliminary CTC Decoder Evaluation As explained in Section 3.1, the CTC decoder in Figure 1 can be evaluated directly. Table 1 shows two settings for the Exporter layers and their corresponding CTC WER performance. The right-context length indicates the extra duration of future context attended to by the Exporter, noting that the base encoder already sees a future context of 900ms. In both cases, greedy decoding performs close to beam search, which tracks 16 hypotheses in its beam. \begin{table} \begin{tabular}{c c c|c c} \hline \hline \multicolumn{3}{c|}{Exporter Properties} & \multicolumn{2}{c}{CTC Test WER} \\ \# Blocks & Size & Right & \multirow{2}{*}{Greedy} & Beam-search \\ & & Context & & (Oracle) \\ \hline 1 & 10M & +180 ms & 5.9\% & 5.8\% (2.8\%) \\ 3 & 30M & +540 ms & **5.5\%** & **5.3\% (2.7\%)** \\ \hline \hline \end{tabular} \end{table} Table 1: CTC Voice-Search WER for different Exporter setups Figure 3: Baseline deliberation on N-best RNN-T hyps. The LAS decoder attends to embedded text and optionally to the pre-RNN-T audio features. The modularity test boundary is shown as the dotted line in the middle. For all the downstream experiments below, we use the better setup with 3 blocks for the Exporter, and apply the same design to the Importer. ### Base RNN-T vs. Downstream RNN-T Our first downstream setting works with an RNN-T decoder (Section 2.3.1). Table 2 demonstrates how the Lego-Features bottleneck still produces a rich encoding that the downstream Importer and RNN-T use well. We export \(K=12\) Lego-Features per frame and the downstream re-embeds each into \(32\) dimensions. Preliminary experiments, omitted here for brevity, indicate that varying these values does not affect downstream WER performance significantly. The Base case in the table is simply the frozen base model on the left of Figure 1, in which case the modularity test connects a new base encoder (from another training run) to the same frozen base RNN-T. The modularity test fails for the base case, yet passes for the Lego-Features. Both models involve different sizes and latencies, so a direct WER contest between them is not the main concern. Rather, the goal is to show that the Lego-Features bottleneck does not degrade performance while enabling modularity. To test robustness across changing domains, we also supply the same Lego-Features used above to a downstream RNN-T model that is trained on Librispeech data instead. The modularity test results are shown in Table 3 and only cause less than a 4% relative WER decline. ### Deliberation on N-Best vs. Lego-Features Table 4 compares the LAS deliberation scenarios described in Section 2.3.2, where the Lego-Features are compared to an N-best as a first-pass output. Dropping the audio connection significantly degrades performance in the N-best case, which is consistent with previous findings [15].
The Lego-Features seem to preserve more information in the encoding, and thus do not need the audio connection. They are significantly better than N-best text, and are only off by 0.1 in absolute WER from N-best + audio. The modularity test causes no performance decline for the Lego-Features, but does not work well in the N-best case; even the text-only case degrades by 17% relative WER. This somewhat unexpected result might be a symptom of label bias, which RNN-T suffers from because of local normalization [36, 37], but which the CTC decoder avoids with its conditional independence assumption. Hence, two separately-trained RNN-T first-pass models might exhibit different biases in their N-bests, leading to this result. #### 4.3.1 Speed Comparison Table 4 notes a difference in the input shapes to the Importers across the different types of first-pass models, after re-embedding in Figures 2 & 3. Here, \(E_{1}\) and \(E_{2}\) are the respective embedding dimensions, \(n\) is the RNN-T's beam width and \(U\) is the number of text tokens produced by it. \(K\) is the number of logit indices in the Lego-Features and \(T\) is their sequence length (=number of encoded frames). Note how the N-best's embedding expands the output sequence length, since it stacks the \(N\) hypotheses sequentially while keeping the sentence structures intact, in order to attend to this order information during second-pass decoding. Since the Lego-Features describe per-frame logit ranks without serializing them into sentences, we forgo this expansion and concatenate the embeddings within the depth dimension at each frame instead. This saves on computational cost, since the #GFLOPs used by LAS is proportional to the sequence length it is attending to. While \(U\) can change from one utterance to the other, the embedded matrices have to be padded to maximum length when working with hardware accelerators. Our system uses \(n=8\), \(U=120\), \(E_{1}=384\), \(T=343\), \(K=12\), and \(E_{2}=32\).
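Plugging these numbers in gives a quick sanity check of the two embedded shapes (simple arithmetic on the dimensions quoted above, nothing model-specific):

```python
# Embedded input sizes for the second-pass LAS, using the quoted dimensions.
n, U, E1 = 8, 120, 384     # RNN-T beam width, padded token count, N-best embedding dim
T, K, E2 = 343, 12, 32     # encoded frames, logit ranks per frame, per-rank embedding dim

nbest_shape = (n * U, E1)  # N-best hyps stacked along the sequence axis
lego_shape = (T, K * E2)   # per-frame ranks concatenated along the depth axis

saving = 1 - lego_shape[0] / nbest_shape[0]
print(nbest_shape, lego_shape)   # (960, 384) (343, 384): equal depth
print(f"{saving:.0%}")           # 64%: shorter sequence for LAS to attend to
```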
This makes the depth dimension equal, but the Lego-Features' sequence length is \(64\%\) smaller than the N-best's. Another important computational benefit of deliberating on Lego-Features is that we can obtain them without performing a beam-search procedure. It is hence possible to compute them for long utterances with high parallelization, only limited by the number of TPU cores available. Generating the N-best, on the other hand, requires sequential auto-regressive processing. For instance, benchmarking this sequential path in the RNN-T (using an in-house server TPU and the above dimensions) gives \(1.8\) ms per output token, or \(216\) ms per utterance in the padded worst case, which does become the bottleneck after the other layers are parallelized. ## 5 Conclusions and Future Work In this paper, we describe a simple recipe for exporting streaming-friendly modular encoded representations and successfully test them with RNN-T and LAS decoders. Overall, exporting the encoder output as top CTC-trained logits introduces multiple benefits. The encoding achieves strong WER performance and interchangeability is demonstrated through the modularity test. If regarded as a representation for first-pass ASR prediction, the Lego-Features surpass the N-best in quality, modularity, and generation speed. To address resource-limited environments like on-device ASR, and to improve latency, future research can explore using smaller Exporter and Importer layers. Another avenue is to export CTC logits over phoneme/triphone/grapheme vocabularies, or a combination thereof. Different types of Lego-Features can be tested with various downstream tasks, like confidence models, speech translation or spoken language understanding. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{First Pass} & Embedded & Attend & Downstream WER \\ & Shape & Audio & Normal \(\rightarrow\) Mod. Test \\ \hline RNN-T \(N\)-best & \([N\cdot U,E_{1}]\) & No & 5.4\% \(\rightarrow\) 6.3\% \\ & & Yes & 5.0\% \(\rightarrow\) 14.3\% \\ \hline Lego-Features & \([T,K\cdot E_{2}]\) & No & 5.1\% \(\rightarrow\) **5.1\%** \\ \hline \hline \end{tabular} \end{table} Table 4: Deliberation WER and Modularity Tests. Embedded Shapes discussed in Section 4.3.1 \begin{table} \begin{tabular}{c|c} \hline \hline Dev-Clean WER & Test-Other WER \\ Normal \(\rightarrow\) Mod. Test & Normal \(\rightarrow\) Mod. Test \\ \hline 4.9 \(\rightarrow\) **5.1** & 10.0 \(\rightarrow\) **10.3** \\ \hline \hline \end{tabular} \end{table} Table 3: Modularity tests if downstream is trained on Librispeech
2309.09815
On The Stabilizer Formalism And Its Generalization
The standard stabilizer formalism provides a setting to show that quantum computation restricted to operations within the Clifford group is classically efficiently simulable: this is the content of the well-known Gottesman-Knill theorem. This work analyzes the mathematical structure behind this theorem to find possible generalizations and derivation of constraints required for constructing a non-trivial generalized Clifford group. We prove that if the closure of the stabilizing set is dense in the set of $SU(d)$ transformations, then the associated Clifford group is trivial, consisting only of local gates and permutations of subsystems. This result demonstrates the close relationship between the density of the stabilizing set and the simplicity of the corresponding Clifford group. We apply the analysis to investigate stabilization with binary observables for qubits and find that the formalism is equivalent to the standard stabilization for a low number of qubits. Based on the observed patterns, we conjecture that a large class of generalized stabilizer states are equivalent to the standard ones. Our results can be used to construct novel Gottesman-Knill-type results and consequently draw a sharper line between quantum and classical computation.
Éloi Descamps, Borivoje Dakić
2023-09-18T14:36:45Z
http://arxiv.org/abs/2309.09815v1
# On The Stabilizer Formalism And Its Generalization ###### Abstract The standard stabilizer formalism provides a setting to show that quantum computation restricted to operations within the Clifford group is classically efficiently simulable: this is the content of the well-known Gottesman-Knill theorem Gottesman and Knill (1994). This work analyzes the mathematical structure behind this theorem to find possible generalizations and derivation of constraints required for constructing a non-trivial generalized Clifford group. We prove that if the closure of the stabilizing set is dense in the set of \(SU(d)\) transformations, then the associated Clifford group is trivial, consisting only of local gates and permutations of subsystems. This result demonstrates the close relationship between the density of the stabilizing set and the simplicity of the corresponding Clifford group. We apply the analysis to investigate stabilization with binary observables for qubits and find that the formalism is equivalent to the standard stabilization for a low number of qubits. Based on the observed patterns, we conjecture that a large class of generalized stabilizer states are equivalent to the standard ones. Our results can be used to construct novel Gottesman-Knill-type results and consequently draw a sharper line between quantum and classical computation. ## I Introduction Quantum computation uses quantum systems to perform calculations beyond the capabilities of classical (standard) computers Gottesman and Knill (1994). Many quantum algorithms solve problems seemingly intractable for classical computers. Prominent examples include Shor's factoring Shor (1994), Grover search Grover (1996), phase estimation Gottesman and Knill (1996), quantum simulation algorithms Gottesman and Knill (1996), etc. A common belief is that only some parts of quantum computation possess a significant speedup.
Thus, it is crucial to identify sets of gates that make quantum computation classically (in)tractable to understand how this advantage arises. Some models do not bring (exponential) speedups, and this question has been extensively studied in the literature Nielsen and Chuang (2000); Gottesman and Knill (2001), prominently with the aid of the so-called _stabilizer formalism_. Based on this formalism, the Gottesman-Knill theorem identifies a "non-trivial" portion of the quantum-mechanical computation that can be efficiently simulated classically. The theorem has pedagogical value in showing substantial differences and similarities between classical and quantum computation. This theorem also saw a few generalizations obtained by varying different parameters of the setting Gottesman and Knill (1996); Gottesman and Knill (2001), leading to better insights into the boundary between classical and quantum models. In this work, we continue this search for possible generalizations of the Gottesman-Knill theorem and the underlying stabilizer formalism. Our main results are: a) we prove the constraint theorem for the generalized Clifford group, b) we provide an exhaustive analysis of binary stabilization for two and three qubits, and based on observed patterns, c) we conjecture that binary stabilization for qubits is equivalent to the standard stabilizer formalism. The paper is organized as follows. In Section II we discuss in more detail the stabilizer formalism and the Gottesman-Knill theorem and then proceed by defining the generalized stabilizer states, stabilizer sets and Clifford groups. In Section III, we focus on infinite stabilization sets, specifically those that generate a dense set within the group of \(SU(d)\) transformations. In such a case, we derive a theorem that demonstrates that the corresponding Clifford group is trivial, composed solely of local gates and permutations of subsystems.
This finding highlights the direct relationship between the density of the stabilization set and the triviality of the associated Clifford group. Lastly, in Section IV we discuss the stabilization for qubits and binary operators. We provide a complete analysis for the two- and three-qubit cases and we show that in these cases the stabilizer formalism is equivalent to the standard one. Based on this, we conjecture that this equivalence extends to an arbitrary number of qubits. ## II Towards a generalization of the stabilizer formalism We start our analysis with the set of \(N\) qudits residing in the Hilbert space \((\mathbb{C}^{d})^{\otimes N}\). The basic idea of the stabilizer formalism is to fix a state \(|\psi\rangle\) with a set of operators \(O_{1},\cdots,O_{k}\) such that: * \(|\psi\rangle\) is a \(+1\)-eigenvector (with eigenvalue \(+1\)) for all \(O_{i}\), and * \(|\psi\rangle\) is unique (up to a complex factor). These hypotheses allow for a full characterization of the state \(|\psi\rangle\) by only specifying this list \(O_{1},\cdots,O_{k}\) of operators. We will consider (local) tensor operators \(O_{i}\in\mathcal{A}^{\otimes N}\) with \(\mathcal{A}\subset U(d)\), where \(U(d)\) is the group of \(d\times d\) unitary matrices. The set \(\mathcal{A}\) is called the _stabilizing set_, and any state \(\ket{\psi}\) uniquely stabilized (_i.e._, satisfying the two conditions above) is a _stabilizer state_. We now introduce the _generalized Clifford group_ \(\mathcal{C}_{N}(\mathcal{A})\) as follows: \[\mathcal{C}_{N}(\mathcal{A})=\{U\in U(d^{N})|\forall O\in\mathcal{A}^{\otimes N},\;UOU^{\dagger}\in\mathcal{A}^{\otimes N}\}. \tag{1}\] The definition ensures that \(O\ket{\psi}\) is also a stabilizer state for any stabilizer state \(\ket{\psi}\) and \(O\in\mathcal{C}_{N}(\mathcal{A})\). A classic example of such a stabilization setting is given by the set of Pauli matrices and this is the (standard) stabilizer formalism [11].
In this case \(\mathcal{A}=\mathcal{P}\), and \[\mathcal{P}=\{\pm 1,\pm i\}\cdot\{\mathds{1},\sigma_{X},\sigma_{Y},\sigma_{Z}\}, \tag{2}\] where \(\mathds{1}\) is the identity and \(\sigma_{X}\), \(\sigma_{Y}\) and \(\sigma_{Z}\) are the usual Pauli matrices. The corresponding Clifford group is: \[\mathcal{C}_{N}=\{U\in U(2^{N})|\forall O\in\mathcal{P}^{\otimes N},\;UOU^{\dagger}\in\mathcal{P}^{\otimes N}\}. \tag{3}\] This Clifford group is generated by two single-qubit gates and one two-qubit gate [12]: * Hadamard gate: \(H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\) * \(\frac{\pi}{2}\)-phase gate: \(S=\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right)\) * Controlled phase gate \(\Lambda Z=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{array}\right)\) or * Controlled not gate \(\Lambda X=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right)\) Computation involving gates from the Clifford group can be realised as a successive application of these three generators. With these definitions we can give a precise statement of the Gottesman-Knill theorem [1]: **Theorem 1**.: _(Gottesman-Knill) Computation utilizing only:_ * _Preparation of qubits in states of the computational basis,_ * _Quantum gates from the Clifford group_ \(\mathcal{C}_{N}\)_, and_ * _Measurements in the computational basis,_ _can be simulated efficiently on a classical probabilistic computer._ The proof is mainly based on the stabilizer formalism [2], suggesting that the general definition we gave for the stabilization may lead to a generalized version of the Gottesman-Knill theorem. However, it may happen that the set \(\mathcal{C}_{N}(\mathcal{A})\) defined in eq. (1) is composed of local gates only, reducing the model to computation with local gates. In such a situation, the corresponding generalized theorem would be trivially true.
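The defining property of \(\mathcal{C}_{N}\), that conjugation maps Pauli strings to Pauli strings, can be verified by brute force for small systems. The following numpy sketch (ours, not from the paper) checks the generators above:

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
S = np.diag([1, 1j])                                          # pi/2-phase gate
CZ = np.diag([1, 1, 1, -1]).astype(complex)                   # controlled phase

paulis = [I2, X, Y, Z]
phases = [1, -1, 1j, -1j]

def in_pauli_group(O, n):
    """Is O a phase in {±1, ±i} times an n-fold tensor product of Pauli matrices?"""
    for combo in product(paulis, repeat=n):
        P = combo[0]
        for p in combo[1:]:
            P = np.kron(P, p)
        if any(np.allclose(O, c * P) for c in phases):
            return True
    return False

# single-qubit generators preserve the Pauli set under conjugation
assert all(in_pauli_group(U @ P @ U.conj().T, 1) for U in (H, S) for P in paulis)
# the entangling generator maps the Pauli string X⊗1 to the Pauli string X⊗Z
print(np.allclose(CZ @ np.kron(X, I2) @ CZ.conj().T, np.kron(X, Z)))   # True
```

The same brute-force check can, in principle, test membership of any candidate gate in a generalized Clifford group, at an exponential cost in the number of qudits.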
Thus, two interesting questions arise in this respect: * For which set \(\mathcal{A}\) is the corresponding Clifford group not trivial? * What kind of states can be stabilized by different sets \(\mathcal{A}\)? What is meant by a trivial Clifford group here is a group contained in \(U(d)^{\otimes N}P_{N}(d)\), where \(P_{N}(d)\) is the set of permutation gates on \(N\) qudits, with elements \(P_{\sigma}\) acting on the basis states in the following way \[P_{\sigma}\ket{i_{1},\cdots,i_{N}}=\ket{i_{\sigma(1)},\cdots,i_{\sigma(N)}}, \tag{4}\] for any permutation \(\sigma\) of \(N\) elements. In other words, a trivial Clifford group is composed only of local and permutation gates. Such a group does not generate entanglement and thus the computation involving only gates from it is considered trivial. ## III Generalized Clifford group We shall try to find examples of \(\mathcal{C}_{N}(\mathcal{A})\) being non-trivial, that is, containing entangling gates. This problem was also studied in [9], where the authors considered the Clifford group with \(\mathcal{A}\) being a finite group acting irreducibly on \(\mathbb{C}^{d}\). Here we consider a more general case of infinite groups being involved. We cover a first step in this direction by considering the stabilizing set \(\mathcal{A}\) to generate a group \(\langle\mathcal{A}\rangle\) dense inside \(SU(d)\), _i.e._, \(SU(d)\subset\overline{\langle\mathcal{A}\rangle}\), where \(\overline{O}\) denotes the topological closure of the set \(O\). Under these assumptions we show a series of results leading to a new necessary condition for a generalized Clifford group to be non-trivial. We show the following result. **Proposition 1**.: _If the set \(\langle\mathcal{A}\rangle\) is dense inside \(SU(d)\), then for all integers \(N\), the Clifford group \(\mathcal{C}_{N}(\mathcal{A})\) verifies \(\mathcal{C}_{N}(\mathcal{A})\subset\mathcal{C}_{N}(U(d))\)._ _Proof_.
See Appendix (A). Next, we show that a gate from \(\mathcal{C}_{N}(U(d))\) is non-entangling, which means that it maps product-states to product-states. **Proposition 2**.: _If \(U\in\mathcal{C}_{N}(U(d))\), then for any product-state \(\ket{\psi}=\ket{\varphi_{1}}\cdots\ket{\varphi_{N}}\), we have \(U\ket{\psi}=\ket{\tilde{\varphi_{1}}}\cdots\ket{\tilde{\varphi_{N}}}\) for some states \(\ket{\tilde{\varphi_{1}}},\cdots,\ket{\tilde{\varphi_{N}}}\)._ _Proof_. See Appendix (B). An equivalent result can be obtained for product "_bra_" (dual) states. This is simply obtained by varying the proof and looking at the left eigenvectors. We thus have that all \(U\in\mathcal{C}_{N}(\mathcal{A})\) stabilize the set of product linear forms, _i.e._, \(\langle\psi|=\langle\varphi_{1}|\cdots\langle\varphi_{N}|\), and \(\langle\psi|\,U=\langle\tilde{\varphi_{1}}|\cdots\langle\tilde{\varphi_{N}}|\). With this, we can show an already known result [13] of factorizing non-entangling gates into a tensor product of local gates up to a permutation gate. However, in comparison to the proof provided in [13], our proof employs elementary mathematics and provides the result in full generality (for arbitrary \(d\) and \(N\)). **Proposition 3**.: _The non-entangling set \(\mathcal{C}_{N}(U(d))\) acting on N qudits is composed of trivial gates, i.e.,_ \[\mathcal{C}_{N}(U(d))=U(d)^{\otimes N}P_{N}(d). \tag{5}\] _Proof_. See Appendix (D). By combining these intermediary results, we arrive at the following theorem: **Theorem 2**.: _Let \(\mathcal{A}\subset U(d)\) be such that the group \(\langle\mathcal{A}\rangle\) generated by \(\mathcal{A}\) is dense inside \(SU(d)\).
Then for all integers \(N\), the Clifford group of order \(N\), \(\mathcal{C}_{N}(\mathcal{A})\), is trivial, i.e., verifies \(\mathcal{C}_{N}(\mathcal{A})\subset U(d)^{\otimes N}P_{N}(d)\)._ Theorem 2 simply means that if the set \(\mathcal{A}\) generates a group which is too big, then its corresponding generalized Clifford group \(\mathcal{C}_{N}(\mathcal{A})\) is only made of trivial gates. In this situation, a computation starting with a product-state will keep a separable form throughout the application of gates from the Clifford group. Given this, we can keep track of each qudit individually and perform an efficient classical simulation of the computation. This result gives a non-trivial constraint on the stabilizing set \(\mathcal{A}\), which is needed to generalize the stabilizer formalism. ## IV Exploring generalized stabilization for qubits In the previous section we found some constraints on the stabilizing sets \(\mathcal{A}\). However, the Gottesman-Knill theorem relies not only on the fact that the standard Clifford group \(\mathcal{C}_{N}\) is not trivial, but also on the fact that the set of all Pauli-stabilizer states (_i.e._, states of the standard stabilizer formalism) possesses a specific and rich structure. Thus, when analyzing a potential generalized stabilizer setting, an important question to address is determining the specific nature (_e.g._, entanglement structure) of the stabilizer states. We focus our attention on the case of qubits (\(d=2\)). To arrive at novel structures, it is thus important to understand when a set \(\mathcal{A}\) stabilizes states that are not locally equivalent to Pauli-stabilized ones. We say that two \(N\)-qubit states \(\ket{\psi}\) and \(\ket{\phi}\) are locally equivalent if there exists a local gate \(U=U_{1}\otimes\cdots\otimes U_{N}\) such that \(\ket{\psi}=U\ket{\phi}\) with \(U_{i}\in U(2)\).
### Binary operator case The problem in its full generality (\(\mathcal{A}\) composed of arbitrary unitary matrices) is hard, so we will consider a simpler case of binary observables. This choice comes naturally, as the Pauli matrices generating the Pauli group (standard stabilizer formalism) are specific instances of binary operators, _i.e._, operators of the form \(\vec{\sigma}\cdot\vec{\mathbf{n}}\) where \(\vec{\sigma}=(\sigma_{X},\sigma_{Y},\sigma_{Z})\) and \(\vec{\mathbf{n}}\) is a unit-norm vector. We first take a look at the stabilization problem when the elements of \(\mathcal{A}\) are general binary operators. Recall that a necessary condition to have stabilization by a list of operators is to have, for each pair of stabilizers, common \(+1\)-eigenvectors. We thus investigate this pairwise condition in the case of binary observables \(\vec{\sigma}\cdot\vec{\mathbf{n}}\). Since stabilized sets are equivalent up to local unitary, without loss of generality, we set the first stabilizing operator to be \(\sigma_{Z}\otimes\cdots\otimes\sigma_{Z}\) and the second one can be taken to be composed of operators in the \((XZ)\)-plane (_i.e._, operators of the form \(\cos(\theta)\sigma_{Z}+\sin(\theta)\sigma_{X}\)). To begin with, we introduce some useful notation. Notation: * For real angles \(\theta\) and \(\phi\) a general binary operator on the Bloch sphere is defined as \[A_{\theta,\phi}=\cos(\theta)\sigma_{Z}+\sin(\theta)\Big{(}\cos(\phi)\sigma_{X }+\sin(\phi)\sigma_{Y}\Big{)}\] (6) * For operators in the \((XZ)\)-plane we set \(A_{\theta}=A_{\theta,0}\).
* For \(n\) binary operators \(A_{1}\),...,\(A_{n}\) the projector on the \(+1\)-eigenspace is denoted as: \(P_{A_{1}\otimes\cdots\otimes A_{n}}=\frac{1}{2}(\mathds{1}+A_{1}\otimes\cdots \otimes A_{n})\) * For operators in the \((XZ)\)-plane, \(A_{\theta_{1}}\),...,\(A_{\theta_{n}}\) we set the notation \(P_{A_{\theta_{1}}\otimes\cdots\otimes A_{\theta_{n}}}=P_{\theta_{1},\ldots, \theta_{n}}\) * For indices \(j_{1},\ldots,j_{n}\in\{-1,+1\}\), we define the unnormalized states: \[\ket{\psi_{j_{1},\ldots,j_{n}}}=\mathrm{Re}\left[\sum_{k_{1},\ldots,k_{n}=0,1} (ij_{1})^{k_{1}}\cdots(ij_{n})^{k_{n}}\ket{k_{1}\cdots k_{n}}\right].\] (7) From a technical standpoint, the last definition proves to be highly useful, as in our calculations it frequently becomes necessary to retain terms within sums that involve an even number of indices \(k_{i}\), all of which are set to the value \(1\). We also see that \(\ket{\psi_{j_{1},\cdots,j_{n}}}=\ket{\psi_{-j_{1},\cdots,-j_{n}}}\) and this is why we fix the first index to \(+1\) to avoid counting the same state twice. With these definitions in place, we can state a technical theorem which has profound consequences on the stabilization with binary operators in the \((XZ)\)-plane. **Theorem 3**.: _For \(n\) angles \(\theta_{1},\ldots,\theta_{n}\), the eigenvalues (with multiplicity) of \(P_{0,\cdots,0}P_{\theta_{1},\cdots,\theta_{n}}\) are the following:_ * \(2^{n-1}\) _zeros,_ * \(\frac{1}{2}(1\ +\ \cos(j_{1}\theta_{1}+\cdots+j_{n}\theta_{n}))\) _for_ \(j_{1},...,j_{n}\in\{-1,+1\}\) _and_ \(j_{1}=1\)_, with the corresponding eigenvector_ \(\ket{\psi_{j_{1},\cdots,j_{n}}}\)_._ Proof.: See Appendix (E). Recall that we are interested in the stabilization of a state by a family of tensor products of binary operators. Two of them are chosen as a reference, and, as already said, due to local equivalence, we can always set them to \(\sigma_{Z}\otimes\cdots\otimes\sigma_{Z}\) and \(A_{\theta_{1}}\otimes\cdots\otimes A_{\theta_{N}}\).
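Theorem 3 can be checked numerically for small \(n\): build the two projectors, multiply them, and compare the spectrum with the predicted list. The following is an illustrative sketch (not part of the original text) with arbitrary test angles:

```python
import numpy as np
from functools import reduce
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.diag([1.0, -1.0])

def A(theta):
    """Binary operator in the (XZ)-plane: cos(theta) Z + sin(theta) X."""
    return np.cos(theta) * Z + np.sin(theta) * X

def proj(thetas):
    """Projector on the +1-eigenspace of A_{t1} ⊗ ... ⊗ A_{tn}."""
    O = reduce(np.kron, [A(t) for t in thetas])
    return (np.eye(len(O)) + O) / 2

thetas = [0.3, 1.1, 0.7]          # arbitrary test angles, n = 3
n = len(thetas)
M = proj([0.0] * n) @ proj(thetas)

# predicted spectrum: 2^(n-1) zeros plus (1 + cos(t1 ± t2 ± ... ± tn)) / 2
predicted = [0.0] * 2 ** (n - 1) + [
    (1 + np.cos(thetas[0] + sum(j * t for j, t in zip(js, thetas[1:])))) / 2
    for js in product([1, -1], repeat=n - 1)]
computed = np.sort(np.linalg.eigvals(M).real)
print(np.allclose(np.sort(predicted), computed))   # True
```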
Their common \(+1\)-eigenvector is simply given by a state stabilized by the product of the projectors \(P_{0,\ldots,0}P_{\theta_{1},\ldots,\theta_{N}}\). Given Theorem 3, we can conclude the following. As the eigenvalues are \(\frac{1}{2}(1+\cos(j_{1}\theta_{1}+\cdots+j_{n}\theta_{n}))\), we can vary parameters \(\theta_{1},\ldots,\theta_{n}\) to control the number of stabilized states. Note that these eigenstates are independent of the value of \(\theta_{k}\). Thus, stabilization with operators in the \((XZ)\)-plane will only lead to states given by equation (7). On the other hand, such states can be stabilized by taking tensor product operators from \(\{\sigma_{Z},\pm\sigma_{X}\}\) only. Indeed, \(\ket{\psi_{j_{1},\cdots,j_{N}}}\) is uniquely stabilized by the operators \[O_{k}=\sigma_{X}\otimes\sigma_{Z}\otimes\cdots\otimes\sigma_{Z}\otimes \underbrace{(-j_{k}\sigma_{X})}_{\text{at }k}\otimes\sigma_{Z}\otimes\cdots\otimes\sigma_{Z}, \tag{8}\] for \(k=2,\ldots,N\). To see this, we can compute the eigenvalue of \(O_{k}\) for any state \(\ket{\psi_{i_{1},\cdots,i_{N}}}\) and verify that it is \(1\) if and only if \(i_{k}=j_{k}\). This means all states stabilized by operators in the \((XZ)\)-plane are locally equivalent to Pauli stabilizer states. However, note that not all Pauli stabilizer states can be obtained in this way. For example, the state \(\ket{00}\) cannot be stabilized by only binary operators in the \((XZ)\)-plane, because states in eq. (7) for two qubits are maximally entangled, _i.e._, \(\ket{\psi_{j_{1},j_{2}}}=\ket{00}\pm\ket{11}\). Since the identity operator allows for the stabilization of product-states, we shall add the identity as an element of our stabilizing set \(\mathcal{A}\). ### Adding the identity We have seen in the previous section that stabilization by operators constructed from the \((XZ)\)-plane only yields standard stabilizer states, but states in eq.
(7) do not span all possibilities, thus adding the identity to the set \(\mathcal{A}\) is necessary. Therefore, we will explore stabilizing sets of the form \(\mathcal{A}=\{A_{\theta,\phi},\mathds{1}_{2}\}\). For a small number of qubits, we can exhaustively search all possibilities, and from there, we can get better insight into general constraints on the stabilizing set. We can also compute the associated stabilizer states up to local equivalence. #### Methods To do an exhaustive search, firstly, we list all non-equivalent stabilization patterns. A stabilization pattern simply corresponds to the list of stabilizers: \[\begin{split} O_{1}&=A_{1,1}\otimes\cdots\otimes A _{1,n}\\ &\cdots\\ O_{k}&=A_{k,1}\otimes\cdots\otimes A_{k,n}\end{split} \tag{9}\] We are interested in stabilization up to local rotation; thus, two stabilization patterns are equivalent if they can be transformed into each other via a) permutation of the qubits, b) permutation of the operators, and c) local rotations. With this, we seek * **Unique stabilization.** There is only one common \(+1\)-eigenstate of all the operators \(O_{k}\). * **Minimal stabilization.** We cannot achieve a unique stabilization with a proper subset of the operators \(\{O_{k}\}\). After listing all inequivalent stabilization patterns, we shall analyze each one separately to identify parameters \(\theta_{k}\) for which we get unique stabilization. Finally, we shall identify the stabilized states. To do so we will use two different methods. With the first method, which we call the determinant method, we consider \(\ket{\psi_{i}}\) (\(1\leq i\leq m\)) to be an eigenbasis of \(O_{1}\). A common eigenstate \(\ket{\psi}\) is then expanded over this basis: \[\ket{\psi}=\sum_{i=1}^{m}\alpha_{i}\ket{\psi_{i}}.
\tag{10}\] We write \(O_{j}\ket{\psi_{i}}=\sum_{l=1}^{m}\beta_{ijl}\ket{\psi_{l}}\), and thus the condition \(O_{j}\ket{\psi}=\ket{\psi}\) implies for all \(1\leq j\leq k\) and \(1\leq l\leq m\), \[\sum_{i=1}^{m}\alpha_{i}\beta_{ijl}=\alpha_{l}. \tag{11}\] This is a linear system in the \(m\) unknowns \(\alpha_{i}\) with \(km\) equations, which can be summarized in matrix form as: \[M\cdot\begin{pmatrix}\alpha_{1}\\ \vdots\\ \alpha_{m}\end{pmatrix}=0, \tag{12}\] with \(M\) being a \(km\times m\) matrix. The solution of the system is then given by the kernel of \(M\). To find the kernel, we impose the constraint \(\det\!\left(M^{\dagger}M\right)=0\), which gives us a necessary condition on the set of coefficients \(\beta_{ijl}\) and consequently on the set of angles \(\theta_{1},\ldots,\theta_{N}\) that parameterize our stabilizers. The second approach, called the projector method, involves examining the projector \(P_{i}=(\mathds{1}+O_{i})/2\) on the \(+1\)-eigenspace for each stabilizing operator \(O_{i}\). We consider their product \(M=P_{1}P_{2}\ldots P_{k}\). A state \(\ket{\psi}\) is stabilized by all \(O_{i}\) if and only if \(M\ket{\psi}=\ket{\psi}\); therefore, we search for the eigenvalues and eigenvectors of \(M\). The expressions for the eigenvalues of \(M\), combined with the requirement that one of them equals \(1\), impose constraints on the angles \(\theta_{1},\ldots,\theta_{N}\). Furthermore, computing the eigenbasis of \(M\) directly provides the expression of the stabilizer states. This method requires more computational resources, making it suitable only for simpler scenarios characterized by a limited number of free parameters. These two methods are applied in the analysis for two and three qubits. #### Two-qubit case We have identified four inequivalent stabilization patterns with two operators.
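As a sketch of the projector method (our own illustration, with the pattern \(\sigma_{Z}\otimes\sigma_{Z}\), \(A_{\theta}\otimes A_{\theta}\) chosen as an example): for a generic angle the product of the projectors has a single eigenvalue-\(1\) eigenvector, proportional to \(\ket{00}+\ket{11}\), while for \(\theta=0\) the fixed space is two-dimensional and stabilization is not unique:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.diag([1.0, -1.0])

def A(theta):
    """Binary operator in the (XZ)-plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def fixed_states(ops, tol=1e-9):
    """Projector method: eigenvalue-1 eigenvectors of the product P_1 ... P_k."""
    M = np.eye(len(ops[0]))
    for O in ops:
        M = M @ (np.eye(len(O)) + O) / 2
    vals, vecs = np.linalg.eig(M)
    return vecs[:, np.abs(vals - 1) < tol]

O1 = np.kron(Z, Z)
S = fixed_states([O1, np.kron(A(0.8), A(0.8))])   # a generic angle
print(S.shape[1])                                  # 1: unique stabilization
v = S[:, 0] / S[np.argmax(np.abs(S[:, 0])), 0]
print(np.round(np.real(v), 6))                     # proportional to |00> + |11>
print(fixed_states([O1, np.kron(A(0.0), A(0.0))]).shape[1])   # 2: theta = 0 degenerate
```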
Table 1 outlines the stabilization patterns along with the necessary conditions on the free parameters to achieve unique stabilization, as well as the corresponding stabilizer states. We see that the stabilizer states are either product states or maximally entangled states. These are all locally equivalent to Pauli stabilizer states. #### ii.1.3 Three-qubit case The three-qubit stabilization analysis is more involved, since there are dozens of inequivalent stabilization patterns. We start the analysis of stabilizations with only two operators. This gives us some minimal unique stabilizations, and also a two-operator compatibility rule. We enumerate 9 distinct stabilization patterns, but only one of them yields a unique stabilization, where the stabilizer states correspond to those described in eq. (7). Additionally, we identify 8 patterns that stabilize a two-dimensional space, which establishes a compatibility condition used in the subsequent analysis. From there, we also provide an exhaustive search for three operators. Details are provided in Appendix F. We found that every stabilized state is locally equivalent to a Pauli stabilizer state. Tables 2 and 3 in Appendix F summarize the analysis and findings. #### ii.1.4 Conjecture Our analysis for a low number of qubits shows that all stabilization with binary operators (including the identity operator) is locally equivalent to the standard Pauli stabilization.
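The second pattern of Table 1 can be cross-checked with the determinant-method linear system (12), here assembled in the computational basis rather than the eigenbasis of \(O_{1}\); as above, \(A_{\theta}=\cos\theta\,\sigma_{Z}+\sin\theta\,\sigma_{X}\) is an assumed parameterisation.

```python
# Sketch of the determinant-method system (12) for the pattern
#   O1 = sigma_Z (x) sigma_Z,  O2 = A_theta (x) 1  at  theta = 0.
# Assumption: A_theta = cos(theta) sigma_Z + sin(theta) sigma_X.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

theta = 0.0
O1 = np.kron(Z, Z)
O2 = np.kron(np.cos(theta) * Z + np.sin(theta) * X, I2)

# Stack the conditions (O_j - 1)|psi> = 0 into a km x m matrix M;
# the stabilized states span the kernel of M.
M = np.vstack([O1 - np.eye(4), O2 - np.eye(4)])
_, svals, vh = np.linalg.svd(M)
kernel_dim = int(np.sum(svals < 1e-9))
assert kernel_dim == 1                 # unique stabilization at theta = 0

psi = vh[-1]
assert abs(abs(psi[0]) - 1) < 1e-9     # the stabilized state is |00>
```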
This motivates us to put forward the following conjecture: **Conjecture 1**.: _All states stabilized by the set \(\mathcal{A}\) composed of binary operators and the identity, i.e., \(\mathcal{A}=\{A_{\theta,\phi},\mathds{1}_{2}\}\), are locally equivalent to standard stabilizer states (i.e., stabilized by the Pauli group \(\mathcal{P}=\{\pm 1,\pm i\}\cdot\{\mathds{1},\sigma_{X},\sigma_{Y},\sigma_{Z}\}\))._ One way to tackle this conjecture is to go back to stabilization on the \((XZ)\)-plane, where a weaker conjecture with \(\mathcal{A}=\{A_{\theta},\mathds{1}\}\) likely holds as well. Furthermore, on a small number of qubits, we observe that all Pauli stabilizations also seem possible (up to local equivalence) without the operator \(\sigma_{Y}\). So it is potentially possible that, by including the identity matrix as a stabilizer, the \(Y\) components can be eliminated. It is interesting to point out that this conjecture is false for the case of stabilization with general \(2\times 2\) unitary matrices. Indeed, as shown in [14], there is a setting, the so-called _XS-stabilizer formalism_, where one can explicitly construct a stabilizer state on 6 qubits which is not locally equivalent to a Pauli stabilizer state. The set \(\mathcal{A}\) of this setting is the group generated by three operators \(\{\sigma_{X},\mathrm{diag}(1,i),e^{i\pi/4}\mathds{1}\}\).
The stabilizer state is given by \[\ket{\psi}=\sum_{x_{i}=0,1}(-1)^{x_{1}x_{2}x_{3}}\ket{x_{1},x_{2},x_{3},x_{1}\oplus x_{2},x_{2}\oplus x_{3},x_{3}\oplus x_{1}}, \tag{13}\] (where \(\oplus\) denotes addition modulo 2) and it is stabilized by the following set \[\begin{split}& O_{1}=\sigma_{X}\otimes S^{3}\otimes S^{3}\otimes S\otimes\sigma_{X}\otimes\sigma_{X}\\ & O_{2}=S^{3}\otimes\sigma_{X}\otimes S^{3}\otimes\sigma_{X}\otimes S\otimes\sigma_{X}\\ & O_{3}=S^{3}\otimes S^{3}\otimes\sigma_{X}\otimes\sigma_{X}\otimes\sigma_{X}\otimes S.\end{split} \tag{14}\] One can see that this state is not a Pauli stabilizer state because the exponent of the coefficients \((-1)^{x_{1}x_{2}x_{3}}\) is a cubic polynomial in the variables \(x_{1},x_{2},x_{3}\). However, showing that this state is not locally equivalent to a Pauli stabilizer state is more involved and requires a full classification of standard stabilizer formalism states [14]. \begin{table} \begin{tabular}{|c|c|c|} \hline Operators & Conditions & Stabilized states \\ \hline \(\sigma_{Z}\otimes\sigma_{Z}\) & \(\theta\pm\phi=0\)\([2\pi]\) & \(\ket{\psi}=\ket{00}\pm\ket{11}\) \\ \(A_{\theta}\otimes A_{\phi}\) & & \\ \hline \(\sigma_{Z}\otimes\sigma_{Z}\) & \(\theta=0\)\([\pi]\) & \(\ket{\psi}=\ket{00}\) or \(\ket{11}\) \\ \(A_{\theta}\otimes\mathds{1}\) & & \\ \hline \(\sigma_{Z}\otimes\mathds{1}\) & \(\theta=0\)\([2\pi]\) & Non unique \\ \(A_{\theta}\otimes\mathds{1}\) & & \\ \hline \(\sigma_{Z}\otimes\mathds{1}\) & \(\mathrm{None}\) & \(\ket{\psi}=\ket{00}\) \\ \(\mathds{1}\otimes\sigma_{Z}\) & & \\ \hline \end{tabular} \end{table} Table 1: List of non-equivalent stabilization patterns for two qubits together with the associated stabilizer states. ## V Conclusion We have investigated two important questions about the stabilizer formalism in this work. We have found a set of new constraints on the set of stabilization operators used to generate non-trivial generalized Clifford groups.
We have also shown that such a set cannot generate a group dense inside \(SU(d)\). Finally, we explored the case of stabilization with binary operators, and we observed that, for a low number of qubits, stabilization is possible only by the standard Pauli group (up to local equivalence). This led us to conjecture that the same holds for any number of qubits. It is worth pointing out that there exist many generalizations for the case of qudits (\(d>2\)) [15; 16]. It is thus interesting to investigate equivalent versions of this conjecture in such settings. Our findings offer promising perspectives regarding new Gottesman-Knill-like theorems and potential applications in quantum error correction [1]. ## VI Acknowledgments The authors would like to thank Sebastian Horvat for his valuable feedback. E.D. extends thanks to the University of Vienna and the Erasmus program for enabling his fellowship and contributing to his stay in Vienna. This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [F7115] (BeyondC). For the purpose of open access, the author(s) has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2309.13695
Quiver presentations and isomorphisms of Hecke categories and Khovanov arc algebras
We prove that the extended Khovanov arc algebras are isomorphic to the basic algebras of anti-spherical Hecke categories for maximal parabolics of symmetric groups. We present these algebras by quiver and relations and provide the full submodule lattices of Verma modules.
Chris Bowman, Maud De Visscher, Amit Hazi, Catharina Stroppel
2023-09-24T17:04:46Z
http://arxiv.org/abs/2309.13695v1
# Quiver presentations and isomorphisms ###### Abstract. We prove that the extended Khovanov arc algebras are isomorphic to the basic algebras of anti-spherical Hecke categories for maximal parabolics of symmetric groups. We present these algebras by quiver and relations and provide the full submodule lattices of Verma modules. ## 1. Introduction A fundamental notion of categorical Lie theory is that of uniqueness. If a pair of \(2\)-categorical objects share the same underlying Kazhdan-Lusztig theory, then they "should be the same" -- a striking example of this is given by the \(\mathbb{Z}\)-graded algebra isomorphisms between KLR algebras and diagrammatic Soergel bimodules conjectured in [1] and proven in [1]. Our starting point for this paper is the simplest family of (\(p\)-)Kazhdan-Lusztig polynomials -- those given by oriented Temperley-Lieb diagrams, or equivalently, Dyck tilings. These combinatorial objects underlie the (extended) Khovanov arc algebras [1] and the anti-spherical Hecke categories associated to maximal parabolics of symmetric groups [1]. The first theorem of this paper explains this coincidence via an explicit and elementary \(\mathbb{Z}\)-graded algebra isomorphism (see Theorem A below) in the spirit of [1, 1]. By a theorem of Gabriel, any basic algebra over a field is isomorphic to the path algebra of its Ext-quiver, modulo relations. Such presentations are one of the "holy grails" of representation theory -- they essentially provide complete information about the structure of an algebra. Our second main theorem of this paper provides such a presentation for the basic algebras of anti-spherical Hecke categories of maximal parabolics of symmetric groups (Theorem B). We use this presentation to obtain complete submodule lattices for the Verma modules (Theorem C). The results of this paper are mostly self-contained, with elementary proofs which work over any integral domain \(\Bbbk\) containing an element \(i\in\Bbbk\) such that \(i^{2}=-1\).
### The isomorphism theorem The (extended) Khovanov arc algebras \(\mathcal{K}_{m,n}\) for \(m,n\in\mathbb{N}\) have their origins in \(2\)-dimensional topological quantum field theory and their first applications were in categorical knot theory [14, 15]. These algebras have subsequently been studied from the point of view of Springer varieties [14], their cohomological and representation theoretic structure [1, 17, 20, 18, 19], and symplectic geometry [21] and have further inspired much generalisation: from the Temperley-Lieb setting to web diagrams [13, 12, 15, 16] and also from the "even" setting to "super" [11] and "odd" settings [20], as well as to the orthosymplectic case [1, 18, 19]. In summary, these algebras form the prototype for knot-theoretic categorification; we refer to Stroppel's 2022 ICM address for more details [15]. On the other hand, anti-spherical Hecke categories of parabolic Coxeter systems \((W,P)\) provide the universal setting for studying the interaction between Kazhdan-Lusztig theory and categorical Lie theory -- they have formed the crux of the resolutions of the Jantzen, Lusztig, and Kazhdan-Lusztig positivity conjectures [22, 23, 24] and control much of the representation theory of algebraic groups and braid groups [1, 2, 25, 26, 17]. We refer to Williamson's 2018 ICM address for a more complete history and the geometric motivation for their study [22]. Our first main theorem bridges the gap between these two distinct categorical worlds: **Theorem A**.: _The extended Khovanov arc algebras \(\mathcal{K}_{m,n}\) are isomorphic (as \(\mathbb{Z}\)-graded \(\Bbbk\)-algebras) to the basic algebras \(\mathcal{H}_{m,n}\) of the anti-spherical Hecke categories associated to the maximal parabolics of symmetric groups \((S_{m+n},S_{m}\times S_{n})\) for all \(m,n\in\mathbb{N}\)._ Given the vast generalisations of Khovanov arc algebras (in particular to the super world!)
and of these anti-spherical Hecke categories (to all parabolic Coxeter systems) we hope that our main theorem will be the starting point of much reciprocal study of these two worlds. ### Quiver and relations for Hecke categories It is well-known that (\(p\)-)Kazhdan-Lusztig polynomials encode a great deal of character-theoretic and indeed cohomological information about Verma modules (particularly if one puts certain restrictions on \(p\geqslant 0\)). If the algebra is Koszul (as is the case for our algebras) we further know that the \(p\)-Kazhdan-Lusztig polynomials carry complete information about the radical layers of indecomposable projective and Verma modules. Given the almost ridiculous level of detail these polynomials encode, it is natural to ask _"what are the limits to what \(p\)-Kazhdan-Lusztig combinatorics can tell us about the structure of the Hecke category?"_ The starting point of this paper is to delve deep into the Dyck/Temperley-Lieb combinatorics for \(p\)-Kazhdan-Lusztig polynomials, which was initiated in [10, BDHN]. There is a wealth of extra, richer combinatorial information which can be encoded into the Dyck tilings underlying these \(p\)-Kazhdan-Lusztig polynomials. Instead of looking only at the sets of Dyck tilings (which enumerate the \(p\)-Kazhdan-Lusztig polynomials) we look at the relationships for passing between these Dyck tilings. In fact, this "meta-Kazhdan-Lusztig combinatorics" is sufficiently rich as to completely determine the full structure of our Hecke categories: **Theorem B**.: _The \(\Bbbk\)-algebra \(\mathcal{H}_{m,n}\) admits a quadratic presentation as the path algebra of the "Dyck quiver" \(\mathscr{D}_{m,n}\) of Definition 7.5 modulo "Dyck-combinatorial relations" (7.4) to (7.8). 
If \(\Bbbk\) is a field, then the \(\operatorname{Ext}\)-quiver of \(\mathcal{H}_{m,n}\) is isomorphic to \(\mathscr{D}_{m,n}\) and this gives a presentation of the algebra by quiver and relations._ In a nutshell, the power of Theorem B is that it allows us to understand not only the _graded composition series_ of standard and projective modules (the purview of classical Kazhdan-Lusztig combinatorics) but the _explicit extensions interrelating these composition factors_ (in terms of meta-Kazhdan-Lusztig combinatorics). In essence, Theorem B provides complete information about the structure of the anti-spherical Hecke categories of \((S_{m+n},S_{m}\times S_{n})\) for \(m,n\in\mathbb{N}\). We reap some of the fruits of Theorem B by providing an incredibly elementary description of the full submodule lattices of Verma modules: **Theorem C**.: _Let \(\Bbbk\) be a field. The submodule lattice of the Verma module \(\Delta_{m,n}(\lambda)\) can be calculated in terms of the combinatorics of Dyck tilings; moreover this lattice is independent of the characteristic of \(\Bbbk\)._ An example is depicted in Figure 1, below. Specialising to the case that \(\Bbbk\) is a field and putting Theorems A and B together, we obtain a conceptually simpler proof of the results of [10, Section 2] (which makes use of the Koszulity of these algebras over a field, which is the main result of [10]). ### Structure of the paper In Section 2 we recall the necessary combinatorics of oriented Temperley-Lieb diagrams and \(p\)-Kazhdan-Lusztig polynomials from [10, BDH\({}^{+}\)]. In Section 3 we recall the extended Khovanov arc algebras and the basic algebras of the Hecke categories which will be of interest in this paper. In Sections 4 and 5 we develop the Dyck path combinatorics and lift this to the level of generators and bases of the basic algebras of the Hecke categories. 
In Section 6 we take a short detour to discuss the notion of _dilation_ for our diagram algebras, which will simplify the main proofs significantly. In Section 7 we prove Theorem B of this paper, by lifting the Dyck combinatorics to the level of a presentation of \(\mathcal{H}_{m,n}\) over an integral domain \(\Bbbk\); we then recast this presentation in terms of the quotient of the path algebra of the Ext-quiver in the case that \(\Bbbk\) is a field. In Section 8, we prove Theorem C. Finally, in Section 9 we use Theorem B to prove the isomorphism of Theorem A. **Acknowledgements**.: _The first and third authors were funded by EPSRC grant EP/V00090X/1._ ## 2. The combinatorics of Kazhdan-Lusztig polynomials We begin by reviewing and unifying the combinatorics of Khovanov arc algebras [1, 2, 3, 4] and the Hecke categories of interest in this paper [1, 1]. ### Cosets, weights and partitions Let \(S_{n}\) denote the symmetric group of degree \(n\). Throughout this paper, we will work with the parabolic Coxeter system \((W,P)=(S_{m+n},S_{m}\times S_{n})\). We label the simple reflections with the slightly unusual subscripts \(s_{i}\), \(-m+1\leqslant i\leqslant n-1\) so that \(P=\langle s_{i}\,|\,i\neq 0\rangle\leqslant W\). We view \(W\) as the group of permutations of the \(n+m\) points on a horizontal strip numbered by the half integers \(i\pm\frac{1}{2}\) where the simple reflection \(s_{i}\) swaps the points \(i-\frac{1}{2}\) and \(i+\frac{1}{2}\) and fixes every other point. The right cosets of \(P\) in \(W\) can then be identified with labelled horizontal strips called weights, where each point \(i\pm\frac{1}{2}\) is labelled by either \(\wedge\) or \(\vee\) in such a way that the total number of \(\wedge\) is equal to \(m\) (and so the total number of \(\vee\) is equal to \(n\)). Specifically, the trivial coset \(P\) is represented by the weight with negative points labelled by \(\wedge\) and positive points labelled by \(\vee\).
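The point-swapping action of the simple reflections on weights can be sketched as follows, encoding a weight as a string over '^' (for \(\wedge\)) and 'v' (for \(\vee\)) read left to right; the helper name is ours.

```python
# The simple reflection s_i swaps the points i - 1/2 and i + 1/2; on a weight
# string this swaps two adjacent labels.  The point i - 1/2 sits at 0-based
# string position i + m - 1 (positions run over -m + 1/2, ..., n - 1/2).
def apply_reflection(weight, i, m):
    p = i + m - 1
    w = list(weight)
    w[p], w[p + 1] = w[p + 1], w[p]
    return ''.join(w)

# For m = n = 2 the trivial coset is '^^vv'; s_0 swaps the middle labels.
identity = '^' * 2 + 'v' * 2
assert apply_reflection(identity, 0, 2) == '^v^v'
assert apply_reflection('^v^v', 0, 2) == identity   # s_0 is an involution
```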
The other cosets are obtained by permuting the labels of the identity weight. An example is given on the left-hand side of Figure 2. We denote by \({}^{P}W\) the set of minimal length right coset representatives of \(P\) in \(W\). Recall that an element \(w\) lies in \({}^{P}W\) precisely when every reduced expression of \(w\) starts with \(s_{0}\). This implies that \(w\) must be fully commutative, that is, no reduced expression for \(w\) contains a subword of the form \(s_{i}s_{i\pm 1}s_{i}\) for some \(i\). It follows that the elements of \({}^{P}W\) can also be represented by partitions that fit into an \(m\times n\) rectangle. An example of the correspondence between \(w\in{}^{P}W\), its weight diagram and the associated partition is illustrated in Figure 2. Figure 1. The full submodule lattice of the Verma module \(\Delta_{m,n}(2,1)\) for \(\mathcal{H}_{m,n}\) for \(m,n=3\) and \(\Bbbk\) any field. We represent each simple module by the corresponding partition (in Russian notation) and highlight the \(3\times 3\) rectangle in which the partition exists. This module has simple head \(L_{3,3}(2,1)\) and simple socle \(L_{3,3}(3,2,1)\). Each edge connects a pair of partitions which differ by adding or removing a single Dyck path. Formally, a partition \(\lambda\) of \(\ell\) is defined to be a weakly decreasing sequence of non-negative integers \(\lambda=(\lambda_{1},\lambda_{2},\ldots)\) which sum to \(\ell\). We call \(\ell(\lambda):=\ell=\sum_{i}\lambda_{i}\) the length of the partition \(\lambda\). We define the Young diagram of a partition to be the collection of tiles \[[\lambda]=\{[r,c]\mid 1\leqslant c\leqslant\lambda_{r}\}\] depicted in Russian style with rows at \(135^{\circ}\) and columns at \(45^{\circ}\). We identify a partition with its Young diagram. We let \(\lambda^{t}\) denote the transpose partition given by reflection of the Russian Young diagram through the vertical axis.
Given \(m,n\in\mathbb{N}\) we let \(\mathscr{P}_{m,n}\) denote the set of all partitions which fit into an \(m\times n\) rectangle, that is \[\mathscr{P}_{m,n}=\{\lambda\mid\lambda_{1}\leqslant m,\lambda_{1}^{t}\leqslant n\}.\] For \(\lambda\in\mathscr{P}_{m,n}\), the \(x\)-coordinate of a tile \([r,c]\in\lambda\) is equal to \(r-c\in\{-m+1,-m+2,\ldots,n-2,n-1\}\) and we define this \(x\)-coordinate to be the content (or "colour") of the tile and we write \(\mathsf{cont}[r,c]=r-c\). For a partition \(\lambda\) of \(\ell\), we define a standard tableau of shape \(\lambda\) to be a bijection \(\mathsf{t}\) from the set of tiles of \(\lambda\) to the set \(\{1,2,\ldots,\ell\}\) such that for each \(1\leqslant k\leqslant\ell\), the set of tiles \(\mathsf{t}^{-1}(\{1,\ldots,k\})\) is a partition. We can view \(\mathsf{t}\) as a filling of the tiles of \(\lambda\) by the numbers \(1\) to \(\ell\) such that the numbers increase along rows and columns. We denote by \(\operatorname{Std}(\lambda)\) the set of all standard tableaux of shape \(\lambda\). There is one particular standard tableau that we will be using throughout this paper which is defined as follows. We let \(\mathsf{t}_{\lambda}\in\operatorname{Std}(\lambda)\) denote the tableau in which we fill the tiles of \(\lambda\) according to increasing \(y\)-coordinate and then refine according to increasing \(x\)-coordinate. An example is depicted in Figure 3. For \(\lambda\) a partition of \(\ell\), we define the content sequence of \(\mathsf{s}\in\operatorname{Std}(\lambda)\) to be the element of \(\mathbb{Z}^{\ell}\) given by reading the contents of the boxes in order. Under the correspondence between \({}^{P}W\) and \(\mathscr{P}_{m,n}\) described above, the content \(i\) of each tile of \(\lambda\in\mathscr{P}_{m,n}\) corresponds to the subscript of a simple reflection \(s_{i}\). So we will often refer to the simple reflection \(\tau=s_{i}\) as the content of the tile.
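The conventions above can be fixed with a short Python sketch: tiles of a Young diagram, their contents, the set \(\mathscr{P}_{m,n}\), and the tableau \(\mathsf{t}_{\lambda}\). We take the \(y\)-coordinate of a tile \([r,c]\) to be \(r+c\), an assumption consistent with \(x=r-c\) but not spelled out above.

```python
# Tiles, contents, partitions in an m x n rectangle, and the tableau
# t_lambda which fills tiles by increasing y = r + c, refined by x = r - c.
def tiles(la):
    return [(r, c) for r, part in enumerate(la, start=1)
                   for c in range(1, part + 1)]

def cont(tile):
    r, c = tile
    return r - c

def partitions_in_rectangle(max_part, rows):
    """Partitions with at most `rows` parts, each part at most `max_part`."""
    if rows == 0:
        return [()]
    out = [()]
    for first in range(1, max_part + 1):
        for rest in partitions_in_rectangle(first, rows - 1):
            out.append((first,) + rest)
    return out

def t_lambda(la):
    order = sorted(tiles(la), key=lambda t: (t[0] + t[1], t[0] - t[1]))
    return {tile: k for k, tile in enumerate(order, start=1)}

# P_{2,2} has binomial(4, 2) = 6 elements.
assert len(partitions_in_rectangle(2, 2)) == 6

# For lambda = (2, 1): contents of the tiles and the filling t_lambda.
assert [cont(t) for t in tiles((2, 1))] == [0, -1, 1]
assert t_lambda((2, 1)) == {(1, 1): 1, (1, 2): 2, (2, 1): 3}
```

Reading the contents of \((2,1)\) in the order given by \(\mathsf{t}_{\lambda}\) yields the content sequence \((0,-1,1)\), i.e. the word \(s_{0}s_{-1}s_{1}\).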
Moreover, standard tableaux \(\mathsf{t}\in\operatorname{Std}(\lambda)\) correspond precisely to reduced expressions \(\lambda=s_{i_{1}}s_{i_{2}}\ldots s_{i_{\ell}}\) where \(\mathsf{t}^{-1}(j)=[r_{j},c_{j}]\) with \(\mathsf{cont}[r_{j},c_{j}]=i_{j}\) for each \(1\leqslant j\leqslant\ell\). The Bruhat order on \({}^{P}W\) becomes simply the inclusion of the (Young diagrams) of partitions in \(\mathscr{P}_{m,n}\). Given \(\lambda\in\mathscr{P}_{m,n}\), we define the set \(\operatorname{Add}(\lambda)\) to be the set of all tiles \([r,c]\notin\lambda\) such that \(\lambda\cup[r,c]\in\mathscr{P}_{m,n}\). Similarly, we define the set \(\operatorname{Rem}(\lambda)\) to be the set of all tiles \([r,c]\in\lambda\) such that \(\lambda\setminus[r,c]\in\mathscr{P}_{m,n}\). Note that a partition \(\lambda\) has at most one addable or removable tile of each content. So for \([r,c]\in\operatorname{Add}(\lambda)\) (respectively \([r,c]\in\operatorname{Rem}(\lambda)\)) with \(\tau=s_{r-c}\) we write \(\lambda+\tau\) (respectively \(\lambda-\tau\)) for \(\lambda\cup[r,c]\) (respectively \(\lambda\setminus[r,c]\)). Figure 2. We depict the weight \(\varnothing\) along the bottom of the diagram, the weight of \(\lambda\) along the top of the diagram, and the coset \(s_{0}s_{-1}s_{-2}s_{-3}s_{-4}s_{1}s_{0}s_{-1}s_{-2}s_{2}s_{1}s_{3}s_{2}\). We have seen how to pass from a coset to a weight diagram and a partition. We now explain how to go directly from a weight diagram to a partition. Read the labels of a weight diagram from left to right. Starting at the leftmost corner of the \(m\times n\) rectangle, take a north-easterly step for each \(\vee\) and a south-easterly step for each \(\wedge\). We end up at the rightmost corner of the rectangle, having traced out the "northern perimeter" of the Russian Young diagram.
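The left-to-right reading just described can be coded directly; '^' and 'v' stand for \(\wedge\) and \(\vee\). The bookkeeping below, in which each \(\vee\) contributes a part equal to the number of \(\wedge\)'s to its right, is our paraphrase of the path rule, chosen so that the image lands in \(\mathscr{P}_{m,n}\) (first part at most \(m\), at most \(n\) parts).

```python
# Convert a weight (string over '^' = wedge, 'v' = vee) to the partition
# whose northern perimeter is traced by the NE/SE path of the weight.
from itertools import combinations

def weight_to_partition(weight):
    parts, wedges_to_right = [], 0
    for symbol in reversed(weight):
        if symbol == '^':
            wedges_to_right += 1
        elif wedges_to_right > 0:
            parts.append(wedges_to_right)
    return tuple(reversed(parts))

# The identity weight (m wedges then n vees) gives the empty partition.
assert weight_to_partition('^^vv') == ()
assert weight_to_partition('^v^v') == (1,)       # the coset s_0 for m = n = 2
assert weight_to_partition('vv^^') == (2, 2)     # the full 2 x 2 rectangle

# The map is a bijection onto the 6 partitions in a 2 x 2 rectangle.
weights = {''.join('^' if i in pos else 'v' for i in range(4))
           for pos in combinations(range(4), 2)}
assert len({weight_to_partition(w) for w in weights}) == 6
```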
In particular, the identity coset corresponds to the weight diagram labelled by \(m\) \(\wedge\)'s followed by \(n\) \(\vee\)'s, tracing the perimeter of the empty partition \(\varnothing\). Throughout the paper, we will identify minimal coset representatives with their weight diagrams and partitions. ### Oriented Temperley-Lieb diagrams and Kazhdan-Lusztig polynomials The following definitions come from [1]. **Definition 2.1**.: * _To each weight_ \(\lambda\) _we associate a_ \(\mathsf{cup}\) _diagram_ \(\underline{\lambda}\) _and a_ \(\mathsf{cap}\) _diagram_ \(\overline{\lambda}\)_. To construct_ \(\underline{\lambda}\)_, repeatedly find a pair of vertices labeled_ \(\vee\wedge\) _in order from left to right that are neighbours in the sense that there are only vertices already joined by cups in between. Join these new vertices together with a cup. Then repeat the process until there are no more such_ \(\vee\wedge\) _pairs. Finally draw rays down at all the remaining_ \(\wedge\) _and_ \(\vee\) _vertices. The cap diagram_ \(\overline{\lambda}\) _is obtained by flipping_ \(\underline{\lambda}\) _horizontally. We stress that the vertices of the cup and cap diagrams are not labeled._ * _Let_ \(\lambda\) _and_ \(\mu\) _be weights. We can glue_ \(\underline{\mu}\) _under_ \(\lambda\) _to obtain a new diagram_ \(\underline{\mu}\lambda\)_. We say that_ \(\underline{\mu}\lambda\) _is_ \(\mathsf{oriented}\) _if (i) the vertices at the ends of each cup in_ \(\underline{\mu}\) _are labelled by exactly one_ \(\vee\) _and one_ \(\wedge\) _in the weight_ \(\lambda\) _and (ii) it is impossible to find two rays in_ \(\underline{\mu}\) _whose top vertices are labeled_ \(\vee\wedge\) _in that order from left to right in the weight_ \(\lambda\)_. Similarly, we obtain a new diagram_ \(\lambda\overline{\mu}\) _by gluing_ \(\overline{\mu}\) _on top of_ \(\lambda\)_.
We say that_ \(\lambda\overline{\mu}\) _is oriented if_ \(\underline{\mu}\lambda\) _is oriented._ * _Let_ \(\lambda\)_,_ \(\mu\) _be weights such that_ \(\underline{\mu}\lambda\) _is oriented. We set the_ \(\mathsf{degree}\) _of the diagram_ \(\underline{\mu}\lambda\) _(respectively_ \(\lambda\overline{\mu}\)_) to be the number of clockwise oriented cups (respectively caps) in the diagram._ * _Let_ \(\lambda,\mu,\nu\) _be weights such that_ \(\underline{\mu}\lambda\) _and_ \(\lambda\overline{\nu}\) _are oriented. Then we form a new diagram_ \(\underline{\mu}\lambda\overline{\nu}\) _by gluing_ \(\underline{\mu}\) _under and_ \(\overline{\nu}\) _on top of_ \(\lambda\)_. We set_ \(\deg(\underline{\mu}\lambda\overline{\nu})=\deg(\underline{\mu}\lambda)+\deg(\lambda\overline{\nu})\)_._ An example is provided in Figure 4. For the purposes of this paper, for \(p\geqslant 0\), we can define the \(p\)-Kazhdan-Lusztig polynomials of type \((W,P)=(S_{n+m},S_{m}\times S_{n})\) as follows. For \(\lambda,\mu\in\mathscr{P}_{m,n}\) we set \[{}^{p}n_{\lambda,\mu}=\begin{cases}q^{\deg(\underline{\mu}\lambda)}&\text{if $\underline{\mu}\lambda$ is oriented}\\ 0&\text{otherwise.}\end{cases}\] We refer to [1, Theorem 7.3] and [1] for a justification of this definition and to [1] for the origins of this combinatorics. It is clear that for a fixed \(\mu\in\mathscr{P}_{m,n}\), the diagram \(\underline{\mu}\lambda\) is oriented if and only if the weight \(\lambda\) is obtained from the weight \(\mu\) by swapping the labels on some of the pairs of vertices connected by a cup in \(\underline{\mu}\). Moreover, in this case the degree of \(\underline{\mu}\lambda\) is precisely the number of such swapped pairs. See Figure 5 for an example of a cup diagram of degree 8. There is an alternative construction of the cup diagram \(\underline{\mu}\) as the top half of a Temperley-Lieb diagram \(e_{\mu}\).
An \((m+n)\)-Temperley-Lieb diagram is a rectangular frame with, in our case, \(m+n\) vertices along the top and \(m+n\) along the bottom which are paired-off by non-crossing strands. We refer to a strand connecting a top and bottom vertex as a propagating strand. We refer to any strand connecting two top vertices as a cup and any strand connecting two bottom vertices as a cap. For \(\mu\in\mathscr{P}_{m,n}\), the Temperley-Lieb diagram \(e_{\mu}\) is obtained by starting from the partition \(\mu\) and taking the product of the 'Temperley-Lieb generator' in each of its tiles. The cup diagram \(\underline{\mu}\) is then simply the top half of the Temperley-Lieb diagram \(e_{\mu}\). This is illustrated in Figure 6. For more details, see [BDH\({}^{+}\)]. Now, let \(d\) be any \((m+n)\)-Temperley-Lieb diagram and let \(\lambda\), \(\mu\) be weights; then we can form a new diagram \(\lambda d\mu\) by gluing the weight \(\lambda\) under \(d\) and \(\mu\) on top of \(d\). We say that \(\lambda d\mu\) is an oriented Temperley-Lieb diagram if for each propagating strand in \(d\) its two vertices are either both \(\vee\) symbols or both \(\wedge\) symbols and for each cup or cap in \(d\) its two vertices consist of precisely one \(\vee\) symbol and one \(\wedge\) symbol. Figure 4. The construction of the cup diagram \(\underline{\lambda}\) for \(\lambda=(5,4,2^{2})\). See also Figure 6. It is easy to see that for \(\lambda,\mu\in\mathscr{P}_{m,n}\) we have that \(\underline{\mu}\lambda\) is oriented if and only if \(\varnothing e_{\mu}\lambda\) is an oriented Temperley-Lieb diagram. Throughout the paper, we will always view the oriented Temperley-Lieb diagrams \(\varnothing e_{\mu}\lambda\) on the Young diagram of the partition \(\mu\) as illustrated in Figure 7. It was shown in [1] that we can define a graded algebra structure on the space spanned by all oriented Temperley-Lieb diagrams.
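The matching procedure of Definition 2.1, the orientedness test and the degree count can be sketched in a few lines of Python; '^' and 'v' again stand for \(\wedge\) and \(\vee\), with vertices indexed from \(0\).

```python
# Cup diagram of a weight via bracket matching ('v' opens, '^' closes),
# plus the orientedness test and degree of Definition 2.1.
def cup_diagram(weight):
    """Return (cups, rays): cups as index pairs (i, j), rays as indices."""
    stack, cups, rays = [], [], []
    for i, symbol in enumerate(weight):
        if symbol == 'v':
            stack.append(i)
        elif stack:
            cups.append((stack.pop(), i))
        else:
            rays.append(i)          # an unmatched '^'
    rays.extend(stack)              # unmatched 'v's
    return cups, sorted(rays)

def is_oriented(mu, la):
    cups, rays = cup_diagram(mu)
    if any(la[i] == la[j] for i, j in cups):
        return False                # (i) each cup needs one 'v' and one '^'
    ray_labels = [la[i] for i in rays]
    return ray_labels == sorted(ray_labels)  # (ii) no 'v'-ray left of a '^'-ray

def degree(mu, la):
    """Number of cups of mu whose labels are swapped in la (la assumed oriented)."""
    cups, _ = cup_diagram(mu)
    return sum(1 for i, j in cups if la[i] != mu[i])

mu = 'vv^^'                         # a weight giving two nested cups
assert cup_diagram(mu) == ([(1, 2), (0, 3)], [])
assert is_oriented(mu, 'v^v^') and degree(mu, 'v^v^') == 1
assert not is_oriented(mu, '^vv^')  # a cup carrying two 'v' labels
assert degree(mu, '^^vv') == 2      # both cups reoriented
```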
A crucial ingredient in the construction of the light leaves basis for the Hecke category (see Definition 5.1 below) comes from writing each oriented Temperley-Lieb diagram \(\varnothing e_{\mu}\lambda\) as a product of generators for this algebra. We will not need the explicit presentation for this graded algebra here and will instead view this product of generators as an 'oriented tableau' \(\mathfrak{t}^{\lambda}_{\mu}\). This oriented tableau \(\mathfrak{t}^{\lambda}_{\mu}\) is obtained by assigning to each tile of the partition \(\mu\) not only a number between \(1\) and \(\ell(\mu)\) defined by the tableau \(\mathfrak{t}_{\mu}\) but also one of four possible orientations determined by the weight \(\lambda\). **Definition 2.2**.: _Let \(\lambda,\mu\in\mathscr{P}_{m,n}\) such that \(\underline{\mu}\lambda\) is oriented. Draw the Temperley-Lieb diagram \(e_{\mu}\) on the tiles of \(\mu\) as in Figure 6. Gluing \(\varnothing\) and \(\lambda\) on the bottom and top of the diagram respectively defines one of four possible orientations on each tile of \(\mu\). We define the orientation label of a tile as follows:_ _We then define the oriented tableau \(\mathfrak{t}^{\lambda}_{\mu}\) to be the map which assigns to each tile \([r,c]\in\mu\) a pair \((k,x)\) where \(k=\mathfrak{t}_{\mu}([r,c])\) and \(x\in\{1,s,f,sf\}\) is the orientation label of the tile \([r,c]\)._ ## 3. The diagrammatic algebras We now recall the construction of the two protagonists of this paper: the Hecke categories associated to \((S_{m+n},S_{m}\times S_{n})\) and the (extended) Khovanov arc algebras. ### The Hecke categories We denote by \(S=\{s_{i}\,:\,-m+1\leqslant i\leqslant n-1\}\) the set of simple reflections. To simplify notation, for \(\boldsymbol{\sigma}=s_{i},\boldsymbol{\tau}=s_{j}\in S\) we write \(|\boldsymbol{\sigma}-\boldsymbol{\tau}|:=|i-j|\).
So we have \(|\boldsymbol{\sigma}-\boldsymbol{\tau}|>1\) precisely when \(\boldsymbol{\sigma}\boldsymbol{\tau}=\boldsymbol{\tau}\boldsymbol{\sigma}\) and \(|\boldsymbol{\sigma}-\boldsymbol{\tau}|=1\) precisely when \(\boldsymbol{\sigma}\boldsymbol{\tau}\boldsymbol{\sigma}=\boldsymbol{\tau}\boldsymbol{\sigma}\boldsymbol{\tau}\). Figure 7. We depict the weight \(\varnothing\) along the bottom of the diagram, the weight of \(\lambda\) along the top of the diagram, and the coset \(s_{0}s_{-1}s_{-2}s_{-3}s_{-4}s_{1}s_{0}s_{-1}s_{-2}s_{2}s_{1}s_{3}s_{2}\). The first diagram is \(e_{(5,4,2^{2})}\). The latter diagram (of degree \(2\)) is obtained by reorienting the arc connecting the first to the fourth northern vertex and the arc connecting the sixth and ninth northern vertices. We define the Soergel generators to be the framed graphs \(\mathsf{1}_{\emptyset}\), \(\mathsf{1}_{\boldsymbol{\sigma}}\), \(\mathsf{spot}_{\emptyset}^{\boldsymbol{\sigma}}\), \(\mathsf{fork}_{\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\sigma}}\) and \(\mathsf{braid}_{\boldsymbol{\rho}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\rho}}\) for \(\boldsymbol{\sigma},\boldsymbol{\rho}\in S\). [The pictures of these framed graphs did not survive extraction and are omitted here.] _For \((\boldsymbol{\sigma},\boldsymbol{\tau},\boldsymbol{\rho})\in S^{3}\) with \(|\boldsymbol{\sigma}-\boldsymbol{\rho}|,|\boldsymbol{\rho}-\boldsymbol{\tau}|,|\boldsymbol{\sigma}-\boldsymbol{\tau}|>1\), we have the commutation relations_ \[\mathsf{spot}_{\emptyset}^{\boldsymbol{\sigma}}\otimes\mathsf{1}_{\boldsymbol{\rho}}=\mathsf{braid}_{\boldsymbol{\rho}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\rho}}(\mathsf{1}_{\boldsymbol{\rho}}\otimes\mathsf{spot}_{\emptyset}^{\boldsymbol{\sigma}})\qquad(\mathsf{fork}_{\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\sigma}}\otimes\mathsf{1}_{\boldsymbol{\rho}})\mathsf{braid}_{\boldsymbol{\rho}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\rho}}=\mathsf{braid}_{\boldsymbol{\rho}\boldsymbol{\sigma}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\sigma}\boldsymbol{\rho}}(\mathsf{1}_{\boldsymbol{\rho}}\otimes\mathsf{fork}_{\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\sigma}})\] \[(\mathsf{braid}_{\boldsymbol{\tau}\boldsymbol{\rho}}^{\boldsymbol{\rho}\boldsymbol{\tau}}\otimes\mathsf{1}_{\boldsymbol{\sigma}})(\mathsf{1}_{\boldsymbol{\tau}}\otimes\mathsf{braid}_{\boldsymbol{\sigma}\boldsymbol{\rho}}^{\boldsymbol{\rho}\boldsymbol{\sigma}})(\mathsf{braid}_{\boldsymbol{\sigma}\boldsymbol{\tau}}^{\boldsymbol{\tau}\boldsymbol{\sigma}}\otimes\mathsf{1}_{\boldsymbol{\rho}})=(\mathsf{1}_{\boldsymbol{\rho}}\otimes\mathsf{braid}_{\boldsymbol{\sigma}\boldsymbol{\tau}}^{\boldsymbol{\tau}\boldsymbol{\sigma}})(\mathsf{braid}_{\boldsymbol{\sigma}\boldsymbol{\rho}}^{\boldsymbol{\rho}\boldsymbol{\sigma}}\otimes\mathsf{1}_{\boldsymbol{\tau}})(\mathsf{1}_{\boldsymbol{\sigma}}\otimes\mathsf{braid}_{\boldsymbol{\tau}\boldsymbol{\rho}}^{\boldsymbol{\rho}\boldsymbol{\tau}})\] _For \(\boldsymbol{\sigma},\boldsymbol{\tau}\in S\) with \(|\boldsymbol{\sigma}-\boldsymbol{\tau}|=1\) we have the one and two colour Demazure relations:_ \[\mathsf{bar}(\boldsymbol{\sigma})\otimes\mathsf{1}_{\boldsymbol{\sigma}}+\mathsf{1}_{\boldsymbol{\sigma}}\otimes\mathsf{bar}(\boldsymbol{\sigma})=2\,\mathsf{gap}(\boldsymbol{\sigma})\qquad\mathsf{bar}(\boldsymbol{\tau})\otimes\mathsf{1}_{\boldsymbol{\sigma}}-\mathsf{1}_{\boldsymbol{\sigma}}\otimes\mathsf{bar}(\boldsymbol{\tau})=\mathsf{1}_{\boldsymbol{\sigma}}\otimes\mathsf{bar}(\boldsymbol{\sigma})-\mathsf{gap}(\boldsymbol{\sigma})\] _and the null braid relation_ \[\mathsf{1}_{\boldsymbol{\sigma}\boldsymbol{\tau}\boldsymbol{\sigma}}+(\mathsf{1}_{\boldsymbol{\sigma}}\otimes\mathsf{spot}_{\emptyset}^{\boldsymbol{\sigma}}\otimes\mathsf{1}_{\boldsymbol{\sigma}})\mathsf{fork}_{\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\sigma}}(\mathsf{1}_{\boldsymbol{\sigma}}\otimes
\mathsf{spot}_{\boldsymbol{\tau}}^{\emptyset}\otimes\mathsf{1}_{\boldsymbol{\sigma}})=0\] _Further, we require the interchange law and the monoidal unit relation_ \[\big{(}\mathsf{D}_{1}\otimes\mathsf{D}_{2}\big{)}\circ\big{(}\mathsf{D}_{3}\otimes\mathsf{D}_{4}\big{)}=(\mathsf{D}_{1}\circ\mathsf{D}_{3})\otimes(\mathsf{D}_{2}\circ\mathsf{D}_{4})\qquad\mathsf{1}_{\emptyset}\otimes\mathsf{D}_{1}=\mathsf{D}_{1}=\mathsf{D}_{1}\otimes\mathsf{1}_{\emptyset}\] _for all diagrams \(\mathsf{D}_{1},\mathsf{D}_{2},\mathsf{D}_{3},\mathsf{D}_{4}\). Finally, we require the non-local cyclotomic relations_ \[\mathsf{1}_{\boldsymbol{\sigma}}\otimes D=0\qquad\mathsf{bar}(\boldsymbol{\tau})\otimes D=0\] _for all \(s_{0}\neq\boldsymbol{\sigma}\in S\), \(\boldsymbol{\tau}\in S\) and any diagram \(D\)._ _We also define the idempotent truncation_ \[\mathsf{1}_{m,n}=\sum_{\lambda\in\mathscr{P}_{m,n}}\mathsf{1}_{\mathsf{t}_{\lambda}}\qquad\mathcal{H}_{m,n}=\mathsf{1}_{m,n}\mathscr{H}_{m,n}\mathsf{1}_{m,n}\] **Remark 3.3**.: _In [1] the algebra \(\mathcal{H}_{m,n}\) is shown to be the basic algebra of the anti-spherical Hecke category for \(W=S_{m+n}\) the finite symmetric group and \(P=S_{m}\times S_{n}\leqslant W\) a maximal parabolic subgroup._ **Remark 3.4**.: _The algebras \(\mathscr{H}_{m,n}\) and \(\mathcal{H}_{m,n}\) can be equipped with a \(\mathbb{Z}\)-grading which preserves the duality \(*\). The degrees of the generators under this grading are defined as follows:_ \[\mathsf{deg}(\mathsf{1}_{\emptyset})=0\quad\mathsf{deg}(\mathsf{1}_{\boldsymbol{\sigma}})=0\quad\mathsf{deg}(\mathsf{spot}_{\boldsymbol{\sigma}}^{\emptyset})=1\quad\mathsf{deg}(\mathsf{fork}_{\boldsymbol{\sigma}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}})=-1\quad\mathsf{deg}(\mathsf{braid}_{\boldsymbol{\tau}\boldsymbol{\sigma}}^{\boldsymbol{\sigma}\boldsymbol{\tau}})=0\] _for \(\boldsymbol{\sigma},\boldsymbol{\tau}\in S\) such that \(|\boldsymbol{\sigma}-\boldsymbol{\tau}|>1\)._

### Khovanov arc algebras

We now recall the definition of the extended Khovanov arc algebras studied in [1, 1, 1, 2, 1].
We define \(\mathcal{K}_{m,n}\) to be the algebra spanned by diagrams \[\{\underline{\lambda}\mu\overline{\nu}\mid\lambda,\mu,\nu\in\mathscr{P}_{m,n}\text{ such that }\mu\overline{\nu},\underline{\lambda}\mu\text{ are oriented}\}\] with the multiplication defined as follows. First set \[(\underline{\lambda}\mu\overline{\nu})(\underline{\alpha}\beta\overline{\gamma})=0\quad\text{unless }\nu=\alpha.\] To compute \((\underline{\lambda}\mu\overline{\nu})(\underline{\nu}\beta\overline{\gamma})\) place \((\underline{\lambda}\mu\overline{\nu})\) under \((\underline{\nu}\beta\overline{\gamma})\) and then follow the 'surgery' procedure. This surgery combines two circles into one or splits one circle into two using the following rules for re-orientation (where we use the notation \(1=\) anti-clockwise circle, \(x=\) clockwise circle, \(y=\) oriented strand). We have the splitting rules \[1\mapsto 1\otimes x+x\otimes 1,\quad x\mapsto x\otimes x,\quad y\mapsto x\otimes y\] and the merging rules \[1\otimes 1\mapsto 1,\quad 1\otimes x\mapsto x,\quad x\otimes 1\mapsto x,\quad x\otimes x\mapsto 0,\quad 1\otimes y\mapsto y,\quad x\otimes y\mapsto 0,\] \[y\otimes y\mapsto\left\{\begin{array}{ll}y\otimes y&\text{if both strands are propagating, one is}\\ &\wedge\text{-oriented and the other is $\vee$-oriented;}\\ 0&\text{otherwise.}\end{array}\right.\] **Example 3.5**.: _We have the following product of Khovanov diagrams_ _where we highlight with arrows the pair of arcs on which we are about to perform surgery. The first and second equalities each follow from one of the merging rules above._ **Example 3.6**.: _We have the following product of Khovanov diagrams_ _where we highlight with arrows the pair of arcs on which we are about to perform surgery. This is similar to Example 3.5._

## 4. Dyck combinatorics

We have defined the -Kazhdan-Lusztig polynomials via counting of certain oriented Temperley-Lieb diagrams.
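The splitting and merging rules of the surgery procedure recalled above form a small token algebra (for the circle labels they are the multiplication and comultiplication of the Frobenius algebra \(\mathbb{C}[x]/(x^{2})\)). The following Python sketch is our own illustrative encoding, not code from the paper; the \(y\otimes y\) case depends on strand orientations and is omitted.

```python
# Illustrative sketch of the surgery re-orientation rules.
# Tokens: "1" = anti-clockwise circle, "x" = clockwise circle, "y" = oriented strand.

SPLIT = {  # one component -> list of pairs (each with coefficient 1)
    "1": [("1", "x"), ("x", "1")],  # 1 -> 1(x)x + x(x)1
    "x": [("x", "x")],
    "y": [("x", "y")],
}

MERGE = {  # pair of components -> single component (None encodes coefficient 0)
    ("1", "1"): "1", ("1", "x"): "x", ("x", "1"): "x",
    ("x", "x"): None, ("1", "y"): "y", ("x", "y"): None,
}

def merge(a, b):
    """Merge two components; returns (token, coefficient).
    The y (x) y rule is orientation-dependent and not modelled here."""
    out = MERGE.get((a, b))
    return (out, 1) if out is not None else (None, 0)

def split(a):
    """Split one component into a list of ((token, token), coefficient)."""
    return [(pair, 1) for pair in SPLIT[a]]
```

For instance, `merge("x", "x")` returns a zero coefficient, matching the rule \(x\otimes x\mapsto 0\).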
For the purposes of this paper, we require richer combinatorial objects which _refine_ the Temperley-Lieb construction: these are provided by tilings by Dyck paths. Let us start with a simple example to see how these Dyck paths come from the oriented Temperley-Lieb diagrams. Consider the partitions and. The oriented Temperley-Lieb diagram is illustrated in Figure 8. We see that is obtained from by swapping the labels of the vertices of one cup in. The tiles of which intersect this cup form a Dyck path (see definition below) highlighted in pink. Moreover, the partition is obtained from the partition by removing the equivalent Dyck path shaded in grey (or, equivalently, by removing the pink Dyck path, and letting the tiles fall under gravity). More generally, if with oriented of degree, then we will see that the partition is obtained from the partition by removing Dyck paths. In this section, we develop the combinatorics of Dyck paths needed to give a quadratic presentation for the Hecke category.

### Dyck paths

We define a path on the -tiled rectangle to be a finite non-empty set of tiles that are ordered for some such that for each we have or. Note that the set of contents of the tiles in a path forms an interval of integers. We say that a path is a Dyck path if the minimal height of the path is achieved at the start and end of the path. We will write and. Throughout the paper, we will identify all Dyck paths having the same content interval. There are a few places where we will need to fix a particular representative for a Dyck path and in that case we will use subscripts, such as or. Given \(P\) a Dyck path, we set \(|P|\) to be the number of tiles in \(P\). We also define the breadth of \(P\), denoted by \(b(P)\), to be \[b(P)=\tfrac{1}{2}(|P|+1).\] This measures the horizontal distance covered by the path.
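Both defining properties of a Dyck path can be checked from a list of tile heights recorded in increasing content order; the encoding below is our assumption, not the paper's. Note that since the path starts and ends at its minimal height and each step changes the height by one, \(|P|\) is odd, so \(b(P)=\tfrac{1}{2}(|P|+1)\) is an integer.

```python
def is_dyck(heights):
    """heights[i] = height of the tile at the i-th content of the interval.
    A Dyck path attains its minimal height at the start and end of the path."""
    return len(heights) > 0 and heights[0] == min(heights) == heights[-1]

def breadth(path_tiles):
    """b(P) = (|P| + 1) / 2, the horizontal distance covered by the path."""
    assert len(path_tiles) % 2 == 1, "a Dyck path has an odd number of tiles"
    return (len(path_tiles) + 1) // 2
```

For example, a one-tile path has breadth \(1\) and a five-tile path has breadth \(3\).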
**Definition 4.1**.: _Let \(P\) and \(Q\) be Dyck paths._
* _We say that \(P\) and \(Q\) are_ adjacent _if and only if the multiset given by the disjoint union \(\mathsf{cont}(P)\sqcup\mathsf{cont}(Q)\) is an interval._
* _We say that \(P\) and \(Q\) are_ distant _if and only if_ \[\min\{|\mathsf{cont}[r,c]-\mathsf{cont}[x,y]|\,:\,[r,c]\in P,[x,y]\in Q\}\geqslant 2.\]
* _We say that \(P\)_ covers _\(Q\) and write \(Q\prec P\) if and only if_ \[\mathsf{first}(Q)>\mathsf{first}(P)\text{ and }\mathsf{last}(Q)<\mathsf{last}(P).\]

Examples of such Dyck paths \(P\) and \(Q\) are given in Figure 9.

### Removable and addable Dyck paths

Now we fix a partition \(\mu\in\mathscr{P}_{m,n}\). Recall that we identify any pair of Dyck paths which have the same content intervals. **Definition 4.2**.: _Let \(\mu\in\mathscr{P}_{m,n}\) and \(P\) be a Dyck path. We say that \(P\) is a removable Dyck path from \(\mu\) if there is a representative \(P_{b}\) of \(P\) such that \(\lambda:=\mu\setminus P_{b}\in\mathscr{P}_{m,n}\). In this case we will write \(\lambda=\mu-P\). (Note that this is well-defined as if \(P_{b}\) exists then it is unique.) We define the set \(\operatorname{DRem}(\mu)\) to be the set of all removable Dyck paths from \(\mu\)._ Figure 8. On the left we picture the cup diagram for \((5^{3},4,1)\) and we highlight the arc, \(p\), and the corresponding path \(P_{sf}\). On the right we have the partition/cup diagram obtained by removing \(P\). Figure 9. Examples of \(P\) and \(Q\) adjacent, distant, and \(Q\prec P\) respectively. _We say that \(P\) is an_ addable Dyck path _of \(\mu\) if there is a representative \(P_{b}\) of \(P\) such that \(\lambda:=\mu\sqcup P_{b}\in\mathscr{P}_{m,n}\). In this case we will write \(\lambda=\mu+P\). (Note again that this is well-defined as if \(P_{b}\) exists then it is unique.)
We define the set \(\operatorname{DAdd}(\mu)\) to be the set of all addable Dyck paths of \(\mu\)._ **Proposition 4.3**.: _Fix \(\mu\in\mathscr{P}_{m,n}\). There is a bijection between the set of cups in \(e_{\mu}\) (or in \(\underline{\mu}\)) and the set \(\operatorname{DRem}(\mu)\)._ Proof.: As observed at the beginning of this section, every cup in \(e_{\mu}\) gives rise to a removable Dyck path. The fact that every removable Dyck path corresponds to a cup follows from the construction of the cup diagram \(\underline{\mu}\). **Definition 4.4**.: _Let \(\mu\in\mathscr{P}_{m,n}\). For each \(P\in\operatorname{DRem}(\mu)\), we define \(P_{sf}\) to be its representative given by the set of tiles intersecting the corresponding cup in \(e_{\mu}\) as shaded in pink in Figure 8._ **Lemma 4.5**.: _Let \(\mu\in\mathscr{P}_{m,n}\) and let \(P,Q\in\operatorname{DRem}(\mu)\). Then either \(P\) covers \(Q\), or \(Q\) covers \(P\), or \(P\) and \(Q\) are distant._ Proof.: This follows directly from Proposition 4.3. **Definition 4.6**.: _Let \(\mu\in\mathscr{P}_{m,n}\) and \(P,Q\in\operatorname{DRem}(\mu)\). We say that \(P\) and \(Q\)_ commute _if \(P\in\operatorname{DRem}(\mu-Q)\) and \(Q\in\operatorname{DRem}(\mu-P)\)._ **Lemma 4.7**.: _Let \(\mu\in\mathscr{P}_{m,n}\) and \(P,Q\in\operatorname{DRem}(\mu)\). Then \(P\) and \(Q\) commute if and only if \(P_{sf}\cap Q_{sf}=\emptyset\)._ Proof.: This follows directly by definition.

### Oriented Temperley-Lieb diagrams and Dyck tiling

We now introduce Dyck tilings and relate them to oriented arc diagrams. **Definition 4.8**.: _Let \(\lambda\subseteq\mu\in\mathscr{P}_{m,n}\). A Dyck tiling of the skew partition \(\mu\setminus\lambda\) is a set \(\{P^{1},\ldots,P^{k}\}\) of Dyck paths such that_ \[\mu\setminus\lambda=\bigsqcup_{i=1}^{k}P^{i}\] _and for each \(i\neq j\) we have either \(P^{i}\) covers \(P^{j}\) (or vice versa), or \(P^{i}\) and \(P^{j}\) are distant.
We call \((\lambda,\mu)\) a Dyck pair of degree\(k\) if \(\mu\setminus\lambda\) has a Dyck tiling with \(k\) Dyck paths._ We will see that Dyck tilings are essentially unique, and as a consequence the degree of a Dyck pair is well defined. Examples of such tilings are given in Figure 10 for the pair \((\lambda,\mu)=((11,9,8,7,6,4,3^{2},2^{2}),(11^{7},8^{3},2^{2}))\). We see that even though the tilings are different (as partitions of \(\mu\setminus\lambda\)), the Dyck paths appearing are the same (remember that we identify Dyck paths with the same content intervals). **Lemma 4.9**.: _Let \(\lambda\subseteq\mu\in\mathscr{P}_{m,n}\) with Dyck tiling \(\mu\setminus\lambda=\sqcup_{i=1}^{k}P^{i}\) as in Definition 4.8. Then \(P^{i}\in\operatorname{DRem}(\mu)\) for all \(1\leqslant i\leqslant k\)._ Proof.: We prove it by induction on \(k\). If \(k=0\) there is nothing to prove. If \(k=1\) then \(\mu\setminus\lambda=P^{1}\) and so \(P^{1}\in\operatorname{DRem}(\mu)\) as required. Now let \(k\geqslant 2\) and assume that the result holds for \(k-1\). Pick a removable tile \([x,y]\) in \(\mu\setminus\lambda\) such that it belongs to some \(P^{j}\) with \(|P^{j}|\) minimal. Then we claim that \(P^{j}\in\operatorname{DRem}(\mu)\). Indeed, if there were any tile \([r,c]\) above \(P^{j}\) preventing it from being removable, then by the definition of Dyck pair and the minimality of \(P^{j}\), we would have that \([r,c]\) belongs to a Dyck path \(Q\) which covers \(P^{j}\). But this would contradict the fact that \([x,y]\) is removable. It remains to show that \(P^{i}\in\operatorname{DRem}(\mu)\) for all \(i\neq j\). Now, we have that \((\mu-P^{j},\lambda)\) is a Dyck pair and so by induction \(P^{i}\in\operatorname{DRem}(\mu-P^{j})\) for all \(i\neq j\). Fix \(i\neq j\). If \(P^{i}\) and \(P^{j}\) are distant, then we have \(P^{i}\in\operatorname{DRem}(\mu)\) as required. 
If \(P^{i}\) covers \(P^{j}\) then we must have \(|P^{i}|\geqslant|P^{j}|+4\), as it is impossible to have a partition \(\mu\) and Dyck paths \(Q,Q^{\prime}\) with \(|Q^{\prime}|=|Q|+2\), \(Q\in\operatorname{DRem}(\mu)\) and \(Q^{\prime}\in\operatorname{DRem}(\mu-Q)\). This means that we can shift the tiles of \(P^{j}\in\operatorname{DRem}(\mu-P^{i})\) with the same contents as those of \(P^{i}\in\operatorname{DRem}(\mu)\) one step up and we get an equivalent Dyck path which is now removable from \(\mu\) as required. This is illustrated in Figure 11. Finally, suppose \(P^{j}\) covers \(P^{i}\). In this case we can again shift the tiles of \(P^{i}\in\operatorname{DRem}(\mu-P^{j})\) one step up so that it is now a subset of \(P^{j}\in\operatorname{DRem}(\mu)\). We claim that this subset is also removable. If not, then we would have some \(P^{l}\) which is adjacent to \(P^{i}\), contradicting the fact that \((\lambda,\mu)\) is a Dyck pair. This case is illustrated in Figure 12. **Theorem 4.10**.: _Let \(\lambda,\mu\in\mathscr{P}_{m,n}\). Then \(\underline{\mu}\lambda\) is oriented if and only if \((\lambda,\mu)\) is a Dyck pair._ Figure 12. In both diagrams we depict examples of Dyck paths \(P^{j}\) and \(P^{i}\) such that the former covers the latter. On the left we see that \(P^{i}\) is removable. On the right we make \(P^{i}\) slightly larger so as to contradict removability: in this case we must also include the Dyck path \(P^{l}\) by our assumption that \(\lambda\) is a partition; however \(P^{l}\) and \(P^{i}\) are adjacent, a contradiction. Figure 10. Two of the twelve Dyck tilings of shape \((11^{7},8^{3},2^{2})\setminus(11,9,8,7,6,4,3^{2},2^{2})\). Compare with Figure 5. Proof.: Assume that \(\underline{\mu}\lambda\) is oriented. Then the weight \(\lambda\) is obtained from the weight \(\mu\) by swapping the labels of pairs corresponding to some of the cups in \(e_{\mu}\). Let \(P^{1},\ldots,P^{k}\) be the Dyck paths corresponding to these cups. 
We list these in order such that if \(P^{i}\prec P^{j}\) then \(j<i\). Then it is easy to see that \(P^{i}\in\operatorname{DRem}(\mu-P^{1}-\ldots-P^{i-1})\) for all \(i\) and \(\lambda=\mu-P^{1}-\ldots-P^{k}\). It follows from Lemma 4.5 that \(\mu\setminus\lambda=\sqcup_{i=1}^{k}P^{i}\) is a Dyck tiling. Conversely, suppose that \(\lambda\subseteq\mu\) with \(\mu\setminus\lambda=\sqcup_{i=1}^{k}P^{i}\) a Dyck tiling. Then it follows from Lemma 4.9 that each \(P^{i}\in\operatorname{DRem}(\mu)\) and so \(\underline{\mu}\lambda\) is oriented. **Corollary 4.11**.: _Let \(\lambda\subseteq\mu\). Then \(\mu\setminus\lambda=\sqcup_{i=1}^{k}P^{i}\) where the \(P^{i}\)'s are Dyck paths is a Dyck tiling if and only if \(P^{i}\in\operatorname{DRem}(\mu)\) for all \(i\). In particular, the set of Dyck paths \(\{P^{i}:1\leqslant i\leqslant k\}\) is unique. Moreover, in this case we have that \(\deg(\underline{\mu}\lambda)=k\)._ ### Dyck paths generated by tiles We will need one last piece of combinatorics to describe our quadratic presentation for the Hecke category. **Definition 4.12**.: _Fix \(\mu\in\mathscr{P}_{m,n}\) and \([r,c]\in\mu\). Let \(l,k\) be the maximal non-negative integers such that \([r-i,c+i]\in\mu\) for all \(0\leqslant i\leqslant l\), \([r-i+1,c+i]\in\mu\) for all \(1\leqslant i\leqslant l\) and \([r+j,c-j]\in\mu\) for all \(0\leqslant j\leqslant k\), \([r+j,c-j+1]\in\mu\) for all \(1\leqslant j\leqslant k\). Then define the Dyck path generated by the tile \([r,c]\in\mu\), denoted by \(\langle r,c\rangle_{\mu}\), to be the path_ \[[r-l,c+l],[r-l+1,c+l],\ldots,[r,c],\ldots,[r+k,c-k].\] Note that the Dyck path generated by a tile of a partition \(\mu\) may or may not be in \(\operatorname{DRem}(\mu)\) as illustrated in Figure 13.

## 5. Generators for the Hecke category

In this section, we lift the combinatorics of Section 4 to provide a new set of generators for the Hecke category \(\mathcal{H}_{m,n}\).
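The maximality conditions in Definition 4.12 translate directly into two while-loops once \(\mu\) is encoded as a set of tiles \((r,c)\); this encoding, and the sketch below, are our own illustrative choices, not from the paper.

```python
def generated_path(mu, r, c):
    """Compute the Dyck path <r,c>_mu of Definition 4.12 as a list of tiles,
    in decreasing content order: [r-l,c+l], [r-l+1,c+l], ..., [r,c], ..., [r+k,c-k].
    Here mu is a set of (row, column) tiles and (r, c) is a tile of mu."""
    # l maximal with [r-i,c+i] in mu (0<=i<=l) and [r-i+1,c+i] in mu (1<=i<=l)
    l = 0
    while (r - l - 1, c + l + 1) in mu and (r - l, c + l + 1) in mu:
        l += 1
    # k maximal with [r+j,c-j] in mu (0<=j<=k) and [r+j,c-j+1] in mu (1<=j<=k)
    k = 0
    while (r + k + 1, c - k - 1) in mu and (r + k + 1, c - k) in mu:
        k += 1
    tiles = []
    for i in range(l, 0, -1):          # staircase above-right of [r,c]
        tiles.append((r - i, c + i))
        tiles.append((r - i + 1, c + i))
    tiles.append((r, c))
    for j in range(1, k + 1):          # staircase below-left of [r,c]
        tiles.append((r + j, c - j + 1))
        tiles.append((r + j, c - j))
    return tiles
```

For the square partition \((2^{2})\), the path generated by the tile \((2,1)\) is the three-tile staircase \((1,2),(2,2),(2,1)\), which is removable; the path generated by \((2,2)\) is the single tile itself.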
These generators are not 'monoidal', but all lie in degree \(0\) or \(1\) (this is the first step in providing a quadratic presentation).

### Soergel diagrams from oriented Temperley-Lieb diagrams

We now revisit the classical definition of the light leaves basis starting from oriented Temperley-Lieb diagrams. This material is covered in detail (from a slightly different perspective) in [BDHN]. **Definition 5.1**.: _We define up and down operators on diagrams as follows. Let \(D\) be any Soergel graph with northern colour sequence \(\mathsf{s}\in\operatorname{Std}(\alpha)\) for some \(\alpha\in\mathscr{P}_{m,n}\)._
* _Suppose that \(\boldsymbol{\sigma}\in\operatorname{Add}(\alpha)\). We define \(\mathsf{U}_{\sigma}^{1}(D)\) and \(\mathsf{U}_{\sigma}^{0}(D)\) to be the Soergel graphs obtained from \(D\) by the pictured constructions (pictures omitted)._
* _Now suppose that \([r,c]\in\operatorname{Rem}(\alpha)\) for \(\alpha\vdash a\) and with \(r-c=\boldsymbol{\sigma}\in S\). We let \(\mathsf{t}\in\operatorname{Std}(\alpha-[r,c])\) be defined as follows: if \(\mathsf{s}[r,c]=k\) then we let \(\mathsf{t}^{-1}(j)=\mathsf{s}^{-1}(j)\) for \(1\leqslant j<k\) and \(\mathsf{t}^{-1}(j-1)=\mathsf{s}^{-1}(j)\) for \(k<j\leqslant a\). We let \(\mathsf{t}\otimes\boldsymbol{\sigma}\in\operatorname{Std}(\alpha)\) be defined by \((\mathsf{t}\otimes\boldsymbol{\sigma})[r,c]=a\) and \((\mathsf{t}\otimes\boldsymbol{\sigma})[x,y]=\mathsf{t}[x,y]\) otherwise. We define \(\mathsf{D}_{\sigma}^{0}(D)\) and \(\mathsf{D}_{\sigma}^{1}(D)\) by the corresponding pictured constructions (pictures omitted)._

**Definition 5.2**.: _Let \(\lambda,\mu\in\mathscr{P}_{m,n}\) with \((\lambda,\mu)\) a Dyck pair. Recall the oriented tableau \(\mathsf{t}_{\mu}^{\lambda}\) from Definition 2.2. Suppose that \(\mathsf{t}_{\mu}^{\lambda}[r_{k},c_{k}]=(k,x_{k})\) for each \(1\leqslant k\leqslant\ell(\mu)\). We construct the Soergel graph \(D_{\mu}^{\lambda}\) inductively as follows. We set \(D_{0}\) to be the empty diagram. Now let \(1\leqslant k\leqslant\ell(\mu)\) with \(\boldsymbol{\sigma}=r_{k}-c_{k}\).
We set_ \[D_{k}=\left\{\begin{array}{ll}\mathsf{U}_{\sigma}^{1}(D_{k-1})&\text{if $x_{k}=1$;}\\ \mathsf{U}_{\sigma}^{0}(D_{k-1})&\text{if $x_{k}=s$;}\\ \mathsf{D}_{\sigma}^{0}(D_{k-1})&\text{if $x_{k}=f$;}\\ \mathsf{D}_{\sigma}^{1}(D_{k-1})&\text{if $x_{k}=sf$.}\end{array}\right.\] _Now \(D_{\ell(\mu)}\) has southern colour sequence \(\mathsf{t}_{\mu}\) and northern colour sequence some \(\mathsf{s}\in\operatorname{Std}(\lambda)\). We then define \(D_{\mu}^{\lambda}=\mathsf{braid}_{\mathsf{s}}^{\mathsf{t}_{\lambda}}\circ D_{\ell(\mu)}\). We also define \(D_{\lambda}^{\mu}=(D_{\mu}^{\lambda})^{*}\)._ **Example 5.3**.: _In Figure 14 we provide an example of the labelling of an oriented Temperley-Lieb diagram and the corresponding up-down construction of a basis element._ **Theorem 5.4**.: _The algebra \(\mathcal{H}_{m,n}\) is a graded cellular (in fact quasi-hereditary) algebra with graded cellular basis given by_ \[\{D_{\lambda}^{\mu}D_{\nu}^{\lambda}\mid\lambda,\mu,\nu\in\mathscr{P}_{m,n}\text{ with }(\lambda,\mu),(\lambda,\nu)\text{ Dyck pairs}\} \tag{5.1}\] _with_ \[\deg(D_{\lambda}^{\mu}D_{\nu}^{\lambda})=\deg(\lambda,\mu)+\deg(\lambda,\nu),\] _with respect to the involution \(*\) and the partial order on \(\mathscr{P}_{m,n}\) given by inclusion._ Figure 14. On the left we depict the labelling of the oriented Temperley–Lieb diagram of shape \(\lambda=(1)\) and \(\mu=(3^{3})\). On the right we depict the unique \(D_{\mu}^{\lambda}\) for \(\mathsf{t}_{(3^{3})}\in\operatorname{Std}((3^{3}))\). Proof.: This is simply a combinatorial rephrasing of the light leaves basis, constructed in full generality in [11, 12] and reproven in the case of \((W,P)=(S_{m+n},S_{m}\times S_{n})\) in [10].
Note, in particular, that the degree \(0\) basis elements are given by \(D_{\lambda}^{\lambda}=1_{\lambda}\), \(\lambda\in\mathscr{P}_{m,n}\) and the degree \(1\) basis elements are given by \(D_{\mu}^{\lambda}\) and \(D_{\lambda}^{\mu}\) for \(\lambda,\mu\in\mathscr{P}_{m,n}\) with \(\lambda=\mu-P\) for some \(P\in\operatorname{DRem}(\mu)\). We will show that these degree \(0\) and degree \(1\) elements generate \(\mathcal{H}_{m,n}\). But first we will describe an easy way of visualising products of light leaves basis elements directly from the oriented tableaux used to define them.

### Multiplying generators on the oriented tableaux

We have seen that the Soergel graph \(D_{\mu}^{\lambda}\) is completely determined by the oriented tableau \(\mathfrak{t}_{\mu}^{\lambda}\). To visualise the multiplication of two such Soergel graphs directly from the oriented tableaux, we will want to consider pairs of tableaux of the same shape. This can easily be done by adding one more possible orientation for tiles, namely \(0\). When constructing the corresponding Soergel graph, whenever we encounter a tile with \(0\)-orientation we will simply tensor with the empty Soergel graph, that is, we leave the graph unchanged (see Figure 15 for two examples). For example, let \(P\in\operatorname{DRem}(\mu)\) and \(Q\in\operatorname{DRem}(\mu-P)\) and assume for now that \(P\) and \(Q\) commute. We would like to be able to visualise the product \[D_{\mu-P}^{\mu-P-Q}D_{\mu}^{\mu-P}\] on the oriented tableaux. So instead of considering the oriented tableau \(\mathfrak{t}_{\mu-P}^{\mu-P-Q}\) as a labelling of the tiles of \(\mu-P\), we can visualise it as a labelling of the tiles of \(\mu\) with all tiles belonging to \(P_{sf}\) having \(0\)-orientation. The orientation of the other tiles remains unchanged.
We can then easily multiply the elements \(D_{\mu-P}^{\mu-P-Q}\) with \(D_{\mu}^{\mu-P}\) simply by 'stacking' the two oriented tableaux, without any need to apply a braid generator in between the two diagrams. An example is depicted in Figure 16. Now, let \(P\in\operatorname{DRem}(\mu)\) and \(Q\in\operatorname{DRem}(\mu-P)\) be such that \(P\) and \(Q\) do not commute. We proceed as above (rewriting the tiles in \(P_{sf}\) so as to have a \(0\)-orientation) and then we let each tile with an \(s\)-orientation fall down one place (from \([r,c]\) to \([r-1,c-1]\), say) and we leave all other tiles unchanged. An example is given in Figure 19. When multiplying two elements in this way, we will represent the product by splitting each tile in half with the label of the top half corresponding to the first element and the label of the bottom half corresponding to the second element. When considering the dual Soergel graphs \(D_{\lambda}^{\mu}=(D_{\mu}^{\lambda})^{*}\), we will represent them with the same oriented tableau as \(D_{\mu}^{\lambda}\) except that we will replace all \(s\)-orientations (respectively \(f\)-orientations, or \(sf\)-orientations) by the symbol \(s^{*}\) (respectively \(f^{*}\), or \(f^{*}s^{*}\)). An example of a degree two basis element \(D_{\lambda}^{\mu}D_{\nu}^{\lambda}\) is given in Figure 19. We will now restate some of the (simplest) relations in the Hecke category on the oriented tableaux. **Proposition 5.5**.: _The relations depicted in Figures 17 and 18 hold._ Figure 15. Re-drawing a tableau of shape (\(2^{2}\)) as tableaux of shape (\(3^{3}\)) where \((2^{2})=(3^{3})-P\) and \(P_{sf}\) is depicted as the blue zeroes. Proof.: These are all restatements of the relations given in the monoidal presentation of the Hecke category. For example, the last one is the dual fork-spot contraction.

### Generators for the Hecke category

We are now ready to prove that the algebra \(\mathcal{H}_{m,n}\) is generated in degrees \(0\) and \(1\). Figure 16.
A product of commuting diagrams on tableaux. Figure 17. The (dual) spot idempotent relations on tableaux. Figure 18. A few fork (and fork-spot) relations on tableaux. **Proposition 5.6**.: _The algebra \(\mathcal{H}_{m,n}\) is generated by the elements_ \[\{D_{\mu}^{\lambda},D_{\lambda}^{\mu}\mid\lambda,\mu\in\mathscr{P}_{m,n}\text{ with }\lambda=\mu-P\text{ for some }P\in\operatorname{DRem}(\mu)\}\cup\{D_{\mu}^{\mu}=\mathbbm{1}_{\mu}\mid\mu\in\mathscr{P}_{m,n}\}.\] Proof.: It suffices to show that every element \(D_{\mu}^{\lambda}\) for \(\lambda,\mu\in\mathscr{P}_{m,n}\) with \((\lambda,\mu)\) a Dyck pair can be written as a product of these elements. We proceed by induction on \(k=\deg(\underline{\mu}\lambda)\). For \(k=0\) or \(1\), there is nothing to prove. So assume that \(k\geqslant 2\). We have \(\mu\setminus\lambda=\sqcup_{i=1}^{k}P^{i}\) where each \(P^{i}\in\operatorname{DRem}(\mu)\). Pick \(P\in\{P^{i}\,:\,1\leqslant i\leqslant k\}\) such that there is no \(P^{i}\) covering \(P\). Then we claim that \[D_{\mu}^{\lambda}=D_{\mu-P}^{\lambda}D_{\mu}^{\mu-P}.\] The result would then follow by induction. To see this, note that the oriented tableau \(\mathfrak{t}_{\mu-P}^{\lambda}\), viewed as a tableau of shape \(\mu\) as explained in the last subsection, is obtained from \(\mathfrak{t}_{\mu}^{\lambda}\) by setting the orientation of all tiles of \(P_{sf}\) to \(0\). Moreover, if there is some \(P^{i}\neq P\) which does not commute with \(P\), then each of the \(s\)-orientations on the tiles of \(P_{sf}^{i}\) (which also belong to \(P_{sf}\)) in \(\mathfrak{t}_{\mu}^{\lambda}\) falls down one tile. Note that by assumption on \(P\), these were labelled by \(1\) in \(\mathfrak{t}_{\mu}^{\lambda}\). Now using the relations given in Figures 17 and 18, we see that \(D_{\mu-P}^{\lambda}D_{\mu}^{\mu-P}=D_{\mu}^{\lambda}\) as required. The dual element \(D_{\lambda}^{\mu}=(D_{\mu}^{\lambda})^{*}\) can then be written as the reverse product of the dual degree \(1\) elements.
An example is given in Figure 19.

## 6. Dilation and contraction

We now make a slight detour in order to construct dilation maps which allow us to interrelate partitions, weights, Dyck paths as well as Hecke categories and Khovanov arc algebras of different sizes. **Definition 6.1**.: _For \(-m\leqslant k\leqslant n\) we define the dilation map \(\varphi_{k}:\mathscr{P}_{m,n}\longrightarrow\mathscr{P}_{m+1,n+1}\) on weights by setting \(\varphi_{k}(\lambda)\) for \(\lambda\in\mathscr{P}_{m,n}\) to be the weight obtained from \(\lambda\) by moving any label in position \(x<k\) to \(x-1\), any label in position \(x>k\) to \(x+1\) and labelling the vertices \(k-\frac{1}{2}\) and \(k+\frac{1}{2}\) by \(\vee\) and \(\wedge\) respectively._ The following lemmas follow directly from the definition. **Lemma 6.2**.: _The map \(\varphi_{k}\) is injective with image given by the set \(\mathscr{P}_{m+1,n+1}^{k}\) consisting of all partitions with a removable node of content \(k\). We call such partitions contractible at \(k\)._ Figure 19. A product of non-commuting diagrams on tableaux. **Lemma 6.3**.: _Let \(\lambda,\mu\in\mathscr{P}_{m,n}\) and let \(-m\leqslant k\leqslant n\). We have that \((\lambda,\mu)\) is a Dyck pair of degree \(j\) if and only if \((\varphi_{k}(\lambda),\varphi_{k}(\mu))\) is a Dyck pair of degree \(j\).
In particular, if \(\lambda=\mu-P\) for some \(P\in\operatorname{DRem}(\mu)\) then we have \(\varphi_{k}(\lambda)=\varphi_{k}(\mu)-Q\) where \(Q\in\operatorname{DRem}(\varphi_{k}(\mu))\) satisfies
* \(|Q|=|P|+2\) and \(\underline{\operatorname{cont}}(Q)=\underline{\operatorname{cont}}(P)\cup\{\operatorname{first}(P)-1,\operatorname{last}(P)+1\}\) if \(k\in\underline{\operatorname{cont}}(P)\);
* \(|Q|=|P|\) and \(\underline{\operatorname{cont}}(Q)=\{l+1\,:\,l\in\underline{\operatorname{cont}}(P)\}\) if \(k<l\) for all \(l\in\underline{\operatorname{cont}}(P)\);
* \(|Q|=|P|\) and \(\underline{\operatorname{cont}}(Q)=\{l-1\,:\,l\in\underline{\operatorname{cont}}(P)\}\) if \(k>l\) for all \(l\in\underline{\operatorname{cont}}(P)\).

We write \(\varphi_{k}(P):=Q\)._ We now extend the dilation map \(\varphi_{k}\) to dilation homomorphisms for the Hecke categories and the arc algebras. We will use the same notation for all three dilation maps. We start with the Hecke category. **Theorem 6.4**.: _Let \(\Bbbk\) be a commutative integral domain and let \(i\in\Bbbk\) be a square root of \(-1\). For \(-m\leqslant k\leqslant n\), we define the map \(\varphi_{k}:\mathcal{H}_{m,n}\to\mathcal{H}_{m+1,n+1}\) on the generators as follows.
For \(\lambda,\mu\in\mathscr{P}_{m,n}\) with \(\lambda=\mu-P\) for some \(P\in\operatorname{DRem}(\mu)\) we have \(\varphi_{k}(\mathds{1}_{\mu})=\mathds{1}_{\varphi_{k}(\mu)},\) and_ \[\varphi_{k}(D_{\mu}^{\lambda})=\left\{\begin{array}{ll}D_{\varphi_{k}(\mu)} ^{\varphi_{k}(\lambda)}&\text{if $k\notin\underline{\operatorname{cont}}(P)$}\\ (-i)\cdot D_{\varphi_{k}(\mu)}^{\varphi_{k}(\lambda)}&\text{if $k\in\underline{ \operatorname{cont}}(P)$ and $k$ labels a spot tile in $P_{sf}$}\\ i\cdot D_{\varphi_{k}(\mu)}^{\varphi_{k}(\lambda)}&\text{if $k\in\underline{ \operatorname{cont}}(P)$ and $k$ labels a fork tile in $P_{sf}$}\\ \end{array}\right.\] _and \(\varphi_{k}(D_{\lambda}^{\mu})=\varphi_{k}((D_{\mu}^{\lambda})^{*})=(\varphi_ {k}(D_{\mu}^{\lambda}))^{*}\). Then \(\varphi_{k}\) extends to an injective homomorphism of graded \(\Bbbk\)-algebras._ Proof.: The map \(\varphi_{k}\) is defined on the monoidal (spot, fork, braid and idempotent) generators of \(\mathcal{H}_{m,n}\) in [BDHN, Section 5.3] (note that in that paper we use the notation \(\varphi_{\boldsymbol{\tau}}\) where \(\boldsymbol{\tau}=s_{k}\)), where it is proven to be an injective homomorphism of graded \(\Bbbk\)-algebras. Rewriting this in terms of the generators \(D_{\mu}^{\lambda}\) and \(D_{\varphi_{k}(\mu)}^{\varphi_{k}(\lambda)}\) we deduce the result. We now define the dilation homomorphisms for \(\mathcal{K}_{m,n}\). **Theorem 6.5**.: _Let \(\Bbbk\) be a commutative integral domain. For \(-m\leqslant k\leqslant n\), the map \(\varphi_{k}:\mathcal{K}_{m,n}\to\mathcal{K}_{m+1,n+1}\) defined on arc diagrams by_ \[\varphi_{k}(\underline{\mu}\lambda\overline{\nu})=\varphi_{k}(\underline{\mu })\varphi_{k}(\lambda)\overline{\varphi_{k}(\nu)}\] _extends to an injective homomorphism of graded \(\Bbbk\)-algebras._ Proof.: Consider a pair of arc diagrams for which the \(k\pm\frac{1}{2}\) vertices form anti-clockwise oriented circles. 
We can choose to do the surgery procedure so that this is the final step we consider. The anti-clockwise circle acts as an idempotent and so the result follows. These dilation homomorphisms will allow us to prove results by induction. The base cases for the induction will be the following:

Figure 20. The latter partition is obtained from the former by dilation at \(k=1\).

**Definition 6.6**.: _Let \(\lambda,\mu\in\mathscr{P}_{m,n}\) with \(\lambda=\mu-P\) for \(P\in\mathrm{DRem}(\mu)\). We say that \((\lambda,\mu)\) are_ incontractible _if there does not exist \(k\in\mathbb{Z}\) such that \(\lambda,\mu\in\mathscr{P}_{m,n}^{k}\)._ **Remark 6.7**.: _By definition, it is clear that \((\lambda,\mu)\) are incontractible if and only if \(\mu=(c^{r})\) is a rectangular partition and \(\lambda=(c^{r-1},c-1)\) so that \(b(P)=1\)._

## 7. The quiver and relations for \(\mathcal{H}_{m,n}\)

We now provide a presentation for \(\mathcal{H}_{m,n}\) over \(\Bbbk\) an arbitrary integral domain. Before stating the presentation as a theorem, we first recall (and slightly rephrase) a proposition from [BDHN, Proposition 4.18] and a lemma which will be used in the proof. **Proposition 7.1**.: _[_BDHN, Proposition 4.18_]_ _Let \(\lambda\in\mathscr{P}_{m,n}\), \([r,c]\in\mathrm{Add}(\lambda)\) such that \(s_{r-c}=\tau\). Then we have that_ \[\mathtt{1}_{\lambda}\otimes\mathsf{bar}(\tau)=\sum_{[x,y]}(-1)^{b(\langle x,y\rangle_{\lambda})}D^{\lambda}_{\lambda-\langle x,y\rangle_{\lambda}}D^{\lambda-\langle x,y\rangle_{\lambda}}_{\lambda}. 
\tag{7.1}\] _where the sum is taken over all \([x,y]\in\lambda\) where either \([x,y]=[x,c]\) with \(x<r\) or \([x,y]=[r,y]\) with \(y<c\), and \(\langle x,y\rangle_{\lambda}\in\mathrm{DRem}(\lambda)\)._ Proof.: Given \(\lambda\in\mathscr{P}_{m,n}\), \([x,y]\in\lambda\) such that \(\mathtt{t}_{\lambda}([x,y])=k\) and \(\boldsymbol{\sigma}=s_{x-y}\), we set \[\mathsf{gap}(\mathtt{t}_{\lambda}-[x,y])=\mathtt{1}_{\mathtt{t}_{\lambda} \downarrow_{\{1,\ldots,k-1\}}}\otimes\mathsf{spot}_{\emptyset}^{\sigma}\mathsf{ spot}_{\sigma}^{\emptyset}\otimes\mathtt{1}_{\mathtt{t}_{\lambda}\downarrow_{\{k+1, \ldots,\ell(\lambda)\}}}.\] It was shown in [BDHN, Proposition 4.18] that \[\mathtt{1}_{\lambda}\otimes\mathsf{bar}(\tau)=-\!\!\!\sum_{[x,y]}\mathsf{gap} (\mathtt{t}_{\lambda}-[x,y]) \tag{7.2}\] where the sum is taken over all \([x,y]\in\lambda\) where either \([x,y]=[x,c]\) with \(x<r\) or \([x,y]=[r,y]\) with \(y<c\). For each term in the sum, we now apply the null-braid relation \(b(\langle x,y\rangle_{\lambda})-1\) times to get \[\mathsf{gap}(\mathtt{t}_{\lambda}-[x,y])=(-1)^{b(\langle x,y\rangle_{\lambda})-1}D^{\lambda}_{\lambda-\langle x,y\rangle_{\lambda}}D^{\lambda-\langle x,y \rangle_{\lambda}}_{\lambda}.\] If \(\langle x,y\rangle_{\lambda}\in\mathrm{DRem}(\lambda)\), then this is a basis element and we are done. If \(\langle x,y\rangle_{\lambda}\notin\mathrm{DRem}(\lambda)\), write \(\langle x,y\rangle_{\lambda}=[x_{1},y_{1}],\ldots,[x_{s},y_{s}]\). Then we have either \([x_{1},y_{1}+1]\in\lambda\) and \([x_{1}-1,y_{1}+1]\notin\lambda\), or \([x_{s}+1,y_{s}]\in\lambda\) and \([x_{s}+1,y_{s}-1]\notin\lambda\). In both cases, we can apply the cyclotomic relations to deduce that the corresponding element in the sum is zero. **Lemma 7.2**.: _The following relations hold._ Proof.: The first equation follows by applying the null braid relation followed by the fork-spot contraction as follows: (where the first equality is merely a trivial isotopy). 
The second equation is obtained by simply applying the null braid relation, as required. **Theorem 7.3**.: _The algebra \(\mathcal{H}_{m,n}\) is the associative \(\Bbbk\)-algebra generated by the elements_ \[\{D^{\lambda}_{\mu},D^{\mu}_{\lambda}\mid\lambda,\mu\in\mathscr{P}_{m,n}\text{ with }\lambda=\mu-P\text{ for some }P\in\mathrm{DRem}(\mu)\}\cup\{1_{\mu}\mid\mu\in\mathscr{P}_{m,n}\} \tag{7.3}\] _subject to the following relations and their duals._ **The idempotent relations:**_For all \(\lambda,\mu\in\mathscr{P}_{m,n}\), we have that_ \[\mathbf{1}_{\mu}\mathbf{1}_{\lambda}=\delta_{\lambda,\mu}\mathbf{1}_{\lambda} \qquad\qquad\mathbf{1}_{\lambda}D^{\lambda}_{\mu}\mathbf{1}_{\mu}=D^{\lambda} _{\mu}. \tag{7.4}\] **The self-dual relation:**_Let \(P\in\mathrm{DRem}(\mu)\) and \(\lambda=\mu-P\). Then we have_ \[D^{\lambda}_{\mu}D^{\mu}_{\lambda}=(-1)^{b(P)-1}\Bigg{(}2\sum_{\begin{subarray}{c}Q\in\mathrm{DRem}(\lambda)\\ P\subset Q\end{subarray}}(-1)^{b(Q)}D^{\lambda}_{\lambda-Q}D^{\lambda-Q}_{\lambda}+\sum_{\begin{subarray}{c}Q\in\mathrm{DRem}(\lambda)\\ Q\text{ adjacent to }P\end{subarray}}(-1)^{b(Q)}D^{\lambda}_{\lambda-Q}D^{\lambda-Q}_{\lambda}\Bigg{)} \tag{7.5}\] **The commuting relation:**_Let \(P,Q\in\mathrm{DRem}(\mu)\) which commute, so that \(Q\in\mathrm{DRem}(\mu-P)\) and \(P\in\mathrm{DRem}(\mu-Q)\). Then we have_ \[D^{\mu-P-Q}_{\mu-P}D^{\mu-P}_{\mu}=D^{\mu-P-Q}_{\mu-Q}D^{\mu-Q}_{\mu}\qquad\qquad D^{\mu-P}_{\mu}D^{\mu}_{\mu-Q}=D^{\mu-P}_{\mu-P-Q}D^{\mu-P-Q}_{\mu-Q} \tag{7.6}\] **The non-commuting relation:**_Let \(P,Q\in{\rm DRem}(\mu)\) with \(P\prec Q\) which do not commute. 
Then \(Q\setminus P=Q^{1}\sqcup Q^{2}\) where \(Q^{1},Q^{2}\in{\rm DRem}(\mu-P)\) and we have_ \[D_{\mu}^{\mu-Q}D_{\mu-P}^{\mu}=D_{\mu-P-Q^{1}}^{\mu-Q}D_{\mu-P}^{\mu-P-Q^{1}}= D_{\mu-P-Q^{2}}^{\mu-Q}D_{\mu-P}^{\mu-P-Q^{2}} \tag{7.7}\] **The adjacent relation:**_Let \(P\in{\rm DRem}(\mu)\) and \(Q\in{\rm DRem}(\mu-P)\) be adjacent. Denote by \(\langle P\cup Q\rangle_{\mu}\), if it exists, the smallest removable Dyck path of \(\mu\) containing \(P\cup Q\). Then we have_ \[D_{\mu-P}^{\mu-P-Q}D_{\mu}^{\mu-P}=\left\{\begin{array}{ll}(-1)^{b((P\cup Q )_{\mu})-b(Q)}D_{\mu-\langle P\cup Q\rangle_{\mu}}^{\mu-P-Q}D_{\mu}^{\mu- \langle P\cup Q\rangle_{\mu}}&\mbox{if $\langle P\cup Q\rangle_{\mu}$ exists}\\ 0&\mbox{otherwise}\end{array}\right. \tag{7.8}\] **Example 7.4**.: _We have already seen examples of the "commuting relations" in Figure 16; of the "non-commuting relations" in Figure 19; the "adjacent relation" in Lemma 7.2; the "idempotent relations" are what one would expect; the combinatorics of the "self-dual relation" is pictured in Proposition 7.1 and Figure 21._ Proof.: By Proposition 5.6 it is enough to check that (7.4) to (7.8) are a complete list of relations. We first prove that all these relations do hold. The idempotent relations are immediate. We now proceed to check the other relations. **Proof of the self-dual relation.** First consider the case where \(b(P)=1\). This is (up to commutation) exactly the case covered in Proposition 7.1. We just need to note that \(\langle r-1,c\rangle_{\lambda}\) and \(\langle r,c-1\rangle_{\lambda}\) give Dyck paths adjacent to \(P\) and that \(Q=\langle r-j,c\rangle_{\lambda}=\langle r,c-j\rangle_{\lambda}\) (for \(j\geqslant 2\)) give two identical Dyck paths satisfying \(P\prec Q\). 
Now if \(b(P)\geqslant 2\) then we can find \(k\) such that \((\lambda,\mu)=(\varphi_{k}(\lambda^{\prime}),\varphi_{k}(\mu^{\prime}))\) where \(\lambda^{\prime}=\mu^{\prime}-P^{\prime}\) with \(b(P^{\prime})=b(P)-1\) and \(k\in\underline{\mathsf{cont}}(P^{\prime})\). Now using induction and applying the dilation homomorphism we get \[-D_{\mu}^{\lambda}D_{\lambda}^{\mu}=(-1)^{b(P^{\prime})-1}\Bigg{(}2\sum_{\begin{subarray}{c}Q^{\prime}\in\mathrm{DRem}(\lambda^{\prime})\\ P^{\prime}\subset Q^{\prime}\end{subarray}}(-1)^{b(Q^{\prime})}\varphi_{k}(D^{\lambda^{\prime}}_{\lambda^{\prime}-Q^{\prime}}D^{\lambda^{\prime}-Q^{\prime}}_{\lambda^{\prime}})+\sum_{\begin{subarray}{c}Q^{\prime}\in\mathrm{DRem}(\lambda^{\prime})\\ Q^{\prime}\text{ adjacent to }P^{\prime}\end{subarray}}(-1)^{b(Q^{\prime})}\varphi_{k}(D^{\lambda^{\prime}}_{\lambda^{\prime}-Q^{\prime}}D^{\lambda^{\prime}-Q^{\prime}}_{\lambda^{\prime}})\Bigg{)}\] Expanding each term \(\varphi_{k}(D^{\lambda^{\prime}}_{\lambda^{\prime}-Q^{\prime}}D^{\lambda^{\prime}-Q^{\prime}}_{\lambda^{\prime}})\) using Theorem 6.4 (each term of the first sum picks up a factor of \(-1\) and has \(b(\varphi_{k}(Q^{\prime}))=b(Q^{\prime})+1\), while the adjacent terms are unchanged) gives the required relation.

**Proof of the commuting relation.** This follows directly from the definition of the light leaves basis, see Figure 16.

**Proof of the non-commuting relation.** The base case, where \(b(P)=1\) and \(b(Q)=2\), is verified in Figure 22. The general case follows by applying the dilation homomorphisms; keeping track of the factors of \(i\) and \(-i\) labelling the fork (respectively spot) tiles in \(Q_{sf}\), gives the required relation. (The example in Figure 19 is obtained by dilating the example in Figure 22 at \(k=0\).)

**Proof of the adjacent relation.** First assume that \(b(P)=b(Q)=1\). Say \(Q=[r,c]\) then we have \(\langle P\cup Q\rangle_{\mu}=\langle r,c\rangle_{\mu}\). Now applying Lemma 7.2 (i) once followed by \(b(\langle r,c\rangle_{\mu})-2\) times Lemma 7.2 (ii), we obtain \[D^{\mu-P-Q}_{\mu-P}D^{\mu-P}_{\mu}=(-1)^{b(\langle r,c\rangle_{\mu})-1}D^{\mu -P-Q}_{\mu-\langle r,c\rangle_{\mu}}D^{\mu-\langle r,c\rangle_{\mu}}_{\mu}\] Now if \(\langle r,c\rangle_{\mu}\notin\operatorname{DRem}(\mu)\) then this is equal to zero by the cyclotomic relations, giving the result. An example of this case is provided in Figure 23. Now if \(b(P)\geqslant 2\) or \(b(Q)\geqslant 2\) then we can find \(k\) such that \((\mu,\mu-P,\mu-P-Q,\mu-\langle P\cup Q\rangle_{\mu})=(\varphi_{k}(\mu^{\prime }),\varphi_{k}(\mu^{\prime}-P^{\prime}),\varphi_{k}(\mu^{\prime}-P^{\prime}-Q ^{\prime}),\varphi_{k}(\mu^{\prime}-\langle P^{\prime}\cup Q^{\prime}\rangle_{\mu^{\prime}}))\) with either \(k\in\operatorname{\underline{\mathsf{cont}}}(P^{\prime})\) or \(k\in\operatorname{\underline{\mathsf{cont}}}(Q^{\prime})\) but not both. In the first case, using induction and the dilation homomorphism we have \[(\pm i)D^{\mu-P-Q}_{\mu-P}D^{\mu-P}_{\mu}=(-1)^{b(\langle P^{\prime}\cup Q^{ \prime}\rangle_{\mu^{\prime}})-b(Q^{\prime})}(\mp i)D^{\mu-P-Q}_{\mu-\langle P\cup Q\rangle_{\mu}}D^{\mu-\langle P\cup Q\rangle_{\mu}}_{\mu}.\] Noting that \(b(\langle P\cup Q\rangle_{\mu})=b(\langle P^{\prime}\cup Q^{\prime}\rangle_{ \mu^{\prime}})+1\) and \(b(Q)=b(Q^{\prime})\) gives the result. In the second case, using induction and the dilation homomorphism we have \[(\pm i)D^{\mu-P-Q}_{\mu-P}D^{\mu-P}_{\mu}=(-1)^{b(\langle P^{\prime}\cup Q^{ \prime}\rangle_{\mu^{\prime}})-b(Q^{\prime})}(\pm i)D^{\mu-P-Q}_{\mu-\langle P\cup Q\rangle_{\mu}}D^{\mu-\langle P\cup Q\rangle_{\mu}}_{\mu}.\] Noting that \(b(\langle P\cup Q\rangle_{\mu})=b(\langle P^{\prime}\cup Q^{\prime}\rangle_{ \mu^{\prime}})+1\) and \(b(Q)=b(Q^{\prime})+1\) gives the result. It remains to show that these form a complete list of relations. 
**Completeness of relations.** It is enough to show that, using (7.4)-(7.8) and their duals, we can rewrite any product of \(k\) degree \(1\) generators as a linear combination of light leaves basis elements. We proceed by induction on \(k\). For \(k=1\) there is nothing to prove. For \(k=2\), note that (7.4)-(7.8) and their duals cover precisely all possible (non-zero) products of two degree \(1\) generators and rewrite these as linear combinations of basis elements. Now, assume that the result holds for \(k\) and consider a product of \(k+1\) generators. By induction, it is enough to consider a product of the form \[D^{\mu}_{\lambda}D^{\lambda}_{\nu}D^{\nu}_{\nu\pm P}\] where \(D^{\mu}_{\lambda}D^{\lambda}_{\nu}\) is a basis element of degree \(k\) and \(P\) is a Dyck path. To show that this product can be rewritten as a linear combination of basis elements, we will additionally use induction on \(\ell(\lambda)+\ell(\nu)\). If \(\ell(\lambda)+\ell(\nu)=0\) then \(\lambda=\nu=\emptyset\), \(\nu+P=(1)\) and we have, using (7.4), \[D^{\mu}_{\emptyset}D^{\emptyset}_{\emptyset}D^{\emptyset}_{(1)}=D^{\mu}_{\emptyset}D^{ \emptyset}_{(1)}\] which is a basis element. Now assume that \(\ell(\lambda)+\ell(\nu)\geqslant 1\). If \(\lambda\neq\nu\) then we can write \[D^{\lambda}_{\nu}=D^{\lambda}_{\nu-Q}D^{\nu-Q}_{\nu}\] for some \(Q\in\operatorname{DRem}(\nu)\). Now using (7.4)-(7.8) we can write \[D^{\nu-Q}_{\nu}D^{\nu}_{\nu\pm P}=\sum_{\nu^{\prime}}c_{\nu^{\prime}}D^{ \nu-Q}_{\nu^{\prime}}D^{\nu^{\prime}}_{\nu\pm P}\] for some \(c_{\nu^{\prime}}\in\Bbbk\) where \(\ell(\nu^{\prime})\leqslant\ell(\nu-Q)<\ell(\nu)\). 

Figure 22. The non-commuting relations with \(b(P)=1\) and \(b(Q)=2\) on the left. On the right we picture \(Q^{1}\) and \(Q^{2}\). The equality follows by the spot-fork relation (as in Figure 18).
Now we have \[D^{\mu}_{\lambda}D^{\lambda}_{\nu}D^{\nu}_{\nu\pm P} = D^{\mu}_{\lambda}D^{\lambda}_{\nu-Q}D^{\nu-Q}_{\nu}D^{\nu}_{\nu \pm P}\] \[= \sum_{\nu^{\prime}}c_{\nu^{\prime}}(D^{\mu}_{\lambda}D^{\lambda}_ {\nu-Q}D^{\nu-Q}_{\nu^{\prime}})D^{\nu^{\prime}}_{\nu\pm P}\] \[= \sum_{\nu^{\prime},\lambda^{\prime}}d_{\nu^{\prime},\lambda^{ \prime}}D^{\mu}_{\lambda^{\prime}}D^{\lambda^{\prime}}_{\nu^{\prime}}D^{\nu^ {\prime}}_{\nu\pm P}.\] by induction, and \(\ell(\lambda^{\prime})+\ell(\nu^{\prime})<\ell(\lambda)+\ell(\nu)\) so we're done. It remains to consider the case where \(\lambda=\nu\). Here we must have \(\mu\neq\lambda\). First observe that \[D^{\mu}_{\lambda}D^{\lambda}_{\lambda}D^{\lambda}_{\lambda+P}=D^{\mu}_{ \lambda}D^{\lambda}_{\lambda+P}\] by (7.4) and this is a basis element. The last case to consider is \[D^{\mu}_{\lambda}D^{\lambda}_{\lambda}D^{\lambda}_{\lambda-P}=D^{\mu}_{ \lambda}D^{\lambda}_{\lambda-P}.\] As \(\mu\neq\lambda\) we have \(D^{\mu}_{\lambda}=D^{\mu}_{\lambda+Q}D^{\lambda+Q}_{\lambda}\), for some \(Q\in\mathrm{DAdd}(\lambda)\) and so \[D^{\mu}_{\lambda}D^{\lambda}_{\lambda}D^{\lambda}_{\lambda-P}=D^{\mu}_{ \lambda+Q}D^{\lambda+Q}_{\lambda}D^{\lambda}_{\lambda-P}.\] Now, using (7.4)-(7.8) we have \[D^{\lambda+Q}_{\lambda}D^{\lambda}_{\lambda-P}=\sum_{\nu^{\prime}}c_{\nu^{ \prime}}D^{\lambda+Q}_{\nu^{\prime}}D^{\nu^{\prime}}_{\lambda-P}\] with \(\ell(\nu^{\prime})\leqslant\ell(\lambda-P)<\ell(\lambda)\) and so \[D^{\mu}_{\lambda+Q}D^{\lambda+Q}_{\lambda}D^{\lambda}_{\lambda-P} = \sum_{\nu^{\prime}}c_{\nu^{\prime}}(D^{\mu}_{\lambda+Q}D^{\lambda+ Q}_{\nu^{\prime}})D^{\nu^{\prime}}_{\lambda-P}\] \[= \sum_{\lambda^{\prime},\nu^{\prime}}d_{\lambda^{\prime},\nu^{ \prime}}D^{\mu}_{\lambda^{\prime}}D^{\lambda^{\prime}}_{\nu^{\prime}}D^{\nu^{ \prime}}_{\lambda-P}\] using induction as \(\deg D^{\mu}_{\lambda+Q}D^{\lambda+Q}_{\nu^{\prime}}=k\), with \(\ell(\nu^{\prime})\leqslant\ell(\lambda-P)<\ell(\lambda)\). 
Now as \(\ell(\lambda^{\prime})+\ell(\nu^{\prime})<\ell(\lambda)+\ell(\nu)\), we're done by induction. 

### Recasting the Dyck presentation as a quiver and relations 

Gabriel proved that every basic algebra is isomorphic to the path algebra of its Ext-quiver modulo relations. We now go through the formal procedure of recasting the Dyck presentation in this language. **Definition 7.5**.: _We define the Dyck quiver \(\mathscr{D}_{m,n}\) with vertex set \(\{E_{\lambda}\mid\lambda\in\mathscr{P}_{m,n}\}\) and arrows \(d^{\lambda}_{\mu}:\lambda\to\mu\) and \(d^{\mu}_{\lambda}:\mu\to\lambda\) for every \(\lambda=\mu-P\) with \(P\in\mathrm{DRem}(\mu)\)._ An example is depicted in Figure 24. **Proposition 7.6**.: _The map_ \[E_{\mu}\mapsto 1_{\mu}\qquad d^{\lambda}_{\mu}\mapsto D^{\lambda}_{\mu}\] _defines an algebra homomorphism from the path algebra of the Dyck quiver \(\mathscr{D}_{m,n}\) to \(\mathcal{H}_{m,n}\). Thus, the algebra \(\mathcal{H}_{m,n}\) is isomorphic to the quotient of the path algebra of the Dyck quiver \(\mathscr{D}_{m,n}\) by the quadratic relations given in (7.4)-(7.8) (where we replace all \(D^{\lambda}_{\mu}\)'s with \(d^{\lambda}_{\mu}\)'s)._ Proof.: This follows directly from Theorem 7.3. **Example 7.7**.: _Continuing with Figure 24 we have that the algebra \(\mathcal{H}_{2,2}\) is the path algebra of the quiver \(\mathscr{D}_{2,2}\) modulo the following relations and their duals_ \[d^{\varnothing}_{(1)}d^{(1)}_{(2)}=0=d^{\varnothing}_{(1)}d^{(1)}_{(1^{2})} \qquad d^{(1)}_{(2)}d^{(2)}_{(2,1)}=d^{(1)}_{(2^{2})}d^{(2^{2})}_{(2,1)}=d^{(1)}_{ (1^{2})}d^{(1^{2})}_{(2,1)}\qquad d^{(1)}_{\lambda}d^{\lambda}_{(1)}=-d^{(1)} _{\varnothing}d^{\varnothing}_{(1)} \tag{7.9}\] _for \(\lambda=(2),(1^{2})\) or \((2^{2})\),_ \[d^{(2,1)}_{(2^{2})}d^{(2^{2})}_{(2,1)}=-d^{(2,1)}_{(2)}d^{(2)}_{(2,1)}-d^{(2,1 )}_{(1^{2})}d^{(1^{2})}_{(2,1)} \tag{7.10}\] _and for any pair \(\nu<\mu\) not of the above form, we have that_ \[d^{\mu}_{\nu}d^{\nu}_{\mu}=0. 
\tag{7.11}\] **Example 7.8**.: _Apart from the categories of this paper, the only Hecke categories whose quiver and relations were understood were those corresponding to Weyl groups of ranks 2 and 3 [13] and the biserial algebras corresponding to \((W,P)=(S_{n},S_{n-1})\), which we now describe. In this case, the quiver is depicted in Figure 25._ _We have that the algebra \(\mathcal{H}_{n,1}\) is the path algebra of the quiver \(\mathscr{D}_{n,1}\) modulo the following relations and their duals_ \[d^{(k)}_{(k+1)}d^{(k+1)}_{(k)}=d^{(k)}_{(k-1)}d^{(k-1)}_{(k)}\qquad d^{(k)}_{( k\pm 1)}d^{(k\pm 1)}_{(k\pm 2)}=0\] _for \(1\leqslant k<n\). The projective modules are all uniserial or biserial and their structure is encoded in the Alperin diagrams in Figure 26._ 

## 8. Submodule structure of standard modules 

For this section, we assume that \(\Bbbk\) is a field. As noted in Theorem 5.4, the algebra \(\mathcal{H}_{m,n}\) is a basic (positively) graded quasi-hereditary algebra with graded cellular basis given by \[\{D^{\mu}_{\lambda}D^{\lambda}_{\nu}\,:\,\text{for all Dyck pairs}\,(\lambda,\mu),( \lambda,\nu)\,\text{with }\lambda,\mu,\nu\in\mathscr{P}_{m,n}\}.\] For \(\lambda\in\mathscr{P}_{m,n}\), write \(\mathcal{H}^{\leqslant\lambda}_{m,n}=\operatorname{span}\{D^{\mu}_{\alpha}D^{\alpha}_ {\lambda}\,:\,\alpha,\mu\in\mathscr{P}_{m,n},\,\alpha\leqslant\lambda\}\) and \(\mathcal{H}^{<\lambda}_{m,n}=\operatorname{span}\{D^{\mu}_{\alpha}D^{\alpha}_ {\lambda}\,:\,\alpha,\mu\in\mathscr{P}_{m,n},\,\alpha<\lambda\}\). 
Setting \[\operatorname{DP}(\lambda):=\{\mu\in\mathscr{P}_{m,n}\,:\,(\lambda,\mu)\, \text{is a Dyck pair}\},\] the (left) standard module \(\Delta_{m,n}(\lambda)=\mathcal{H}^{\leqslant\lambda}_{m,n}/\mathcal{H}^{<\lambda}_{m,n}\) has a basis given by \[\{u_{\mu}:=D^{\mu}_{\lambda}+\mathcal{H}^{<\lambda}_{m,n}\,:\,\mu\in \operatorname{DP}(\lambda)\}.\] Each \(u_{\mu}\) generates a submodule of \(\Delta_{m,n}(\lambda)\) with a 1-dimensional simple head, which we denote by \(L_{m,n}(\mu)\). In this section, we describe the full submodule structure of the standard modules. As \(\mathcal{H}_{m,n}\) is positively graded, the grading provides a submodule filtration of \(\Delta_{m,n}(\lambda)\). Decompose \(\operatorname{DP}(\lambda)\) as \[\operatorname{DP}(\lambda)=\bigsqcup_{k\geqslant 0}\operatorname{DP}_{k}( \lambda)\quad\text{where}\quad\operatorname{DP}_{k}(\lambda)=\{\mu\in \operatorname{DP}(\lambda)\,:\,\text{deg}(\lambda,\mu)=k\}.\] Note further that the algebra \(\mathcal{H}_{m,n}\) is generated in degree 1. This implies that, in order to describe the full submodule structure, it is enough to find, for each \(\mu\in\operatorname{DP}_{k}(\lambda)\), the set of all \(\nu\in\operatorname{DP}_{k+1}(\lambda)\) such that \[u_{\nu}=cD^{\nu}_{\mu}u_{\mu}\] for some \(c\in\Bbbk\) and \(\nu=\mu\pm P\) for some \(P\in\operatorname{DAdd}(\mu)\) or \(P\in\operatorname{DRem}(\mu)\) respectively. Thus, the condition that \(\nu=\mu\pm P\) for some \(P\in\operatorname{DRem}(\mu)\) or \(P\in\operatorname{DAdd}(\mu)\) respectively is certainly a necessary condition for the existence of an extension between \(L_{m,n}(\mu)\) and \(L_{m,n}(\nu)\) in \(\Delta_{m,n}(\lambda)\). We claim that it is also sufficient.

Figure 25. The quiver \(\mathscr{D}_{n,1}\).

Figure 26. The Alperin diagrams of projective modules for \(\mathcal{H}_{n,1}\).

Assume \(\mu\setminus\lambda=\sqcup_{i}Q^{i}\). 
For \(P\in\operatorname{DAdd}(\mu)\), note that \((\lambda,\mu+P)\) is a Dyck pair if and only if \(P\) is not adjacent to any \(Q^{i}\) and so in this case \((\mu+P)\setminus\lambda=\sqcup_{i}Q^{i}\sqcup P\) is the Dyck tiling and we have \[D_{\mu}^{\mu+P}D_{\lambda}^{\mu}=D_{\lambda}^{\mu+P}\] by the definition of the light leaves basis. For \(P\in\operatorname{DRem}(\mu)\), the only way to have \(\deg(\lambda,\mu-P)=\deg(\lambda,\mu)+1\) is if \(P\notin\{Q^{i}\}\) and there exists some \(Q\in\{Q^{i}\}\) such that \(P\prec Q\) and \(P,Q\) do not commute. In this case we have \(Q\setminus P=R\sqcup S\) for some \(R,S\in\operatorname{DRem}(\mu-P)\). We prove by induction on \(\deg(\lambda,\mu)\) that \(D_{\mu}^{\mu-P}D_{\lambda}^{\mu}=D_{\lambda}^{\mu-P}\). If \(\deg(\lambda,\mu)=1\) then \(\mu-Q=\lambda\) and the non-commuting relation gives \[D_{\mu}^{\mu-P}D_{\lambda}^{\mu}=D_{\mu-P-R}^{\mu-P}D_{\mu-Q}^{\mu-P-R}=D_{ \lambda}^{\mu-P}\] as required. Now assume that \(\deg(\lambda,\mu)\geqslant 2\). Suppose \(Q\not\prec Q^{\prime}\) for all \(Q^{\prime}\in\{Q^{i}\}\). Then we can write \(D_{\lambda}^{\mu}=D_{\mu-Q}^{\mu}D_{\lambda}^{\mu-Q}\) and we have \[D_{\mu}^{\mu-P}D_{\lambda}^{\mu}=D_{\mu}^{\mu-P}D_{\mu-Q}^{\mu}D_{\lambda}^{ \mu-Q}=D_{\mu-P-R}^{\mu-P}D_{\mu-Q}^{\mu-P-R}D_{\lambda}^{\mu-Q}=D_{\lambda}^ {\mu-P}\] by the non-commuting relations and the definition of the light leaves basis. Otherwise, we have that \(D_{\lambda}^{\mu}=D_{\mu-Q^{\prime}}^{\mu}D_{\lambda}^{\mu-Q^{\prime}}\) for some \(Q^{\prime}\) commuting with \(P\). 
Then we have \[D_{\mu}^{\mu-P}D_{\lambda}^{\mu}=D_{\mu}^{\mu-P}D_{\mu-Q^{\prime}}^{\mu}D_{ \lambda}^{\mu-Q^{\prime}}=D_{\mu-P-Q^{\prime}}^{\mu-P}D_{\mu-Q^{ \prime}}^{\mu-P-Q^{\prime}}D_{\lambda}^{\mu-Q^{\prime}}=D_{\mu-P-Q^{\prime}}^ {\mu-P}D_{\lambda}^{\mu-P-Q^{\prime}}=D_{\lambda}^{\mu-P}\] where the second equality follows from the commuting relation, the third one follows by induction (as \(\deg(\lambda,\mu-Q^{\prime})=\deg(\lambda,\mu)-1\)), and the final equality follows by the definition of the light leaves basis. **Remark 8.1**.: _We set \(k_{\lambda}=\max\{k\geqslant 0\,|\operatorname{DP}_{k}(\lambda)\neq\emptyset\}\). Then it is easy to check that \(\operatorname{DP}_{k_{\lambda}}(\lambda)\) consists of a single element \(\mu_{\lambda}\). To construct the cup diagram of \(\mu_{\lambda}\), start with the weight \(\lambda\) and apply the following two steps:_ 1. _repeatedly find a pair of vertices labelled_ \(\land\lor\) _in order from left to right that are neighbours in the sense that there are only vertices already joined by cups in between. Join these new vertices together with a cup. Then repeat the process until there are no more such_ \(\land\lor\) _pairs. We are left with a sequence of_ \(\lor\)_'s followed by a sequence of_ \(\land\)_'s._ 2. _Join these using concentric anti-clockwise cups. We are left with either a sequence of_ \(\land\)_'s or a sequence of_ \(\lor\)_'s. Draw vertical rays on these._ Suppose \(\mu_{\lambda}\setminus\lambda=\sqcup_{i}Q^{i}\). Note that \(\mu_{\lambda}\) is characterised by the following two properties: 1. There is no \(P\in\operatorname{DAdd}(\mu_{\lambda})\) such that \(P\sqcup\big{(}\bigsqcup_{i}Q^{i}\big{)}\) is a Dyck tiling of \((\mu_{\lambda}+P)\setminus\lambda\). 2. If \(P\in\{Q^{i}\}\) and \(Q\prec P\) then \(Q\in\{Q^{i}\}\). 
This implies, in particular, that if \(\mu\in\operatorname{DP}_{k}(\lambda)\) for \(k<k_{\lambda}\), then either (1) or (2) above fails and we have seen that in each case we can find some \(h\in\mathcal{H}_{m,n}\) and \(\nu\in\operatorname{DP}_{k+1}(\lambda)\) such that \(u_{\nu}=hu_{\mu}\). Thus the radical and socle filtrations of \(\Delta_{m,n}(\lambda)\) coincide with its grading filtration and the socle of \(\Delta_{m,n}(\lambda)\) is given by \(L_{m,n}(\mu_{\lambda})\). We have proved the following: **Theorem 8.2**.: _Let \(\lambda\in\mathscr{P}_{m,n}\). The Alperin diagram of the standard module \(\Delta_{m,n}(\lambda)\) has vertex set labelled by the set \(\{L_{m,n}(\mu)\,:\,\mu\in\operatorname{DP}(\lambda)\}\) and edges_ \[L_{m,n}(\mu)\longrightarrow L_{m,n}(\nu)\] _whenever \(\mu\in\operatorname{DP}_{k}(\lambda)\), \(\nu\in\operatorname{DP}_{k+1}(\lambda)\) for some \(k\geqslant 0\) and \(\nu=\mu\pm P\) for some \(P\in\operatorname{DAdd}(\mu)\) or \(P\in\operatorname{DRem}(\mu)\) respectively. Moreover, the radical and socle filtrations both coincide with the grading filtration and \(\Delta_{m,n}(\lambda)\) has simple socle isomorphic to \(L_{m,n}(\mu_{\lambda})\) (where \(\mu_{\lambda}\) is described in Remark 8.1)._ An example is provided in Figure 27. 

## 9. The isomorphism between Hecke categories and Khovanov arc algebras 

We now utilise our newfound presentations in order to prove that the Khovanov arc algebras and Hecke categories are isomorphic as \(\mathbb{Z}\)-graded \(\Bbbk\)-algebras for \(\Bbbk\) any commutative integral domain containing a square root of \(-1\). 

### Signs and the statement of the isomorphism 

For the purpose of defining our isomorphism, we will wish to consider all degree \(1\) diagrams in the Khovanov arc algebra. Using the dilation homomorphism of Section 6, we are often able to restrict our attention to diagrams which are incontractible. 
These, by Remark 6.7, are of the form \(\underline{\mu}\lambda\overline{\lambda}\) (or its dual) for \(\mu\) a rectangle and \(\lambda=\mu-P\) with \(b(P)=1\).

Figure 27. The Alperin diagram for the standard module \(\Delta_{3,3}(2,1)\). We use grey lines to indicate pairs obtained by adding a Dyck path and black lines to indicate the pairs obtained by removing a Dyck path.

The following lemma is immediate by construction of the arc diagrams. **Lemma 9.1**.: _The diagrams \(\underline{\mu}\lambda\overline{\lambda}\) for \((\mu,\lambda)=((1),\emptyset)\), \((\mu,\lambda)=((c),(c-1))\) and \((\mu,\lambda)=((1^{r}),(1^{r-1}))\) are given respectively by the arc, left-zigzag and right-zigzag diagrams_ (9.1) _with \(c-2\) (respectively \(r-2\)) vertical strands to the right (respectively, the left). For \((\mu,\lambda)=((c^{r}),(c^{r-1},c-1))\) with \(r\geqslant c>1\) we have that \(\underline{\mu}\lambda\overline{\lambda}\) is the_ brace generator _diagram_ (9.2) _with \(c-2\) concentric circles and a total of \(r-c\) vertical strands to the left of the diagram. The case with \(c\geqslant r\geqslant 1\) is similar but with \(r-2\) concentric circles and a total of \(c-r\) vertical strands to the right of the diagram._ **Remark 9.2**.: _The trivial embeddings \(\mathscr{P}_{m,n}\to\mathscr{P}_{m+1,n}\) and \(\mathscr{P}_{m,n}\to\mathscr{P}_{m,n+1}\) sending a partition to itself extend to algebra embeddings \(\mathcal{K}_{m,n}\to\mathcal{K}_{m+1,n}\) and \(\mathcal{K}_{m,n}\to\mathcal{K}_{m,n+1}\) defined on the arc diagrams by adding an upwards strand to the left or a downwards strand to the right. We have chosen to represent each arc diagram \(\underline{\mu}\lambda\overline{\nu}\) in the smallest \(\mathcal{K}_{m,n}\) where it is defined to avoid drawing lots of vertical strands which play no role in the multiplication or the next definition._ **Definition 9.3**.: _Let \((\lambda,\mu)\) be a Dyck pair of degree \(1\). 
Then \(\lambda=\mu-P\) for some \(P\in\operatorname{DRem}(\mu)\). We set \(\operatorname{sgn}(\lambda,\mu)\) to be the average of the elements in the set \(\underline{\operatorname{cont}}(P)\). In other words, if the unique clockwise cup in \(\underline{\mu}\lambda\overline{\lambda}\) connects vertices in positions \(i-\frac{1}{2}\) and \(j+\frac{1}{2}\) for \(i\leqslant j\) then \(\operatorname{sgn}(\lambda,\mu)=\frac{1}{2}(i+j)\)._ **Example 9.4**.: _The generators \(\underline{\mu}\lambda\overline{\lambda}\) of the form_ _for \(\mu=(1^{1})\), \((2)\), \((1^{2})\), \((2^{2})\), \((3)\) respectively and \(\lambda=\mu-P\) with \(b(P)=1\) have signs \(0,-1,1,0\) and \(-2\)._ **Theorem 9.5**.: _We have a graded \(\Bbbk\)-algebra isomorphism \(\Psi:\mathcal{H}_{m,n}\to\mathcal{K}_{m,n}\) defined on generators by setting, for all \(\lambda,\mu\in\mathscr{P}_{m,n}\) such that \((\lambda,\mu)\) is a Dyck pair of degree \(1\),_ \[\Psi(1_{\lambda})=\underline{\lambda}\lambda\overline{\lambda},\qquad\Psi(D_ {\mu}^{\lambda})=i^{\operatorname{sgn}(\lambda,\mu)}\underline{\lambda} \lambda\overline{\mu}\qquad\Psi(D_{\lambda}^{\mu})=i^{\operatorname{sgn}( \lambda,\mu)}\underline{\mu}\lambda\overline{\lambda}\] _where \(i\) is a square root of \(-1\) in \(\Bbbk\)._ 

### Proof of the isomorphism 

The remainder of this section is dedicated to the proof of Theorem 9.5. **Lemma 9.6** (Local idempotent relations).: _Any anticlockwise-oriented circle which intersects the weight at precisely two points is a local weight-idempotent in the following sense: applying the local surgery procedure at this point is equivalent to simply deleting this circle (see Figure 28 for an example)._ Proof.: This follows immediately using the surgery procedures \(1\otimes 1\mapsto 1\), \(1\otimes x\mapsto x\) and \(1\otimes y\mapsto y\). 
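The sign bookkeeping of Definition 9.3 is mechanical and can be sanity-checked against Example 9.4. The following is a minimal sketch; the function names are ours, and the content convention \(\operatorname{cont}([r,c])=r-c\) is an assumption, chosen so that the signs of Example 9.4 come out as listed.

```python
from fractions import Fraction

def sgn(cont):
    # Definition 9.3: sgn(lambda, mu) is the average of the set of
    # contents of the removed Dyck path P.
    return Fraction(sum(cont), len(cont))

def sgn_from_cup(i, j):
    # Equivalent description: if the unique clockwise cup joins the
    # vertices at positions i - 1/2 and j + 1/2 (so that
    # cont(P) = {i, ..., j}), then sgn = (i + j)/2.
    return Fraction(i + j, 2)

# Example 9.4: for mu = (1), (2), (1^2), (2^2), (3) the removed box
# [r, c] has content r - c equal to 0, -1, 1, 0, -2 respectively,
# and the signs are 0, -1, 1, 0 and -2 as listed.
for cont, expected in [({0}, 0), ({-1}, -1), ({1}, 1), ({0}, 0), ({-2}, -2)]:
    assert sgn(cont) == expected
    assert sgn_from_cup(min(cont), max(cont)) == expected
```

The two descriptions agree on any interval of contents, since the average of \(\{i,\ldots,j\}\) is \((i+j)/2\).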
**Proposition 9.7** (The idempotent relations).: _The idempotent relations are preserved by \(\Psi\)._ Proof.: Note that the element \(\underline{\lambda}\lambda\overline{\lambda}\) contains only anticlockwise circles intersecting the weight at precisely two points. Thus the result follows from Lemma 9.6. **Proposition 9.8**.: _(The self-dual relation) Let \(P\in\operatorname{DRem}(\mu)\) and \(\lambda=\mu-P\). Then we have_ \[(-1)^{\operatorname{sgn}(\lambda,\mu)}\underline{\lambda}\lambda\overline{\mu }\cdot\underline{\mu}\lambda\overline{\lambda}=2\sum_{\begin{subarray}{c}\nu=\lambda-Q\\ P\subset Q\end{subarray}}(-1)^{b(Q)+b(P)-1+\operatorname{sgn}(\nu,\lambda)} \underline{\lambda}\nu\overline{\nu}\cdot\underline{\nu}\nu\overline{\lambda}+\sum_{\begin{subarray}{c}\nu=\lambda-Q\\ Q\text{ adjacent to }P\end{subarray}}(-1)^{b(Q)+b(P)-1+\operatorname{sgn}(\nu,\lambda)} \underline{\lambda}\nu\overline{\nu}\cdot\underline{\nu}\nu\overline{\lambda}\tag{9.3}\]

Proof.: [The arc diagrams depicting \(\mu\), \(\lambda\) and the partitions \(\nu_{1}\) and \(\nu_{-1}\) used below are omitted.] Performing surgery on \(\underline{\lambda}\lambda\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}\) we first apply Lemma 9.6 to the \(m-2\) concentric circles one at a time (starting from the outermost pair of circles) until we obtain the diagram. We then apply the two surgery steps detailed in Example 3.5 to the innermost pair of braces to obtain a sum of two diagrams (9.4). We need to compare this to the right-hand side of equation (9.3). The diagrams \(\underline{\nu_{1}}\nu_{1}\overline{\lambda}\) and \(\underline{\nu_{-1}}\nu_{-1}\overline{\lambda}\) are equal to (9.5) respectively. Using Lemma 9.6 and Example 3.6, we have that the product \(\underline{\lambda}\nu_{1}\overline{\nu_{1}}\cdot\underline{\nu_{1}}\nu_{1} \overline{\lambda}\) is given by (9.6) Similarly, the product \(\underline{\lambda}\nu_{-1}\overline{\nu_{-1}}\cdot\underline{\nu_{-1}}\nu_{- 1}\overline{\lambda}\) is given by (9.7) where we have highlighted the pair of re-oriented circles in each case (with the pink circle of degree 2 and the blue of degree 0). Notice that the sum of the left-hand terms in each of (9.6) and (9.7) is the required sum (9.4). The right-hand terms in (9.6) and (9.7) are identical, and we denote this diagram by \(D_{1}\). We will show that the sum of these two terms (which is equal to \(2D_{1}\)) will cancel with the remaining terms in the larger sum. 
For \(2\leqslant x\leqslant m-2\) we have

\[\underline{\nu_{x}}\nu_{x}\overline{\lambda}=\text{[diagram omitted]}\tag{9.8}\]

where there are \(x-2\) concentric dotted hollow circles in the middle of the diagram. Therefore we have that \(\underline{\lambda}\nu_{x}\overline{\nu_{x}}\cdot\underline{\nu_{x}}\nu_{x}\overline{\lambda}\) is equal to a sum of two diagrams [omitted]. We denote the first diagram in the sum by \(D_{x-1}\) and the second by \(D_{x}\), so \(D_{x}\) is the diagram where the pink anticlockwise circle is at distance \(x\) from the small innermost circles. (Note that this is consistent with our notation for \(D_{1}\) above.) Finally, for \(x=m-1\) we have

\[\underline{\nu_{m-1}}\nu_{m-1}\overline{\lambda}=\text{[diagram omitted]}\]

in which only the outermost strand is clockwise oriented. This gives that \(\underline{\lambda}\nu_{m-1}\overline{\nu_{m-1}}\cdot\underline{\nu_{m-1}}\nu_{m-1}\overline{\lambda}\) is equal to a single diagram [omitted] which is equal to \(D_{m-2}\). Replacing all terms into the right-hand side of (9.3) we obtain

\[(\underline{\lambda}\lambda\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}+2D_{1})+2\sum_{x=2}^{m-2}(-1)^{x+1}(D_{x-1}+D_{x})+2(-1)^{m}D_{m-2}=\underline{\lambda}\lambda\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}\]

as required.

**Degenerate cases.** We now consider the degenerate cases in which \(r\) or \(c\) is equal to \(1\). The \(r=1=c\) case simply follows as

\[\underline{\lambda}\lambda\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}=\text{[diagram omitted]}\]

as required. It remains to consider the cases \(\mu=(c),\lambda=(c-1)\) and \(\mu=(1^{r}),\lambda=(1^{r-1})\). We deal with the first one; the second one is similar.
Here the weights \(\mu\) and \(\lambda\) are of the form [diagrams and an intervening portion of the argument omitted; they could not be recovered from the source]. So we obtain the required relation for \((\lambda,\mu)\). It remains to consider the case where \(k\notin\underline{\mathsf{cont}}(P^{\prime})\). In this case we have \(\mathsf{sgn}(\lambda^{\prime},\mu^{\prime})=\mathsf{sgn}(\lambda,\mu)\pm 1\) and \(b(P^{\prime})=b(P)\). Let \(Q^{\prime}\in\mathrm{DRem}(\lambda^{\prime})\). If \(k\notin\underline{\mathsf{cont}}(Q^{\prime})\) then we have \(b(Q^{\prime})=b(Q)\) and \(\mathsf{sgn}(\nu^{\prime},\lambda^{\prime})=\mathsf{sgn}(\nu,\lambda)\pm 1\) and so we have

\[(-1)^{b(Q^{\prime})+b(P^{\prime})-1+\mathsf{sgn}(\nu^{\prime},\lambda^{\prime})}=(-1)^{b(Q)+b(P)-1+\mathsf{sgn}(\nu,\lambda)\pm 1}.\]

If \(k\in\underline{\mathsf{cont}}(Q^{\prime})\) then we have \(b(Q^{\prime})=b(Q)-1\) and \(\mathsf{sgn}(\nu^{\prime},\lambda^{\prime})=\mathsf{sgn}(\nu,\lambda)\) and so we have

\[(-1)^{b(Q^{\prime})+b(P^{\prime})-1+\mathsf{sgn}(\nu^{\prime},\lambda^{\prime})}=(-1)^{b(Q)-1+b(P)-1+\mathsf{sgn}(\nu,\lambda)}.\]

Thus dividing the equation (9.2) by \(-1\) gives the required equation for \((\lambda,\mu)\).

**Proposition 9.9** (The commuting relations).: _Let \(P,Q\in\mathrm{DRem}(\mu)\) with \(P\prec Q\) which commute, and let \(\lambda=\mu-P\), \(\nu=\mu-Q\), and \(\alpha=\mu-P-Q\). We have that_

\[i^{\mathsf{sgn}(\alpha,\lambda)+\mathsf{sgn}(\lambda,\mu)}\underline{\alpha}\alpha\overline{\lambda}\cdot\underline{\lambda}\lambda\overline{\mu}=i^{\mathsf{sgn}(\alpha,\nu)+\mathsf{sgn}(\nu,\mu)}\underline{\alpha}\alpha\overline{\nu}\cdot\underline{\nu}\nu\overline{\mu}\tag{9.10}\]

\[i^{\mathsf{sgn}(\lambda,\mu)+\mathsf{sgn}(\nu,\mu)}\underline{\nu}\nu\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}=i^{\mathsf{sgn}(\alpha,\lambda)+\mathsf{sgn}(\alpha,\nu)}\underline{\nu}\alpha\overline{\alpha}\cdot\underline{\alpha}\alpha\overline{\lambda}.
\tag{9.11}\]

Proof.: First note that \(\mathsf{sgn}(\lambda,\mu)=\mathsf{sgn}(\alpha,\nu)\) as \(\lambda=\mu-P\) and \(\alpha=\nu-P\). Similarly, \(\mathsf{sgn}(\alpha,\lambda)=\mathsf{sgn}(\nu,\mu)\). Thus we can cancel the signs on both sides of the equation. Now, if \(P\) and \(Q\) are distant then the result follows directly using Lemma 9.6. It remains to consider the case where \(P\prec Q\) and they commute. We first focus on the incontractible case in which \(\mu=(c^{r})\) is a rectangle for \(r,c>2\) and \(b(P)=1\) and \(3\leqslant b(Q)\leqslant\min\{r,c\}\) (note that we must have \(b(Q)>2\) in order to commute with \(P\)). Set \(m=\min\{r,c\}\) and assume that \(b(Q)=m\). We have that \(\lambda=(c^{r-1},c-1)\), \(\nu=((c-1)^{r-1},r-m)\), and \(\alpha=((c-1)^{r-2},c-2,r-m)\). Thus we have that \(\underline{\nu}\nu\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}\) is equal to

\[\text{[diagrams omitted]}\tag{9.12}\]

while \(\underline{\nu}\alpha\overline{\alpha}\cdot\underline{\alpha}\alpha\overline{\lambda}\) is equal to

\[\text{[diagrams omitted]}\tag{9.13}\]

In both equations there are \(r-3\) dotted anti-clockwise circles in each diagram, and equality follows from applying the \(1\otimes 1\mapsto 1\) rule a total of \(r\) times as required. A very similar calculation proves equation (9.10). The other cases, where \(3\leqslant b(Q)\leqslant m-1\), are completely analogous, except that the large arc in \(\underline{\mu}\nu\overline{\nu}\) and \(\underline{\lambda}\alpha\overline{\alpha}\) forms part of a zigzag or a brace. Finally, the general case follows directly by applying the dilation homomorphism.

**Proposition 9.10** (The non-commuting relation).: _Let \(P,Q\in\operatorname{DRem}(\mu)\) with \(P\prec Q\) which do not commute, and let \(\lambda=\mu-P\) and \(\nu=\mu-Q\). Then \(Q\setminus P=Q^{1}\sqcup Q^{2}\) where \(Q^{1},Q^{2}\in\operatorname{DRem}(\mu-P)\) and we set \(\alpha=\lambda-Q^{1}\) and \(\beta=\lambda-Q^{2}\).
We have that_ \[i^{\mathsf{sgn}(\lambda,\mu)+\mathsf{sgn}(\nu,\mu)}\underline{\nu}\nu \overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}=i^{\mathsf{sgn}( \alpha,\lambda)+\mathsf{sgn}(\nu,\alpha)}\underline{\nu}\nu\overline{\alpha} \cdot\underline{\alpha}\alpha\overline{\lambda}=i^{\mathsf{sgn}(\beta,\lambda) +\mathsf{sgn}(\nu,\beta)}\underline{\nu}\nu\overline{\beta}\cdot\underline{ \beta}\beta\overline{\lambda}. \tag{9.14}\] Proof.: We start by proving that all the signs in the equation are equal. Assume that \(P\) corresponds to a cup connecting \(i_{P}-\frac{1}{2}\) and \(j_{P}+\frac{1}{2}\) and \(Q\) corresponds to a cup connecting \(i_{Q}-\frac{1}{2}\) and \(j_{Q}+\frac{1}{2}\). Then \(Q^{1}\) corresponds to a cup connecting \(i_{Q}-\frac{1}{2}\) and \((i_{P}+1)-\frac{1}{2}\) and \(Q^{2}\) corresponds to a cup connecting \((j_{P}+1)-\frac{1}{2}\) and \(j_{Q}+\frac{1}{2}\). This implies that \[\mathsf{sgn}(\alpha,\lambda)+\mathsf{sgn}(\nu,\alpha) = \mathsf{sgn}(\beta,\lambda)+\mathsf{sgn}(\nu,\beta)\] \[= \tfrac{1}{2}(i_{Q}+i_{P}-1)+\tfrac{1}{2}(j_{P}+1+j_{Q})=\tfrac{1} {2}(i_{P}+j_{P})+\tfrac{1}{2}(i_{Q}+j_{Q})\] \[= \mathsf{sgn}(\lambda,\mu)+\mathsf{sgn}(\nu,\mu).\] Thus we can restrict our attention to the incontractible case as the general case will follow directly by applying the dilation homomorphism. So we can assume that \(\mu=(c^{r})\) is a rectangle for \(r,c>1\) and \(b(P)=1\) and \(b(Q)=2\). For the \(r=2=c\) case, \(\lambda=(2,1)\), \(\mu=(2^{2})\), \(\nu=(1)\) and we can choose \(\alpha=(2)\) and \(\beta=(1^{2})\). 
Here we have that \(\underline{\nu}\nu\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}\) is given by [diagrams omitted; the remainder of this proof and the opening of the statement of the next proposition could not be recovered from the source].

**Proposition 9.11**.: _[...] (whenever \(\langle P\cup Q\rangle_{\mu}\) exists), and set \(\lambda=\mu-P\), \(\nu=\lambda-Q\), and \(\alpha=\mu-\langle P\cup Q\rangle_{\mu}\)._
_Then we have_

\[i^{\mathsf{sgn}(\lambda,\mu)+\mathsf{sgn}(\nu,\lambda)}\underline{\mu}\lambda\overline{\lambda}\cdot\underline{\lambda}\nu\overline{\nu}=\begin{cases}i^{2b(\langle P\cup Q\rangle_{\mu})-2b(Q)+\mathsf{sgn}(\alpha,\mu)+\mathsf{sgn}(\alpha,\nu)}\underline{\mu}\alpha\overline{\alpha}\cdot\underline{\alpha}\alpha\overline{\nu}&\text{if $\langle P\cup Q\rangle_{\mu}$ exists,}\\ 0&\text{otherwise.}\end{cases}\tag{9.15}\]

Proof.: We start by proving that the signs on both sides of the equation are equal. Write \(\langle P\cup Q\rangle_{\mu}=Q^{\prime}\sqcup P\sqcup Q\) and assume that \(Q^{\prime}\) is to the left of \(Q\). Then we have that the cup corresponding to \(P\) connects \(i_{P}-\frac{1}{2}\) and \(j_{P}+\frac{1}{2}\), the cup corresponding to \(Q\) connects \((j_{P}+1)-\frac{1}{2}\) and \(j_{Q}+\frac{1}{2}\), the cup corresponding to \(Q^{\prime}\) connects \(i_{Q^{\prime}}-\frac{1}{2}\) and \((i_{P}-1)+\frac{1}{2}\) and finally the cup corresponding to \(\langle P\cup Q\rangle_{\mu}\) connects \(i_{Q^{\prime}}-\frac{1}{2}\) and \(j_{Q}+\frac{1}{2}\) for some \(i_{Q^{\prime}}<i_{P}\leqslant j_{P}<j_{Q}\). Note that

\[b(\langle P\cup Q\rangle_{\mu})=\tfrac{1}{2}(j_{Q}-i_{Q^{\prime}})+1\quad\text{and}\quad b(Q)=\tfrac{1}{2}(j_{Q}-(j_{P}+1))+1.\]

Thus we have

\[2b(\langle P\cup Q\rangle_{\mu})-2b(Q)+\mathsf{sgn}(\alpha,\mu)+\mathsf{sgn}(\alpha,\nu)\]
\[=2\left(\frac{j_{Q}-i_{Q^{\prime}}}{2}+1\right)-2\left(\frac{j_{Q}-(j_{P}+1)}{2}+1\right)+\frac{i_{Q^{\prime}}+j_{Q}}{2}+\frac{i_{Q^{\prime}}+(i_{P}-1)}{2}\]
\[=\frac{j_{P}+i_{P}}{2}+\frac{j_{Q}+(j_{P}+1)}{2}\]
\[=\mathsf{sgn}(\lambda,\mu)+\mathsf{sgn}(\nu,\lambda)\]

as required. Thus we can restrict our attention to the incontractible case as the general case will follow directly by applying the dilation homomorphism. So we can assume that \(\mu=(c^{r}),\lambda=(c^{r-1},c-1)\) and \(\nu=(c^{r-1},c-2)\) (the case \(\nu=(c^{r-2},(c-1)^{2})\) is similar).
Note that \(\langle P\cup Q\rangle_{\mu}\) exists precisely when \(r\geqslant 2\). When \(r=1\) it is easy to check that \(\underline{\mu}\lambda\overline{\lambda}\cdot\underline{\lambda}\nu\overline{\nu}=0\) using \(y\otimes y\mapsto 0\), as we are applying surgery to two \(\wedge\)-oriented propagating strands. We now assume that \(r\geqslant 2\). Then we have \(\alpha=(c^{r-2},c-1,c-2)\). In this case, for \(r\geqslant 3\), the equation \(\underline{\mu}\lambda\overline{\lambda}\cdot\underline{\lambda}\nu\overline{\nu}=\underline{\mu}\alpha\overline{\alpha}\cdot\underline{\alpha}\alpha\overline{\nu}\) becomes

\[\text{[diagrams omitted]}\tag{9.16}\]

where there are \(r-3\) concentric dotted outer circles in the top and bottom diagrams on each side of the product. We can rewrite both sides of this equation using the \(1\otimes 1\mapsto 1\) surgery procedure \(r-3\) times on the dotted strands (trivially turning each pair of dotted circles into a large dotted circle) followed by \(3\) times on the solid strands in order to obtain in both cases the diagram

\[\text{[diagram omitted]}\tag{9.17}\]

as required. The other cases (where \(r=2\) and \(c>2\) or vice versa, and where \(r=c=2\)) are similar. We thus conclude that the map \(\Psi\) is indeed a \(\mathbb{Z}\)-graded homomorphism of \(\Bbbk\)-algebras (for \(\Bbbk\) an integral domain containing a square root of \(-1\)). It remains to check that this map is a bijection; we verify this by showing that the image of the light leaves basis of \(\mathcal{H}_{m,n}\) is a basis of \(\mathcal{K}_{m,n}\) (we do this by showing that every basis element of \(\mathcal{K}_{m,n}\) can be written as a product of degree 1 elements). Over a field, we could deduce the following result from the known Koszulity of \(\mathcal{K}_{m,n}\) (this is the main result of [1]). However, we wish to work over more general rings (where Koszulity does not hold).
From an aesthetic point of view, we also prefer to deduce that the algebra is generated in degree 1 directly (as appealing to the strong cohomological property of Koszulity is somewhat "using a sledgehammer to crack a nut").

**Proposition 9.12**.: _The map \(\Psi\) is an isomorphism of graded \(\Bbbk\)-algebras._

Proof.: We have already seen above that the map is a \(\Bbbk\)-algebra homomorphism. We need to show that the map is bijective. Clearly the map is bijective when one restricts attention to the degree 0 elements (indexed by weights/partitions) and degree 1 elements (indexed by Dyck pairs of degree 1) of both algebras. We now consider elements of higher degree: we will do this in two stages. We first show that every element of the form \(\underline{\mu}\lambda\overline{\lambda}\) is in the image, by constructing it as a product of the degree 1 elements. We will then show that

\[(\underline{\mu}\lambda\overline{\lambda})(\underline{\lambda}\lambda\overline{\nu})=\underline{\mu}\lambda\overline{\nu}+\sum_{\zeta<\lambda}\underline{\mu}\zeta\overline{\nu}\tag{9.18}\]

and so deduce that the map is bijective by unitriangularity.

**Step 1:** We fix a sequence \(\mu=\lambda^{(0)}\supset\lambda^{(1)}\supset\cdots\supset\lambda^{(m)}=\lambda\) such that \(\lambda^{(k)}=\lambda^{(k-1)}-P^{k}\), where \(P^{k}\) is a removable Dyck path of \(\lambda^{(k-1)}\) of breadth \(p_{k}\) with \(p_{1}\geqslant p_{2}\geqslant\ldots\geqslant p_{m}\) (this decreasing size condition is not strictly necessary, but is helpful for visualisation). We have that

\[(\underline{\mu}\lambda\overline{\lambda})=(\underline{\mu}\mu\overline{\lambda^{(1)}})(\underline{\lambda^{(1)}}\lambda^{(1)}\overline{\lambda^{(2)}})\ldots(\underline{\lambda^{(m-1)}}\lambda^{(m-1)}\overline{\lambda})\]

by repeated application of Lemma 9.6.
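Before moving on, it may help to record (as an editorial aside, phrased in standard linear-algebra terms) why an identity of the shape of equation (9.18) yields bijectivity over an arbitrary commutative ring:

```latex
% Equation (9.18) says that the image of the light leaves basis is related to
% the arc-diagram basis of K_{m,n} by a unitriangular transition matrix (with
% respect to any total order refining the order < on the middle weights):
\[
  \Psi(\text{light leaves basis})
  \;=\; (\text{arc-diagram basis})\,(\mathbb{1}+N),
  \qquad N \text{ strictly triangular},
\]
% so that det(1 + N) = 1. Hence the transition matrix is invertible over any
% commutative ring k, and \Psi carries a basis to a basis.
```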
**Step 2:** We first observe that the subdiagram \(\lambda\overline{\lambda}\underline{\lambda}\lambda\) within the wider diagram \((\underline{\mu}\lambda\overline{\lambda})(\underline{\lambda}\lambda\overline{\nu})\) is an oriented Temperley-Lieb diagram whose arcs are all anti-clockwise oriented and which is invariant under the flip through the horizontal axis. We will work through the cases of the surgery rules of Subsection 3.2 and how they can potentially be applied to the anti-clockwise arcs in \(\lambda\overline{\lambda}\underline{\lambda}\lambda\) (within the wider diagram \((\underline{\mu}\lambda\overline{\lambda})(\underline{\lambda}\lambda\overline{\nu})\)) in turn: we will show that none of \(x\otimes x\mapsto 0\), \(x\otimes y\mapsto 0\), or \(y\otimes y\mapsto 0\) can occur (thus \((\underline{\mu}\lambda\overline{\lambda})(\underline{\lambda}\lambda\overline{\nu})\neq 0\)) and that in all the other cases there is a single term with weight \(\lambda\) (as required). We first require some notation. Let \(\lambda\in\mathscr{P}_{m,n}\) and fix \(i<j\in\mathbb{Z}+\frac{1}{2}\) connected by a cup in \(\underline{\lambda}\); we denote this cup by \(\underline{c}\in\underline{\lambda}\) and similarly we denote the reflected cap by \(\overline{c}\in\overline{\lambda}\). We will also need to speak of the interval \(\underline{I}\) on the top weight line (respectively \(\overline{I}\) on the bottom weight line) lying strictly between the points \(i<j\in\mathbb{Z}+\frac{1}{2}\). The surgery procedure is not concerned with the local orientation of the arcs \(\underline{c}\) and \(\overline{c}\) (which are always anti-clockwise in our proof), but rather the global orientation of the circle/strand to which the arcs \(\underline{c}\) and \(\overline{c}\) belong.
Determining a global orientation of clockwise/anti-clockwise is equivalent to assigning an "inside"/"outside" label to the regions \(\underline{I}\) and \(\overline{I}\) in a manner we shall make more precise (case-by-case) and this will allow us to show that certain cases cannot occur (for topological reasons). We will assume that all our diagrams in this proof have the minimum number of propagating lines -- this does not affect the surgery procedure, which is defined topologically, but allows us to speak of the 'left' and 'right' propagating lines in a given circle. This is illustrated in Figure 29. We first consider the rules in which our pair of arcs belong to the same connected component (either a circle or a strand). If the arcs \(\underline{c}\) and \(\overline{c}\) both belong to the same clockwise oriented circle, then the regions \(\underline{I}\) and \(\overline{I}\) both lie _outside_ this circle. If the arcs \(\underline{c}\) and \(\overline{c}\) both belong to the same anti-clockwise oriented circle, then the regions \(\underline{I}\) and \(\overline{I}\) both lie _inside_ this circle. This is depicted in Figure 30. Applying the surgery procedure in the former case, we obtain two non-nested clockwise oriented circles. The rightmost propagating strand of one circle goes through the point \(i\) (which was \(\vee\)-oriented in \(\lambda\)) and the leftmost propagating strand of the other circle goes through the point \(j\) (which was \(\wedge\)-oriented in \(\lambda\)) and so \(\lambda\) is preserved. Applying the surgery procedure in the latter case, we obtain two nested circles (and sum over the \(1\otimes x\) and \(x\otimes 1\) orientations). In the rightmost case in Figure 30, the \(\lambda\) weight corresponds to orienting the inner circle clockwise, see Figure 31 (the leftmost case in Figure 30 corresponds to orienting the inner circle anti-clockwise). 
We now suppose that the arcs \(\underline{c}\) and \(\overline{c}\) both belong to the same strand, in which case the terminating points of this strand are either both less than or equal to \(i\) or both greater than or equal to \(j\). We claim that the surgery \(y\mapsto x\otimes y\) does not change the weight \(\lambda\). Applying the surgery in the former (respectively latter) case, we obtain a circle whose leftmost intersection with the weight lines is at the point \(j\) and a strand whose rightmost intersection with the weight lines is at the point \(i\) (respectively a circle whose rightmost intersection with the weight lines is at the point \(i\) and a strand whose leftmost intersection with the weight lines is at the point \(j\)). A circle whose leftmost point is \(\wedge\)-oriented (respectively rightmost point is \(\vee\)-oriented) is necessarily clockwise oriented, and so the claim follows. Next we consider the rules in which our pair of arcs belong to distinct connected components (each of which is either a circle or a strand).

Figure 29. On the left we picture an anti-clockwise oriented cup \(\underline{c}\) as part of a wider clockwise oriented circle; notice that the region \(\underline{I}\) is outside of the circle. On the right we picture an anti-clockwise oriented cup \(\underline{c}\) as part of a wider anti-clockwise oriented circle; notice that the region \(\underline{I}\) is inside of the circle. Similar pictures can be drawn for the caps.

Figure 30. On the left we picture a pair of anti-clockwise oriented cup/caps as part of a wider clockwise oriented circle; notice that the regions \(\underline{I}\) and \(\overline{I}\) lie outside of the circle. In the two rightmost diagrams we picture the two distinct ways that a pair of anti-clockwise oriented cup/caps can be part of a wider anti-clockwise oriented circle; notice that the regions \(\underline{I}\) and \(\overline{I}\) lie inside of the circle.
The \(1\otimes 1\mapsto 1\), \(x\otimes 1\mapsto x\), and \(1\otimes x\mapsto x\) rules can be checked in a similar fashion to above. In the case of \(1\otimes 1\mapsto 1\) the original diagram consists of two non-nested circles (see Figure 32); there are two propagating strands in the circle produced by surgery and the orientation of the circle can be determined via the left/right propagating strand and checked to match \(\lambda\) by the fact that this left/right propagating strand passes through \(i\) or \(j\) with label \(\vee\) or \(\wedge\) respectively. In the case of \(x\otimes 1\mapsto x\) or \(1\otimes x\mapsto x\), the original diagram consists of two nested circles (see Figure 33); there are four propagating strands in the circle produced by surgery and the orientation of the circle can be determined via the leftmost/rightmost propagating strand and checked to match \(\lambda\) by the fact that this leftmost (respectively rightmost) propagating strand has the sign \(\wedge\) (respectively \(\vee\)) given by the _opposite direction_ to that encountered at \(i\) (respectively at \(j\)).

The \(x\otimes x\mapsto 0\) case is of a different flavour entirely. We must show that we _never_ have to apply this rule in the simplification of a product of the form above (in equation (9.18)). To see this, note that the intervals \(\underline{I}\) and \(\overline{I}\) must both lie outside of their circles: however, this implies that both circles contain propagating lines and (by the planarity condition on arc diagrams) each circle must nest inside the other (as they cannot intersect), which provides a contradiction.

Figure 31. The effect of applying the surgery to the rightmost diagram in Figure 30. The first term has a clockwise inner circle and an outer anti-clockwise circle and its weight is equal to \(\lambda\). The second term has the opposite conventions and the weight \(\zeta<\lambda\).

Figure 32. The effect of applying the \(1\otimes 1\mapsto 1\) surgery.
Similarly the \(x\otimes y\mapsto 0\) case (also the \(y\otimes y\mapsto 0\) case) gives rise to an intersecting circle and strand, which is a contradiction. The merging rules involving a strand can be checked in a similar fashion.
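As a closing aside on the self-dual relation (Proposition 9.8): the cancellation asserted at the end of its proof is a simple telescoping identity, which can be checked mechanically. The sketch below is our own illustration (not from the paper); it treats the diagrams \(D_{1},\ldots,D_{m-2}\) as formal symbols and verifies that their signed sum vanishes.

```python
def telescoped_coefficients(m):
    """Return the coefficient of each formal symbol D_x (x = 1, ..., m-2) in
    the signed sum appearing in the proof of Proposition 9.8:
        2*D_1 + 2*sum_{x=2}^{m-2} (-1)**(x+1) * (D_{x-1} + D_x)
              + 2*(-1)**m * D_{m-2}.
    """
    coeffs = {x: 0 for x in range(1, m - 1)}
    coeffs[1] += 2                       # the 2*D_1 term
    for x in range(2, m - 1):            # the alternating middle sum
        sign = (-1) ** (x + 1)
        coeffs[x - 1] += 2 * sign
        coeffs[x] += 2 * sign
    coeffs[m - 2] += 2 * (-1) ** m       # the boundary term 2*(-1)^m * D_{m-2}
    return coeffs

# The sum telescopes to zero for every m, matching the "as required" step.
for m in range(4, 12):
    assert all(c == 0 for c in telescoped_coefficients(m).values())
```

Since every \(D_{x}\) cancels, only the term \(\underline{\lambda}\lambda\overline{\mu}\cdot\underline{\mu}\lambda\overline{\lambda}\) survives on the right-hand side of (9.3), as claimed.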
2309.05791
Charged particle scattering near the horizon
We study Maxwell theory, in the presence of charged scalar sources, near the black hole horizon in a partial wave basis. We derive the gauge field configuration that solves Maxwell equations in the near-horizon region of a Schwarzschild black hole when sourced by a charge density of a localised charged particle. This is the electromagnetic analog of the gravitational Dray-'t Hooft shockwave near the horizon. We explicitly calculate the S-matrix associated with this shockwave in the first quantised $1\rightarrow 1$ formalism. We develop a theory for scalar QED near the horizon using which we compute the electromagnetic eikonal S-matrix from elastic $2\rightarrow 2$ scattering of charged particles exchanging soft photons in the black hole eikonal limit. The resulting ladder resummation agrees perfectly with the result from the first quantised formalism, whereas the field-theoretic formulation allows for a computation of a wider range of amplitudes. As a demonstration, we explicitly compute sub-leading corrections that arise from four-vertices.
Fabiano Feleppa, Nava Gaddam, Nico Groenenboom
2023-09-11T19:49:28Z
http://arxiv.org/abs/2309.05791v4
# Charged particle scattering near the horizon ###### Abstract We study Maxwell theory, in the presence of charged scalar sources, near the black hole horizon in a partial wave basis. We derive the gauge field configuration that solves Maxwell equations in the near-horizon region of a Schwarzschild black hole when sourced by a charge density of a localised charged particle. This is the electromagnetic analog of the gravitational Dray-'t Hooft shockwave near the horizon. We explicitly calculate the S-matrix associated with this shockwave in the first quantised \(1\to 1\) formalism. We develop a theory for scalar QED near the horizon using which we compute the electromagnetic eikonal S-matrix from elastic \(2\to 2\) scattering of charged particles exchanging soft photons in the black hole eikonal limit. The resulting ladder resummation agrees perfectly with the result from the first quantised formalism, whereas the field-theoretic formulation allows for a computation of a wider range of amplitudes. As a demonstration, we explicitly compute sub-leading corrections that arise from four-vertices. ## 1 Introduction Eikonal physics in field theory and gravity arises in the very high energy limit of scattering processes. In field theory, these are \(2\to 2\) elastic \(t\)-channel scattering processes where external momenta are far greater than virtual momenta that are exchanged [1]. In perturbative quantum gravity about flat space, these processes involve trans-Planckian scattering where the centre of mass energies of scattering processes satisfy \(E\gg M_{Pl}\). Of course, the impact parameter of scattering in this case must necessarily be the largest length scale in the game to remain within the regime of validity of perturbation theory. Eikonal physics in gravitational theories has far reaching theoretical consequences. 
More recently, its relevance for the calculation of gravitational observables of interest in the inspiral phase of compact binary mergers in gravitational wave astronomy has gained prominence. We refer the reader to a recent review for further details and references [2]. The first examples of eikonal techniques providing for an amplitude-based approach for calculating gravitational observables are shockwaves in flat space [3; 4; 5]. Shockwaves have also been found as non-linear perturbations to classical Einstein equations in black hole backgrounds [6] and general curved spacetimes [7]. The eikonal representation of shockwaves comes in two distinct but related avatars. The first is in the form of the change in the wavefunction of a probe particle in a shockwave background. This calculation is intrinsically in a first quantised formalism and can be thought of as \(1\to 1\) scattering. This approach was pioneered by 't Hooft both in flat space [8] and in a black hole background [9; 10; 11].1 The second and arguably more powerful avatar is in a field-theoretic setting where the amplitudes arise from elastic \(2\to 2\) scattering of high energy particles exchanging soft virtual modes. This was established in flat spacetime in [13]. In the black hole background, the field-theoretic analog is only a recent development. The eikonal manifestation of the Dray-'t Hooft shockwaves in the Schwarzschild black hole in terms of virtual graviton exchange has been developed in [14; 15].2 In fact, the field-theoretic avatar of eikonal amplitudes in black hole backgrounds has further applications that are difficult to envision in the first quantised \(1\to 1\) formalism [11; 19; 20; 21]. Footnote 1: See also [12] for a recent application at large distances in curved space. Footnote 2: For results in pure AdS and AdS black holes, see [16; 17] and [18], respectively. 
In [9], 't Hooft argued for an extension of these techniques to include other forces in the Standard Model in the near-horizon region of the black hole. In particular, he argued that the first quantised manifestation of the electromagnetic force near the horizon involves a certain gauge rotation of the gauge field of the charged particle being scattered near the horizon. The aim of this article is threefold: * We derive the gauge field configuration that solves Maxwell equations in the near-horizon region of a Schwarzschild black hole when sourced by a charge density of a localised charged particle. This is the electromagnetic shockwave analog of the gravitational Dray-'t Hooft shockwave near the horizon. This is done in Section 2.1. * In Section 2.2, we explicitly calculate the S-matrix associated with this shockwave in the first quantised \(1\to 1\) formalism. * Finally, we develop a theory for scalar QED near the horizon, following the general formalism of [14; 15], in Section 3, using which we compute the electromagnetic eikonal S-matrix from elastic \(2\to 2\) scattering of charged particles exchanging soft photons in Section 4. The resulting eikonal resummation is identical to the amplitude found in Section 2.2, whereas the field-theoretic formulation allows for a computation of a wider range of amplitudes. As a demonstration of this fact, in Appendix B, we compute the one-loop diagrams arising from the four-vertex in the theory which are parametrically sub-leading in comparison to the eikonal amplitude that arises from the three-vertex. Our formalism naturally allows for straightforward extensions to non-Abelian gauge fields since we perform these calculations in a basis of partial waves, owed to [22; 23], that has come to much use in the case of scattering of gravitational perturbations in black hole backgrounds. We conclude the paper with further discussion and future directions in Section 5. 
## 2 Shockwave of a charged particle in the Schwarzschild background In this section, we review 't Hooft's shockwave analysis in the case of a charged particle [9] propagating in the background of a Schwarzschild black hole. The metric for the background, in four dimensions, can be written as follows: \[\mathrm{d}s^{2}\ =\ -2A\left(u,v\right)\mathrm{d}u\mathrm{d}v+r\left(u,v \right)^{2}\mathrm{d}\Omega_{(2)}^{2}\,, \tag{1}\] where the functions \(A\left(u,v\right)\) and \(r\left(u,v\right)\) are defined as \[A\left(u,v\right)\ =\ \frac{R}{r}\exp\left(1-\frac{r}{R}\right)\quad\text{and} \quad uv\ =\ 2R^{2}\left(1-\frac{r}{R}\right)\exp\left(\frac{r}{R}-1\right) \tag{2}\] The line element \(\mathrm{d}\Omega_{(2)}^{2}\) defines the round metric on the unit two-sphere and \(R=2GM\) is the Schwarzschild radius. ### Gravitational backreaction and electromagnetic gauge rotation In this section, we review the backreaction of a highly boosted charged shockwave on a probe test particle [6]. The gravitational backreaction of the shock leaves an imprint on the gravitational field experienced by the probe. The probe then experiences geodesics that are shifted across the null surface traced out by the shockwave. #### 2.1.1 Backreaction on the gravitational field The stress tensor associated with a localised source carrying momentum \(p_{\mathrm{in}}\) at a location \(u_{0}\) and a point on the sphere \(\Omega_{0}\) can be parametrised as \[T^{\mu\nu}\ =\ \ -p_{\mathrm{in}}\delta\left(u-u_{0}\right)\delta\left( \Omega-\Omega_{0}\right)\delta_{v}^{\mu}\delta_{v}^{\nu}\,. 
\tag{3}\] An ansatz for the backreacted geometry that solves the Einstein equations with the above source can be taken to be \[\mathrm{d}s^{2}\ =\ \ -2A\left(u,v\right)\mathrm{d}u\Big{(}\mathrm{d}v- \delta\left(u-u_{0}\right)\lambda_{1}\left(\Omega,\Omega_{0}\right)\mathrm{d} u\Big{)}+g\left(u,v\right)\mathrm{d}\Omega_{(2)}^{2}\,, \tag{4}\] where, again, \(\mathrm{d}\Omega_{(2)}^{2}\) is the line element on the unit round two-sphere. Outside of the location of the source shock, a probe particle experiences the background Schwarzschild solution with a vanishing \(\lambda_{1}\left(\Omega,\Omega_{0}\right)\). At the location of the source \(\delta\left(u-u_{0}\right)\), however, the Einstein equations with the source (3) reduce to [10; 11]3 Footnote 3: In this derivation, terms that are quadratic in \(\delta\left(u-u_{0}\right)\) have been neglected. This implies that the calculation is only valid when the impact parameter between the probe and the shock, measured by the transverse distance on the sphere, is larger than Planck length. This is the regime of validity of this effective description. Beyond this regime, it is of course well-known that a point-particle description in gravity is problematic. \[\left(\Delta_{\Omega}-1\right)\lambda_{1}\left(\Omega,\Omega_{0} \right)\ =\ \ -8\pi G\delta\left(\Omega-\Omega_{0}\right)\,, \tag{5}\] where with \(\Delta_{\Omega}\) we denoted the Laplacian on the unit two-sphere. Expanding the above equation in partial waves, we find the following solution: \[\lambda_{1}^{\ell m}\ =\ \frac{8\pi G}{\ell^{2}+\ell+1}\,. \tag{6}\] #### 2.1.2 Backreaction on the electromagnetic field In analogy to the gravitational backreaction discussed above, an electromagnetically charged shock leaves an imprint on the electromagnetic field of the probe. The probe then experiences a discontinuity in its electromagnetic field across the null surface traced out by the shockwave. 
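The partial-wave inversions used in (5)-(6) above, and again in (15) below, rest on the fact that spherical harmonics diagonalise \(\Delta_{\Omega}\) with eigenvalue \(-\ell\left(\ell+1\right)\), so that \(\left(\Delta_{\Omega}-1\right)\) acts on the \(\ell m\) mode as \(-\left(\ell^{2}+\ell+1\right)\). A quick symbolic check of this eigenvalue property for one sample mode (an illustrative sympy sketch; the choice \(\ell=2\), \(m=1\) is arbitrary):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
ell, m = 2, 1  # sample partial wave (illustrative choice)

# explicit spherical harmonic Y_{2,1}(theta, phi)
Y = sp.Ynm(ell, m, theta, phi).expand(func=True)

# Laplacian on the unit two-sphere acting on Y
lap = (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
       + sp.diff(Y, phi, 2) / sp.sin(theta)**2)

# Delta_Omega Y = -ell(ell+1) Y, hence (Delta_Omega - 1) -> -(ell^2 + ell + 1)
assert sp.simplify(lap + ell * (ell + 1) * Y) == 0
```

Dividing \(-8\pi G\) by the eigenvalue of \(\left(\Delta_{\Omega}-1\right)\) then reproduces \(\lambda_{1}^{\ell m}=8\pi G/\left(\ell^{2}+\ell+1\right)\); the same step with \(\Delta_{\Omega}\) alone gives the coefficient appearing in (15).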
In what follows, we will derive the precise change in the electromagnetic field of the probe. To this end, consider the field of a particle with charge \(q\), moving with a two-velocity \(u_{\mu}\): \[A_{\mu}\left(x\right)\ =\ \frac{qu_{\mu}}{4\pi\sqrt{x^{2}+\left(u\cdot x\right)^{ 2}}}\ \approx\ \frac{qu_{\mu}}{4\pi\left(u\cdot x\right)}\,, \tag{7}\] where, in the second step, we focussed on the near horizon region \(x^{2}\sim 0\). In [9], 't Hooft argued that this field is pure gauge for a highly boosted observer near the horizon. Indeed, by considering a gauge parameter of the form \[\Lambda\ =\ \frac{q}{4\pi}\log\left[x^{2}+\left(u\cdot x\right)^{2}\right]\,, \tag{8}\] we immediately find that \[\partial_{\mu}\Lambda\ =\ \frac{q}{2\pi}\frac{x_{\mu}+u_{\mu}\left(u\cdot x \right)}{x^{2}+\left(u\cdot x\right)^{2}}\ \approx\ \frac{qu_{\mu}}{2\pi\left(u\cdot x\right)}\,. \tag{9}\] In the second step, we considered the near horizon region \(x^{2}\sim 0\) when the particle is highly boosted \(u_{\pm}\rightarrow\infty\). This shows that (7) is indeed pure gauge in this limit. Therefore, before approaching a backreacting shockwave, the gauge field of a boosted probe particle near the horizon can be gauge fixed to zero. At the location of the shockwave, however, its field is affected by the source. To see this explicitly, let us consider a localised source of charge \(q_{\rm in}\), namely \[J^{\mu}\ =\ \frac{q_{\rm in}}{\sqrt{-g}}\delta\left(u-u_{0}\right)\delta \left(\Omega-\Omega_{0}\right)\delta_{v}^{\mu}\,. \tag{10}\] The ansatz for the electromagnetic field of the probe upon the introduction of the above source can be parametrised as \[A^{\mu}\ =\ \Theta\left(u-u_{0}\right)\lambda_{2}\left(\Omega,\Omega_{0} \right)\delta_{v}^{\mu}\,. 
\tag{11}\] Therefore, as in the gravitational case, we must solve the Maxwell equations at the location of the horizon \(u=0\): \[g_{\mu\nu}\Box A^{\nu}-\nabla_{\mu}\nabla_{\nu}A^{\nu}\ =\ \frac{q_{\rm in}}{\sqrt{-g}} \delta\left(u-u_{0}\right)\delta\left(\Omega-\Omega_{0}\right)\delta_{v}^{\mu }\,. \tag{12}\] The left hand side of this equation can be simplified in the Schwarzschild background to find \[\Box A^{\mu}-\partial^{\mu}\left(\nabla\cdot A\right) = g^{uv}\left(\partial_{u}\tilde{U}_{v}\right)A^{v}+2\tilde{U}^{u} \partial_{u}A^{v}+2\tilde{V}^{u}\partial_{u}A^{v} \tag{13}\] \[\mbox{}+2\tilde{V}^{v}\tilde{U}_{v}A^{v}-2\tilde{V}^{v}\tilde{V}_ {v}A^{v}+\frac{1}{r^{2}}\Delta_{\Omega}A^{v}-\partial^{v}\left[\partial_{v} \log\left(Ar^{2}\right)A^{v}\right]\,.\] Explicit computation shows that this expression simplifies to reduce (12) to an equation for the undetermined bilocal function \(\lambda_{2}\left(\Omega,\Omega_{0}\right)\) in the field configuration (11): \[\Delta_{\Omega}\lambda_{2}\left(\Omega,\Omega_{0}\right)\ =\ q_{\rm in}\delta \left(\Omega-\Omega_{0}\right)\,, \tag{14}\] where we defined \(\tilde{V}_{a}\coloneqq\partial_{a}\log r\) and \(\tilde{U}_{a}\coloneqq\partial_{a}\log A(r)\). Notice that the Latin index \(a\) runs over the coordinates \(u\) and \(v\). To arrive at this simplification, we integrate the equation against an arbitrary test function to handle the delta function in \(u\). In a partial wave basis, we find \[\lambda_{2}^{\ell m}\ =\ -\frac{q_{\rm in}}{\ell\left(\ell+1\right)}\,. \tag{15}\] Therefore, while the electromagnetic field of the probe could be gauge-fixed to vanish in the absence of sources, the backreaction of a source shock results in a gauge rotation of the probe electromagnetic field. ### An S-matrix for the wavefunction of a probe charged particle The aim of this section is to calculate the S-matrix for the wavefunction of a charged particle in the presence of a gravitationally backreacting charged shockwave. 
To this end, let us first begin by writing the wavefunction of a charged particle in the said eigenbasis as \(\psi\left(p_{\rm in},q_{\rm in}\right)\ =\ \langle\psi|p_{\rm in},q_{\rm in}\rangle\). In order to label states as such, we may demand the existence of a charge operator which when acted on its eigenstate yields the charge of the state. Just as a superposition of momentum eigenstates yields a state of definite position, a superposition of charge eigenstates will yield a state with definite electric field. As we argued in the previous subsection, for boosted particles backreacting near the horizon of a black hole, this electric field approaches a pure gauge configuration and may be parameterised by a gauge parameter, say, \(\Lambda\). Therefore, we may label states in the momentum-charge basis by \(|p,q\rangle\) or by \(|y,\Lambda\rangle\) in the position-gauge field basis. In terms of momentum and charge eigenstates, the S-matrix is formally given by \(S\left(p_{\rm in},q_{\rm in};p_{\rm out},q_{\rm out}\right)\ \coloneqq\ \langle p_{\rm in},q_{\rm in}|p_{\rm out},q_{\rm out}\rangle\,.\) This allows us to write \[\psi\left(p_{\rm in},q_{\rm in}\right) =\ \langle\psi|p_{\rm in},q_{\rm in}\rangle \tag{16}\] \[= \int{\rm d}q_{\rm out}\int\frac{{\rm d}p_{\rm out}}{2\pi}\langle\psi|p_{\rm out},q_{\rm out}\rangle\langle p_{\rm out},q_{\rm out}|p_{\rm in},q_{\rm in}\rangle\] \[= \int{\rm d}q_{\rm out}\int\frac{{\rm d}p_{\rm out}}{2\pi}\langle\psi|p_{\rm out},q_{\rm out}\rangle S^{*}\left(p_{\rm in},q_{\rm in};p_{\rm out},q_{\rm out}\right)\] \[= \int{\rm d}\Lambda_{\rm out}\int{\rm d}y\int{\rm d}q_{\rm out}\int\frac{{\rm d}p_{\rm out}}{2\pi}\psi\left(y,\Lambda_{\rm out}\right)\langle y,\Lambda_{\rm out}|p_{\rm out},q_{\rm out}\rangle\] \[\times\ \ S^{*}\left(p_{\rm in},q_{\rm in};p_{\rm out},q_{\rm out}\right)\,.\] where we used the completeness relations \[\int\mathrm{d}q_{\mathrm{out}}\int\frac{\mathrm{d}p_{\mathrm{out}}}{2\pi}|p_{
\mathrm{out}},q_{\mathrm{out}}\rangle\langle p_{\mathrm{out}},q_{\mathrm{out}}|\ =\ 1\ =\ \int\mathrm{d}\Lambda_{\mathrm{out}}\int\mathrm{d}y|y,\Lambda_{\mathrm{out}}\rangle\langle y,\Lambda_{\mathrm{out}}|\,, \tag{17}\] and the definition of the scattering matrix. As we argued in the previous section, the gravitational backreaction implies that the position of the outgoing particle is determined by the momentum of the incoming particle. Similarly, the gauge parameter of the outgoing particle is given by the charge of the incoming particle. These relations are4 Footnote 4: Here, the “constants” \(\lambda_{i}\) are only constants along the longitudinal coordinates \(u,v\). They indeed depend on the transverse distance on the horizon, between the backreacting shock and the probe outgoing particle. \[y\ =\ \lambda_{1}p_{\mathrm{in}}\quad\mathrm{and}\quad\Lambda_{\mathrm{out}}\ =\ \lambda_{2}q_{\mathrm{in}}\,, \tag{18}\] which we insert in the previous expression for the wavefunction to find \[\psi\left(p_{\mathrm{in}},q_{\mathrm{in}}\right) = \lambda_{1}\lambda_{2}\int\mathrm{d}q^{\prime}_{\mathrm{in}}\int\mathrm{d}p^{\prime}_{\mathrm{in}}\int\mathrm{d}q_{\mathrm{out}}\int\frac{\mathrm{d}p_{\mathrm{out}}}{2\pi}\psi\left(\lambda_{1}p^{\prime}_{\mathrm{in}},\lambda_{2}q^{\prime}_{\mathrm{in}}\right)\langle y,\Lambda_{\mathrm{out}}|p_{\mathrm{out}},q_{\mathrm{out}}\rangle \tag{19}\] \[\times\ S^{*}\left(p_{\mathrm{in}},q_{\mathrm{in}};p_{\mathrm{out}},q_{\mathrm{out}}\right)\] \[= \int\mathrm{d}q^{\prime}_{\mathrm{in}}\int\mathrm{d}p^{\prime}_{\mathrm{in}}\int\mathrm{d}q_{\mathrm{out}}\int\frac{\mathrm{d}p_{\mathrm{out}}}{2\pi}\psi\left(p^{\prime}_{\mathrm{in}},q^{\prime}_{\mathrm{in}}\right)\langle y,\Lambda_{\mathrm{out}}|p_{\mathrm{out}},q_{\mathrm{out}}\rangle\] \[\times\ S^{*}\left(p_{\mathrm{in}},q_{\mathrm{in}};p_{\mathrm{out}},q_{\mathrm{out}}\right)\,.\] The rescaling of integration variables to arrive at the second equality does not change the ranges of integration (which remain from \(-\infty\) to \(\infty\) for both integrals). This relation must hold for any wavefunction as (18) contains invertible basis transformations. 
Therefore, we can finally write \[\int\mathrm{d}q_{\mathrm{out}}\int\frac{\mathrm{d}p_{\mathrm{out}}}{2\pi}\langle y,\Lambda_{\mathrm{out}}|p_{\mathrm{out}},q_{\mathrm{out}}\rangle S^{*}\left(p_{\mathrm{in}},q_{\mathrm{in}};p_{\mathrm{out}},q_{\mathrm{out}}\right)\ =\ \delta\left(p^{\prime}_{\mathrm{in}}-p_{\mathrm{in}}\right)\delta\left(q^{\prime}_{\mathrm{in}}-q_{\mathrm{in}}\right)\,. \tag{20}\] To invert this equation for the S-matrix, we now need an expression for \(\langle y,\Lambda_{\mathrm{out}}|p_{\mathrm{out}},q_{\mathrm{out}}\rangle\). Writing the positions \(y\) in a momentum basis gives us a plane wave. Similarly, we know that the electric field and charge density are conjugate and therefore we may write \[\langle y,\Lambda_{\mathrm{out}}|p_{\mathrm{out}},q_{\mathrm{out}}\rangle\ =\ \exp\left(i\,y\,p_{\mathrm{out}}+i\Lambda_{\mathrm{out}}q_{\mathrm{out}}\right)\ =\ \exp\left(i\,\lambda_{1}\,p_{\mathrm{in}}\,p_{\mathrm{out}}+i\lambda_{2}\,q_{\mathrm{in}}q_{\mathrm{out}}\right)\,. \tag{21}\] Plugging this into the previous expression, we see that it is a Fourier transform equation for the scattering matrix which can easily be inverted to find \[S^{*}\left(p_{\mathrm{in}},q_{\mathrm{in}};p_{\mathrm{out}},q_{\mathrm{out}}\right)\ =\ \exp\left(-i\,\lambda_{1}\,p_{\mathrm{in}}\,p_{\mathrm{out}}-i\lambda_{2}\,q_{\mathrm{in}}q_{\mathrm{out}}\right)\,. \tag{22}\] ### Generalisation to many particles and the continuum We would now like to generalise the previous results to the case of many particles in order to then take a continuum limit to describe a distribution of particles on the horizon. Since quantum mechanics does not allow for particle production, we may safely assume that the number of incoming and outgoing particles is equal and large; we denote the number of incoming and outgoing particles by \(N_{\rm in}\) and \(N_{\rm out}\) respectively. 
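Before generalising, note that the inversion of (20) into (22) is nothing but Fourier inversion: integrating the kernel \(\exp\left(i\lambda_{1}p_{\rm in}p_{\rm out}\right)\) against its conjugate produces a delta function. A discrete toy version of the momentum half of this statement (a numpy sketch; the grid size and the unit value of \(\lambda_{1}\) are illustrative choices, and the charge half works identically with \(\lambda_{2}\)):

```python
import numpy as np

N = 64                     # grid size (illustrative)
p_out = np.arange(N)

def overlap(p_in):
    # discrete analogue of <y|p_out> with y = lambda_1 * p_in, lambda_1 = 1
    return np.exp(2j * np.pi * p_in * p_out / N)

def S_conj(p_in):
    # discrete analogue of S*(p_in; p_out) = exp(-i lambda_1 p_in p_out)
    return np.exp(-2j * np.pi * p_in * p_out / N)

# completeness: (1/N) sum_{p_out} <y(p'_in)|p_out> S*(p_in; p_out) = delta_{p'_in p_in}
M = np.array([[overlap(pp).dot(S_conj(p)) / N for p in range(N)]
              for pp in range(N)])

assert np.allclose(M, np.eye(N))
```

The multi-particle generalisation below replaces \(\lambda_{1}\) by the invertible matrix \(\lambda_{1}^{ij}\), whose determinant then appears in the integration measure.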
We will label the \(i\)-th incoming particle by its longitudinal position \(x_{i}\), angular position on the horizon \(\Omega_{i}\), and momentum \(p_{\rm in}^{i}\) such that \(i\in N_{\rm in}\). Similarly, outgoing particles would be labelled by \(y_{j},\Omega_{j},p_{\rm out}^{j}\) and \(j\in N_{\rm out}\). Assuming that there is no more than one particle at each angular position on the horizon, in the continuum limit \(N_{\rm in}=N_{\rm out}\rightarrow\infty\), the positions of particles may be described by distributions \(x\left(\Omega\right)\) and \(y\left(\Omega\right)\). The basis of states may now be written as \[|p_{\rm in,\ tot},q_{\rm in,\ tot}\rangle\ =\ \bigotimes_{i}|p_{\rm in}^{i},q_{\rm in}^{i}\rangle\quad\mbox{and}\quad|p_{\rm out,\ tot},q_{\rm out,\ tot}\rangle\ =\ \bigotimes_{j}|p_{\rm out}^{j},q_{\rm out}^{j}\rangle\,, \tag{23}\] where we assumed a factorised Hilbert space because all parallel moving particles are independent. The completeness relations are now integrals defined with measure \({\rm d}p_{\rm out,\ tot}=\prod_{j}{\rm d}p_{\rm out}^{j}\) and \({\rm d}y_{\rm tot}=\prod_{j}{\rm d}y^{j}\). The S-matrix may formally be written as \[S_{\rm tot}\ :=\ S\left(p_{\rm in,\ tot},q_{\rm in,\ tot};p_{\rm out,\ tot},q_{\rm out,\ tot}\right)\ :=\ \langle p_{\rm in,\ tot},q_{\rm in,\ tot}|p_{\rm out,\ tot},q_{\rm out,\ tot}\rangle\,. 
\tag{24}\] This S-matrix is dictated by the backreaction relations derived before, which are now given in terms of invertible matrices that are in turn functions of the transverse distance between the in and out particles: \[y^{j}\ =\ \lambda_{1}^{ij}\left(\Omega_{i},\Omega_{j}\right)p_{\rm in}^{i} \quad\mbox{and}\quad\Lambda_{\rm out}^{j}\ =\ \lambda_{2}^{ij}\left(\Omega_{i},\Omega_{j}\right)q_{\rm in}^{i}\,, \tag{25}\] such that we can write \[|y_{\rm tot},\Lambda_{\rm out,\ tot}\rangle\ =\ \bigotimes_{j}|\lambda_{1}^{ij} \left(\Omega_{i},\Omega_{j}\right)p_{\rm in}^{i},\lambda_{2}^{ij}\left(\Omega_ {i},\Omega_{j}\right)q_{\rm in}^{i}\rangle\,. \tag{26}\] Since the scattering matrix is a basis transformation, it is necessarily bijective between the in and out Hilbert spaces. This implies that the matrices \(\lambda_{1}\left(\Omega_{i},\Omega_{j}\right)\) and \(\lambda_{2}\left(\Omega_{i},\Omega_{j}\right)\) are invertible, which in turn implies that there is no more than one particle entering (leaving) the horizon at any given angle. Moreover, we have the condition that \(N_{\rm in}=N_{\rm out}\). Consequently, we may now repeat our strategy from the single particle case to the multiparticle case. 
We begin with the wavefunction \[\psi\left(p_{\rm in,tot},q_{\rm in,tot}\right) = \langle\psi|p_{\rm in,tot},q_{\rm in,tot}\rangle \tag{27}\] \[= \int{\rm d}q_{\rm out,tot}\int\frac{{\rm d}p_{\rm out,tot}}{2\pi} \langle\psi|p_{\rm out,tot},q_{\rm out,tot}\rangle\langle p_{\rm out,tot},q_{ \rm out,tot}|p_{\rm in,tot},q_{\rm in,tot}\rangle\] \[= \int{\rm d}q_{\rm out,tot}\int\frac{{\rm d}p_{\rm out,tot}}{2\pi} \langle\psi|p_{\rm out,tot},q_{\rm out,tot}\rangle S_{\rm tot}^{*}\] \[= \int{\rm d}\Lambda_{\rm out,tot}\int{\rm d}y_{\rm tot}\int{\rm d} q_{\rm out,tot}\int\frac{{\rm d}p_{\rm out,tot}}{2\pi}\psi\left(y_{\rm tot}, \Lambda_{\rm out,tot}\right)\] \[\times\ \langle y_{\rm tot},\Lambda_{\rm out,\ tot}|p_{\rm out,\ tot },q_{\rm out,\ tot}\rangle S_{\rm tot}^{*}\,,\] where we used the completeness relations \[\int{\rm d}q_{\rm out,\ tot}\int\frac{{\rm d}p_{\rm out,\ tot}}{2\pi} |p_{\rm out,\ tot},q_{\rm out,\ tot}\rangle\langle p_{\rm out,\ tot},q_{\rm out,\ tot}|\ =\ 1\,, \tag{28}\] \[\int{\rm d}\Lambda_{\rm out,\ tot}\int{\rm d}y_{\rm tot}|y_{\rm tot },\Lambda_{\rm out,\ tot}\rangle\langle y_{\rm tot},\Lambda_{\rm out,\ tot}|\ =\ 1\,. 
\tag{29}\] We now insert the backreaction relations \[y^{j}\ =\ \lambda_{1}^{ij}\left(\Omega_{i},\Omega_{j}\right)p_{\rm in}^{i} \quad\mbox{and}\quad\Lambda_{\rm out}^{j}\ =\ \lambda_{2}^{ij}\left(\Omega_{i},\Omega_{j}\right)q_{\rm in}^{i}\,, \tag{30}\] resulting in the measures \[\prod_{j}{\rm d}y^{j}\ =\ \det\left(\lambda_{1}^{ij}\left(\Omega_{i},\Omega_{j} \right)\right)\prod_{j}{\rm d}p_{\rm in}^{i}\quad\mbox{and}\quad\prod_{j}{\rm d }\Lambda_{\rm out}^{j}\ =\ \det\left(\lambda_{2}^{ij}\left(\Omega_{i},\Omega_{j} \right)\right)\prod_{j}{\rm d}q_{\rm in}^{i}\,, \tag{31}\] to write the wavefunction as \[\psi\left(p_{\rm in,\ tot},q_{\rm in,\ tot}\right) = \prod_{j}\int{\rm d}q_{\rm in}^{\prime\,i}\int{\rm d}p_{\rm in}^{ \prime\,i}\int{\rm d}q_{\rm out,\ tot}\det\left(\lambda_{1}^{ij}\left(\Omega _{i},\Omega_{j}\right)\right)\det\left(\lambda_{2}^{ij}\left(\Omega_{i},\Omega _{j}\right)\right) \tag{32}\] \[\qquad\times\int\frac{{\rm d}p_{\rm out,\ tot}}{2\pi}\psi\left( \lambda_{1}^{ij}p_{\rm in}^{i},\lambda_{2}^{ij}q_{\rm in}^{i}\right)\] \[\qquad\qquad\times\langle y_{\rm tot},\Lambda_{\rm out,\ tot}|p_{\rm out,\ tot},q_{\rm out,\ tot}\rangle S_{\rm tot}^{*}\,.\] For every \(j\) in the product, we have a sum over all incoming particles labelled by \(i\). 
In each term of the sum, we rescale the integration variables \(p_{\rm in}\) and \(q_{\rm in}\) to neutralise the corresponding factors of \(\lambda_{1}\) and \(\lambda_{2}\), just as we did in the single particle case, to arrive at \[\psi\left(p_{\rm in,\ tot},q_{\rm in,\ tot}\right) = \int{\rm d}q_{\rm in,\ tot}^{\prime}\int{\rm d}p_{\rm in,\ tot}^{ \prime}\int{\rm d}q_{\rm out,\ tot}\int\frac{{\rm d}p_{\rm out,\ tot}}{2\pi} \psi\left(p_{\rm in,\ tot}^{\prime},q_{\rm in,\ tot}^{\prime}\right) \tag{33}\] \[\qquad\qquad\qquad\qquad\times\langle y_{\rm tot},\Lambda_{\rm out,\ tot}|p_{\rm out,\ tot},q_{\rm out,\ tot}\rangle S_{\rm tot}^{*}\,.\] In analogy to (21), we now write \[\langle y_{\rm tot},\Lambda_{\rm out,\ tot}|p_{\rm out,\ tot},q_ {\rm out,\ tot}\rangle = \prod_{j}\langle y_{j},\Lambda_{\rm out}^{j}|p_{\rm out}^{j},q_{ \rm out}^{j}\rangle \tag{34}\] \[= \exp\left(i\sum_{j}y_{j}p_{\rm out}^{j}+i\sum_{j}\Lambda_{\rm out }^{j}q_{\rm out}^{j}\right)\,.\] Therefore, as we did in the single particle case, we may invert the previous relation for the scattering matrix to finally find \[S_{\rm tot}\ =\ \exp\left(i\lambda_{1}^{ij}p_{\rm in}^{i}p_{\rm out}^{j}+i \lambda_{2}^{ij}q_{\rm in}^{i}q_{\rm out}^{j}\right)\,, \tag{35}\] where a sum over all in and out particles is implicit. The continuum limit is now easy to achieve. 
We first promote the momenta and charges to be distributions as smooth functions of the sphere coordinates and then replace the sum over in and out particles with integrals over the sphere coordinates as \[S_{\rm tot} = \exp\left[i\int{\rm d}\Omega\,{\rm d}\Omega^{\prime}\left(\lambda_{ 1}\left(\Omega,\Omega^{\prime}\right)p_{\rm in}\left(\Omega\right)p_{\rm out} \left(\Omega^{\prime}\right)+\lambda_{2}\left(\Omega,\Omega^{\prime}\right)q_{ \rm in}\left(\Omega\right)q_{\rm out}\left(\Omega^{\prime}\right)\right)\right] \tag{36}\] \[= \exp\left[i\left(\frac{8\pi G\,p_{\rm in}p_{\rm out}}{\ell^{2}+ \ell+1}+\frac{q_{\rm in}q_{\rm out}}{\ell\left(\ell+1\right)}\right)\right]\,,\] where we expanded the expression in partial waves in the second line and substituted for \(\lambda_{1}\) and \(\lambda_{2}\) using (6) and (15). Of course, the momentum and charge distributions are also expanded in spherical harmonics, but their partial wave indices have been suppressed. ## 3 Scalar QED near the horizon In this section, we set up the effective theory of scalar QED near the horizon. Let us consider a complex scalar field, minimally coupled to the photon in the gravitational background of the Schwarzschild black hole (1): \[S\left[\phi,A_{\mu}\right] = \int{\rm d}^{4}x\sqrt{-g}\left[-\left(D_{\mu}\phi\right)^{*} \left(D^{\mu}\phi\right)-m^{2}\left|\phi\right|^{2}-\frac{1}{4}F_{\mu\nu}F^{ \mu\nu}\right]\,. \tag{37}\] Here, the covariant derivative \(D\) enables gauge and gravitational covariance whereas in what follows, gravitational covariance is enabled by \(\nabla\). 
The action of the former on the complex scalar is defined by \[\left(D_{\mu}\phi\right)^{*}\left(D^{\mu}\phi\right) = \left(\nabla_{\mu}\phi-iqA_{\mu}\phi\right)^{*}\left(\nabla^{\mu} \phi-iqA^{\mu}\phi\right) \tag{38}\] \[= \nabla_{\mu}\phi^{*}\nabla^{\mu}\phi+iqA^{\mu}\left(\phi^{*} \nabla_{\mu}\phi-\phi\nabla_{\mu}\phi^{*}\right)+q^{2}A_{\mu}^{2}\left|\phi \right|^{2}\,.\] Partial integration now allows us to write the matter action as \[S_{M} \coloneqq \int{\rm d}^{4}x\sqrt{-g}\left[\phi^{*}\left(\Box-m^{2}\right) \phi-qA^{\mu}j_{\mu}-q^{2}A_{\mu}^{2}\left|\phi\right|^{2}\right]\,, \tag{39}\] where with \(\Box\) we denoted the d'Alembertian in the Schwarzschild background while the scalar current \(j_{\mu}\) has been defined as \[j_{\mu} \coloneqq i\left(\phi^{*}\nabla_{\mu}\phi-\phi\nabla_{\mu}\phi^{*} \right)\,. \tag{40}\] The Maxwell action can also be partially integrated to write it in the form \(A_{\mu}{\cal O}^{\mu\nu}A_{\nu}\): \[S_{A_{\mu}} \coloneqq -\,\frac{1}{4}\int{\rm d}^{4}x\left[\left(\nabla_{\mu}A_{\nu}- \nabla_{\nu}A_{\mu}\right)\left(\nabla^{\mu}A^{\nu}-\nabla^{\nu}A^{\mu}\right)\right] \tag{41}\] \[= \frac{1}{2}\int{\rm d}^{4}xA_{\mu}\left[g^{\mu\nu}\Box-\nabla^{ \mu}\nabla^{\nu}-R^{\mu\nu}\right]A_{\nu}\,.\] Since the Schwarzschild metric is a vacuum solution to Einstein equations, the quadratic operator in (41) therefore reduces to \[{\cal O}^{\mu\nu} \coloneqq g^{\mu\nu}\Box-\nabla^{\mu}\nabla^{\nu}\,. \tag{42}\] ### Gauge fixing In what follows, we will exploit the background spherical symmetry of the theory to expand the gauge field into partial waves. 
As Regge and Wheeler argued [22; 24], vector spherical harmonics can be split into even and odd parity modes \[A_{\mu}^{+}\left(u,v,\Omega\right) = \sum_{\ell m}\left(\begin{matrix}A_{a}\left(u,v\right)\\ A^{+}\left(u,v\right)\partial_{A}\end{matrix}\right)Y_{\ell m}\left(\Omega\right)\,, \tag{3.7}\] \[A_{\mu}^{-}\left(u,v,\Omega\right) = \sum_{\ell m}\left(\begin{matrix}0\\ -A^{-}\left(u,v\right)\epsilon_{A}{}^{B}\partial_{B}\end{matrix}\right)Y_{\ell m}\left(\Omega\right)\,, \tag{3.8}\] where \(Y_{\ell m}\left(\Omega\right)\) are the familiar spherical harmonics written in a real basis, lowercase Latin indices represent coordinates along the longitudinal directions and uppercase Latin indices represent coordinates along the transverse sphere. All fields \(A_{a}\) and \(A^{\pm}\) depend only on longitudinal coordinates and carry partial wave indices which we suppressed to avoid clutter of notation. Moreover, the antisymmetric tensor \(\epsilon_{A}{}^{B}\) whose indices are raised and lowered by the round metric on the sphere of radius \(r\) is given by \[\epsilon_{A}{}^{B}\ =\ \begin{pmatrix}0&\sin\theta\\ -\csc\theta&0\end{pmatrix}\,. \tag{3.9}\] The Maxwell field has a gauge redundancy that needs to be fixed by a choice of gauge. We choose one5 where \(A^{+}\left(u,v\right)=0\). This choice may be seen as the adaptation of the “Regge-Wheeler gauge” for gravitational scattering to the electromagnetic case. This results in Footnote 5: This is achieved by choosing a gauge parameter, say \(\Lambda\), such that \(\partial_{a}\Lambda=-\partial_{a}A^{+}\), \(\partial_{A}\Lambda=-A^{+}\partial_{A}\) and then redefining \(A_{a}=A_{a}+\partial_{a}A^{+}\). 
\[A_{\mu}^{+}\left(u,v,\Omega\right) = \sum_{\ell m}\left(\begin{matrix}A_{a}\left(u,v\right)\\ 0\end{matrix}\right)Y_{\ell m}\left(\Omega\right)\,, \tag{3.10}\] \[A_{\mu}^{-}\left(u,v,\Omega\right) = \sum_{\ell m}\left(\begin{matrix}0\\ -A^{-}\left(u,v\right)\epsilon_{A}{}^{B}\partial_{B}\end{matrix}\right)Y_{\ell m}\left(\Omega\right)\,. \tag{3.11}\] After making the field redefinitions \[\mathcal{A}_{a}^{\ell m}\left(u,v\right)\ \coloneqq\ \frac{\sqrt{A\left(r\right)}A_{a}\left(u,v\right)}{r\left(u,v\right)}\quad\text{and}\quad\mathcal{A}^{\ell m}\left(u,v\right)\ \coloneqq\ \frac{A^{-}\left(u,v\right)}{r\left(u,v\right)}\,, \tag{3.12}\] and plugging the spherical harmonic decomposition (3.10) and (3.11) into the Maxwell action, Eq. (41), we find \[S_{A_{\mu}^{+}}\ =\ \frac{1}{2}\sum_{\ell m}\int\mathrm{d}^{2}x\,\mathcal{A}_{\ell m}^{a}\Delta_{ab}^{-1}\mathcal{A}_{\ell m}^{b}\quad\text{and}\quad S_{A_{\mu}^{-}}\ =\ \frac{1}{2}\sum_{\ell m}\int\mathrm{d}^{2}x\,\mathcal{A}_{\ell m}\Delta^{-1}\mathcal{A}_{\ell m}\,. 
\tag{3.13}\] Above, the quadratic operators are given by \[\Delta_{ab}^{-1}\ =\ \eta_{ab}\left\{\eta^{cd}\partial_{c}\partial_{d}- \frac{A\left(r\right)^{2}}{16r^{2}R^{2}}\left[\left(1+\frac{r}{R}\right)^{2}+2 \left(1+\frac{r}{R}\right)-8\left(2+\frac{r}{R}\right)+4\right]x_{a}x^{a}- \frac{A\left(r\right)}{rR}\right.\] \[\qquad\qquad-\left.\frac{\left(\lambda-1\right)A\left(r\right)}{r^ {2}}+\frac{A\left(r\right)}{4rR}\left(1+\frac{r}{R}\right)-\frac{A\left(r \right)R}{2r^{3}}\right\}-\frac{A\left(r\right)}{4rR}\left(3+\frac{r}{R}\right) \left(x_{b}\partial_{a}-x_{a}\partial_{b}\right)\] \[\qquad\qquad+\frac{A\left(r\right)^{2}}{16r^{2}R^{2}}\left[\left( 1+\frac{r}{R}\right)^{2}-2\left(2+2\frac{r}{R}+\frac{r^{2}}{R^{2}}\right) \right]x_{a}x_{b}-\partial_{a}\partial_{b}\,, \tag{3.14a}\] \[\Delta^{-1}\ =\ \left(\lambda-1\right)\eta^{ab}\partial_{a} \partial_{b}-\frac{A\left(r\right)\left(\lambda-1\right)\left(\lambda-2 \right)}{r^{2}}-\frac{2\left(\lambda-1\right)A\left(r\right)R}{r^{3}}\] \[\qquad\qquad-\frac{A\left(r\right)\left(\lambda-1\right)}{rR}x^ {a}\partial_{a}\,, \tag{3.14b}\] where we defined \(\lambda\coloneqq\ell^{2}+\ell+1\) and \(\eta_{ab}\) is the flat metric in two-dimensions with off-diagonal elements given by \(-1\). It is evident that we have traded a single four-dimensional theory in the Schwarzschild background for an infinite tower of decoupled two-dimensional theories, one for each partial wave, with curvature effects encapsulated in potentials. We present the details of this calculation in Appendix A. ### Near horizon limits and the photon propagator While the four-dimensional theory in curved space can be simplified into decoupled two-dimensional theories in flat space with extra potentials as we demonstrated in the previous section, we have not lost any generality. Therefore, it is still an analytically intractable task to invert the quadratic operators (3.14). 
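Even at this stage one can see how much the horizon limit will simplify matters: setting \(r\to R\) (so that \(A\left(r\right)\to 1\)) in the odd operator (3.14b), the two potential terms combine as \(-\left(\lambda-1\right)\left(\lambda-2\right)/R^{2}-2\left(\lambda-1\right)/R^{2}=-\lambda\left(\lambda-1\right)/R^{2}\). A one-line symbolic check of this combination (an illustrative sympy sketch):

```python
import sympy as sp

lam, R = sp.symbols('lambda R', positive=True)

# potential terms of the odd operator (3.14b), evaluated at r -> R with A(r) -> 1
pot_full = -(lam - 1) * (lam - 2) / R**2 - 2 * (lam - 1) * R / R**3

# the combined near-horizon potential
pot_near = -lam * (lam - 1) / R**2

assert sp.simplify(pot_full - pot_near) == 0
```

This combined term is precisely the mass-like potential that survives the near-horizon approximation below.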
As was shown in the gravitational case in [14; 15], the way forward is a near-horizon approximation where the operators simplify considerably. #### 3.2.1 Shockwave approximation Since the eikonal approximation near the black hole horizon derived its motivation from consideration of shockwave geometries, it is natural to impose a constraint on the gauge field fluctuations to obey the shockwave configuration (11) in the near-horizon region. We would like to impose these restrictions in a covariant manner on the longitudinal directions for each partial wave. Considering a near-horizon approximation to linear order implies that \(x^{a}x_{a}\sim uv\sim 0\). Additionally, the shockwave approximation can be captured by the condition that \(x^{a}A_{a}\sim uA_{u}\sim vA_{v}\sim 0\). This is understood as follows. Consider the horizon located at \(u=0\). Thus, one of the terms6, \(uA_{u}\), is naturally vanishing on the horizon. The second term drops if we choose the shockwave configuration as in (11) where the \(u\)-component of the gauge field vanishes. An analogous approximation clearly holds on the other horizon, at \(v=0\). Footnote 6: Notice that in comparison to (11), a lowering of the gauge field index is achieved by the metric as \(A_{u}=g_{uv}A^{v}\). Near the horizon, \(g_{uv}\sim-1\). In order to employ this approximation on (3.14), we first note that to linear order, \(r\left(u,v\right)=R+\mathcal{O}\left(uv\right)\) and therefore, \(A\left(u,v\right)\sim 1\). With these considerations, the quadratic operators (3.14) simplify to \[\Delta_{ab}^{-1} = \eta_{ab}\left(\eta^{cd}\partial_{c}\partial_{d}-\frac{\lambda}{R^{2}}\right)-\frac{x_{b}\partial_{a}}{R^{2}}-\partial_{a}\partial_{b}\,, \tag{3.15a}\] \[\Delta^{-1} = \left(\lambda-1\right)\eta^{ab}\partial_{a}\partial_{b}-\frac{\lambda\left(\lambda-1\right)}{R^{2}}-\frac{\left(\lambda-1\right)}{R^{2}}x^{a}\partial_{a}\,. 
\tag{3.15b}\] In the action, the term containing \(x_{b}\partial_{a}\) can further be simplified as \[\int\mathrm{d}^{2}x\mathcal{A}^{a}x_{b}\partial_{a}\mathcal{A}^{b} = \int\mathrm{d}^{2}x\left(\partial_{a}\left(\mathcal{A}^{a}x_{b}\mathcal{A}^{b}\right)-\mathcal{A}^{a}\eta_{bc}\partial_{a}x^{c}\mathcal{A}^{b}\right) = -\int\mathrm{d}^{2}x\mathcal{A}^{a}\eta_{ab}\mathcal{A}^{b}\,. \tag{3.16}\] Similarly, the term containing \(x^{a}\partial_{a}\) can also be simplified to \[-\frac{\left(\lambda-1\right)}{R^{2}}\int\mathrm{d}^{2}x\mathcal{A}x^{b}\partial_{b}\mathcal{A} = -\frac{\left(\lambda-1\right)}{2R^{2}}\int\mathrm{d}^{2}x\,x^{b}\partial_{b}\mathcal{A}^{2} \tag{3.17}\] \[= \frac{\left(\lambda-1\right)}{R^{2}}\int\mathrm{d}^{2}x\,\mathcal{A}^{2}-\frac{\left(\lambda-1\right)}{2R^{2}}\int\mathrm{d}^{2}x\,\partial_{b}\left(x^{b}\mathcal{A}^{2}\right)\,.\] The boundary term is potentially subtle. At first glance, it appears safe to assume that the field falls off at the boundaries. However, notions of "far past" and "far future" on the horizon are not well-defined in an evaporating black hole formed by collapse within the effective field theory regime being considered in this paper. Nevertheless, we will blithely ignore this boundary term in this work and leave a careful analysis of its relevance in the effective theory for the future. Therefore, the quadratic operators in this approximation scheme can be written in their final form in the following way \[\Delta_{ab}^{-1} = \eta_{ab}\left(\eta^{cd}\partial_{c}\partial_{d}-\frac{\left(\lambda-1\right)}{R^{2}}\right)-\partial_{a}\partial_{b}\,, \tag{3.18a}\] \[\Delta^{-1} = \left(\lambda-1\right)\left[\eta^{ab}\partial_{a}\partial_{b}-\frac{\left(\lambda-1\right)}{R^{2}}\right]\,. \tag{3.18b}\] It is noteworthy that when \(\ell=0\), we have that \(\lambda=1\) and thus the odd action vanishes.
This is consistent with the fact that there are no odd degrees of freedom in the monopole sector. Propagator for the photon: These quadratic operators above, in Eq. (3.18), may be written in Fourier space as follows: \[\Delta_{ab}^{-1}\left(p\right) = -\eta_{ab}\left(p^{2}+\frac{\left(\lambda-1\right)}{R^{2}}\right)+p_{a}p_{b}\,, \tag{3.19}\] \[\Delta^{-1}\left(p\right) = -\left(\lambda-1\right)\left[p^{2}+\frac{\left(\lambda-1\right)}{R^{2}}\right]\,. \tag{3.20}\] In order to find their inverses, we demand that \[\Delta_{ab}^{-1}\left(p\right)\Delta^{bc}\left(p^{\prime}\right) = \delta_{a}^{c}\,\delta^{\left(2\right)}\left(p-p^{\prime}\right)\quad\text{and}\quad\Delta^{-1}\left(p\right)\Delta\left(p^{\prime}\right) = \delta^{\left(2\right)}\left(p-p^{\prime}\right)\,. \tag{3.21}\] Lorentz invariance along the longitudinal directions near the horizon implies that the most general ansatz for the propagator for the even mode can be written as \[\Delta^{bc}\left(k\right) = f_{1}\left(k^{2}\right)\left(\eta^{bc}-f_{2}\left(k^{2}\right)k^{b}k^{c}\right)\,. \tag{3.22}\] Explicitly computing \(\Delta_{ab}^{-1}\left(k\right)\Delta^{bc}\left(k^{\prime}\right)\) and solving for the unknown functions \(f_{i}\), we find that the propagator for the even mode of the photon is \[\Delta^{ab}\left(k\right) = \frac{-1}{k^{2}+\frac{\lambda-1}{R^{2}}-i\epsilon}\left(\eta^{ab}+\frac{R^{2}k^{a}k^{b}}{\lambda-1}\right)\,. \tag{3.23}\] In a similar vein, the propagator for the odd mode of the photon can be worked out to be \[\Delta\left(k\right) = \frac{-1}{\left(\lambda-1\right)\left[k^{2}+\frac{\left(\lambda-1\right)}{R^{2}}\right]}\,. \tag{3.24}\] Just as in the case of the graviton, the photon acquires an effective mass near the horizon owing to curvature effects, while the photon in four dimensions remains massless as it must.
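As a quick sanity check, the inversion demanded in (3.21) can be verified symbolically. The sketch below is our own illustration, not part of the original derivation: it builds (3.19) and (3.23) in the light-cone metric with off-diagonal entries \(-1\), drops the \(i\epsilon\) prescription, and keeps the effective mass squared as a generic symbol `msq` standing in for \((\lambda-1)/R^{2}\).

```python
import sympy as sp

# msq plays the role of the effective mass squared (lambda - 1)/R^2
pu, pv, msq = sp.symbols('p_u p_v m_eff2', positive=True)

eta = sp.Matrix([[0, -1], [-1, 0]])   # light-cone metric; it equals its own inverse
p_up = sp.Matrix([pu, pv])            # p^a
p_dn = eta*p_up                       # p_a
p2 = (p_dn.T*p_up)[0, 0]              # p^2 = -2 p_u p_v

# Eq. (3.19) with the i*epsilon prescription dropped: Delta^{-1}_{ab}(p)
Dinv = -eta*(p2 + msq) + p_dn*p_dn.T
# Eq. (3.23): the claimed inverse Delta^{ab}(p)
D = (eta + p_up*p_up.T/msq) * (-1/(p2 + msq))

# Delta^{-1}_{ab} Delta^{bc} should be the Kronecker delta delta_a^c
assert (Dinv*D - sp.eye(2)).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

Substituting \(\lambda/R^{2}\) for `msq` runs the identical check for the leading order operators of the following subsection; the odd-mode inversion is a trivial scalar reciprocal.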
#### 3.2.2 A leading order near-horizon approximation As it turns out, there is a different approximation that simplifies the quadratic operators considerably. This was also noted in the case of the graviton [15]. In this approximation, unlike in the shockwave approximation, the configurations that the photon may acquire are not constrained. Instead, we simply work to leading order in the near-horizon approximation assuming that the gauge field does not blow up on the horizon. This implies that all terms proportional to \(x^{a}\) can be dropped, leading us to the following operators in this scheme: \[\Delta_{ab}^{-1} = \eta_{ab}\left(\eta^{cd}\partial_{c}\partial_{d}-\frac{\lambda}{R^{2}}\right)-\partial_{a}\partial_{b}\,, \tag{3.25a}\] \[\Delta^{-1} = \left(\lambda-1\right)\left[\eta^{ab}\partial_{a}\partial_{b}-\frac{\lambda}{R^{2}}\right]\,. \tag{3.25b}\] Following the calculation in the shockwave approximation, the corresponding propagators in this alternative leading order near-horizon approximation can easily be found to be \[\Delta^{ab}\left(k\right) = \frac{-1}{k^{2}+\frac{\lambda}{R^{2}}-i\epsilon}\left(\eta^{ab}+\frac{R^{2}k^{a}k^{b}}{\lambda}\right)\,, \tag{3.26}\] \[\Delta\left(k\right) = \frac{-1}{\left(\lambda-1\right)\left(k^{2}+\frac{\lambda}{R^{2}}\right)}\,. \tag{3.27}\] ### Interaction vertices In this section, we proceed with writing the interaction vertices in a partial wave basis, starting from the three-vertex in the following section and subsequently focussing on the four-vertex.
#### 3.3.1 Three-point interaction The three-vertex in (3.3) is given by \[S^{(3)} \coloneqq -iq\int\mathrm{d}^{4}x\sqrt{-g}\,A^{\mu}\left(\phi^{\star}\nabla_{\mu}\phi-\phi\nabla_{\mu}\phi^{\star}\right) \tag{3.28}\] \[= -iq\int\mathrm{d}^{4}x\sqrt{-g}\,\left[g^{ab}A_{a}\left(\phi^{\star}\partial_{b}\phi-\phi\partial_{b}\phi^{\star}\right)+g^{AB}A_{A}\left(\phi^{\star}\partial_{B}\phi-\phi\partial_{B}\phi^{\star}\right)\right]\,.\] Since the dominant contribution to the high-energy amplitudes in the eikonal sector arises from the longitudinal momenta, we will henceforth ignore the transverse effects. This amounts to dropping the second term in the square brackets above. We then expand all fields in the partial wave basis to find \[S^{(3)} = -iq\sum_{\begin{subarray}{c}\ell m\\ \ell_{1}m_{1}\\ \ell_{2}m_{2}\end{subarray}}\int\mathrm{d}^{2}xA\left(r\right)r^{2}g^{ab}A_{a}^{\ell m}\left(\phi_{\ell_{1}m_{1}}^{\star}\partial_{b}\phi_{\ell_{2}m_{2}}-\phi_{\ell_{2}m_{2}}\partial_{b}\phi_{\ell_{1}m_{1}}^{\star}\right)C^{(3)}\left[\ell_{i},m_{i}\right]\,. \tag{3.29}\] To arrive at this expression, we defined the following integral of three spherical harmonics at different \(\ell,m\)'s on the two-sphere: \[C^{(3)}\left[\ell_{i},m_{i}\right] \coloneqq \int\mathrm{d}\Omega_{(2)}Y_{\ell m}\left(\Omega\right)Y_{\ell_{1}m_{1}}\left(\Omega\right)Y_{\ell_{2}m_{2}}\left(\Omega\right)\,. \tag{3.30}\] In general, interaction terms break the spherical symmetry of the background, as can be seen from the presence of the Clebsch-Gordan coefficients in the three-vertex above. While it is certainly possible to perform several calculations with this general vertex, it turns out to be very cumbersome for the resummation of eikonal diagrams. Therefore, it is convenient to choose one scalar leg in each vertex of the diagrams to always be in a fixed partial wave, say \(\ell=0\).
Such a choice may be thought of as reasonable given that we do not expect the spherical symmetry of the large black hole background to be badly broken by perturbative scattering processes. This approximation then leads us to a simplification of the above three-vertex where one of the spherical harmonics merely gives an overall factor of \(Y_{00}\) as follows: \[S^{(3)} \approx -i\frac{q}{\sqrt{4\pi}}\sum_{\ell m}\int\mathrm{d}^{2}xA\left(r\right)r^{2}g^{ab}A_{a}^{\ell m}\left(\phi_{\ell m}^{\star}\partial_{b}\phi_{0}-\phi_{0}\partial_{b}\phi_{\ell m}^{\star}\right)\,, \tag{3.31}\] where we denoted the scalar mode in the s-wave by \(\phi_{0}\). In order to use the same photon mode that appeared in the propagators of the previous section, we perform the field redefinition in Eq. (3.12) in addition to rescaling the scalar field as \(\phi\rightarrow\frac{\varphi}{r}\) to find \[S^{(3)} \approx -i\frac{q}{\sqrt{4\pi}\,R}\sum_{\ell m}\int\mathrm{d}^{2}x\,\eta^{ab}\mathcal{A}_{a}^{\ell m}\left(\varphi_{\ell m}^{\star}\partial_{b}\varphi_{0}-\varphi_{0}\partial_{b}\varphi_{\ell m}^{\star}\right)\,. \tag{3.32}\] This result is approximate in two ways. First, we have ignored the mixing of partial waves as described above. Second, we took a near-horizon limit in which the field redefinitions of the scalar produce sub-leading terms in \(1/R\), which we have ignored. #### 3.3.2 Four-point interaction Next, we move to the four-vertex in (3.3): \[S^{(4)} \coloneqq -\,q^{2}\int\mathrm{d}^{4}x\sqrt{-g}\,A^{\mu}A_{\mu}\left|\phi\right|^{2} \tag{3.33}\] \[= -\,q^{2}\int\mathrm{d}\Omega\int\mathrm{d}^{2}xA\left(r\right)r^{2}\left[g^{ab}A_{a}A_{b}+g^{AB}A_{A}A_{B}\right]\left|\phi\right|^{2}\,.\] We now expand all fields in partial waves as before. However, the integral over the two-sphere now involves four spherical harmonics in the even sector.
In the odd sector, on the other hand, two of the four spherical harmonics come with derivatives on them, as can be seen from the definition of the odd component of the photon in (3.11). Following our previous choice to ignore partial wave mixing, we now take both scalar modes in the vertex to be in the s-wave.7 Finally, redefining the fields as in the three-vertex case, we find Footnote 7: In the odd sector, there is no other available choice since the odd mode of the photon vanishes identically in the s-wave. The even sector, however, allows for more choice but we make the simplest one. Other choices, or even the most general integral with all four spherical harmonics, may just as well be written down in terms of products of Clebsch-Gordan coefficients. \[S^{(4)} \approx -\,\frac{q^{2}}{4\pi R^{2}}\int\mathrm{d}^{2}x\left(\mathcal{A}_{a,\ell m}^{2}+\left(\lambda-1\right)\mathcal{A}_{\ell m}^{2}\right)\left|\varphi\right|^{2}\,. \tag{3.34}\] To arrive at this expression, in the odd sector, we made use of the following familiar integral \[\int\mathrm{d}\Omega\,\epsilon^{AB}D_{B}Y_{\ell m}\epsilon_{A}{}^{C}D_{C}Y_{\ell^{\prime}m^{\prime}}\ =\ \left(\lambda-1\right)\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\,. \tag{3.35}\] ## 4 Eikonal S-matrix = shockwave S-matrix Having built up all the tools necessary for computing scattering amplitudes in the theory near the black hole horizon, we first summarise the necessary Feynman rules before moving on to the computations of the amplitudes. ### Feynman rules near the horizon In this section, we collect all the Feynman rules we have derived in the previous sections (propagator of the even mode, propagator of the odd mode, scalar propagator and finally the interaction vertices).
* The propagators of the even mode of the photon in the shockwave and leading order near-horizon approximations, respectively, are \[\mathcal{P}^{ab}\left(k\right) =\ \frac{-i}{k^{2}+\frac{\lambda-1}{R^{2}}-i\epsilon}\left(\eta^{ab}+\frac{R^{2}k^{a}k^{b}}{\lambda-1}\right)\,,\] (4.1a) \[\mathcal{P}^{ab}\left(k\right) =\ \frac{-i}{k^{2}+\frac{\lambda}{R^{2}}-i\epsilon}\left(\eta^{ab}+\frac{R^{2}k^{a}k^{b}}{\lambda}\right)\,.\] (4.1b) * The propagators of the odd mode of the photon, on the other hand, again in the shockwave and leading order near-horizon approximations, respectively, are given by \[\mathcal{P}\left(k\right) = \frac{-i}{\left(\lambda-1\right)\left(k^{2}+\frac{\lambda-1}{R^{2}}-i\epsilon\right)}\,,\] (4.2a) \[\mathcal{P}\left(k\right) = \frac{-i}{\left(\lambda-1\right)\left(k^{2}+\frac{\lambda}{R^{2}}-i\epsilon\right)}\,.\] (4.2b) These were derived in Section 3.2.1 and Section 3.2.2. * The scalar propagator is straightforward and was computed in [15]. We have: \[\mathcal{P}_{\phi}\left(k\right) = \frac{-i}{k^{2}+\frac{\lambda}{R^{2}}+m^{2}-i\epsilon}\,.\] (4.3) * Next, we have two three-vertices arising from the results of Section 3.3.1. These are drawn in Fig. 1 below. * Finally, we have two four-vertices, one each from the even and odd photon as can be seen from Section 3.3.2. These are drawn in Fig. 2 below. ### Tree level elastic \(2\to 2\) diagrams Using the Feynman rules from the previous section, we first start with the two dominant tree level diagrams, which are drawn in Fig. 3.
In terms of the Mandelstam variables, namely \[s \coloneqq -\left(p_{1}+p_{2}\right)^{2} = -\left(p_{3}+p_{4}\right)^{2}\,, \tag{4.4}\] \[t \coloneqq -\left(p_{1}-p_{3}\right)^{2} = -\left(p_{2}-p_{4}\right)^{2}\,, \tag{4.5}\] \[u \coloneqq -\left(p_{1}-p_{4}\right)^{2} = -\left(p_{2}-p_{3}\right)^{2}\,, \tag{4.6}\] the two diagrams in the left and right panels of Fig. 3 can be evaluated to find \[i\mathcal{M}_{e^{-}e^{-}}\ =\ \frac{-iq^{2}s\left(\lambda-2\right)}{2\pi\lambda\left(\lambda-2\right)-2\pi R^{2}t\left(\lambda-1\right)}\left[1+\frac{t}{2s}-\frac{2m^{2}}{s}-\frac{\lambda+1}{sR^{2}}-\frac{1}{sR^{2}}\frac{(\lambda-1)^{2}}{\lambda-2}\right] \tag{4.7}\] and \[i\mathcal{M}_{e^{+}e^{-}}\ =\ \frac{iq^{2}s\left(\lambda-2\right)}{2\pi\lambda\left(\lambda-2\right)-2\pi R^{2}t\left(\lambda-1\right)}\left[1+\frac{t}{2s}-\frac{2m^{2}}{s}-\frac{\lambda+1}{sR^{2}}-\frac{1}{sR^{2}}\frac{(\lambda-1)^{2}}{\lambda-2}\right]\,, \tag{4.8}\] respectively. Here, we have made extensive use of the fact that the external particles are of course on-shell, namely \[p_{1}^{2} =\ -m^{2}-\mu^{2}\lambda\,, p_{3}^{2} =\ -m^{2}-\mu^{2}\,, \tag{4.9}\] \[p_{2}^{2} =\ -m^{2}-\mu^{2}\,, p_{4}^{2} =\ -m^{2}-\mu^{2}\lambda\,. \tag{4.10}\] We are primarily interested in the eikonal limit of scattering in this paper, which in flat space amounts to negligible momentum transfer \(t\to 0\). Moreover, we demand the black hole eikonal condition that \(M_{BH}E=M_{BH}\sqrt{s}\gg M_{Pl}^{2}\) which is equivalent to demanding that \(sR^{2}\gg 1\) and that \(s\gg m^{2}\). Figure 1: Here, the dashed lines refer to the scalar mode in the s-wave whereas the solid black line corresponds to the scalar mode in an arbitrary partial wave. The hats indicate that the modes are in Fourier space. The arrows superimposed on the scalar legs indicate the flow of charge while the external arrows indicate flow of momentum.
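This eikonal limit of (4.7) can be cross-checked symbolically: at \(t=0\), the leading large-\(s\) behaviour should be \(-q^{2}s/(2\pi\lambda)\) (times the overall factor of \(i\)). A sketch of the check, which we add here purely as an illustration, with the factor of \(i\) stripped off:

```python
import sympy as sp

q, s, t, m, R, lam = sp.symbols('q s t m R lambda', positive=True)

# the prefactor and bracket of Eq. (4.7), without the overall factor of i
bracket = (1 + t/(2*s) - 2*m**2/s - (lam + 1)/(s*R**2)
           - (lam - 1)**2/((lam - 2)*s*R**2))
M = -q**2*s*(lam - 2)/(2*sp.pi*lam*(lam - 2) - 2*sp.pi*R**2*t*(lam - 1)) * bracket

# eikonal limit: t -> 0 first, then s R^2 >> 1 and s >> m^2
leading = sp.limit((M/s).subs(t, 0), s, sp.oo)

assert sp.simplify(leading + q**2/(2*sp.pi*lam)) == 0   # i.e. M -> -q^2 s / (2 pi lambda)
```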
In this black hole eikonal limit, the above tree level amplitudes reduce to \[i\mathcal{M}_{\rm tree}\ =\ \pm\,\frac{iq^{2}s}{2\pi\left(\ell^{2}+\ell+1 \right)}\,. \tag{4.11}\] Figure 3: The two dominant tree level diagrams built out of the three vertices of the theory in the \(t\)-channel. Figure 2: In these vertices, the solid black wiggly lines represent the even mode of the photon as in the three-vertex case, whereas the blue wiggly line refers to the odd mode of the photon. The scalar modes remain as before. There is of course a third tree level diagram which is in the \(s\)-channel but it can be checked that this is of \({\cal O}\left(s^{0}\right)\) and therefore heavily sub-leading in the large \(s\) limit. The above results were derived in the leading order near-horizon approximation of Section 3.2.2. The analogous result in the shockwave approximation of Section 3.2.1 is given by \[i{\cal M}_{\rm tree}\ =\ \pm\,\frac{iq^{2}s}{2\pi\ell\left(\ell+1\right)}\,. \tag{4.12}\] ### Loop diagrams and the eikonal ladder Loop diagrams in the black hole eikonal limit are dominated by the so-called ladder diagrams. The one and two loop diagrams are shown in Fig. 4 and Fig. 5, respectively. Following the analysis in the gravitational case [1; 13; 14; 15], a general loop diagram with \(n\) virtual photons exchanged can be written as \[i{\cal M}_{n} = \left(i\frac{q}{\sqrt{4\pi}\,R}\right)^{2n}\int\prod_{j=1}^{n} \left[\frac{\mathrm{d}^{2}k_{j}}{\left(2\pi\right)^{2}}4p_{a}^{1}p_{b}^{2}{ \cal P}^{ab}\left(k_{j}\right)\right]\times I\times(2\pi)^{2}\,\delta^{(2)} \left(\sum\nolimits_{j=1}^{n}k_{j}\right) \tag{4.13}\] \[= \left(i\frac{q}{\sqrt{4\pi}\,R}\right)^{2n}\left(\frac{s}{2} \right)^{n}\int\prod_{j=1}^{n}\left[\frac{\mathrm{d}^{2}k_{j}}{\left(2\pi \right)^{2}}4{\cal P}^{uv}\left(k_{j}\right)\right]\times I\times(2\pi)^{2}\, \delta^{(2)}\left(\sum\nolimits_{j=1}^{n}k_{j}\right)\,.\] Of course, \(n\) exchanged photons implies an \(n-1\) loop amplitude. 
This equation is the two-dimensional analog of Eq. (3.1) in [1], with electromagnetic vertices replacing the meson ones, and with \(q=p_{1}-p_{3}=p_{2}-p_{4}=0\). To get to the second equality, we assumed the two momenta to be light-like, i.e., \(p_{1}=\left(p_{u}^{1},0\right)\), \(p_{2}=\left(0,p_{v}^{2}\right)\). Figure 4: All leading one-loop diagrams in the black hole eikonal. All the matter propagators to be inserted are contained in the quantity \(I\), which can be derived analogously to [1], resulting in \[i\mathcal{M}_{n}\ =\ -\frac{q^{2}s}{8\pi R^{2}n!}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)\int\mathrm{d}^{2}xe^{-ik\cdot x}\left(i\chi\right)^{n-1}\,, \tag{4.14}\] where the quantity \(\chi\) has been defined as \[\chi\ \coloneqq\ -\frac{iq^{2}s}{8\pi R^{2}}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)e^{-ik\cdot x}\times\left[\frac{1}{-2p_{1}\cdot k-i\epsilon}\frac{1}{2p_{2}\cdot k-i\epsilon}\right.\\ \left.+\frac{1}{-2p_{1}\cdot k-i\epsilon}\frac{1}{-2p_{2}\cdot k-i\epsilon}+\frac{1}{2p_{1}\cdot k-i\epsilon}\frac{1}{2p_{2}\cdot k-i\epsilon}\right.\\ \left.+\frac{1}{2p_{1}\cdot k-i\epsilon}\frac{1}{-2p_{2}\cdot k-i\epsilon}\right]\,. \tag{4.15}\] The expression in square brackets can be rewritten in a more convenient form as \[\chi\ =\ -\frac{iq^{2}s}{8\pi R^{2}}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)e^{-ik\cdot x}\left(\frac{1}{2p_{1}\cdot k+i\epsilon}-\frac{1}{2p_{1}\cdot k-i\epsilon}\right)\\ \times\left(\frac{1}{2p_{2}\cdot k+i\epsilon}-\frac{1}{2p_{2}\cdot k-i\epsilon}\right)\,. \tag{4.16}\] Now, making use of the identity \[\frac{1}{x+i\epsilon}-\frac{1}{x-i\epsilon}\ =\ -2\pi i\delta\left(x\right)\,, \tag{4.17}\] Figure 5: All leading \(e^{-}e^{-}\) two loop diagrams in the black hole eikonal.
we arrive at a simple expression for \(\chi\): \[\chi = -\,\frac{iq^{2}s}{8\pi R^{2}}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)e^{-ik\cdot x}\left(-2\pi i\right)^{2}\delta\left(2p_{1}\cdot k\right)\delta\left(2p_{2}\cdot k\right) \tag{4.18}\] \[= -\,\frac{q^{2}}{4\pi R^{2}}\mathcal{P}^{uv}\left(0\right)\] \[= \begin{cases}-\frac{q^{2}}{4\pi\left(\lambda-1\right)}&\text{in the shockwave approximation of Section 3.2.1}\,,\\ -\frac{q^{2}}{4\pi\lambda}&\text{in the leading order approximation of Section 3.2.2}\,.\end{cases}\] Since the resulting eikonal function, conveniently enough, does not depend on spacetime coordinates, we may write \[i\mathcal{M}_{n} = -\,\frac{q^{2}s}{8\pi R^{2}n!}\left(i\chi\right)^{n-1}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)\int\mathrm{d}^{2}xe^{-ik\cdot x} \tag{4.19}\] \[= -\,\frac{q^{2}s}{8\pi R^{2}n!}\left(i\chi\right)^{n-1}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}4\mathcal{P}^{uv}\left(k\right)\left(2\pi\right)^{2}\delta^{\left(2\right)}\left(k\right)\] \[= 2s\frac{\left(i\chi\right)^{n}}{n!}\,.\] The complete resummed perturbatively exact amplitude is therefore given by \[i\mathcal{M}\ =\ i\sum_{n}\mathcal{M}_{n}\ =\ 2s\left[\exp(i\chi)-1\right]\,. \tag{4.20}\] Inserting (4.18) into (4.20), and recalling that \(\lambda=\ell^{2}+\ell+1\), we find \[i\mathcal{M}\ =\ \begin{cases}4p_{\text{in}}p_{\text{out}}\left[\exp\left(-\frac{i}{4\pi}\frac{q^{2}}{\ell^{2}+\ell}\right)-1\right]&\text{shockwave approximation}\,,\\ 4p_{\text{in}}p_{\text{out}}\left[\exp\left(-\frac{i}{4\pi}\frac{q^{2}}{\ell^{2}+\ell+1}\right)-1\right]&\text{leading order approximation}\,,\end{cases} \tag{4.21}\] where we also relabelled the external momenta as \(p_{\text{in}}\) and \(p_{\text{out}}\). This amplitude is a result of diagrams of the \(e^{-}e^{-}\) kind in Fig. 5.
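Two of the steps above can be spot-checked independently: the distributional identity (4.17) numerically, by integrating both sides against a smooth test function, and the resummation of the ladder terms \(2s(i\chi)^{n}/n!\) into (4.20) symbolically. The sketch below is our own illustration; the first check uses a finite \(\epsilon\) and a Gaussian, so it holds only up to \(\mathcal{O}(\epsilon)\) corrections.

```python
import numpy as np
import sympy as sp

# --- the identity (4.17), integrated against a Gaussian test function ---
eps = 1e-3
x = np.linspace(-10.0, 10.0, 2_000_001)        # grid spacing well below eps
dx = x[1] - x[0]
f = np.exp(-x**2)                              # test function with f(0) = 1
kernel = 1.0/(x + 1j*eps) - 1.0/(x - 1j*eps)   # left-hand side of (4.17)
integral = np.sum(kernel*f)*dx
assert abs(integral - (-2j*np.pi)) < 2e-2      # -2*pi*i*f(0), up to O(eps)

# --- the ladder terms 2 s (i chi)^n / n!, n >= 1, resum to 2 s (e^{i chi} - 1) ---
chi, s = sp.symbols('chi s')
resummed = 2*s*(sp.exp(sp.I*chi) - 1)
partial = sum(2*s*(sp.I*chi)**k/sp.factorial(k) for k in range(1, 10))
assert sp.expand(resummed.series(chi, 0, 10).removeO() - partial) == 0
```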
Considering the remaining case of \(e^{-}e^{+}\) scattering results in an overall sign flip in the phase of the exponent. These two cases can be combined into a single formula, resulting in \[i\mathcal{M}\ =\ \begin{cases}4p_{\text{in}}p_{\text{out}}\left[\exp\left(-\frac{i}{4\pi}\frac{q_{\text{in}}q_{\text{out}}}{\ell^{2}+\ell}\right)-1\right]&\text{shockwave approximation}\,,\\ 4p_{\text{in}}p_{\text{out}}\left[\exp\left(-\frac{i}{4\pi}\frac{q_{\text{in}}q_{\text{out}}}{\ell^{2}+\ell+1}\right)-1\right]&\text{leading order approximation}\,,\end{cases} \tag{4.22}\] where \(q_{\text{in}}\) and \(q_{\text{out}}\) are the asymptotic charges of the in-particle and out-particle, respectively. For particles, we have \(q_{\text{in/out}}=-q\), while for antiparticles \(q_{\text{in/out}}=q\). The relation between the scattering amplitude and the S-matrix is given by \[\langle\text{out}|S-\mathds{1}|\text{in}\rangle\ \ =\ (2\pi)^{2}\delta^{\left(2\right)}\left(p_{1}+p_{2}-p_{3}-p_{4}\right)i\,\langle\text{out}|\mathcal{M}|\text{in}\rangle. \tag{4.23}\] For instance, considering particles, the in- and out-states can be defined as \[|{\rm in}\rangle \ \ \coloneqq\ |{\rm in}_{1}\rangle\otimes|{\rm in}_{2}\rangle\ =\ \frac{1}{2}\left(a_{00}^{\dagger}(p_{1})+a_{\ell m}^{\dagger}(p_{1})\right)\left(a_{00}^{\dagger}(p_{2})+a_{\ell m}^{\dagger}(p_{2})\right)|0\rangle\,, \tag{4.24}\] \[|{\rm out}\rangle \ \ \coloneqq\ |{\rm out}_{1}\rangle\otimes|{\rm out}_{2}\rangle\ =\ \frac{1}{2}\left(a_{00}^{\dagger}(p_{3})+a_{\ell m}^{\dagger}(p_{3})\right)\left(a_{00}^{\dagger}(p_{4})+a_{\ell m}^{\dagger}(p_{4})\right)|0\rangle\,. \tag{4.25}\] A similar definition exists for antiparticles. In the free theory, a straightforward calculation leads to8 Footnote 8: In comparison to [14; 15], the overall kinematic factor differs by a factor of 2 owing to a different pre-factor in the definition of the Mandelstam variables.
\[\langle{\rm out}|{\rm in}\rangle\ \ =\ 2s(2\pi)^{2}\left[\delta(p_{1}-p_{3})\delta(p_{2}-p_{4})+\delta(p_{1}-p_{4})\delta(p_{2}-p_{3})\right]\,. \tag{4.26}\] On the other hand, using Eq. (4.20), we may write the interacting piece as \[\langle{\rm out}|S-\mathds{1}|{\rm in}\rangle\ \ =\ 2s(2\pi)^{2}(e^{i\chi}-1)\left[\delta(p_{1}-p_{3})\delta(p_{2}-p_{4})+\delta(p_{1}-p_{4})\delta(p_{2}-p_{3})\right]\,. \tag{4.27}\] Putting it all together, Eq. (4.23) in the operator notation gives \[S\ =\ \mathds{1}e^{i\chi}\ \ =\ \begin{cases}\mathds{1}\exp\left(-\frac{i}{4\pi}\frac{q_{\rm in}q_{\rm out}}{\ell^{2}+\ell}\right)&\text{shockwave approximation}\,,\\ \mathds{1}\exp\left(-\frac{i}{4\pi}\frac{q_{\rm in}q_{\rm out}}{\ell^{2}+\ell+1}\right)&\text{leading order approximation}\,.\end{cases} \tag{4.28}\] This result agrees with the expectation from the first quantised shockwave S-matrix in (2.36) up to a curious factor of \(4\pi\), to which we turn our attention in the following section. ### A certain factor of \(4\pi\) As can be seen from comparing (4.21) in the field-theoretical eikonal and (2.36) of the first quantised shockwave calculation, there is a curious discrepancy of a factor of \(4\pi\). This factor arises in the field theory calculation from the \(s\)-wave component of one of the scalars in the three-point vertex in Section 3.3.1. One may suspect that the difference between the two resides in the sources. Should the charge densities in the two frameworks be the same, we expect the S-matrix elements to agree. In what follows, we show that this is indeed true. In the quantum-mechanics case, the \(v\)-component of the current density, in a given partial wave, can be directly read off from Eq. (2.10): \[J_{v}^{\ell m}\ =\ \ -\frac{q_{\rm in}^{\ell m}}{R^{2}}\delta(u)\,.
\tag{4.29}\] On the other hand, on the field-theory side, the \(v\)-component of the current is given by \[j_{v}\ \ =\ -iq_{\rm in}\left(\phi^{*}\partial_{v}\phi-\phi\partial_{v}\phi^{*}\right)\,, \tag{4.30}\] as can be seen from (3.4). Expanding in spherical harmonics, inserting the rescaling \(\phi=\varphi/R\), and demanding one of the scalars to be in the \(s\)-wave, we find \[j_{v}^{\ell m} = -i\frac{q_{\rm in}}{R^{2}}Y_{00}\left(\varphi_{0}^{*}\partial_{v}\varphi_{\ell m}-\varphi_{\ell m}\partial_{v}\varphi_{0}^{*}+\varphi_{\ell m}^{*}\partial_{v}\varphi_{0}-\varphi_{0}\partial_{v}\varphi_{\ell m}^{*}\right)\,. \tag{4.31}\] The main difference between (4.30) and (4.31) is that the current density in quantum field theory is an operator. Thus, a proper comparison warrants computing the expectation value of \(j_{v}^{\ell m}\) in an appropriately defined initial state: \[\left|{\rm in}\right\rangle = \int\frac{{\rm d}p}{2\pi}\Phi\left(p\right)\times\frac{1}{\sqrt{2}}\left(a_{\ell m}^{\dagger}\left(p\right)+a_{0}^{\dagger}\left(p\right)\right)\left|0\right\rangle, \tag{4.32}\] where \(\Phi(p)\) is a normalized test function localized around a specific momentum, say, \(p=p_{1}\). The expression above, (4.32), represents a one-particle state: a superposition of two shells at equal momentum, one of which is in the s-wave. We now recall that the only non-vanishing commutation relations are \[\left[a_{\ell m}\left(p\right),a_{\ell^{\prime}m^{\prime}}^{\dagger}\left(p^{\prime}\right)\right] = 2\pi\delta\left(p-p^{\prime}\right)\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\,.
\tag{4.33}\] To compute the expectation value of \(j_{v}^{\ell m}\), we begin with the first term in (4.31): \[-i\frac{q_{\rm in}}{R^{2}}Y_{00}\langle{\rm in}|\varphi_{0}^{*}\partial_{v}\varphi_{\ell m}|{\rm in}\rangle = -\frac{1}{2R^{2}}Y_{00}\int\frac{{\rm d}p{\rm d}p^{\prime}}{\left(2\pi\right)^{2}}\frac{p^{\prime}}{\sqrt{pp^{\prime}}}\int\frac{{\rm d}k{\rm d}k^{\prime}}{\left(2\pi\right)^{2}}\Phi^{*}\left(k\right)\Phi\left(k^{\prime}\right) \tag{4.34}\] \[\times\left\langle 0|a_{0}\left(k\right)a_{0}^{\dagger}\left(p\right)a_{\ell m}\left(p^{\prime}\right)a_{\ell m}^{\dagger}\left(k^{\prime}\right)|0\right\rangle e^{i(p-p^{\prime})x}\] \[= -\frac{1}{4R^{2}}Y_{00}\int\frac{{\rm d}p{\rm d}p^{\prime}}{\left(2\pi\right)^{2}}\frac{p^{\prime}}{\sqrt{pp^{\prime}}}\int\frac{{\rm d}k{\rm d}k^{\prime}}{\left(2\pi\right)^{2}}\Phi^{*}\left(k\right)\Phi\left(k^{\prime}\right)\] \[\times\left(2\pi\right)^{2}\delta\left(k-p\right)\delta\left(k^{\prime}-p^{\prime}\right)e^{i(p-p^{\prime})x}\,.\] Similar expressions follow for the second and third terms in (4.31) and we may write the complete expectation value as9 Footnote 9: For a state \(\Phi\left(p\right)\) with support sharply localised in momentum at \(p_{1}\), the contribution to the integral from polynomials essentially comes from \(p^{\prime}=p=p_{1}\), allowing us to kill the polynomial pre-factors. The exponentials, however, oscillate faster and need to be kept.
\[\langle{\rm in}|j_{v}^{\ell m}|{\rm in}\rangle = -\frac{1}{2R^{2}}q_{\rm in}Y_{00}\int\frac{{\rm d}p{\rm d}p^{\prime}}{\left(2\pi\right)^{2}}\frac{p^{\prime}}{\sqrt{pp^{\prime}}}\int\frac{{\rm d}k{\rm d}k^{\prime}}{\left(2\pi\right)^{2}}\Phi^{*}\left(k\right)\Phi\left(k^{\prime}\right) \tag{4.35}\] \[\times\left(2\pi\right)^{2}\left[\delta\left(k-p\right)\delta\left(k^{\prime}-p^{\prime}\right)e^{i(p-p^{\prime})x}+\delta\left(k-p^{\prime}\right)\delta\left(p-k^{\prime}\right)e^{-i(p-p^{\prime})x}\right]\] \[= -\frac{q_{\rm in}}{2R^{2}}Y_{00}\int\frac{{\rm d}p{\rm d}p^{\prime}}{\left(2\pi\right)^{2}}\left(\Phi^{*}\left(p\right)\Phi\left(p^{\prime}\right)e^{i(p-p^{\prime})x}+\Phi^{*}\left(p^{\prime}\right)\Phi\left(p\right)e^{-i(p-p^{\prime})x}\right)\] \[= -\frac{q_{\rm in}}{R^{2}}Y_{00}\left|\Phi\left(x\right)\right|^{2}\,,\] where we made use of the inverse Fourier transform \[\Phi\left(x\right) = \int\frac{{\rm d}p}{2\pi}\Phi\left(p\right)e^{-ipx}\,. \tag{4.36}\] Now, interpreting \(|\Phi(x)|^{2}\) as a probability distribution, we assume the particle with momentum \(p_{1}\) is localized at \(u=0\), and find \[\langle J_{v}^{\ell m}\rangle\ =\ \ -\frac{q_{\rm in}}{R^{2}}Y_{00}\delta(u)\ =\ \ -\frac{q_{\rm in}}{\sqrt{4\pi}\,R^{2}}\delta(u)\,. \tag{4.37}\] Comparing (4.29) and (4.37), we see that \(q_{\rm in}^{\ell m}=q_{\rm in}/\sqrt{4\pi}\). This resolves the apparent discrepancy between the field theory and shockwave results, leading to perfect agreement. ## 5 Conclusions In this article, we established an equivalence between the \(1\to 1\) S-matrix in the first quantised formalism arising from electromagnetic shockwaves as classical solutions to Maxwell equations near the black hole horizon and the \(t\)-channel elastic \(2\to 2\) S-matrix in the black hole eikonal limit. In order to do so, we developed a second quantised theory for electromagnetic fluctuations and charged particle scattering near the black hole horizon.
While the \(1\to 1\) result builds on [9], the \(2\to 2\) result extends the formalism first developed in [14; 15]. The formalism developed in this article is naturally suited for incorporating other forces of the Standard Model. It would be very interesting to see if there are non-Abelian shockwaves near the horizon and if new physics emerges. The second quantised theory allows for a calculation of various quantities of physical interest, including corrections to the electromagnetic potential near the horizon in the spirit of [25] and other classical observables [26]. The gravitational eikonal also led to speculation about a certain antipodal correlation on the bifurcation sphere on the horizon [10; 27; 28]. It would be interesting to find an electromagnetic analog of these proposals. An analog of the relation between the shockwave algebra and the soft algebra near null-infinity found in [29] is also an interesting question to explore near the horizon of a black hole. In [30], all symmetries associated with the near-horizon scattering of gravitational radiation have been derived. The techniques developed there can easily be adapted to the electromagnetic radiation that will emerge from the theory developed here. Such electromagnetic radiation is expected to result in a near-horizon memory effect that may have observable consequences in the spectral fluctuations of stellar oscillations as discussed in [30]. ## Acknowledgements We are grateful to Gerard 't Hooft for various helpful conversations over the years. We acknowledge the support of the Netherlands Organisation for Scientific Research (NWO) and the Delta-Institute for Theoretical Physics (D-ITP) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). Nava G. is currently supported by project RTI4001 of the Department of Atomic Energy, Govt. of India.
## Appendix A Maxwell theory in partial waves The quadratic Lagrangian for the Maxwell field written in terms of (3.6) is \[\mathcal{L}_{\gamma}=\frac{1}{2}A_{\mu}\mathcal{O}^{\mu\nu}A_{\nu}=\frac{1}{2}\left(A_{a}\mathcal{O}^{ab}A_{b}+A_{a}\mathcal{O}^{aB}A_{B}+A_{B}\mathcal{O}^{Ba}A_{a}+A_{A}\mathcal{O}^{AB}A_{B}\right),\] (A.1) where the four terms above are given by \[A_{a}\mathcal{O}^{aB}A_{B} =-A_{a}^{+}\tilde{\nabla}^{a}\left(V^{b}A_{b}^{+}\right),\] (A.2) \[A_{B}\mathcal{O}^{Ba}A_{a} =\frac{1}{2}g^{AB}A_{A}^{-}\left(V_{a}\tilde{\nabla}^{a}-V_{a}V^{a}\right)A_{B}^{-},\] (A.3) \[A_{A}\mathcal{O}^{AB}A_{B} =A_{A}^{-}\left(g^{AB}\hat{\nabla}^{C}\hat{\nabla}_{C}-\frac{1}{4}g^{AB}V^{a}V_{a}+g^{AB}\tilde{\square}\right)A_{B}^{-}-\frac{1}{2}A_{A}^{-}g^{AB}\tilde{\nabla}^{a}\left(V_{a}A_{B}^{-}\right),\] (A.4) \[A_{a}\mathcal{O}^{ab}A_{b} =A_{a}^{+}\left(g^{ab}\tilde{\square}-g^{ac}g^{bd}\tilde{\nabla}_{c}\tilde{\nabla}_{d}+\frac{1}{r^{2}}g^{ab}\Delta_{\Omega}+g^{ab}V^{d}\tilde{\nabla}_{d}-\frac{1}{2}g^{ac}V_{c}V^{b}\right)A_{b}^{+}.\] (A.5) Above, the differential operators \(\tilde{\nabla}\) and \(\tilde{\square}\) are the covariant derivative and the d'Alembertian on the light-cone, respectively. Lower-case Latin indices are raised and lowered with the light-cone metric \(g_{ab}\). Moreover, \(\hat{\nabla}\) represents the covariant derivative on the sphere, and upper-case Latin indices are raised and lowered with the metric of the two-sphere \(g_{AB}\). Finally, we denoted the Laplacian on the unit round sphere by \(\Delta_{\Omega}\) and defined the vector potential \(V_{a}\coloneqq 2\partial_{a}\log r\). From the above expressions, we immediately notice the decoupling between even- and odd-parity modes. These expressions are derived by direct computation, following the strategy laid out in the gravitational case in [14; 15]. We would like to employ the partial wave decomposition described in Section 3.
To this end, we first notice that the sum of (A.2) and (A.5) gives \[A_{a}\mathcal{O}^{ab}A_{b}+A_{a}\mathcal{O}^{aB}A_{B}=\sum_{\ell,m;\ell^{\prime},m^{\prime}}Y_{\ell m}Y_{\ell^{\prime}m^{\prime}}A_{\ell m,a}\mathcal{P}^{ab}A_{\ell^{\prime}m^{\prime},b},\] (A.6) where the operator \(\mathcal{P}^{ab}\) is given by \[\mathcal{P}^{ab}\coloneqq g^{ab}\tilde{\square}-\tilde{\nabla}^{a}\tilde{\nabla}^{b}-g^{ab}\frac{\lambda-1}{r^{2}}+g^{ab}V^{c}\tilde{\nabla}_{c}-\frac{1}{2}V^{a}V^{b}-\tilde{\nabla}^{a}V^{b}-V^{b}\tilde{\nabla}^{a}.\] (A.7) To arrive at this expression, we have used that \(\Delta_{\Omega}Y_{\ell m}=\ell(\ell+1)Y_{\ell m}\coloneqq\left(\lambda-1\right)Y_{\ell m}\). Next, the sum of (A.3) and (A.4) gives \[A_{B}\mathcal{O}^{Ba}A_{a}+A_{A}\mathcal{O}^{AB}A_{B}=\sum_{\ell,m;\ell^{\prime},m^{\prime}}g^{AB}\left(\epsilon_{A}{}^{C}\partial_{C}Y_{\ell m}\right)\left(\epsilon_{B}{}^{D}\partial_{D}Y_{\ell^{\prime}m^{\prime}}\right)A_{\ell m,2}\mathcal{P}A_{\ell^{\prime}m^{\prime},2},\] (A.8) where the operator \(\mathcal{P}\) is now \[\mathcal{P}\coloneqq\tilde{\square}+\frac{2-\lambda}{r^{2}}-\frac{1}{2}\tilde{\nabla}^{a}V_{a}-\frac{3}{4}V_{a}V^{a}.\] (A.9) The resulting action for the even mode of the photon is therefore \[S_{\gamma,\text{even}}\coloneqq\frac{1}{2}\sum_{\ell,m}\int\text{d}^{2}xA(r)r^{2}A_{\ell m,a}\mathcal{P}^{ab}A_{\ell m,b},\] (A.10) where we used the usual orthogonality of the scalar spherical harmonics. 
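The orthonormality relation used in this last step can be spot-checked numerically. The following sketch (not part of the derivation) uses scipy's `sph_harm`, whose argument order is `(m, l, azimuthal angle, polar angle)`:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import sph_harm

# Spot-check of int dOmega Y_{lm} Y*_{l'm'} = delta_{ll'} delta_{mm'},
# which collapses the double sum in the even-parity action (A.10).
def overlap(l1, m1, l2, m2):
    integrand = lambda phi, theta: np.real(
        sph_harm(m1, l1, phi, theta) * np.conj(sph_harm(m2, l2, phi, theta))
    ) * np.sin(theta)
    val, _ = dblquad(integrand, 0.0, np.pi, 0.0, 2.0 * np.pi)
    return val
```

Here `overlap(2, 1, 2, 1)` returns 1 to quadrature accuracy, while `overlap(2, 1, 3, 1)` vanishes.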
Similarly, the action for the odd-parity contribution is given by \[S_{\gamma,\text{odd}} =\frac{1}{2}\sum_{\ell,m;\ell^{\prime},m^{\prime}}\int\text{d}\Omega g^{AB}\left(\epsilon_{A}{}^{C}\partial_{C}Y_{\ell m}\right)\left(\epsilon_{B}{}^{D}\partial_{D}Y_{\ell^{\prime}m^{\prime}}\right)\int\text{d}^{2}xA(r)r^{2}A_{\ell m,2}\mathcal{P}A_{\ell^{\prime}m^{\prime},2}\] \[=\frac{1}{2}\sum_{\ell,m}\int\text{d}^{2}xA(r)r^{2}A_{\ell m,2}\mathcal{P}A_{\ell m,2}\,,\] (A.11) where this time we used the orthogonality relation for the vector spherical harmonics \[\int\text{d}\Omega g^{AB}\left(\epsilon_{A}{}^{C}\partial_{C}Y_{\ell m}\right)\left(\epsilon_{B}{}^{D}\partial_{D}Y_{\ell^{\prime}m^{\prime}}\right)=\left(\lambda-1\right)\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\,,\] (A.12) and absorbed the factor \(\lambda-1\) into the operator. Spherical symmetry of the background has ensured that the four-dimensional theory has been reduced to an infinite tower of decoupled two-dimensional theories, one for each partial wave. 
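The vector-harmonic orthogonality (A.12) can likewise be checked numerically: using \(\epsilon_{A}{}^{C}\epsilon_{B}{}^{D}g^{AB}=g^{CD}\), its left-hand side reduces to the gradient inner product \(\int\text{d}\Omega\,g^{CD}\partial_{C}Y\partial_{D}Y^{*}\), which equals \(\ell(\ell+1)=\lambda-1\) on the diagonal. A rough numerical sketch (the \(\theta\)-derivative is taken by finite differences; this is an illustration, not the paper's code):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import sph_harm

# Diagonal case of (A.12): int dOmega g^{CD} d_C Y_{lm} d_D Y*_{lm} = l(l+1).
# d_phi Y = i m Y is analytic; d_theta Y is approximated by central differences.
l, m = 2, 1
h = 1e-5

def integrand(phi, theta):
    dY_dtheta = (sph_harm(m, l, phi, theta + h)
                 - sph_harm(m, l, phi, theta - h)) / (2 * h)
    dY_dphi = 1j * m * sph_harm(m, l, phi, theta)
    grad2 = abs(dY_dtheta)**2 + abs(dY_dphi)**2 / np.sin(theta)**2
    return grad2 * np.sin(theta)

val, _ = dblquad(integrand, 0.0, np.pi, 0.0, 2.0 * np.pi)
```

For \(\ell=2\) this returns \(\ell(\ell+1)=6\), i.e. \(\lambda-1\) in the notation above, up to quadrature and finite-difference error.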
In order to write down the action in a form suitable for our purposes, we will absorb a factor of \(r^{2}\) into the fields; this is achieved by introducing the following new operator acting on the fields: \[\mathcal{D}_{a}(\boldsymbol{\cdot})\coloneqq\tilde{\nabla}_{a}(\boldsymbol{ \cdot})+\frac{1}{2}V_{a}(\boldsymbol{\cdot})=\frac{1}{r}\tilde{\nabla}_{a}(r \boldsymbol{\cdot})\,.\] (A.13) The operators \(\mathcal{P}_{ab}\) and \(\mathcal{P}\) can then be written in terms of \(\mathcal{D}_{a}\) as \[\mathcal{P}^{ab} =g^{ab}\left[\mathcal{D}_{a}\mathcal{D}^{a}-F_{c}^{c}-\frac{ \lambda-1}{r^{2}}\right]-\mathcal{D}^{a}\mathcal{D}^{b}-F^{ab}-V^{[b}\mathcal{ D}^{a]},\] (A.14) \[\mathcal{P} =\left(\lambda-1\right)\mathcal{D}_{a}\mathcal{D}^{a}+\frac{ \left(\lambda-1\right)\left(2-\lambda\right)}{r^{2}}-2\left(\lambda-1\right)F_ {a}^{a}-\left(\lambda-1\right)V_{a}\mathcal{D}^{a},\] (A.15) where we have defined the symmetric tensor \(F_{ab}\) as \[F_{ab}:=\frac{1}{2}\mathcal{D}_{(a}V_{b)}=\frac{1}{r}\tilde{\nabla}_{a}\tilde{ \nabla}_{b}r=\frac{1}{2}\tilde{\nabla}_{(a}V_{b)}+\frac{1}{4}V_{a}V_{b}.\] (A.16) From the definition of \(\mathcal{D}_{a}\), it is straightforward to derive the identities \[\mathcal{D}^{a}\mathcal{D}_{a}(\boldsymbol{\cdot}) =\frac{1}{r}\tilde{\square}(r\boldsymbol{\cdot}),\] (A.17) \[\mathcal{D}^{a}\mathcal{D}^{b}(\boldsymbol{\cdot}) =\frac{1}{r}\tilde{\nabla}^{a}\tilde{\nabla}^{b}(r\boldsymbol{ \cdot}),\] (A.18) \[V^{[b}\mathcal{D}^{a]}(\boldsymbol{\cdot}) =\frac{1}{r}V^{[b}\tilde{\nabla}^{a]}(r\boldsymbol{\cdot}),\] (A.19) which allow us to rewrite the two integrands in the even and odd actions (ignoring the function \(A(r)\) for the moment) as10 Footnote 10: For simplicity, we suppressed the partial wave indices. 
\[r^{2}A_{a}\mathcal{P}^{ab}A_{b} \xrightarrow{\text{(A.17)--(A.19)}}rA_{a}\left[g^{ab}\left(\tilde{\Box}-F_{c}^{c}-\frac{\lambda-1}{r^{2}}\right)-\tilde{\nabla}^{a}\tilde{\nabla}^{b}-F^{ab}-V^{[b}\tilde{\nabla}^{a]}\right]rA_{b},\] (A.20) \[r^{2}A_{2}\mathcal{P}A_{2} \xrightarrow{\text{(A.17)--(A.19)}}rA_{2}\left[\left(\lambda-1\right)\left(\tilde{\Box}-2F_{a}^{a}-V_{a}\tilde{\nabla}^{a}\right)+\frac{\left(\lambda-1\right)\left(2-\lambda\right)}{r^{2}}\right]rA_{2}.\] (A.21) We may now safely make the following field redefinitions: \[\tilde{A}_{a}\coloneqq rA_{a},\quad\mathcal{A}\coloneqq rA_{2}.\] (A.22) The complete quadratic action for the photon can then be written as \[S_{\gamma}=S_{\gamma,\text{even}}+S_{\gamma,\text{odd}}=\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{-\tilde{g}}\tilde{A}^{a}\tilde{\Delta}_{ab}^{-1}\tilde{A}^{b}+\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{-\tilde{g}}\tilde{A}\tilde{\Delta}^{-1}\tilde{A},\] (A.23) where \(\sqrt{-\det\left(g_{ab}\right)}\coloneqq\sqrt{-\tilde{g}}=A(r)\), while the operators \(\tilde{\Delta}_{ab}^{-1}\) and \(\tilde{\Delta}^{-1}\) are given by \[\tilde{\Delta}_{ab}^{-1} \coloneqq g_{ab}\left(\tilde{\Box}-F_{c}^{c}-\frac{\lambda-1}{r^{2}}\right)-\tilde{\nabla}_{a}\tilde{\nabla}_{b}-F_{ab}-V_{[b}\tilde{\nabla}_{a]},\] (A.24) \[\tilde{\Delta}^{-1} \coloneqq\left(\lambda-1\right)\tilde{\Box}+\frac{\left(\lambda-1\right)\left(2-\lambda\right)}{r^{2}}-2\left(\lambda-1\right)F_{a}^{a}-\left(\lambda-1\right)V_{a}\tilde{\nabla}^{a}.\] (A.25) To find the photon propagator, we observe that the metric in the effective two-dimensional theory is conformally flat, i.e., \(\tilde{g}_{ab}=A(r)\eta_{ab}\), where \(\eta_{ab}\) is the two-dimensional Minkowski metric in light-cone coordinates with off-diagonal elements being \(-1\). This allows us to rewrite the theory in flat space with curvature effects traded for potentials. 
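The operator identity (A.17) underlying these rewritings is a pure product-rule statement and can be verified symbolically, for instance with sympy in flat light-cone coordinates where \(\eta^{uv}=\eta^{vu}=-1\) (a sketch, not part of the derivation):

```python
import sympy as sp

# Symbolic check of identity (A.17): eta^{ab} D_a D_b f = (1/r) box(r f),
# with D_a(.) = partial_a(.) + (partial_a r / r)(.)  (i.e. D_a = nabla_a + V_a/2)
# for a generic r(u, v), in flat light-cone coordinates.
u, v = sp.symbols('u v')
r = sp.Function('r')(u, v)
f = sp.Function('f')(u, v)

D = lambda a, g: sp.diff(g, a) + sp.diff(r, a) / r * g
box = lambda g: -2 * sp.diff(g, u, v)           # eta^{uv} = eta^{vu} = -1

lhs = -(D(u, D(v, f)) + D(v, D(u, f)))          # eta^{ab} D_a D_b f
rhs = box(r * f) / r
assert sp.simplify(lhs - rhs) == 0
```

Only the Leibniz rule enters, so the check is insensitive to the signature convention.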
As an illustration, consider the action for the odd mode: \[S_{\gamma,\text{odd}} =\frac{\lambda-1}{2}\int\mathrm{d}^{2}xA(r)\mathcal{A}\left[\tilde{g}^{ab}\tilde{\nabla}_{a}\tilde{\nabla}_{b}+\frac{2-\lambda}{r^{2}}-2\tilde{g}^{ab}F_{ab}-\tilde{g}^{ab}V_{a}\tilde{\nabla}_{b}\right]\mathcal{A}\] \[=\frac{\lambda-1}{2}\int\mathrm{d}^{2}xA(r)\mathcal{A}\left[\frac{\eta^{ab}}{A(r)}\tilde{\nabla}_{a}\tilde{\nabla}_{b}+\frac{2-\lambda}{r^{2}}-\frac{2\eta^{ab}}{A(r)}F_{ab}-\frac{\eta^{ab}}{A(r)}V_{a}\tilde{\nabla}_{b}\right]\mathcal{A}.\] (A.26) Pulling the function \(A\left(r\right)\) through the quadratic operator, the odd action becomes: \[S_{\gamma,\text{odd}}=\frac{\lambda-1}{2}\int\mathrm{d}^{2}x\mathcal{A}\left[\partial^{2}+A(r)\frac{2-\lambda}{r^{2}}-2F_{a}^{a}-V^{b}\partial_{b}\right]\mathcal{A},\] (A.27) where we defined \(\partial^{2}\coloneqq\eta^{ab}\partial_{a}\partial_{b}\). Furthermore, we need to consistently redefine the symmetric tensor \(F_{ab}\) in terms of partial derivatives. 
From its definition (A.16), we have \[F_{ab}=\frac{1}{4}\tilde{\nabla}_{a}V_{b}+\frac{1}{4}\tilde{\nabla}_{b}V_{a}+\frac{1}{4}V_{a}V_{b}=\frac{1}{4}\left(\partial_{a}V_{b}-\Gamma_{ab}^{e}V_{e}\right)+\frac{1}{4}\left(\partial_{b}V_{a}-\Gamma_{ba}^{e}V_{e}\right)+\frac{1}{4}V_{a}V_{b}.\] (A.28) We now express the Christoffel symbols of the form \(\Gamma^{e}_{ab}\) as \[\Gamma^{e}_{ab}=2\delta^{e}_{(a}U_{b)}-\tilde{g}_{ab}U^{e}=2\delta^{e}_{(a}U_{b)}-\tilde{g}_{ab}\tilde{g}^{de}U_{d}=2\delta^{e}_{(a}U_{b)}-\eta_{ab}\eta^{de}U_{d}=2\delta^{e}_{(a}U_{b)}-\eta_{ab}U^{e},\] (A.29) where we introduced a new potential, \(U_{a}\), defined as \[U_{a}\coloneqq\frac{1}{2A(r)}\partial_{a}A(r).\] (A.30) Therefore, we have that \[F_{ab} =\frac{1}{2}\partial_{(a}V_{b)}-\frac{1}{2}\left(\delta^{e}_{a}U_{b}+\delta^{e}_{b}U_{a}-\eta_{ab}U^{e}\right)V_{e}+\frac{1}{4}V_{a}V_{b}\] \[=\frac{1}{2}\partial_{(a}V_{b)}-\frac{1}{2}U_{b}V_{a}-\frac{1}{2}U_{a}V_{b}+\frac{1}{2}\eta_{ab}U^{e}V_{e}+\frac{1}{4}V_{a}V_{b}\] \[=\frac{1}{2}\partial_{(a}V_{b)}-U_{(a}V_{b)}+\frac{1}{2}\eta_{ab}U^{e}V_{e}+\frac{1}{4}V_{a}V_{b}.\] (A.31) The last equality in the above expression is our new definition of \(F_{ab}\), after the rescaling. 
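The conformally flat form of the Christoffel symbols used in (A.29) can be checked directly from \(\tilde{g}_{ab}=A\eta_{ab}\); a small sympy sketch (again an illustration, not the paper's code):

```python
import sympy as sp

# For g_ab = A(x) eta_ab (light-cone eta, off-diagonal -1), the Christoffel
# symbols reduce to Gamma^e_ab = delta^e_a U_b + delta^e_b U_a - eta_ab U^e
# with U_a = (1/2A) partial_a A, as in (A.29)-(A.30).
x = sp.symbols('x0 x1')
A = sp.Function('A')(*x)
eta = sp.Matrix([[0, -1], [-1, 0]])
g = A * eta
ginv = g.inv()
U = [sp.diff(A, xi) / (2 * A) for xi in x]

def christoffel(e, a, b):
    return sum(ginv[e, c] * (sp.diff(g[c, b], x[a]) + sp.diff(g[c, a], x[b])
                             - sp.diff(g[a, b], x[c])) for c in range(2)) / 2

for e in range(2):
    for a in range(2):
        for b in range(2):
            U_up = sum(eta[e, c] * U[c] for c in range(2))  # eta^{ec} = eta_{ec}
            target = (sp.KroneckerDelta(e, a) * U[b]
                      + sp.KroneckerDelta(e, b) * U[a] - eta[a, b] * U_up)
            assert sp.simplify(christoffel(e, a, b) - target) == 0
```

Note that the light-cone \(\eta\) is its own inverse, which is why raising the index on \(U\) uses the same matrix.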
An analogous procedure yields the following action for the even photon: \[S_{\gamma,\text{even}}=\frac{1}{2}\int\mathrm{d}^{2}xA(r)\tilde{A}^{a}\left[g _{ab}\left(\tilde{\Box}-F^{c}_{c}-\frac{\lambda-1}{r^{2}}\right)-\tilde{\nabla }_{a}\tilde{\nabla}_{b}-F_{ab}-V_{[b}\tilde{\nabla}_{a]}\right]\tilde{A}^{b}.\] (A.32) Redefining \(\tilde{A}_{a}\coloneqq\sqrt{A(r)}\mathcal{A}_{a}\) and using that \(\tilde{g}_{ab}=A(r)\eta_{ab}\), we obtain \[S_{\gamma,\text{even}}=\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{A(r) }\mathcal{A}^{a}\left[\eta_{ab}\left(\eta^{cd}\tilde{\nabla}_{c}\tilde{\nabla }_{d}-\eta^{cd}F_{cd}-A(r)\frac{\lambda-1}{r^{2}}\right)\right.\\ \left.-\tilde{\nabla}_{a}\tilde{\nabla}_{b}-F_{ab}-\frac{1}{2}V_{ b}\tilde{\nabla}_{a}+\frac{1}{2}V_{a}\tilde{\nabla}_{b}\right]\frac{1}{ \sqrt{A(r)}}\mathcal{A}^{b}.\] (A.33) Evaluating the action of the operator in square brackets on \(\mathcal{A}^{b}/\sqrt{A(r)}\), we find \[S_{\gamma,\text{even}}=\frac{1}{2}\int\mathrm{d}^{2}x\mathcal{A} ^{a}\left[\eta_{ab}\left(\partial^{2}-U_{c}U^{c}+\frac{1}{2}V_{c}U^{c}-\frac{1 }{2}\partial_{c}V^{c}-\frac{1}{4}V_{c}V^{c}-A(r)\frac{\lambda-1}{r^{2}}\right) \right.\\ \left.+\,2U_{[b}\partial_{a]}+U_{a}U_{b}-\partial_{a}U_{b}- \partial_{a}\partial_{b}-V_{[b}\partial_{a]}-F_{ab}\right]\mathcal{A}^{b}.\] (A.34) To sum up, the complete quadratic photon action \(S_{\gamma}\) is now given by \[S_{\gamma}=S_{\gamma,\text{even}}+S_{\gamma,\text{odd}}=\frac{1}{2}\int \mathrm{d}^{2}x\mathcal{A}^{a}\Delta^{-1}_{ab}\mathcal{A}^{b}+\frac{1}{2}\int \mathrm{d}^{2}x\mathcal{A}\Delta^{-1}\mathcal{A},\] (A.35) where the operators after the rescaling have been defined as \[\begin{split}\Delta^{-1}_{ab}\coloneqq\eta_{ab}\left(\partial^{2 }-U_{c}U^{c}+\frac{1}{2}V_{c}U^{c}-\frac{1}{2}\partial_{c}V^{c}-\frac{1}{4}V_{ c}V^{c}-A(r)\frac{\lambda-1}{r^{2}}\right)\\ +2U_{[b}\partial_{a]}+U_{a}U_{b}-\partial_{a}U_{b}-\partial_{a} \partial_{b}-V_{[b}\partial_{a]}-F_{ab},\end{split}\] (A.36) 
\[\Delta^{-1}\coloneqq\left(\lambda-1\right)\partial^{2}+A(r)\frac{\left(\lambda-1 \right)\left(2-\lambda\right)}{r^{2}}-2\left(\lambda-1\right)F_{a}^{a}-\left( \lambda-1\right)V^{b}\partial_{b}.\] (A.37) From their definitions, the potentials appearing in these quadratic operators satisfy: \[V_{a} =\frac{A}{rR}x_{a},\] (A.38) \[U_{a} =-\frac{A}{4rR}\left(1+\frac{r}{R}\right)x_{a},\] (A.39) \[\partial_{a}V_{b} =\frac{A}{rR}\eta_{ab}-\frac{A^{2}}{2R^{2}r^{2}}\left(2+\frac{r} {R}\right)x_{a}x_{b},\] (A.40) \[\partial_{a}U_{b} =-\frac{A}{4rR}\left(1+\frac{r}{R}\right)\eta_{ab}+\frac{A^{2}}{8 R^{2}r^{2}}\left(2+2\frac{r}{R}+\frac{r^{2}}{R^{2}}\right)x_{a}x_{b},\] (A.41) \[F_{ab} =\frac{AR}{2r^{3}}\eta_{ab},\] (A.42) where we recall that the Schwarzschild background is specified by \[A(r)=\frac{R}{r}e^{1-\frac{r}{R}},\quad UV=2R^{2}\left(1-\frac{r}{R}\right)e ^{\frac{r}{R}-1}.\] (A.43) By inserting the above expressions in Eqs. (A.36) and (A.37), we obtain \[\Delta_{ab}^{-1} =\eta_{ab}\left\{\eta^{cd}\partial_{c}\partial_{d}-\frac{A\left( r\right)^{2}}{16r^{2}R^{2}}\left[\left(1+\frac{r}{R}\right)^{2}+2\left(1+ \frac{r}{R}\right)-8\left(2+\frac{r}{R}\right)+4\right]x_{a}x^{a}-\frac{A\left( r\right)}{rR}\right.\] \[\qquad\qquad\left.-\frac{\left(\lambda-1\right)A\left(r\right)}{ r^{2}}+\frac{A\left(r\right)}{4rR}\left(1+\frac{r}{R}\right)-\frac{A\left(r \right)R}{2r^{3}}\right\}-\frac{A\left(r\right)}{4rR}\left(3+\frac{r}{R}\right) \left(x_{b}\partial_{a}-x_{a}\partial_{b}\right)\] \[\qquad\qquad\left.+\frac{A\left(r\right)^{2}}{16r^{2}R^{2}} \left[\left(1+\frac{r}{R}\right)^{2}-2\left(2+2\frac{r}{R}+\frac{r^{2}}{R^{2} }\right)\right]x_{a}x_{b}-\partial_{a}\partial_{b}\,,\] (A.44a) \[\Delta^{-1} =\left(\lambda-1\right)\eta^{ab}\partial_{a}\partial_{b}-\frac{A \left(r\right)\left(\lambda-1\right)\left(\lambda-2\right)}{r^{2}}-\frac{2 \left(\lambda-1\right)A\left(r\right)R}{r^{3}}\] \[\qquad\qquad\qquad-\frac{A\left(r\right)\left(\lambda-1\right)}{ 
rR}x^{a}\partial_{a}\,,\] (A.44b) which is the result quoted in the main text in (3.14). ## Appendix B One-loop diagrams with the four vertex The contributions of the four vertex at tree-level are naturally sub-leading in comparison to those of the three-vertex. This is due to the simple fact that the vertex does not contain momenta. Therefore, in the limit of large centre-of-mass energies, the three-vertex naturally dominates. As it turns out, this is also true at loop level, as we will demonstrate in this appendix. In what follows, we consider one-loop diagrams of the type drawn in Fig. 6. Using the Feynman rules presented in Section 4.1, we write the amplitude as follows11: Footnote 11: Note that momentum conservation implies that \(k-p_{4}+p_{2}-k+p_{3}-p_{1}=0\). \[i\mathcal{M}^{\rm even}_{\rm seagull}=\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\left[\frac{-i}{k^{2}+\tilde{m}^{2}-i\epsilon}\left(\eta^{bc}+\frac{R^{2}k^{b}k^{c}}{\lambda-1}\right)\left(-\frac{2iq^{2}}{4\pi R^{2}}\eta_{ab}\right)\right.\\ \times\frac{-i}{k^{\prime 2}+\tilde{m}^{2}-i\epsilon}\left(\eta^{da}+\frac{R^{2}k^{\prime d}k^{\prime a}}{\lambda-1}\right)\left(-\frac{2iq^{2}}{4\pi R^{2}}\eta_{cd}\right)\right],\] (B.1) with \(\tilde{m}^{2}\coloneqq\frac{\lambda-1}{R^{2}}\), \(k^{\prime}\coloneqq k-\tilde{p}\), \(\tilde{p}\coloneqq p_{4}-p_{2}\). It is easy to see that Eq. (B.1) can be split into four contributions. 
We can thus write \[i\mathcal{M}^{\rm even}_{\rm seagull}=I_{1}+I_{2}+I_{3}+I_{4},\] (B.2) where the following quantities have been defined: \[I_{1} \coloneqq\frac{q^{4}}{2\pi^{2}R^{4}}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\frac{1}{\left(k^{2}+\tilde{m}^{2}-i\epsilon\right)\left(k^{\prime^{2}}+\tilde{m}^{2}-i\epsilon\right)},\] (B.3) \[I_{2} \coloneqq\frac{q^{4}}{4\pi^{2}R^{2}}\frac{1}{\lambda-1}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\frac{k^{\prime 2}}{\left(k^{2}+\tilde{m}^{2}-i\epsilon\right)\left(k^{\prime^{2}}+\tilde{m}^{2}-i\epsilon\right)},\] (B.4) \[I_{3} \coloneqq\frac{q^{4}}{4\pi^{2}R^{2}}\frac{1}{\lambda-1}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\frac{k^{2}}{\left(k^{2}+\tilde{m}^{2}-i\epsilon\right)\left(k^{\prime^{2}}+\tilde{m}^{2}-i\epsilon\right)},\] (B.5) \[I_{4} \coloneqq\frac{q^{4}}{4\pi^{2}}\frac{1}{\left(\lambda-1\right)^{2}}\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\frac{\eta_{ab}\eta_{cd}k^{b}k^{c}k^{\prime d}k^{\prime a}}{\left(k^{2}+\tilde{m}^{2}-i\epsilon\right)\left(k^{\prime 2}+\tilde{m}^{2}-i\epsilon\right)}.\] (B.6) In what follows, we will work in coordinates where the near-horizon two-dimensional flat metric is diagonal, instead of the light-cone variants we have used so far. These two sets of coordinates are related by \[u=\frac{1}{\sqrt{2}}\left(x^{0}+x^{1}\right)\,,\,\,\,v=\frac{1}{\sqrt{2}}\left(x^{0}-x^{1}\right)\,.\] (B.7) Figure 6: One-loop diagram arising from the four vertex involving the even mode. 
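The coordinate change (B.7) indeed maps the light-cone line element \(-2\,\mathrm{d}u\,\mathrm{d}v\) (off-diagonal metric entries \(-1\)) to the diagonal form \(-(\mathrm{d}x^{0})^{2}+(\mathrm{d}x^{1})^{2}\), as a one-line sympy check confirms:

```python
import sympy as sp

# Check of (B.7): u = (x0 + x1)/sqrt(2), v = (x0 - x1)/sqrt(2) turns
# ds^2 = -2 du dv into the diagonal two-dimensional Minkowski form.
x0, x1 = sp.symbols('x0 x1')
dx0, dx1 = sp.symbols('dx0 dx1')
u = (x0 + x1) / sp.sqrt(2)
v = (x0 - x1) / sp.sqrt(2)
du = sp.diff(u, x0) * dx0 + sp.diff(u, x1) * dx1
dv = sp.diff(v, x0) * dx0 + sp.diff(v, x1) * dx1
ds2 = sp.expand(-2 * du * dv)
assert sp.simplify(ds2 - (-dx0**2 + dx1**2)) == 0
```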
We will employ dimensional regularization and start by considering (B.3), temporarily suppressing the \(i\epsilon\)'s for notational convenience, where we shift to \(d\) dimensions: \[\int\frac{\mathrm{d}^{2}k}{\left(2\pi\right)^{2}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)} \rightarrow \int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)},\,\,\,d=2+\varepsilon.\] (B.8) Using the familiar Feynman trick \[\frac{1}{AB}=\int_{0}^{1}\mathrm{d}x\frac{1}{\left[A+\left(B-A\right)x\right]^{2}}\,,\] (B.9) the above integral can be written as \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left[(k-x\tilde{p})^{2}+\tilde{p}^{2}x(1-x)+\tilde{m}^{2}\right]^{2}}\,.\] (B.10) Shifting the \(k\) integral above by \(k\to k+x\tilde{p}\) and performing a Wick rotation (we substitute \(k^{0}=ik_{E}^{0}\)) leads to \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=i\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{d}k_{E}}{\left(2\pi\right)^{d}}\frac{1}{\left(k_{E}^{2}+\Delta\right)^{2}}\,,\] (B.11) where we defined \(\Delta\coloneqq\tilde{p}^{2}x(1-x)+\tilde{m}^{2}\). 
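The Wick-rotated momentum integral in (B.11) is already convergent in \(d=2\), where it equals \(1/(4\pi\Delta)\); this gives a quick numerical cross-check of the dimensional-regularization formulas used next (a sketch, not part of the derivation):

```python
import numpy as np
from scipy.integrate import quad

# Convergent d = 2 case: int d^2 k_E/(2 pi)^2 1/(k_E^2 + Delta)^2 = 1/(4 pi Delta).
Delta = 0.7                                       # arbitrary positive value
radial, _ = quad(lambda k: k / (k**2 + Delta)**2, 0, np.inf)
numeric = radial * 2 * np.pi / (2 * np.pi)**2     # angular factor / (2 pi)^2
exact = 1 / (4 * np.pi * Delta)
```

The two numbers agree to quadrature accuracy, matching the master formula with \(\alpha=2\), \(d=2\).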
Momentum integrals of the kind above can be expressed in terms of gamma functions \[\int\frac{\mathrm{d}^{d}k_{E}}{\left(2\pi\right)^{d}}\frac{1}{\left(k_{E}^{2} +\Delta\right)^{\alpha}}=\frac{1}{\left(4\pi\right)^{\frac{d}{2}}}\frac{ \Gamma\left(\alpha-\frac{d}{2}\right)}{\Gamma(\alpha)}\Delta^{\frac{d}{2}- \alpha}\,.\] (B.12) In our case, with \(\alpha=2\) and \(d=2+\varepsilon\), we have \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{ m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=\frac{i}{4\pi}(4\pi)^{- \frac{\varepsilon}{2}}\Gamma\left(1-\frac{\varepsilon}{2}\right)M^{\varepsilon -2}\int_{0}^{1}\mathrm{d}x\left(\frac{\Delta}{M^{2}}\right)^{\frac{ \varepsilon}{2}-1}\,,\] (B.13) where we introduced an auxiliary mass parameter, \(M\). This allows us to consider small-\(\varepsilon\) expansions of the following dimensionless quantities: \[(4\pi)^{-\frac{\varepsilon}{2}} =1-\frac{\varepsilon}{2}\ln(4\pi)+\ldots,\] (B.14) \[\Gamma\left(1-\frac{\varepsilon}{2}\right) =-\frac{\varepsilon}{2}\Gamma\left(-\frac{\varepsilon}{2}\right) =-\frac{\varepsilon}{2}\left(-\frac{2}{\varepsilon}-\gamma_{E}+\ldots\right),\] (B.15) \[\left(\frac{\Delta}{M^{2}}\right)^{\frac{\varepsilon}{2}-1} =\frac{M^{2}}{\Delta}+\frac{\varepsilon}{2}\frac{M^{2}\ln\left( \Delta/M^{2}\right)}{\Delta}+\ldots,\] (B.16) where \(\gamma_{E}\approx 0.5772\) is the Euler-Mascheroni constant. Using these expressions we obtain12 Footnote 12: Expanding before performing the integral is allowed in this case since each term in the expansion, when integrated, converges. 
\[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=\frac{iM^{\varepsilon}}{4\pi}\int_{0}^{1}\mathrm{d}x\frac{1}{\Delta}=\frac{iM^{\varepsilon}}{4\pi}\int_{0}^{1}\mathrm{d}x\frac{1}{\tilde{p}^{2}x(1-x)+\tilde{m}^{2}}.\] (B.17) In principle, various cases must be considered, depending on the values \(\tilde{p}^{2}\) can assume; however, since we are only interested in the limit \(\tilde{p}\to 0\) (negligible momentum transfer), we directly expand the integrand and consider the first term of such an expansion13. We have: Footnote 13: Again, this is allowed because each term in the expansion, when integrated, converges. \[\frac{1}{\tilde{p}^{2}x(1-x)+\tilde{m}^{2}}=\frac{1}{\tilde{m}^{2}}+\frac{\tilde{p}^{2}(x-1)x}{\tilde{m}^{4}}+\frac{\tilde{p}^{4}(x-1)^{2}x^{2}}{\tilde{m}^{6}}+\mathcal{O}(\tilde{p}^{6})\,.\] (B.18) Therefore, in this specific limit the result of the above integral is \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=\frac{iM^{\varepsilon}}{4\pi\tilde{m}^{2}}+\mathcal{O}(\varepsilon).\] (B.19) Before proceeding, let us make another observation about dimensions. When calculating Feynman diagrams in \(2+\varepsilon\) spacetime dimensions, the coupling constants will carry the dimension that is appropriate for the theory in \(2+\varepsilon\) dimensions. For the scalar quantum electrodynamics built here, the dimension of the effective coupling constant turns out to be equal to \(1-\varepsilon/2\) in mass units. On the other hand, in the 2-dimensional case we have that \(\left[\mu q\right]=1\) (of course, integrating the sphere out does not change the dimensions of the quantity \(q\)). 
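The small-\(\tilde{p}\) treatment of the Feynman-parameter integral can be checked numerically against the expansion (B.18), integrated term by term over \(x\) (a sketch with arbitrary numerical values for \(\tilde{m}^{2}\) and \(\tilde{p}^{2}\)):

```python
import numpy as np
from scipy.integrate import quad

# int_0^1 dx / (p2 x(1-x) + m2)  vs  its small-p2 expansion:
# the x-integral of the second term of (B.18) gives -p2/(6 m2^2).
m2, p2 = 1.3, 1e-3
exact, _ = quad(lambda x: 1.0 / (p2 * x * (1 - x) + m2), 0, 1)
series = 1 / m2 - p2 / (6 * m2**2)
```

The residual difference is of order \(\tilde{p}^{4}\), consistent with the truncation in (B.19).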
Therefore, in order to ensure that dimensional counting remains consistent throughout the calculations, we make again use of the auxiliary parameter \(M\) and write the effective coupling constant as \(M^{-\varepsilon/2}\mu q\). Putting it all together (taking into account the various prefactors), we now write down the final expression for \(I_{1}\) in \(2+\varepsilon\) spacetime dimensions, in the limit \(\tilde{p}\to 0\): \[I_{1}^{d}\big{|}_{\tilde{p}\to 0}=\frac{iM^{-\varepsilon}q^{4}}{8\pi^{3}R^{2}}\frac{1}{\lambda-1}+\mathcal{O}(\varepsilon).\] (B.20) Let us now consider the second contribution, namely \(I_{2}\). Ignoring the prefactors for a moment, shifting to \(d\) dimensions, and writing \(k^{\prime 2}=k^{\prime 2}+\tilde{m}^{2}-\tilde{m}^{2}\), leads to \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{k^{\prime 2}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime^{2}}+\tilde{m}^{2}\right)}=\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{k^{2}+\tilde{m}^{2}}-\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{\tilde{m}^{2}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}.\] (B.21) The first term can be quite easily computed by performing a Wick rotation to use (B.12) with \(\alpha=1\), where the role of \(\Delta\) is now played by \(\tilde{m}^{2}\). We have: \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{k^{2}+\tilde{m}^{2}}=\frac{i}{\left(4\pi\right)^{\frac{d}{2}}}\Gamma\left(1-\frac{d}{2}\right)\left(\tilde{m}^{2}\right)^{\frac{d}{2}-1}.\] (B.22) We now substitute \(d=2+\varepsilon\) and expand in powers of \(\varepsilon\), keeping track of possible poles at \(\varepsilon=0\). In terms of \(\varepsilon\), Eq. 
(B.22) then becomes \[\int\frac{\mathrm{d}^{d}k}{\left(2\pi\right)^{d}}\frac{1}{k^{2}+\tilde{m}^{2}}=\frac{i}{4\pi}\left(4\pi\right)^{-\frac{\varepsilon}{2}}\Gamma\left(-\frac{\varepsilon}{2}\right)\left(\tilde{m}^{2}\right)^{\frac{\varepsilon}{2}}.\] (B.23) Introducing \(M\) as before and rearranging, we get \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}+\tilde{m}^{2}}=\frac{iM^{\varepsilon}}{4\pi}\left(4\pi\right)^{-\frac{\varepsilon}{2}}\Gamma\left(-\frac{\varepsilon}{2}\right)\left(\frac{\tilde{m}^{2}}{M^{2}}\right)^{\frac{\varepsilon}{2}}. \tag{B.24}\] We can now safely expand, obtaining \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{1}{k^{2}+\tilde{m}^{2}}=-\frac{iM^{\varepsilon}}{2\pi}\left[\frac{1}{\varepsilon}+\frac{1}{2}\gamma_{E}+\frac{1}{2}\ln\left(\frac{1}{4\pi}\frac{\tilde{m}^{2}}{M^{2}}\right)+\mathcal{O}(\varepsilon)\right], \tag{B.25}\] which is dimensionally consistent. Concerning the second term in (B.21), it has already been computed. Putting it all together, we obtain the final result for \(I_{2}\): \[I_{2}^{d}\big{|}_{\tilde{p}\to 0}=-\frac{iM^{-\varepsilon}q^{4}}{8\pi^{3}R^{2}}\frac{1}{\lambda-1}\left[\frac{1}{\varepsilon}+\frac{1}{2}\left(\gamma_{E}+1\right)+\frac{1}{2}\ln\left(\frac{\lambda-1}{4\pi R^{2}M^{2}}\right)+\mathcal{O}(\varepsilon)\right]. \tag{B.26}\] Now, looking at the third contribution to the amplitude, Eq. (B.5), we notice that it is equal to Eq. (B.4) upon shifting the momentum \(k\), \(k\to k+\tilde{p}\). Therefore, we move on to the fourth contribution, \(I_{4}\). Let us first consider the numerator of the integrand. 
By recalling how \(k^{\prime}\) is defined, it can be split as \[\eta_{ab}\eta_{cd}k^{b}k^{c}k^{\prime d}k^{\prime a} = k_{a}k^{\prime a}k_{c}k^{\prime c}\ =\ (k\cdot k^{\prime})^{2} \tag{B.27}\] \[= k^{2}k^{\prime 2}-k^{\prime 2}(\tilde{p}\cdot k)+k^{2}(\tilde{p}\cdot k^{\prime})-(\tilde{p}\cdot k)(\tilde{p}\cdot k^{\prime}).\] Thus, shifting to \(2+\varepsilon\) spacetime dimensions, the integral in (B.6) can be written as \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{2}k^{\prime 2}-k^{\prime 2}(\tilde{p}\cdot k)+k^{2}(\tilde{p}\cdot k^{\prime})-(\tilde{p}\cdot k)(\tilde{p}\cdot k^{\prime})}{(k^{2}+\tilde{m}^{2})\left(k^{\prime 2}+\tilde{m}^{2}\right)}. \tag{B.28}\] As we can see, \(I_{4}\) has been split into four contributions. The first gives \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{2}k^{\prime 2}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}=\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{\prime 2}}{k^{\prime 2}+\tilde{m}^{2}}-\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{m}^{2}k^{\prime 2}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}. \tag{B.29}\] The second piece of the right-hand side of the above equation has already been calculated before. The first one can be easily computed by Wick rotating and making use of the identity \[\int\frac{\mathrm{d}^{d}k_{E}}{(2\pi)^{d}}\frac{k_{E}^{2}}{\left(k_{E}^{2}+\Delta\right)^{\alpha}}=\frac{d}{2}\frac{1}{(4\pi)^{\frac{d}{2}}}\frac{\Gamma\left(\alpha-\frac{d}{2}-1\right)}{\Gamma(\alpha)}\Delta^{\frac{d}{2}-\alpha+1}\,. \tag{B.30}\] In terms of \(\varepsilon\), by setting \(\alpha=1\) and shifting \(k\to k+\tilde{p}\), we can write \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{\prime 2}}{k^{\prime 2}+\tilde{m}^{2}}=\frac{i\tilde{m}^{2}}{4\pi}(1+\varepsilon)(4\pi)^{-\frac{\varepsilon}{2}}\Gamma\left(-1-\frac{\varepsilon}{2}\right)\left(\tilde{m}^{2}\right)^{\frac{\varepsilon}{2}}. 
\tag{B.31}\] Moreover, inserting \(M\) and expanding in powers of \(\varepsilon\), we end up with \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{\prime 2}}{k^{\prime 2}+\tilde{m}^{2}}=\frac{iM^{\varepsilon}\tilde{m}^{2}}{2\pi}\left[\frac{1}{\varepsilon}+\frac{1}{2}\left(\gamma_{E}+1\right)+\frac{1}{2}\ln\left(\frac{1}{4\pi}\frac{\tilde{m}^{2}}{M^{2}}\right)+\mathcal{O}(\varepsilon)\right]. \tag{B.32}\] Therefore, Eq. (B.29) results in \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{2}k^{\prime 2}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}=\frac{iM^{\varepsilon}\tilde{m}^{2}}{\pi}\left[\frac{1}{\varepsilon}+\frac{1}{2}\left(\gamma_{E}+1\right)+\frac{1}{2}\ln\left(\frac{\tilde{m}^{2}}{4\pi M^{2}}\right)+\mathcal{O}(\varepsilon)\right].\] (B.33) Let us now consider the second piece coming from Eq. (B.28). By writing \(k^{\prime 2}=k^{\prime 2}+\tilde{m}^{2}-\tilde{m}^{2}\), we can write this contribution as \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{\prime 2}(\tilde{p}\cdot k)}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}=\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k}{k^{2}+\tilde{m}^{2}}-\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{m}^{2}\tilde{p}\cdot k}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}.\] (B.34) The first integral vanishes since the integrand is antisymmetric under \(k\to-k\). 
Concerning the second one, upon shifting \(k\to k+x\tilde{p}\) and combining the denominator by using Feynman's trick once again, we have \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)} =\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k}{\left[\left(k-x\tilde{p}\right)^{2}+\tilde{p}^{2}x(1-x)+\tilde{m}^{2}\right]^{2}}\] (B.35) \[=\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot\left(k+x\tilde{p}\right)}{\left[k^{2}+\tilde{p}^{2}x(1-x)+\tilde{m}^{2}\right]^{2}}.\] (B.36) The first term, the one proportional to \(\tilde{p}\cdot k\), vanishes. The remaining one can be written as \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2}\right)}=\int_{0}^{1}\mathrm{d}x\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{x\tilde{p}^{2}}{\left[k^{2}+\tilde{p}^{2}x(1-x)+\tilde{m}^{2}\right]^{2}}.\] (B.37) We can now proceed in the same way as before, see below Eq. (B.10). However, we immediately notice that the first term of the expansion (B.18) would be multiplied by \(\tilde{p}^{2}\), and so we can safely conclude that, in this specific limit, the above integral vanishes. The next contribution in (B.28) can also be shown to vanish. 
We have: \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{k^{2}(\tilde{p}\cdot k ^{\prime})}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2} \right)} =\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k^{ \prime}}{k^{\prime 2}+\tilde{m}^{2}}-\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}} \frac{\tilde{m}^{2}\tilde{p}\cdot k^{\prime}}{\left(k^{2}+\tilde{m}^{2}\right) \left(k^{\prime 2}+\tilde{m}^{2}\right)}\] \[=\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{p}\cdot k}{k^ {2}+\tilde{m}^{2}}-\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{\tilde{m}^{2} \tilde{p}\cdot(k-\tilde{p})}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+ \tilde{m}^{2}\right)}.\] (B.38) The first term is zero because the integrand is antisymmetric under \(k\to-k\). In the second term we recognise two expressions we already proved to be zero in the limit of interest. We now finally consider the last contribution in Eq. (B.28), which can be written as follows: \[\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{(\tilde{p}\cdot k)( \tilde{p}\cdot k^{\prime})}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+ \tilde{m}^{2}\right)}=\tilde{p}_{a}\tilde{p}_{b}\int\frac{\mathrm{d}^{d}k}{(2 \pi)^{d}}\frac{k^{a}(k-\tilde{p})^{b}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^ {\prime 2}+\tilde{m}^{2}\right)}.\] (B.39) The second term in the numerator of the above expression vanishes since, excluding the factor of \(\tilde{p}^{2}\), it is exactly the same integral as in Eq. (B.37). Concerning the first piece, we have \[\tilde{p}_{a}\tilde{p}_{b}\int\frac{\mathrm{d}^{d}k}{(2\pi)^{d}} \frac{k^{a}k^{b}}{\left(k^{2}+\tilde{m}^{2}\right)\left(k^{\prime 2}+\tilde{m}^{2} \right)}=\tilde{p}_{a}\tilde{p}_{b}\int_{0}^{1}dx\int\frac{\mathrm{d}^{d}k}{(2 \pi)^{d}}\frac{(k+x\tilde{p})^{a}(k+x\tilde{p})^{b}}{\left(k^{2}+\Delta\right)^{ 2}},\] (B.40) where the definition of \(\Delta\) is the same as the one below Eq. (B.11). 
Splitting the numerator we immediately notice that the above expression gives rise to integrals that vanish as long as \(\tilde{p}\to 0\). Thus, the only non-vanishing contribution in Eq. (B.28) is the first one. Putting it all together14, we now write down the final result for \(I_{4}\): Footnote 14: Essentially we are considering the prefactors in (B.6) as well as the fact that the effective coupling constant has to be written as \(M^{-\varepsilon/2}\mu q\). \[I_{4}^{d}\big{|}_{\tilde{p}\to 0}=\frac{iM^{-\varepsilon}q^{4}}{4\pi^{3}R^{2}} \frac{1}{\lambda-1}\left[\frac{1}{\varepsilon}+\frac{1}{2}\left(\gamma_{E}+1 \right)+\frac{1}{2}\ln\left(\frac{\lambda-1}{4\pi R^{2}M^{2}}\right)+\mathcal{ O}(\varepsilon)\right].\] (B.41) Thus, summing over all the contributions, \(i\mathcal{M}^{\rm even}_{\rm seagull}\) in the limit \(\varepsilon\to 0\) results in \[i\mathcal{M}^{\rm even}_{\rm seagull}=\frac{iq^{4}}{8\pi^{3}R^{2}}\frac{1}{ \lambda-1}.\] (B.42) The second one-loop diagram with four-vertices arises from the odd mode of the photon and is drawn in Fig. 7. This diagram evaluates to \[i\mathcal{M}^{\rm odd}_{\rm seagull} =\int\frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\frac{1}{\lambda-1}\frac{- i}{k^{2}+\tilde{m}^{2}}\left(-\frac{2iq^{2}}{4\pi R^{2}}\right)\frac{1}{ \lambda-1}\frac{-i}{k^{\prime 2}+\tilde{m}^{2}}\left(-\frac{2iq^{2}}{4\pi R^{2}}\right)\] \[=\frac{q^{4}}{4\pi^{2}R^{4}}\frac{1}{\left(\lambda-1\right)^{2}} \int\frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\frac{1}{k^{2}+\tilde{m}^{2}}\frac{1}{k^ {\prime 2}+\tilde{m}^{2}}\] (B.43) \[=\frac{iq^{4}}{16\pi^{3}R^{2}}\frac{1}{\left(\lambda-1\right)^{3} }\,,\] (B.44) where we made use of the result obtained in (B.19), in the limit \(\varepsilon\to 0\). As expected, we see from these results that the four vertex contributions do not scale with the centre of mass energy of the scattering process. 
They may, nevertheless, be seen as corrections to the eikonal amplitudes (that yield the classical electromagnetic shockwave) that are calculable in this second quantised formalism. Figure 7: One-loop four-vertex diagram involving the odd mode of the photon.
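The last step of the odd seagull computation, inserting the \(\varepsilon\to 0\) loop integral \(i/(4\pi\tilde{m}^{2})\) with \(\tilde{m}^{2}=(\lambda-1)/R^{2}\) into the prefactor of (B.43), can be verified symbolically (a sympy sketch):

```python
import sympy as sp

# Check of (B.43) -> (B.44): prefactor q^4/(4 pi^2 R^4 (lam-1)^2) times the
# loop integral i/(4 pi m2), with m2 = (lam-1)/R^2, gives the quoted result.
q, R, lam = sp.symbols('q R lambda', positive=True)
m2 = (lam - 1) / R**2
pref = q**4 / (4 * sp.pi**2 * R**4 * (lam - 1)**2)
result = sp.simplify(pref * sp.I / (4 * sp.pi * m2))
target = sp.I * q**4 / (16 * sp.pi**3 * R**2 * (lam - 1)**3)
assert sp.simplify(result - target) == 0
```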
2309.05949
Detection of the extended $γ$-ray emission around supernova remnant DA 530 with Fermi-LAT
We report the extended GeV $\gamma$-ray emission around the high Galactic latitude supernova remnant (SNR) DA 530 with the PASS 8 data recorded by the Fermi Large Area Telescope (Fermi-LAT). The $\gamma$-ray spectrum in the energy range of 100 MeV - 1 TeV follows a power law model with an index of 2.23. The much more extended $\gamma$-ray emission than the radio shell of DA 530 and the spatial coincidence with the molecular cloud suggest that the $\gamma$-ray emission could originate from the hadronic process, in which high energy protons are accelerated in and escape from the shock of DA 530. With a steady-state injection model of protons, the $\gamma$-ray spectrum can be well fitted with the typical Galactic value of the diffusion coefficient and a low energy content of the total escaped protons.
Yuliang Xin, Xiaolei Guo
2023-09-12T04:01:08Z
http://arxiv.org/abs/2309.05949v1
# Detection of the extended \(\gamma\)-ray emission around supernova remnant DA 530 with _Fermi_-LAT ###### Abstract We report the extended GeV \(\gamma\)-ray emission around the high Galactic latitude supernova remnant (SNR) DA 530 with the PASS 8 data recorded by the Fermi Large Area Telescope (_Fermi_-LAT). The \(\gamma\)-ray spectrum in the energy range of 100 MeV - 1 TeV follows a power law model with an index of 2.23. The much more extended \(\gamma\)-ray emission than the radio shell of DA 530 and the spatial coincidence with the molecular cloud suggest that the \(\gamma\)-ray emission could originate from the hadronic process, in which high energy protons are accelerated in and escape from the shock of DA 530. With a steady-state injection model of protons, the \(\gamma\)-ray spectrum can be well fitted with the typical Galactic value of the diffusion coefficient and a low energy content of the total escaped protons. gamma rays: general - gamma rays: ISM - ISM: individual objects (DA 530) - radiation mechanisms: non-thermal Yuliang Xin, Xiaolei Guo ## 1 Introduction Supernova remnants (SNRs) are widely believed to be the dominant accelerators of Galactic cosmic rays (CRs). CRs can be accelerated by the high speed shock of an SNR through the mechanism of diffusive shock acceleration (DSA; Blandford & Eichler, 1987). The highest-energy CRs are expected to escape from the shock due to the absence of the self-generated magnetic turbulence (Ptuskin & Zirakashvili, 2005; Fujita et al., 2011). If there are some molecular clouds (MCs) in the vicinity of SNRs, these MCs could be illuminated by the escaped protons and produce intense \(\gamma\)-ray emission through the hadronic process, i.e., the \(\gamma\)-ray emission is believed to come from the decay of neutral pions produced in inelastic collisions between escaped protons and the dense gas in MCs. Several sources have been detected from this perspective, e.g.
W28 (Aharonian et al., 2008; Cui et al., 2018), W44 (Uchiyama et al., 2012), SNR G15.4+0.1 (Li et al., 2023), SNR G45.7-0.4 (Zhang et al., 2021), etc. Especially in the case of W28, the GeV and TeV \(\gamma\)-ray emission are detected in three distinct regions, which are spatially coinciding with MCs offset from the shock of SNR (Aharonian et al., 2008; Cui et al., 2018). Searching for the \(\gamma\)-ray emission from such associations could be helpful to explore the propagation properties of high energy particles escaped from SNRs and afterwards travelled through the interstellar medium (ISM), which further provides a key ingredient to establishing the connection between SNRs and the origin of Galactic CRs. DA 530, also known as G93.3+6.9, is a high Galactic latitude SNR (Roger & Costain, 1976). And based on the radio observations, DA 530 is classified to be a shell-type SNR with a bilateral morphology (Landecker et al., 1999). The radio radiation of DA 530 has extremely high polarization percentage, reaching more than 50%, which could be interpreted by the well-ordered magnetic field across the whole remnant (Haslam et al., 1980; Lalitha et al., 1984). The X-ray emission of DA 530 first detected by _ROSAT_ is extremely faint with a centrally brightened morphology (Landecker et al., 1999). By re-analysing the _Chandra_ data, Jiang et al. (2007) found a small-scale hard X-ray feature near the centre of the remnant, which is argued to be a pulsar wind nebula (PWN) associated with SNR. And the age of DA 530 was suggested to be \(\sim\) 5000 yrs based on the canonical blast wave model from Sedov (1959). However, the subsequent _XMM_-Newton observations detected a large extended source (XMM J205314.4+551528) in the radio bright southeast (SE) rim of DA 530 (Bocchino et al., 2008), and the authors explained it to be the PWN associated with DA 530. 
The recent _Suzaku_ data analysis of the SE rim of DA 530 confirmed the results of _XMM_-Newton, which suggested that the PWN scenario cannot be ruled out (Deniz et al., 2022). Nonetheless, searches for pulsars or compact central sources associated with DA 530 in the radio and X-ray bands have yielded null results (Lorimer et al. 1998; Landecker et al., 1999; Kaplan et al., 2004; Straal and van Leeuwen, 2019). The distance of DA 530 remains uncertain; it was first derived to be 6.9 \(\pm\) 2.2 kpc using the empirical surface brightness-diameter (\(\Sigma\) - _D_) relation of SNRs (Roger and Costain, 1976). Then an updated distance of 2 - 5 kpc was given by Haslam et al. (1980). Based on the neutral hydrogen (HI) observations with the Dominion Radio Astrophysical Observatory Synthesis Telescope (DRAO-ST), Landecker et al. (1999) reported that DA 530 lies within a shell of HI, possibly created by an earlier stellar wind of the progenitor, and its distance is estimated to be 1.0 - 3.5 kpc. Subsequently, Foster and Routledge (2003) derived a distance of 2.2 \(\pm\) 0.5 kpc using the updated method for the absorption column density. Using the data from DRAO-ST and the National Radio Astronomy Observatory Very Large Array (NRAO-VLA), Booth et al. (2022) observed the absorption by intervening HI of the polarized emission from DA 530, and concluded that the distance of DA 530 is \(4.4^{+0.4}_{-0.2}\) kpc. Using the Seoul Radio Astronomy Observatory (SRAO) 6-m telescope CO observations, Jeong et al. (2012) detected CO emission at -6 to +5 km s\({}^{-1}\) at the northeast boundary of DA 530, which shows that there is a large diffuse molecular cloud. In this work, we report the detection of the extended \(\gamma\)-ray emission around SNR DA 530, with the PASS 8 data recorded by _Fermi_-LAT. The data analysis method and results are shown in Section 2, including the spatial and spectral analyses.
The observations of molecular cloud around DA 530 is presented in Section 3. And the discussion of the potential origin of the \(\gamma\)-ray emission is shown in Section 4, followed by the summary in Section 5. ## 2 _Fermi_-LAT data analysis ### Data Reduction _Fermi_-LAT is a pair-conversion \(\gamma\)-ray telescope that is sensitive to photon energies greater than 20 MeV. The LAT has continuously monitored the sky since 2008 and scans the entire sky every 3 hr (Atwood et al., 2009). Note that the latest released Pass 8 data set has significant improvements in comparison with the former ones, including an enhanced effective area, especially in the low energy range and better point-spread function (PSF)1. And in the following analysis, we select the latest Pass 8 version of _Fermi_-LAT data recorded from August 4, 2008 (Mission Elapsed Time 239557418) to August 4, 2022 (Mission Elapsed Time 681264005) with "Source" event class (evclass = 128 & evtype = 3) to analyse the \(\gamma\)-ray emission around DA 530. The region of interest (ROI) is a \(20^{\circ}\times 20^{\circ}\) square region centered at the position of DA 530 (R.A. = \(313^{\circ}.14\), decl. = \(55^{\circ}.36\); Roger and Costain, 1976). And in order to reduce the contamination from Earth Limb, the events with zenith angle larger than \(90^{\circ}\) are excluded. We adopt the events with energy range of 100 MeV - 1 TeV for the spectral analysis. While for the spatial analysis, the events in the energy range of 1 GeV - 1 TeV is selected considering the impact of PSF of _Fermi_-LAT. The data are analyzed using the standard _Fermi ScienceTools_2 with the instrumental response function (IRF) of "P8R3_SOURCE_V3". The binned likelihood analysis method with _gtlike_ is used to fit the data. To model the Galactic and isotropic diffuse background emissions, gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt 3 are adopted. 
All sources in the incremental version of the fourth _Fermi_-LAT source catalog (4FGL-DR3; Abdollahi et al., 2020, 2022) within a radius of \(20^{\circ}\) from the ROI center, together with the two components of the diffuse background, are included in the source model, which is generated by the user-contributed software make4FGLxml.py4. During the likelihood analysis, the normalizations and the spectral parameters of all sources within \(7^{\circ}\) of the ROI center, together with the normalizations of the two components of the diffuse background, are set to be free. Footnote 1: [https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm](https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm) Footnote 2: [http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/](http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/) Footnote 3: [http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html) ### Spatial Analysis In the region of DA 530, a \(\gamma\)-ray point source (4FGL J2051.1+5539) is listed in the 4FGL-DR3 catalog, which has no identified counterpart (Abdollahi et al., 2022). First, we create a \(3^{\circ}.0\)\(\times\)\(3^{\circ}.0\) Test Statistic (TS) map by subtracting the emission from the sources and backgrounds (except 4FGL J2051.1+5539) in the best-fit model with _gttsmap_, which is shown in the left panel of Figure 1. At the center of the ROI, a significant \(\gamma\)-ray excess (labelled as SrcT) is found around SNR DA 530. For the spatial template of SrcT, we first treat it as a point source, and the best-fit position is given to be R.A. = \(312^{\circ}.983\pm 0^{\circ}.029\), decl. = \(55^{\circ}.666\pm 0^{\circ}.029\) by adopting Fermipy, a PYTHON package that automates analyses with _Fermi ScienceTools_ (Wood et al., 2017). The TS value of SrcT as a point source is 33.46 at the new coordinates.
Then we carried out a spatial extension test for the \(\gamma\)-ray emission of SrcT using an uniform disk and a two-dimensional (2D) Gaussian template with Fermipy. And the best-fit central positions and extensions of the spatial templates are listed in Table 1, together with the corresponding TS value of SrcT and the maximum likelihood values. We compared the overall maximum likelihood of the extended template (\(\mathcal{L}_{\rm ext}\)) with that of the point source model (\(\mathcal{L}_{\rm pt}\)), and the significance of the extended model is defined to be TS\({}_{\rm ext}\) = 2(\(\ln\mathcal{L}_{\rm ext}-\ln\mathcal{L}_{\rm pt}\)). Lande et al. (2012) suggests that a source can be assessed to be significantly extended if TS\({}_{\rm ext}\)\(>\) 16. The different maximum likelihood values of the different templates show that an uniform disk can best fit the \(\gamma\)-ray emission from SrcT. And the central position and the 68% containment radius of the uniform disk are fitted to be R.A. = 313\({}^{\circ}\).611, decl. = 55\({}^{\circ}\).344 and R\({}_{\rm 68}\) = 0\({}^{\circ}\).527, respectively. The value of TS\({}_{\rm ext}\) between the uniform disk model and point source model is calculated to be 26.2, corresponding to \(\sim\)5.1\(\sigma\) extension with one additional degree of freedom (dof). With the uniform disk template, the TS value of SrcT is fitted to be 58.75 in the energy range of 1 GeV - 1 TeV, corresponding to a significance level of \(\sim\)6.7 \(\sigma\) with five degrees of freedom. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Spatial Model & R.A., decl. 
& TS Value & Degrees of Freedom & -log(Likelihood) \\ \hline Point Source & 312\({}^{\circ}\).983 \(\pm\) 0\({}^{\circ}\).029, 55\({}^{\circ}\).666 \(\pm\) 0\({}^{\circ}\).029 & 33.46 & 4 & -182258.99 \\ \hline Uniform disk & 313\({}^{\circ}\).611 \(\pm\) 0\({}^{\circ}\).043, 55\({}^{\circ}\).344 \(\pm\) 0\({}^{\circ}\).049, & & & \\ & R\({}_{\rm 68}\) = 0\({}^{\circ}\).527\({}^{+0\circ}_{-0\circ}\)29 & 58.75 & 5 & -182272.08 \\ \hline 2D-Gaussian & 313\({}^{\circ}\).499 \(\pm\) 0\({}^{\circ}\).100, 55\({}^{\circ}\).415 \(\pm\) 0\({}^{\circ}\).102, & & & \\ & R\({}_{\rm 68}\) = 0\({}^{\circ}\).567\({}^{+0\circ}_{-0\circ}\)080 & 53.93 & 5 & -182269.65 \\ \hline \end{tabular} \end{table} Table 1: Spatial Analysis for SrcT in the energy range of 1 GeV - 1 TeV Figure 1: Left: 3\({}^{\circ}\).0 \(\times\) 3\({}^{\circ}\).0 TSmap in the energy range of 1 GeV - 1 TeV. The red cross shows the position of 4FGL J2051.1+5539 in 4FGL-DR3 catalog. And the green solid circle marks the best-fit 68% containment radius of the uniform disk for the spatial template of SrcT. The radio image of SNR DA 530 at 1.4 GHz is shown as the cyan contours. Right: SED of SrcT in the energy range of 100 MeV - 1 TeV with the gray histogram shown as the TS value for each energy bin. The red error bars show the statistical errors and the sums of the statistical and systematic errors calculated by \(\sigma=\sqrt{\sigma_{\rm stat}^{2}+\sigma_{\rm syst}^{2}}\) are marked by the blue error bars. The arrows indicate the 95% upper limits for the energy bin with TS value of SrcT smaller than 5.0. The black solid and dashed lines show the global best-fit power law spectrum and its 1\(\sigma\) statistic error in the energy range of 100 MeV - 1 TeV. ### Spectral Analysis To investigate the \(\gamma\)-ray spectrum of SrcT, the global likelihood analysis is performed in the energy range from 100 MeV to 1 TeV with the spatial template of an uniform disk. 
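The extension test above is a likelihood-ratio comparison; the quoted TS\({}_{\rm ext}\) can be reproduced directly from the Table 1 values (the table lists \(-\log({\rm Likelihood})\), so the signs below undo that convention):

```python
# TS_ext = 2 (ln L_ext - ln L_pt), following Lande et al. (2012),
# using the log-likelihood values from Table 1.
lnL_pt = 182258.99     # point source
lnL_disk = 182272.08   # uniform disk
lnL_gauss = 182269.65  # 2D Gaussian

ts_ext_disk = 2.0 * (lnL_disk - lnL_pt)
ts_ext_gauss = 2.0 * (lnL_gauss - lnL_pt)
print(round(ts_ext_disk, 1))   # 26.2, i.e. > 16, so significantly extended
```

The disk template gives both the larger likelihood and the larger TS\({}_{\rm ext}\), which is why it is preferred over the 2D Gaussian.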
The spectrum of SrcT can be well described by a power law (PL) model. The spectral index and the integral photon flux in the energy range of 100 MeV - 1 TeV are fitted to be 2.23 \(\pm\) 0.09 and \((1.25\pm 0.23)\times 10^{-8}\) photon cm\({}^{-2}\) s\({}^{-1}\), respectively. And we also adopted a log-parabola model (LPb) to test the spectral curvature of SrcT. The variation of the maximum likelihood values between PL and LPb models is only TS\({}_{\rm curve}=2(\ln\mathcal{L}_{\rm LPb}-\ln\mathcal{L}_{\rm PL})=2.1\), which suggests no significant curvature for the \(\gamma\)-ray spectrum of SrcT. With the PL model for SrcT, we divide the data into ten logarithmically equal energy bins from 100 MeV to 1 TeV and repeat the likelihood fitting to give the spectral energy distribution (SED) of SrcT. For the likelihood analysis, only the spectral normalizations of sources within 7\({}^{\circ}\) from SrcT are left free, together with the normalizations of the two components of the diffuse background. And the spectral indices of these sources are fixed to be the best-fit values in the global likelihood analysis. For the energy bin with a TS value of SrcT smaller than 5.0, an upper limit with a 95% confidence level is calculated. For the SED, we also estimated the systematic errors due to the Galactic diffuse emission by changing the best-fit normalization of the Galactic diffuse model artificially by \(\pm\)6% (Abdo et al., 2010). And the sums of the statistical and systematic errors are calculated by \(\sigma=\sqrt{\sigma_{\rm stat}^{2}+\sigma_{\rm syst}^{2}}\) for each energy bin. The SED of SrcT is shown in the right panel of Figure 1, which is also consistent with the global fitting of the power law model. ## 3 Co Observation With the CO observations from SRAO, Jeong et al. (2012) claimed a large diffuse molecular cloud surrounding the northeast boundary of DA 530. 
However, the size of the \(\gamma\)-ray emission detected here is much larger than that of their CO observation. Therefore, we search for molecular cloud components over a larger extent using the data from the CfA 1.2m millimeter-wave telescope to understand the origin of the \(\gamma\)-ray excess (Dame et al., 2001). And we found that in the \(\gamma\)-ray emission region, the velocity distribution of the CO content shows a clear excess in the velocity range of -6 \(\sim\) +5 km s\({}^{-1}\) as shown in Figure 2, which is consistent with the range in Jeong et al. (2012). Adopting the standard Galactic rotation model (Reid et al., 2016, 2019), the velocity interval corresponds to a kinetic distance of \(\sim\)1.6 kpc. And considering the systematic uncertainties due to the rotation curve, the derived distance is close to the value of 2.2 \(\pm\) 0.5 kpc derived by Foster & Routledge (2003) using the updated method for the absorption column density, which is also much lower than the result of 4.4 kpc in Booth et al. (2022). We have then estimated the mass content of the molecular material in the \(\gamma\)-ray emission region with \[M=\mu m_{\rm H}d^{2}\Omega_{\rm px}X_{\rm CO}{\sum_{\rm px}}W_{\rm CO} \tag{1}\] Here the mean molecular weight \(\mu\) is adopted to be 2.8 assuming a relative helium abundance of 25%. \(m_{\rm H}\) is the mass of the hydrogen nucleon, and the distance is adopted to be d = 2.2 kpc (Foster & Routledge, 2003). \(\Omega_{\rm px}\) is the solid angle subtended by each pixel in the map shown in Figure 2. And the value of the conversion factor of \(X_{\rm CO}=2\times 10^{20}\) cm\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\) is adopted here (Bolatto et al., 2013). \(\sum_{\rm px}\)W\({}_{\rm CO}\) is calculated by summing the map content of each pixel in the desired sky region and velocity range.
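Equation (1) is straightforward to evaluate once the map sum is known. A minimal sketch of it, with the quoted constants (\(\mu=2.8\), \(d=2.2\) kpc, \(X_{\rm CO}=2\times 10^{20}\) cm\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\)); the pixel size and the summed \(W_{\rm CO}\) values below are illustrative placeholders, not numbers quoted in the text:

```python
# Sketch of Eq. (1): M = mu * m_H * d^2 * Omega_px * X_CO * sum(W_CO).
import math

M_H = 1.6726e-24   # hydrogen nucleon mass [g]
MSUN = 1.989e33    # solar mass [g]
KPC = 3.086e21     # kiloparsec [cm]

def cloud_mass_msun(w_co_sum, d_kpc=2.2, mu=2.8,
                    x_co=2.0e20, px_deg=0.125):
    """Molecular mass [M_sun] from the summed map content [K km/s].

    px_deg is an assumed pixel size for the CO map (not quoted in the text)."""
    omega_px = math.radians(px_deg) ** 2      # pixel solid angle [sr]
    d_cm = d_kpc * KPC
    return mu * M_H * d_cm**2 * omega_px * x_co * w_co_sum / MSUN

# Eq. (1) is linear in W_CO and quadratic in distance -- the d_{2.2}^2
# scaling attached to the quoted mass estimate:
m_ref = cloud_mass_msun(1.0e3)
assert abs(cloud_mass_msun(2.0e3) / m_ref - 2.0) < 1e-9
assert abs(cloud_mass_msun(1.0e3, d_kpc=4.4) / m_ref - 4.0) < 1e-9
```

The \(d^{2}\) dependence is why the adopted distance (2.2 kpc versus 4.4 kpc) changes the mass estimate by a factor of four.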
For the region of 0.527\({}^{\circ}\) sky integration radius, the total mass of molecular clouds in the region within the 68% containment radius of the extended \(\gamma\)-ray source is estimated to be \(\sim 2.6\times 10^{5}d_{2.2}^{2}M_{\odot}\). And the corresponding average gas number density is about n\({}_{\rm gas}\) = 300 cm\({}^{-3}\) by assuming a spherical geometry of the gas distribution. ## 4 Discussion The _Fermi_-LAT data analysis above shows an extended \(\gamma\)-ray source SrcT, which is located around the high Galactic latitude SNR DA 530. And the GeV \(\gamma\)-ray spectrum of SrcT can be described by a power law model with an index of 2.23 \(\pm\) 0.09. To determine the origin of the \(\gamma\)-ray emission, we also searched the CO observations, and SrcT is spatially consistent with the molecular clouds, which are located outside the radio shell of DA 530. The spatial coincidence between the \(\gamma\)-ray emission and the molecular clouds suggests that the \(\gamma\)-ray emission from SrcT could be produced by the hadronic \(\pi^{0}\) decay originating from the interaction between the molecular gas and high energy protons, which are accelerated in and escaped from the shock of DA 530. Such a scenario is similar to the origin of the \(\gamma\)-ray emission around SNR G15.4+1.0 (Li et al., 2023) and the SNR associated with PSR J0837-2454 (Zhang & Xin, 2023). To explain the \(\gamma\)-ray emission from SrcT, we assume the steady-state injection of protons into a uniform emission region. And the injection time is adopted to be the age of DA 530 with T = 5000 yrs from Jiang et al. (2007). The spectrum of injected protons is adopted to be a power law with an exponential cutoff: \[Q(E)=Q_{0}E^{-\Gamma}\exp\left(-\frac{E}{E_{\rm p,cut}}\right).
\tag{2}\] Here the spectral index is suggested to be \(\Gamma=2.0\), which is consistent with the radio spectral index of DA 530 of \(\alpha\) = 0.45 \(\pm\) 0.04 by assuming the same spectral index for electrons and protons accelerated by the shock of the SNR. The cutoff energy of protons cannot be well constrained and is first adopted to be the energy of the cosmic ray knee with \(E_{\rm p,cut}\) = 3 PeV. And the total energy of the injected protons is assumed to be W\({}_{\rm p,inj}\) = \(\eta\)E\({}_{\rm SN}\), where \(\eta\) is the fraction of the kinetic energy of DA 530 converted into the escaped proton energy, and the kinetic energy of DA 530, E\({}_{\rm SN}\), is adopted to be a typical value of 10\({}^{51}\) erg. By integrating Eq. 16 in Thoudam & Horandel (2012) over the variable radius within a sphere of radius R, the escaped proton spectrum within the \(\gamma\)-ray emission region can be derived as (Aharonian & Atoyan, 1996; Thoudam & Horandel, 2012): \[N_{p}(E,T)=\frac{Q(E)}{4\pi D(E)T}\int_{0}^{R}4\pi rdr\ {\rm erfc}\left[ \frac{r}{\sqrt{4D(E)T}}\right] \tag{3}\] Here, the diffusion coefficient of protons is set to be spatially constant and energy-dependent with \(D(E)=\chi D_{0}(E/E_{0})^{\delta}\), where \(D_{0}=3\times 10^{28}\) cm\({}^{2}\) s\({}^{-1}\) at \(E_{0}=10\) GeV and \(\chi=1.0\) corresponds to the typical value of the Galactic diffusion coefficient (Blasi, 2013). The value of \(\delta\) is adopted to be 1/3 or 1/2, which corresponds to the Kolmogorov turbulence or Kraichnan turbulence for the diffusion coefficient (Ptuskin et al., 2006). And with the distance of 2.2 kpc and the 68% containment radius of 0\({}^{\circ}\).527 for SrcT, the physical radius of the \(\gamma\)-ray emission region is estimated to be R = 20.2 pc.
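Equation (3) can be evaluated by direct quadrature. A numerical sketch with the parameters just quoted (\(\Gamma=2\), \(\delta=1/2\), \(\chi=1\), \(T=5000\) yr, \(R=20.2\) pc; the normalisation of \(Q(E)\) is arbitrary here), which also recovers the high-energy scaling \(N_{p}(E)\propto E^{-(\Gamma+\delta)}\) once the diffusion radius \(\sqrt{4D(E)T}\) exceeds \(R\):

```python
# Eq. (3) by midpoint integration over r in [0, R], cgs units.
import math

PC, YR = 3.086e18, 3.156e7          # cm, s
GAMMA, DELTA = 2.0, 0.5             # injection index; Kraichnan diffusion
E_CUT = 3.0e6                       # 3 PeV in GeV
D0, E0 = 3.0e28, 10.0               # cm^2/s at 10 GeV (chi = 1)
T = 5000.0 * YR                     # adopted SNR age
R = 2.2e3 * PC * math.tan(math.radians(0.527))  # ~20.2 pc emission radius

def n_p(e_gev, steps=2000):
    """Escaped-proton spectrum of Eq. (3), arbitrary normalisation."""
    d = D0 * (e_gev / E0) ** DELTA
    lam = math.sqrt(4.0 * d * T)    # diffusion radius
    dr = R / steps
    integral = sum(4.0 * math.pi * (i + 0.5) * dr
                   * math.erfc((i + 0.5) * dr / lam) * dr
                   for i in range(steps))
    q = e_gev ** -GAMMA * math.exp(-e_gev / E_CUT)
    return q / (4.0 * math.pi * d * T) * integral

# local log-log slope between 1 and 10 TeV, where sqrt(4 D T) >> R
slope = math.log(n_p(1.0e4) / n_p(1.0e3)) / math.log(10.0)
assert abs(slope + (GAMMA + DELTA)) < 0.1
```

The slope check reflects that, for \(\sqrt{4D(E)T}\gg R\), the erfc factor tends to unity and \(N_{p}\propto Q(E)/D(E)\propto E^{-(\Gamma+\delta)}\).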
For an injected spectrum given by \(Q(E)\propto E^{-\Gamma}\) and \(D(E)\propto E^{\delta}\), the spectrum of escaped protons, \(N_{p}(E)\), approximately follows \(N_{p}(E)\propto E^{-(\Gamma+\delta)}\) at high energies, where the diffusion radius of protons, defined as \(\sqrt{4D(E)T}\), is larger than the size of the emission region R. The typical values of \(\eta\), \(\chi\) and \(\delta\) are selected to calculate the different spectra of escaped protons in the \(\gamma\)-ray emission region, and the corresponding \(\gamma\)-ray fluxes are calculated using the \(naima\) package by adopting the ambient gas density of n\({}_{\rm gas}\) = 300 cm\({}^{-3}\) (Zabalza, 2015). Figure 2: Integrated CO emission intensity (K km s\({}^{-1}\)) around DA 530 in the velocity range of -6 \(\sim\) +5 km s\({}^{-1}\). The 68% containment radius of the extended \(\gamma\)-ray emission of SrcT is marked by the yellow solid circle. The green and cyan contours show the \(\gamma\)-ray morphology of SrcT and the radio image of SNR DA 530 at 1.4 GHz, as shown in the left panel of Figure 1. Figure 3 shows the resulting hadronic \(\gamma\)-ray spectra with the different parameters. The spectra with the typical value of the Galactic diffusion coefficient, \(\chi\) = 1.0, could explain the observational GeV data. And the total energy of injected protons above 1 GeV is fitted to be about 2.0 \(\times\) 10\({}^{48}\) erg, which is lower than the estimated value by assuming the total acceleration efficiency of 5% - 10% and the typical kinetic energy of 10\({}^{51}\) erg for SNR (Blasi, 2013). Such a result could be attributed to the possible low kinetic energy of DA 530 with \(\sim\)10\({}^{49}\) erg (Jiang et al., 2007).
And it also could mean that the bulk of accelerated particles are still trapped inside the remnant, while the non-significant \(\gamma\)-ray emission within the shell of DA 530 shown in the left panel of Figure 1 could be explained by the low gas density inside the SNR, which is also consistent with the evolution environment of a stellar wind bubble for DA 530 (Landecker et al., 1999). The total energies of escaped protons above 1 GeV in the \(\gamma\)-ray emission region are calculated to be 1.1 and 1.3 \(\times\) 10\({}^{47}\) (\(n_{\rm gas}\)/300 cm\({}^{-3}\))\({}^{-1}\) erg for \(\delta\) = 1/2 and \(\delta\) = 1/3, respectively. To increase the total energy of injected protons, one needs to increase the diffusion coefficient so that the \(\gamma\)-ray spectrum is not changed significantly. By fixing the value of W\({}_{\rm p,inj}\) = 10\({}^{50}\) erg, \(\eta\) = 0.1, the diffusion coefficient needs to be about two orders of magnitude higher than the typical Galactic value. And the estimated total energy of escaped protons in the \(\gamma\)-ray emission region is about 1.2\(\times\)10\({}^{47}\) (\(n_{\rm gas}\)/300 cm\({}^{-3}\))\({}^{-1}\) erg. In addition, we also decreased the cutoff energy of protons to examine different escaping models. And the allowed minimum value of the cutoff energy is about 10 TeV, shown as the red dotted line in Figure 3, with the total energy of escaped protons in the \(\gamma\)-ray emission region of 1.1\(\times\)10\({}^{47}\) (\(n_{\rm gas}\)/300 cm\({}^{-3}\))\({}^{-1}\) erg, which predicts a much lower flux in the TeV band. Moreover, we also considered the minimum energy of protons that can escape from the SNR with a simple but reasonable approach of \(E_{\rm esc}\) = \(E_{\rm max}(t/t_{\rm sed})^{-\alpha}\) (Gabici et al., 2009; Ohira et al., 2011; Thoudam & Horandel, 2012), which assumes that the escape of the highest energy particles starts at the onset of the Sedov phase of the SNR.
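The escape-energy prescription is simple arithmetic; with the values adopted in the text (\(E_{\rm max}=3\) PeV, \(t=5000\) yr, \(t_{\rm sed}=500\) yr), the bracketing cases of \(\alpha\) are:

```python
# E_esc = E_max * (t / t_sed)^(-alpha) for the two quoted extremes of alpha.
E_MAX_PEV, T_YR, T_SED_YR = 3.0, 5000.0, 500.0

def e_esc_pev(alpha):
    return E_MAX_PEV * (T_YR / T_SED_YR) ** (-alpha)

print(round(e_esc_pev(0.2), 2))        # ~1.9 PeV for alpha = 0.2
print(round(e_esc_pev(2.0) * 1e3, 1))  # 30 TeV for alpha = 2.0
```

These two values reproduce the quoted 30 TeV - 2 PeV range of escape energies.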
Here the maximum energy of protons, \(E_{\rm max}\), is assumed to be equal to \(E_{\rm p,cut}\) with 3 PeV, and \(t_{\rm sed}\) = 500 yrs (Thoudam & Horandel, 2012). SNR DA 530 with the age of 5000 yrs is suggested to be in the Sedov phase (Jiang et al., 2007), and the values of \(E_{\rm esc}\) are calculated to be in the range of 30 TeV - 2 PeV for the different values of \(\alpha\) from 0.2 to 2.0 (Thoudam & Horandel, 2012). Such an energy range of escaping protons is consistent with that needed to explain the GeV \(\gamma\)-ray spectrum. ## 5 Summary In this work, we analyzed the GeV \(\gamma\)-ray emission in the field of SNR DA 530, using 14 years of _Fermi_-LAT data, and found an extended \(\gamma\)-ray source, SrcT, around DA 530, which can be described by a uniform disk template. Figure 3: Modeling of the \(\gamma\)-ray spectra with the hadronic escaping models. The red solid, green and blue dashed lines indicate the scenarios with the different values for \(\chi\), \(\delta\) and \(\eta\) and the cutoff energy of \(E_{\rm p,cut}\) = 3 PeV. And the red dotted line shows the scenario with \(E_{\rm p,cut}\) = 10 TeV, as shown in the legend. The cyan dot–dashed line shows the differential sensitivity of CTA-North (50 hr; Cherenkov Telescope Array Consortium et al., 2019). The GeV \(\gamma\)-ray spectrum of SrcT can be fitted by a power law model with an index of 2.23 \(\pm\) 0.09. The size of the \(\gamma\)-ray emission region is much larger than that of the radio shell of DA 530. Based on the CO observation from the CfA 1.2m millimeter-wave telescope, we found that the molecular cloud component is spatially consistent with the \(\gamma\)-ray emission region.
Considering the much more extended \(\gamma\)-ray emission and the spatial coincidence with the molecular cloud, the \(\gamma\)-ray emission of SrcT is suggested to be from the hadronic \(\pi^{0}\) decay due to the inelastic collisions between the cloud and the high energy protons, which are accelerated in and escaped from the shock of DA 530. With the assumption of steady-state injection of protons, the \(\gamma\)-ray spectrum can be well explained by the model with the typical Galactic value for the diffusion coefficient (\(\chi=1.0\)). The total energy of the injected protons needs to be much lower, which can be explained by the low kinetic energy of DA 530 or the assumption that the bulk of accelerated particles are still trapped inside the remnant. The hadronic \(\gamma\)-ray spectra with the different parameters expect the different fluxes in the TeV band. And the potential detection by the Cherenkov Telescope Array in the northern hemisphere (CTA-North; Cherenkov Telescope Array Consortium et al., 2019) in the future could help to test the different models and to constrain the escaping proton energy. Moreover, the high-resolution observations for molecular cloud around DA 530 are necessary to clearly identify the origin of the extended \(\gamma\)-ray emission. We would like to thank the anonymous referee for very helpful comments, which help to improve the paper. This work is supported by the National Natural Science Foundation of China under the grants 12103040, 12147208 and U1931204, and the Natural Science Foundation for Young Scholars of Sichuan Province, China (No. 2022NSFSC1808).
2306.17510
Influence of Dark Matter on the Magnetized Neutron Star
Over the past two decades, significant strides have been made in the study of Dark Matter (DM) admixed neutron stars and their associated properties. However, an intriguing facet regarding the effect of DM on magnetized neutron stars still remains unexplored. This study is carried out to analyze the properties of DM admixed magnetized neutron stars. The equation of state for the DM admixed neutron star is calculated using the relativistic mean-field model with the inclusion of a density-dependent magnetic field. Several macroscopic properties, such as mass, radius, particle fractions, tidal deformability, and the $f$-mode frequency, are calculated with different magnetic field strengths and DM configurations. The equation of state is softer with the presence of DM as well as for the parallel components of the magnetic field and vice-versa for the perpendicular one. Other macroscopic properties, such as mass, radius, tidal deformability, etc., are also affected by both DM and magnetic fields. The change in the magnitude of different neutron star observables is proportional to the amount of DM percentage and the strength of the magnetic field. We observe that the change is seen mainly in the core part of the star without affecting the crustal properties.
Vishal Parmar, H. C. Das, M. K. Sharma, S. K. Patra
2023-06-30T09:55:11Z
http://arxiv.org/abs/2306.17510v2
# Influence of Dark Matter on the Magnetized Neutron Star ###### Abstract Over the past two decades, significant strides have been made in the study of Dark Matter (DM) admixed neutron stars and their associated properties. However, an intriguing facet regarding the effect of DM on magnetized neutron stars still remains unexplored. This study is carried out to analyse the properties of DM admixed magnetized neutron stars. The equation of state for the DM admixed neutron star is calculated using the relativistic mean-field model with the inclusion of a density-dependent magnetic field. Several macroscopic properties such as mass, radius, particle fractions, tidal deformability, and the \(f\)-mode frequency are calculated with different magnetic field strengths and DM configurations. The equation of state is softer with the presence of DM as well as for the parallel components of the magnetic field, and vice-versa for the perpendicular one. Other macroscopic properties, such as mass, radius, tidal deformability, etc., are also affected by both DM and the magnetic fields. The change in the magnitude of different neutron star observables is proportional to the amount of DM percentage and the strength of the magnetic field. We observe that the change is seen mainly in the core part of the star without affecting the crustal properties. ## I Introduction In recent years, the investigation of magnetars and pulsars, characterized as highly magnetized neutron stars, has emerged as a fascinating research field at the juncture of nuclear physics and astrophysics. These enigmatic celestial objects exhibit magnetic fields of remarkable strength (\(B\sim 10^{17}-10^{18}\) G), surpassing those typically observed in neutron stars by several orders of magnitude [1]. Such immensely powerful field conditions are presently beyond the reach of terrestrial laboratories. 
Consequently, pulsars and magnetars serve as extraterrestrial laboratories for examining and advancing physical theories. These objects present a plethora of exhilarating phenomena, including manifestations of exotic QED mechanisms like photon splitting and magnetic pair creation [2], outburst and quiescent emissions [3], seismic activity [4], dissipative processes in the magnetospheres [5], axion-like particles [6], and dense matter physics [7], among others. The exploration of these physical phenomena establishes the study of magnetized neutron stars as a pivotal research area in astrophysics, offering valuable insights into the behaviour of matter and radiation in extreme environments [8]. It is a well-known fact with compelling evidence that most of the matter in the Universe is dark matter (DM) [9]. Since neutron stars are highly compact and dense, the collision between the DM particles and constituents of the neutron star results in the loss of energy for the DM to become bound to the gravitational pull of the neutron star. Therefore, neutron stars have long been used as a tool in the quest to uncover the particle nature of DM and their scattering cross sections [10; 11; 12]. The DM admixture neutron star results in significant deviation in the neutron star observables such as mass-radius profile, tidal deformation, luminosity [13], accretion [14], etc., and hence, can act as a probe to measure the DM properties indirectly. In the last two decades, there have been numerous attempts to study the DM admixed neutron star and associated properties [10; 13; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. However, despite significant progress, the impact of DM on magnetized neutron stars remains a scientific question that necessitates immediate exploration. Since most of the observed neutron stars are either pulsars or magnetars [26], it becomes essential to study the influence of DM on these compact stars. 
Recently, a strange star admixed with fermionic DM in a strong magnetic field (MF) was analyzed using the MIT bag model [7]. It was shown that the tidal deformability of such stars is strongly affected. In this paper, we aim to present an analysis of the unexplored influence of DM on pure hadronic magnetized neutron stars over a MF range (\(10^{17}-10^{19}\) G). We use the magnetized neutron star formalism as described in [27; 28; 29]. We use the effective relativistic mean field (E-RMF) model for the nuclear interaction. The E-RMF theory has been successfully applied to a wide range of nuclear physics problems ranging from finite nuclei to the neutron star [22; 30; 31; 32; 33]. This theory has recently been used to address the DM admixed neutron star [34; 35]. In the present work, we use two E-RMF models, namely, BigApple [36] and IOPB [31]. The BigApple force is designed to account for the \(2.6M_{\odot}\) and original constraint on the tidal deformability of a \(1.4M_{\odot}\) neutron star in accordance with the secondary component of GW190814 [37]. At the same time, the IOPB parameter set reproduces the maximum mass from massive pulsars such as PSR J0740+6620, which estimates that the neutron star mass should be greater than \(2~{}M_{\odot}\) (\(M=2.14^{+0.10}_{-0.09}~{}M_{\odot}\)) [38]. These parameter sets also reproduce the nuclear matter and finite nuclei observables in agreement with empirical and observational constraints [31; 39]. To model the DM interaction with the neutron star, we consider the Neutralino as a DM candidate, which belongs to the WIMP class, as taken in Ref. [40]. Further, DM is treated analogously to a neutron, considering the DM as a charge-less fermion. Other types of DM candidates are also hypothesized, such as bosonic, asymmetric, etc., having different properties compared to the fermionic one. Several works have explored such scenarios, studying the effects of DM on different NS properties [19; 20; 21; 22; 23; 24; 40]. 
However, there are still windows to explore the DM effects on other properties of the compact star, its particle nature, etc. Henceforth, we aim to investigate the possible changes in the DM admixed magnetized neutron star equation of state (EOS), its composition, and neutron star observables, which include mass-radius relations, tidal deformability, and \(f\)-mode oscillation. It is seen that with an increasing DM mass, the maximum mass of the neutron star decreases [19]. On the other hand, it was shown that the maximum mass of the neutron star is an increasing/decreasing function of the MF depending on the perpendicular/parallel pressure [41; 42; 43]. Therefore, it is interesting to examine the combined effect of DM and MF on the neutron star. Although the neutron star should be deformed due to the anisotropic pressure in the presence of the MF, we consider the neutron star to be spherically symmetric, which allows us to use the Tolman-Oppenheimer-Volkoff (TOV) solution. This assumption holds good for the range of MF \(10^{15}-10^{18}\) G, as the deformation from spherical symmetry turns out to be less than 1% [41; 43; 44]. Moreover, in the present work, we present separate results for the perpendicular and parallel pressure in line with Refs. [41; 44; 45]. The paper's organization is as follows: In Sec. II, we describe the effect of MF on the EOS employing the E-RMF framework and the DM model. The results indicating the EOS, composition, mass-radius relations, tidal deformability, and \(f\)-mode oscillation are discussed in Sec. III. Finally, we summarize our results in Sec. IV. 
## II Formalism ### RMF model The effective Lagrangian in the E-RMF, which includes the \(\sigma\), \(\omega\), \(\rho\), and \(\delta\) mesons and the photon in association with the baryons, can be written as [20; 46; 47; 48], \[\mathcal{E}(r) = \psi^{\dagger}(r)\bigg{\{}i\alpha\cdot\mathbf{\nabla}+\beta[M-\Phi(r )-\tau_{3}D(r)]+W(r)+\frac{1}{2}\tau_{3}R(r)+\frac{1+\tau_{3}}{2}A(r)-\frac{i \beta\alpha}{2M}\bigg{(}f_{\omega}\mathbf{\nabla}W(r)+\frac{1}{2}f_{\rho}\tau_{3} \mathbf{\nabla}R(r)\bigg{)}\bigg{\}}\psi(r) \tag{1}\] \[+ \bigg{(}\frac{1}{2}+\frac{k_{3}\Phi(r)}{3!M}+\frac{k_{4}}{4!} \frac{\Phi^{2}(r)}{M^{2}}\bigg{)}\frac{m_{s}^{2}}{g_{s}^{2}}\Phi(r) ^{2}+\frac{1}{2g_{s}^{2}}\Big{(}1+\alpha_{1}\frac{\Phi(r)}{M}\Big{)}(\bm {\nabla}\Phi(r))^{2}-\frac{1}{2g_{\omega}^{2}}\Big{(}1+\alpha_{2}\frac{\Phi(r) }{M}\Big{)}(\mathbf{\nabla}W(r))^{2}\] \[- \frac{1}{2}\Big{(}1+\eta_{1}\frac{\Phi(r)}{M}+\frac{\eta_{2}}{2} \frac{\Phi^{2}(r)}{M^{2}}\Big{)}\frac{m_{\omega}^{2}}{g_{\omega}^{2}}W^{2}(r) -\frac{1}{2e^{2}}(\mathbf{\nabla}A(r))^{2}-\frac{1}{2g_{\rho}^{2}}(\mathbf{\nabla} R(r))^{2}-\frac{1}{2}\Big{(}1+\eta_{\rho}\frac{\Phi(r)}{M}\Big{)}\frac{m_{\rho}^{2}}{g_{ \rho}^{2}}R^{2}(r)\] \[- \frac{\zeta_{0}}{4!}\frac{1}{g_{\omega}^{2}}W(r)^{4}-\Lambda_{ \omega}(R^{2}(r)W^{2}(r))+\frac{1}{2g_{\delta}^{2}}(\mathbf{\nabla}D(r))^{2}+\frac {1}{2}\frac{m_{\delta}^{2}}{g_{\delta}^{2}}(D(r))^{2}.\] Here \(\Phi(r)\), \(W(r)\), \(R(r)\), \(D(r)\) and \(A(r)\) are the fields corresponding to the \(\sigma\), \(\omega\), \(\rho\), and \(\delta\) mesons and the photon, respectively. The \(g_{s}\), \(g_{\omega}\), \(g_{\rho}\), \(g_{\delta}\) and \(\frac{e^{2}}{4\pi}\) are the corresponding coupling constants, and \(m_{s}\), \(m_{\omega}\), \(m_{\rho}\) and \(m_{\delta}\) are the corresponding masses. 
The zeroth component \(T_{00}=H\) and the spatial components \(T_{ii}\) of the energy-momentum tensor \[T_{\mu\nu}=\partial^{\nu}\phi(x)\frac{\partial\mathcal{E}}{\partial(\partial_ {\mu}\phi(x))}-\eta^{\nu\mu}\mathcal{E}, \tag{2}\] yield the energy and pressure density, respectively. The energy spectrum of the proton, which gets modified due to the Landau levels, is written as [27; 28] \[E_{p}=\sqrt{k_{z}^{2}+\overline{M}_{n,\sigma_{z}}^{p^{2}}}+W-R/2, \tag{3}\] and for charged leptons (electron and muon) as \[E_{e,\mu}=\sqrt{k_{z}^{2}+\overline{M}_{n,\sigma_{z}}^{e,\mu^{2}}}, \tag{4}\] where \[\overline{M}_{n,\sigma_{z}}^{(p)^{2}}=M_{(p)}^{*^{2}}+2\Big{(}n+ \frac{1}{2}-\frac{1}{2}\frac{q}{|q|}\sigma_{z}\Big{)}|q|B, \tag{5}\] \[\overline{M}_{n,\sigma_{z}}^{(e,\mu)^{2}}=M_{(e,\mu)}^{2}+2\Big{(} n+\frac{1}{2}-\frac{1}{2}\frac{q}{|q|}\sigma_{z}\Big{)}|q|B. \tag{6}\] Here, \(\sigma_{z}\) is the spin along the axis of the MF \(B\), \(n\) is the principal quantum number, and \(k_{z}\) is the momentum along the direction of the MF. \(M^{*}\) is the effective mass for the proton. The neutron spectrum is similar to that of a free Dirac particle and takes the form \[E_{n}=\sqrt{k^{2}+M_{n}^{*^{2}}}+W+R/2. \tag{7}\] The number and energy density at zero temperature and in the presence of a MF are given by [27] \[\rho_{i=e,\mu,p}=\frac{|q|B}{2\pi^{2}}\sum_{\sigma_{z}}\sum_{n=0}^{n_{max}}k_{f, n,\sigma_{z}}^{i}, \tag{8}\] \[E_{i=e,\mu,p} = \frac{|q|B}{4\pi^{2}}\sum_{\sigma_{z}}\sum_{n=0}^{n_{max}} \tag{9}\] \[\times \Big{[}E_{f}^{i}k_{f,n,\sigma_{z}}^{i}+\overline{M}_{n,\sigma_{z} }^{i^{2}}\ln\Big{(}\Big{|}\frac{E_{f}^{i}+k_{f,n,\sigma_{z}}^{i}}{\overline{M}_ {n,\sigma_{z}}^{i}}\Big{|}\Big{)}\Big{]},\] respectively. 
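As a consistency check on this Landau-level bookkeeping, the sum of Eq. (8) can be evaluated by adding levels until the Fermi momentum of Eq. (10) turns imaginary, which reproduces the cutoff \(n_{\rm max}\) of Eq. (13) below. The following is a minimal sketch, not the paper's code: it works in natural units, assumes a proton-like (positive) charge sign, and uses purely illustrative input values.

```python
import math


def n_max(E_f, M_eff, qB):
    """Highest Landau level with non-negative Fermi momentum, in the
    floor form of Eq. (13): n_max = [(E_f^2 - M*^2) / (2|q|B)]."""
    return math.floor((E_f**2 - M_eff**2) / (2.0 * qB))


def charged_number_density(E_f, M_eff, qB):
    """Number density of a charged fermion species, Eq. (8):
    rho = (|q|B / 2 pi^2) * sum over spins and Landau levels of k_f.
    Levels are summed until k_f^2 = E_f^2 - Mbar^2 (Eq. (10)) goes
    non-positive.  Natural units; illustrative inputs only."""
    rho = 0.0
    for sigma_z in (+1, -1):
        n = 0
        while True:
            # Landau-shifted mass squared, Eq. (5) with q/|q| = +1
            Mbar2 = M_eff**2 + 2.0 * (n + 0.5 - 0.5 * sigma_z) * qB
            kf2 = E_f**2 - Mbar2
            if kf2 <= 0.0:
                break
            rho += qB / (2.0 * math.pi**2) * math.sqrt(kf2)
            n += 1
    return rho
```

Increasing \(|q|B\) at fixed \(E_f\) depopulates the upper levels one by one, which is the origin of the oscillatory particle fractions discussed in Sec. III.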
In the above equations, \(k_{f,n,\sigma_{z}}^{i}\) is defined by \[k_{f,n,\sigma_{z}}^{i^{2}}=E_{f}^{i^{2}}-\overline{M}_{n,\sigma_{z}}^{i^{2}}, \tag{10}\] where the Fermi energies are fixed by the respective chemical potentials given by \[E_{f}^{i=e,\mu}=\mu_{\mu,e}, \tag{11}\] \[E_{f}^{b=p,n}=\mu_{b}-W\pm R/2. \tag{12}\] In Eqs. (8) and (9), \(n_{\rm max}\) is the largest integer for which the Fermi momentum remains positive in Eq. (10) and is written as \[n_{\rm max} = \Bigg{[}\frac{E_{f}^{i^{2}}-M^{\star^{2}}}{2|q|B}\Bigg{]},\ {\rm proton} \tag{13}\] \[n_{\rm max} = \Bigg{[}\frac{E_{f}^{i^{2}}-M^{2}}{2|q|B}\Bigg{]},\ {\rm electron}\ \&\ {\rm muon}\,.\] Here \([x]\) represents the greatest integer less than or equal to \(x\). The scalar density for the protons is further determined as \[\rho_{p}^{s}=\frac{|q|BM^{\star}}{2\pi^{2}}\sum_{\sigma_{z}}\sum_{n=0}^{n_{max} }\ln\Big{(}\Big{|}\frac{E_{f}^{i}+k_{f,n,\sigma_{z}}^{i}}{\overline{M}_{n, \sigma_{z}}^{i}}\Big{|}\Big{)}. \tag{14}\] The number, scalar, and energy density for the neutrons are similar to the field-free case and can be found in [30; 31] and references therein. The total energy density is the sum of the matter-energy density and the contribution from the electromagnetic field, \(\frac{B^{2}}{8\pi}\). Now the parallel (\(P_{\parallel}\)) and perpendicular (\(P_{\perp}\)) components of the pressure can be written as [49] \[P_{\parallel}=\frac{|q|B}{4\pi^{2}}\sum_{\sigma_{z}=\pm 1}\sum_{n=0}^{n=n_{max}}\Bigg{[}E_{f}^{i}k_{f,n,\sigma_{z}}^{i}-\overline{M}_{n,\sigma_{z}}^{i^{2}}\ln\Big{(}\Big{|}\frac{E_{f}^{i}+k_{f,n,\sigma_{z}}^{i}}{\overline{M}_{n,\sigma_{z}}^{i}}\Big{|}\Big{)}\Bigg{]}. \tag{15}\] It also follows from Eq. (9) and (15) that \[P_{\parallel}=\sum_{i=n,p}\mu_{i}\rho_{i}-E\,. 
\tag{16}\] The transverse pressure is written as \[P_{\perp}=\frac{|q|^{2}B^{2}}{2\pi^{2}}\sum_{\sigma_{z}=\pm 1} \sum_{n=0}^{n=n_{max}}n\ln\Big{(}\Big{|}\frac{E_{f}^{i}+k_{f,n,\sigma_{z}}^{i }}{\overline{M}_{n,\sigma_{z}}^{i}}\Big{|}\Big{)}. \tag{17}\] The pure electromagnetic contribution \(\propto B^{2}\) to the energy and pressure density is taken as \(\frac{B^{2}}{8\pi}\) [27]. ### Density-dependent MF In this work, the MF (\(B\)) is parametrized from the surface to the center of the star as [50; 51; 52] \[B\Bigg{(}\frac{\rho}{\rho_{0}}\Bigg{)}=B_{\rm surf}+B_{c}\Bigg{(}1-\exp\Bigg{\{} -\beta\Bigg{(}\frac{\rho}{\rho_{0}}\Bigg{)}^{\gamma}\Bigg{\}}\Bigg{)}. \tag{18}\] Here, \(\rho_{0}\) is the saturation density, \(B_{\rm surf}\) is the surface MF taken to be \(10^{15}\) G, and \(B_{c}\) is the MF at the center of the star. The parameters \(\beta=0.02\) and \(\gamma=3.00\) are chosen to reproduce the observational MF [53]. ### DM Model In this section, we provide the formalism for the DM admixed neutron star. The DM interacts with nucleons through the Higgs portal. In this study, we choose the Neutralino as a DM candidate, which belongs to the WIMP class. The interaction Lagrangian is the following [19; 20; 35; 40]: \[\mathcal{L}_{\rm DM} = \bar{\chi}\Big{[}i\gamma^{\mu}\partial_{\mu}-M_{\chi}+yh\Big{]} \chi+\frac{1}{2}\partial_{\mu}h\partial^{\mu}h \tag{19}\] \[- \frac{1}{2}M_{h}^{2}h^{2}+\sum f\frac{m}{v}\bar{\psi}h\psi,\] where \(\psi\) and \(\chi\) are the baryon and DM wave functions, respectively. Here, the values of the DM-Higgs coupling (\(y\)), the proton-Higgs form factor (\(f\)), and the vacuum expectation value (\(v\)) of the Higgs are chosen as 0.07, 0.35, and 246 GeV, respectively, as considered in Refs. [20; 35]. The free parameters are constrained with the help of DM detection data available to date [20]. 
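Returning to the field profile of Eq. (18): it is a one-line function of the density ratio that starts at \(B_{\rm surf}\) on the surface and saturates at \(B_{\rm surf}+B_{c}\) deep in the core. A direct transcription follows, with \(B_{\rm surf}=10^{15}\) G as stated in the text; the central-field value used in any call is up to the user.

```python
import math

B_SURF = 1.0e15  # surface magnetic field in Gauss, as adopted in the text


def magnetic_field(rho_ratio, B_c, beta=0.02, gamma=3.00):
    """Density-dependent magnetic field of Eq. (18):
    B(rho/rho0) = B_surf + B_c * (1 - exp(-beta * (rho/rho0)^gamma)).
    rho_ratio is rho/rho0; B_c is the central field in Gauss."""
    return B_SURF + B_c * (1.0 - math.exp(-beta * rho_ratio**gamma))
```

With the quoted \(\beta\) and \(\gamma\), the field stays close to \(B_{\rm surf}\) in the crust and only approaches \(B_{c}\) at several times saturation density, which is why the crust EOS is essentially field-independent in Sec. III.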
With this preliminary information, we can calculate the DM scalar density (\(\rho_{s}^{\rm DM}\)), energy density, and pressure using the mean-field approximation as done in Refs. [19; 40] \[\rho_{s}^{\rm DM}=\langle\bar{\chi}\chi\rangle=\frac{\gamma}{2\pi^{2}}\int_{0} ^{k_{f}^{\rm DM}}dk\ \frac{M_{\chi}^{\star}}{\sqrt{M_{\chi}^{\star 2}+k^{2}}}, \tag{20}\] where \(k_{f}^{\rm DM}\) is the Fermi momentum for DM. \(\gamma\) is the spin degeneracy factor with a value of 2, as for the neutron and proton. The energy density (\(\mathcal{E}_{\rm DM}\)) and pressure (\(P_{\rm DM}\)) for the neutron star with DM can be obtained from the Lagrangian in Eq. (19) \[\mathcal{E}_{\rm DM} = \frac{1}{\pi^{2}}\int_{0}^{k_{f}^{\rm DM}}k^{2}\ dk\sqrt{k^{2}+(M _{\chi}^{\star})^{2}}+\frac{1}{2}M_{h}^{2}h_{0}^{2}, \tag{21}\] and \[P_{\rm DM} = \frac{1}{3\pi^{2}}\int_{0}^{k_{f}^{\rm DM}}\frac{k^{4}\ dk}{\sqrt{k ^{2}+(M_{\chi}^{\star})^{2}}}-\frac{1}{2}M_{h}^{2}h_{0}^{2}, \tag{22}\] where \(M_{h}\) is the Higgs mass equal to 125 GeV, and \(h_{0}\) is the Higgs field. The total EoS for the DM admixed neutron star in the presence of MF is given by \[\mathcal{E}=\mathcal{E}_{\rm nucl.}+\mathcal{E}_{\rm DM},\quad P=P_{\rm nucl.}+P_{\rm DM}. \tag{23}\] The MF does not affect the DM since it is charge-less, similar to a neutron. However, with the Higgs contribution, the effective mass of the system becomes \[M_{n,p}^{*}=M-\Phi-\sum_{n,p}\frac{fM_{n,p}}{v}h. \tag{24}\] The effective mass then dictates the property of the system through \(n_{max}\) in Eq. (13). ## III Results and Discussion ### Particle Fractions and EoS The particle fraction (PF) of the species, such as neutrons, protons, electrons, muons, and DM, can be calculated using the formula \[X_{i}=\rho_{i}/\rho_{b}\,, \tag{25}\] where \(\rho_{i}\) is the density of each species, and \(\rho_{b}\) is the baryon density. In Fig. 
1, we calculate the value of \(X_{i}\) for the DM admixed magnetized neutron star for three different DM momenta \(k_{f}^{\rm DM}=0.00,0.02\), and \(0.04\) GeV with the variation of the core MF.

Figure 1: The particle fractions (upper panel) of different species with the presence of MF and DM having Fermi momenta 0.00, 0.02, and 0.04 GeV, respectively, for the BigApple (left panel) and IOPB-I (right panel) parameter sets, and their EoSs (lower panel).

The PFs of the neutron and DM are not affected as they are charge-less particles; hence, the MF has no interaction with them. In this study, we have not included the anomalous magnetic moment (AMM) of the neutrons and protons for simplicity of the results. From the upper panel of Fig. 1, we notice that protons and electrons appear almost at the same density. However, the muon appears at \(\approx 0.1\) fm\({}^{-3}\) for both the BigApple and IOPB-I cases. It is observed that a lower magnitude of MF (for example, \(10^{17}\) G) doesn't change the PF significantly. However, with an increase in the MF strength at the core, the population density changes and shows oscillating behavior due to the subsequent filling of Landau levels. This is because, with an increase in MF strength, the mass of the charged particle becomes heavier, and it oscillates prominently, especially in the core of the neutron star, as mentioned in Ref. [41]. We have also calculated the PF for the parallel (\(\parallel\)) and perpendicular (\(\perp\)) components of the MF and found that it is almost similar for both components. In addition to MF, it is also noticed that a finite DM fraction does not affect the PF. The EOSs for magnetized neutron stars with DM admixture are calculated and presented in the lower panel of Fig. 1, illustrating the dependence on the MF strength for various DM fractions. The EOSs exhibit increased stiffness (or softness) for the \(\perp\) (\(\parallel\)) components of the MF. 
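The DM contribution entering these EOSs, Eqs. (21) and (22), involves only free-Fermi-gas integrals plus the Higgs term and can be checked with simple quadrature. Below is a sketch in natural units; the \(M_{\chi}^{\star}\), \(k_{f}^{\rm DM}\), and \(h_{0}\) values passed in any call are illustrative, not the paper's fitted choices.

```python
import math


def _simpson(f, a, b, n=400):
    # composite Simpson quadrature; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0


def dm_energy_pressure(kf, M_eff, Mh, h0):
    """DM energy density and pressure, Eqs. (21)-(22):
    eps = (1/pi^2) * int_0^kf k^2 sqrt(k^2 + M*^2) dk + Mh^2 h0^2 / 2
    P   = (1/3pi^2) * int_0^kf k^4 / sqrt(k^2 + M*^2) dk - Mh^2 h0^2 / 2
    All quantities in one consistent energy unit (natural units)."""
    eps = _simpson(lambda k: k * k * math.sqrt(k * k + M_eff**2), 0.0, kf) \
        / math.pi**2 + 0.5 * Mh**2 * h0**2
    P = _simpson(lambda k: k**4 / math.sqrt(k * k + M_eff**2), 0.0, kf) \
        / (3.0 * math.pi**2) - 0.5 * Mh**2 * h0**2
    return eps, P
```

For a heavy DM candidate with \(k_{f}^{\rm DM}\ll M_{\chi}^{\star}\), the gas is very non-relativistic, so \(P_{\rm DM}\ll\mathcal{E}_{\rm DM}\): the DM adds energy density with little pressure support, which is exactly the softening of the total EOS described above.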
Additionally, the presence of DM introduces a further softening effect on the EOSs, with the degree of softness primarily determined by the DM content within the neutron star. Comparing the models employed in this study, BigApple demonstrates a stiffer EOS than IOPB-I. However, the influence of the MF on the softness or stiffness of the BigApple model is relatively less prominent as compared to the IOPB-I case. This model dependency of the MF effect on the EoS stems from their effective masses, which control the number of Landau levels (see Eqs. (24) and (13)). In a previous study [29], we computed the equations of state (EOSs) for magnetized crusts using the CLDM model. In the present investigation, we employ the same model to determine the crust EOS. However, for the core EOS, we consider relativistic mean-field (RMF) models, namely BigApple and IOPB-I. Subsequently, we construct unified EOSs encompassing the BigApple and IOPB-I cases, as depicted in Fig. 1. It is worth noting that the percentage of DM remains nearly constant throughout the neutron star, predominantly concentrated within the core region. The MF, on the other hand, exerts negligible influence on the crust EOS. Consequently, the lower-density EOSs for both the BigApple and IOPB-I models exhibit minimal variation as a function of MF strength and DM fractions. ### Mass-Radius relations, tidal deformability The mass-radius (\(M-R\)) relations are obtained with the Tolman-Oppenheimer-Volkoff [54; 55] equations for a range of central densities, as shown in Fig. 2. We calculate the \(M-R\) profiles for both the BigApple and IOPB-I EOSs with the \(\parallel\) and \(\perp\) components of MF by varying the DM momenta over 0.00, 0.02, and 0.04 GeV. The magnitude of the maximum mass and its corresponding radius decreases for the parallel MF components and vice-versa for the perpendicular fields. In addition to MF, the DM also reduces the magnitude of the \(M\) and \(R\) values, which depends on its percentage inside the star. 
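Each point of an \(M-R\) curve comes from one TOV integration at a fixed central density. The following is a minimal sketch of such an integration, not the production pipeline: it substitutes a toy polytropic EOS \(P=K\varepsilon^{2}\) for the tabulated E-RMF EOS, works in geometrized units (\(G=c=1\), lengths in km), and the values of \(K\) and the central density are illustrative.

```python
import math


def tov_mass_radius(eps_c, K=100.0, dr=1e-3):
    """Euler integration of the TOV equations
        dP/dr = -(eps + P)(m + 4 pi r^3 P) / (r (r - 2 m))
        dm/dr = 4 pi r^2 eps
    with the toy EOS P = K * eps^2 (so eps = sqrt(P/K)).
    Returns (M, R) in km; 1 km of mass is about 0.677 solar masses."""
    r = dr
    P = K * eps_c**2
    m = (4.0 / 3.0) * math.pi * r**3 * eps_c
    while P > 1e-10:  # integrate until pressure vanishes at the surface
        eps = math.sqrt(P / K)
        dPdr = -(eps + P) * (m + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r**2 * eps
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return m, r
```

Sweeping `eps_c` over a range of central densities and collecting the \((M,R)\) pairs traces out a mass-radius curve like those in Fig. 2; the magnetized, DM-admixed results simply replace the toy EOS with the tabulated ones.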
For the BigApple (IOPB-I) case, the maximum mass is 2.60 \((2.15)\ M_{\odot}\), and its corresponding radius is 12.41 (11.91) km without the inclusion of the DM and MF.

Figure 2: Mass-Radius relations for the DM admixed magnetized neutron star with varying MF strength for BigApple (left) and IOPB-I (right). The overlaid bands are the observational data given by different observations (see text for details).

With the inclusion of MF/DM, the magnitude of the maximum mass and its corresponding radius decreases by \(\sim 4-5\%\). For \(k_{f}^{\rm DM}=0.00,0.02\) GeV with all MF components, the curves corresponding to the BigApple model reasonably satisfy the overlaid observational data. However, with \(k_{f}^{\rm DM}=0.04\) GeV, all curves satisfy only the maximum mass constraint given by PSR J0740+6620 [38]. Moreover, the canonical radii corresponding to 0.04 GeV for the BigApple case don't pass through the NICER and revised NICER limits [56; 57]. In the case of the IOPB-I parameter set (right panel of Fig. 2) with \(k_{f}^{\rm DM}=0.00,0.02\) GeV, only the parallel components of MF satisfy all the constraints imposed by Cromartie _et al._, pulsar, and NICER. However, for \(k_{f}^{\rm DM}=0.04\) GeV, none of the constraints is satisfied for the IOPB-I set. Hence, from this study, we observe that one can put constraints on the DM percentage and the strength of the MF by employing diverse observational data. Through the preceding analysis, it is evident that the EOS and mass-radius profile of DM admixed magnetized neutron stars are governed by two competing mechanisms. Firstly, the EOS experiences stiffening (softening) due to the \(\perp\) (\(\parallel\)) pressure, while secondly, it undergoes softening as a consequence of increased DM content. 
To investigate the potential influence of MF on the rate at which the neutron star mass decreases due to the presence of DM, we construct a plot of the maximum mass as a function of DM Fermi momenta for varying MF strengths, as illustrated in Fig. 3. Remarkably, for higher MF strengths, the rate of maximum mass reduction caused by DM content exhibits a slight decrease. In other words, the MF exerts an attenuating effect on the DM within the neutron star. Nonetheless, this effect is primarily notable under high MF strengths, indicating that the MF does not significantly impact the DM's influence on the properties of neutron stars. Next, we calculate the dimensionless tidal deformability of the DM admixed magnetized neutron star. The dimensionless tidal deformability (\(\Lambda\)) of the star is calculated using the relation [58] \[\Lambda=\frac{2}{3}\frac{k_{2}}{C^{5}}\,, \tag{26}\] where \(k_{2}\) is the Love number for the quadrupole case. The solution for \(k_{2}\) can be found in Refs. [39; 58]. \(C\) is the compactness defined as \(M/R\). We calculate the \(\Lambda\) for different DM momenta with the variation of MF, which is shown in Fig. 4 for the BigApple and IOPB-I E-RMF sets. With the addition of DM, the values of \(\Lambda\) decrease. The curves corresponding to all the magnetized EOSs, including different DM momenta, differ only marginally in the lower mass regime. However, substantial changes are observed at the maximum mass limit. Different error bars are taken from the GW170817 and GW190814 events to constrain the value of \(\Lambda\). Except for the DM momentum 0.04 GeV, none of the curves passes through the GW170817 data. However, all the curves reproduce the GW190814 limit well, except for \(k_{f}^{DM}=0.04\) GeV, for both the BigApple and IOPB-I cases. In comparison to DM, the change in the value of \(\Lambda\) for both the \(\parallel\) and \(\perp\) components is marginal for the canonical mass. 
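Once \(k_{2}\) is known, evaluating \(\Lambda\) is a one-liner; note that it scales with the inverse fifth power of the compactness \(C=M/R\) (geometrized units), so less compact stars are far easier to deform. The \(k_{2}\), mass, and radius values in the example below are illustrative placeholders, not the paper's results.

```python
def tidal_deformability(k2, mass_km, radius_km):
    """Dimensionless tidal deformability Lambda = (2/3) * k2 / C^5,
    with compactness C = M/R in geometrized units (G = c = 1,
    so a 1.4 solar-mass star has M of about 2.07 km)."""
    C = mass_km / radius_km
    return (2.0 / 3.0) * k2 / C**5


# illustrative call: a 1.4 M_sun-like star with a hypothetical k2
lam = tidal_deformability(k2=0.09, mass_km=2.07, radius_km=13.0)
```

Because of the \(C^{-5}\) dependence, shrinking the radius by even 2 km at fixed mass cuts \(\Lambda\) by more than half, which is why the DM-induced softening suppresses \(\Lambda\) so effectively.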
However, the \(\Lambda\) changes considerably for the maximum neutron star mass depending on the strength of MF. Hence, we observed a significant effect due to the DM for both the BigApple and IOPB-I cases. ### Calculation of \(f\)-mode oscillation of the magnetized neutron star In the present section, we use the formalism required to calculate the \(f\)-mode frequency as done in our previous study with the relativistic Cowling approximation [22]. The \(f\)-mode frequency for the quadrupole case is calculated with the variation of MF strength for different DM fractions, as shown in Fig. 5. We find marginal changes in the \(f\)-mode frequency in the case of BigApple. With the increase in \(k_{f}^{\rm DM}\), the EOS becomes softer, which gives lower values of the maximum mass and its corresponding radius. Also, a less massive star oscillates with a higher frequency and vice-versa. Therefore, the magnitude of the \(f\)-mode frequency for \(k_{f}^{\rm DM}=0.04\) GeV is higher than for other DM momenta. We observe that the parallel and perpendicular MF components have marginal effects on the \(f\)-mode frequency of the star. In the case of IOPB-I (right panel in Fig. 5), there are significant changes for both MF as well as DM. This is because the MF considerably affects the EOSs for IOPB-I. The magnitude of the \(f\)-mode frequency is higher for IOPB-I because it has a softer EOS compared to BigApple; therefore, it oscillates with a higher \(f\)-mode frequency.

Figure 3: Maximum mass of magnetized neutron star admixed with the DM as a function of DM momentum for different MF values.

### Relative change in the magnitude of neutron star properties Given the significant impact of both MF and DM on neutron star properties, it becomes imperative to thoroughly investigate and understand their relative influences on various aspects of neutron stars. To examine this, in Fig. 
6, we calculate the percentage change in the magnitude of the mass, radius, tidal deformability, dimensionless moment of inertia, and \(f\)-mode oscillation frequency in comparison to zero MF strength and DM content for three different DM percentages. The pink and cyan bars represent the perpendicular and parallel components of the pressure corresponding to that central MF (\(B_{c}\)). The relative changes in the neutron star properties are contingent upon three key components: (i) the strength of the MF, (ii) the percentage of DM, and (iii) the specific EOS employed. Notably, for MF strengths below \(1\times 10^{18}\) G, both the parallel and perpendicular pressure components exhibit negligible variations, as the emergence of anisotropy becomes evident only at higher MF strengths.

Figure 4: Dimensionless tidal deformability for the DM admixed magnetized neutron star with varying the MF strength for BigApple (left) and IOPB-I (right). The values \(\Lambda_{1.4}=190^{+390}_{-120}\) and \(\Lambda_{1.4}=616^{+275}_{-158}\) are taken from the GW170817 and GW190814 events, respectively.

Figure 5: \(f\)-mode oscillation frequency of the magnetized neutron star with different fractions of DM for BigApple (left) and IOPB-I (right) cases.

Figure 6: Relative change in the magnitude with respect to zero magnetic cases in the center (\(B_{c}\)) for \(\Delta M\), \(\Delta R\), \(\Delta\Lambda\), \(\Delta\bar{I}\), and \(\Delta f\)-mode frequency compared with different DM fractions for BigApple (upper) and IOPB-I (lower) cases.

In the presence of higher MF strengths, the parallel pressure component facilitates a decrease in the maximum mass due to the influence of DM, while the perpendicular pressure component mitigates the rate at which this decrement occurs. In contrast to the maximum mass, the radius of the star exhibits the opposite behavior. 
Specifically, for higher MF strengths, the perpendicular component of the pressure contributes to a decrease in the radius, while the parallel pressure component attenuates this effect. The maximum mass and corresponding radius of the star can exhibit changes as high as 10%, highlighting significant variations in these properties due to the presence of MF and DM. The dimensionless tidal deformability (\(\Lambda\)) and the normalized moment of inertia (\(\bar{I}\)) are notably more influenced by the presence of MF compared to DM. Additionally, the changes in these properties demonstrate sensitivity to the chosen EOS. The combined effects of DM and MF offer a range of possibilities that prove valuable in reproducing specific observational data associated with \(\Lambda\) and \(\bar{I}\). Furthermore, the \(f\)-mode oscillation frequency is found to be influenced by the presence of MF and DM. The introduction of DM significantly increases the \(f\)-mode frequency, reaching up to 2% for \(k_{f}^{\rm DM}=0.02\) GeV and 10% for \(k_{f}^{\rm DM}=0.04\) GeV. The presence of MF further impacts this frequency increase. The influence of MF on top of DM depends on the strength of the MF and the chosen EOS used in the analysis. ## IV Summary and conclusion This study explores the different properties of the DM admixed magnetized neutron star. The magnetized EOSs are calculated with the relativistic mean-field model with a density-dependent MF. The well-known RMF models, namely BigApple and IOPB-I, are used to obtain the EOSs for the magnetized neutron star. In the case of the DM, we choose a simple DM model, where the DM particle interacts with nucleons by exchanging the Higgs boson. The MF strength in the core is varied by fixing the surface MF. Moreover, the DM fraction is almost constant throughout the neutron star. To see its effect on the magnetized neutron star, we vary the DM Fermi momentum from \(0.00\) to \(0.04\) GeV. 
We have calculated various properties such as mass, radius, tidal deformability, and \(f\)-mode frequency with different interaction strengths for the MF and percentages of DM. The EOSs of the DM admixed magnetized neutron star are found to be softer for the parallel component of the pressure and stiffer for the perpendicular one. The softness also depends on the DM content inside the neutron star. The particle fraction for the magnetized neutron star exhibits an oscillating nature (predominantly in the core region of the star) due to an increase in the proton mass and the filling of Landau levels for high MF. The macroscopic properties are significantly affected by the DM as well as the MF. A higher DM percentage combined with a high MF strength in the core has pronounced effects on the neutron star. It has been observed that the maximum mass and its corresponding radius decrease by \(\sim 4-5\%\) for the DM admixed neutron star. The maximum mass and radius of the magnetized neutron star admixed with DM then follow from the competition between the MF strength and the DM percentage. Furthermore, the rate at which the maximum mass decreases with increasing DM percentage is attenuated by the presence of MF. The combined inclusion of both DM and MF proves instrumental in reproducing certain observational data that could not be predicted in the absence of these interactions. The tidal deformability (\(\Lambda\)) and \(f\)-mode oscillation frequency are obtained for both the BigApple and IOPB-I cases by varying the MF strength and DM percentage. The \(\parallel\) components have a higher magnitude of \(\Lambda\) in comparison to the \(\perp\) one. We found similar results for the \(f\)-mode frequency of the star. It has been observed that a significant change arises in \(\Lambda\) and the \(f\)-mode oscillation frequency due to both MF and DM. In future studies, it is possible to explore additional macroscopic properties of both static and rotating DM admixed magnetized neutron stars. 
In this work, a spherically symmetric neutron star model is considered as a simplifying assumption. However, to calculate the properties of neutron stars more efficiently, advanced techniques such as Lorene and numerical relativity can be employed for both the static and rotating cases. Such techniques may provide a comprehensive insight into the properties of DM admixed magnetized neutron stars.
2309.06298
A billion years of evolution manifest in nanosecond protein dynamics
Protein dynamics form a critical bridge between protein structure and function, yet the impact of evolution on ultrafast processes inside proteins remains enigmatic. This study delves deep into nanosecond-scale protein dynamics of a structurally and functionally conserved protein across species separated by almost a billion years, investigating ten homologs in complex with their ligand. By inducing a photo-triggered destabilization of the ligand inside the binding pocket, we resolved distinct kinetic footprints for each homolog via transient infrared spectroscopy . Strikingly, we found a cascade of rearrangements within the protein complex which manifest in three discrete time points of dynamic activity, conserved over hundreds of millions of years within a narrow window. Among these processes, one displays a subtle temporal shift correlating with evolutionary divergence, suggesting reduced selective pressure in the past. Our study not only uncovers the impact of evolution on molecular processes in a specific case, but has also the potential to initiate a novel field of scientific inquiry within molecular paleontology, where species are compared and classified based on the rapid pace of protein dynamic processes; a field which connects the shortest conceivable time scale in living matter (10^-9 s) with the largest ones (10^16 s).
Philipp J. Heckmeier, Jeannette Ruf, Charlotte Rochereau, Peter Hamm
2023-09-12T15:01:51Z
http://arxiv.org/abs/2309.06298v1
# A billion years of evolution manifest in nanosecond protein dynamics ###### Abstract **Protein dynamics form a critical bridge between protein structure and function [1; 2], yet the impact of evolution on ultrafast processes inside proteins remains enigmatic. This study delves deep into nanosecond-scale protein dynamics of a structurally and functionally conserved protein across species separated by almost a billion years [3; 4; 5], investigating ten homologs in complex with their ligand. By inducing a photo-triggered destabilization of the ligand inside the binding pocket [6; 7], we resolved distinct kinetic footprints for each homolog via transient infrared spectroscopy [8; 9]. Strikingly, we found a cascade of rearrangements within the protein complex which manifest in three discrete time points of dynamic activity, conserved over hundreds of millions of years within a narrow window. Among these processes, one displays a subtle temporal shift correlating with evolutionary divergence, suggesting reduced selective pressure in the past. Our study not only uncovers the impact of evolution on molecular processes in a specific case, but has also the potential to initiate a novel field of scientific inquiry within molecular paleontology, where species are compared and classified based on the rapid pace of protein dynamic processes; a field which connects the shortest conceivable time scale in living matter (\(\mathbf{10^{-9}}\) s) with the largest ones (\(\mathbf{10^{16}}\) s).** ## Main Proteins exist as dynamic ensembles, rather than being rigid and static entities. They constantly undergo rearrangements, folding-, and unfolding processes on a nanosecond time scale [2; 6; 7; 10]. Understanding this dynamic nature is essential to comprehending their function. 
As protein dynamics serve as the crucial link between structure and function [1], their experimental investigation has predominantly focused on individual protein examples, providing insights into specific [11; 12; 13], often intrinsically disordered cases [14; 15; 16]. Surprisingly, protein dynamics within a group of closely related proteins, such as a family of homologs, have rarely been experimentally explored, and if so, in the slow-paced millisecond regime [16; 17; 18] where rapid fluctuations of conformational adaptations are not resolved. Consequently, little is known about whether structural homologs display conserved ultrafast protein dynamics throughout evolution. How may nano-scale protein dynamics evolve over hundreds of millions of years within a protein family? Revealing the rapid dynamic processes within proteins requires the use of an appropriate toolkit. Thus far, the conservation of protein structures has been primarily observed through structure comparison using X-ray crystallography [19; 20; 21; 22]. X-ray crystallography provides valuable insights with a predominantly static view of proteins, but lacks the mechanistic intricacies that define their dynamics. As an alternative approach, NMR spectroscopy excels at resolving small conformational differences and dynamics in equilibrium [11; 16; 17; 23], yet it falls short in recording non-equilibrium processes. In contrast, infrared spectroscopy is sensitive to subtle differences in protein conformations and is a powerful tool to temporally resolve fast dynamical processes within proteins [8; 9]. In combination with a phototrigger, this technique enables the initiation and monitoring of sequential destabilization within a protein complex, with a temporal resolution as fast as a picosecond [6; 7; 24; 25]. 
The key challenge lies in investigating the specific time points at which certain processes occur, in order to resolve the influence of evolution on molecules that are inherently dynamic and exhibit fluent transitions between conformational states. ## MCL-1: A prime example of conservation This study is concerned with the protein myeloid cell leukemia 1 (MCL-1), a member of the BCL-2 protein family, which plays a crucial role as a key regulator of apoptosis, the programmed cell death [3; 29]. It is found not only in humans, but also in a diverse range of metazoan organisms [4; 30]. Functioning as an anti-apoptotic protein, MCL-1 interacts promiscuously with pro-apoptotic factors through \(\alpha\)-helical domains known as BCL-2 homology domain 3 (BH3) [4; 29; 31; 32], e.g. the BH3 domain of the pro-apoptotic protein PUMA [26; 33] (Fig. 1a). Homologs of this protein family have been identified in all vertebrates and even in more distantly related species such as sponges [34] and _Cnidaria_ [35], whose last common ancestor with _Homo sapiens_ existed over 700 million years ago [5]. We selected ten MCL-1 homologs (Fig. 1b) from species whose last common ancestors with _Homo sapiens_ are distributed equidistantly on an evolutionary time axis, up to a billion years from the present day into the past. We opted for a horizontal approach by comparing sequences of currently living species, as opposed to a vertical approach involving the reconstruction of ancestral proteins [36; 37]. Besides _Homo sapiens_, we included _Mus musculus_, _Bos taurus_, _Gallus gallus_, _Alligator mississippiensis_, _Xenopus laevis_, _Danio rerio_, a _Petromyzon marinus_ candidate [38], _Lingula unguis_, and _Hydra vulgaris_. Before exploring the protein dynamics for this homolog selection, our objective was to unequivocally establish the conservation of both the structure and function of MCL-1. 
By comparing the amino acid sequences of the homologs to their human equivalent, we found that sequence identity dramatically decreased as a function of evolutionary divergence (Fig. 1c), approaching a level of saturation at 25% where homology becomes challenging to detect [39]. The conserved amino acid residues are mostly associated with the canonical binding groove (Extended Data Fig. 1a), consistent with the prevailing scientific perspective [40], or are localized at the hydrophobic core of the protein. As solely the human and murine homologs bear experimentally acquired structures (e.g. PDB: 6QFM, 2ROC), we used two structure prediction models, AlphaFold [27] and RosettaFold [28], to compute the structures for the remaining homologs (Extended Data Fig. 1b). In comparison to their experimental equivalents, we found conserved topologies (TM scores \(\geq\) 80% [41; 42], Fig. 1d) and only small spatial differences between the predicted protein backbones (RMSD \(\leq\) 2.5 Å, Fig. 1e). A subtle correlation between inferior structural conservation and increased divergence time became visible. Nevertheless, the predictions show that, although sequences might differ strikingly, MCL-1 structure did not substantially change over a long evolutionary time scale [43]. The primary function of MCL-1, i.e., the ability to strongly bind the BH3 domain in its binding pocket, which makes it a pivotal anti-apoptotic regulator, is also conserved. We experimentally determined MCL-1's binding affinity for a uniform PUMA BH3 ligand (bearing mutations for crosslinking, see Extended Data Fig. 2), with \(K_{D}\) values ranging from 100 nM to 1 \(\mu\)M for most homologs. We detected a weak correlation of \(\log K_{D}\), which refers to the binding free energy, with evolutionary divergence time (Fig. 1f). Notably, homologs from both _Hydra vulgaris_ and _Homo sapiens_, separated by an evolutionary distance of over 700 million years, bound the same ligand with comparable affinities (\(K_{D,Hydra}\) = 220 nM, \(K_{D,Homo}\) = 480 nM). Given its critical function as a 'life/death switch' [3] in numerous animal species, this result confirms that MCL-1 indeed exhibits a high degree of structural and functional conservation, manifesting in minor differences at the molecular level. MCL-1's role as a prime example of structural and functional conservation raises the question of whether the dynamics of the protein are also conserved. Are the nanosecond processes occurring in human MCL-1 also present in _Hydra vulgaris_ MCL-1? 
Figure 1: Structure and function of MCL-1 is conserved. (a) NMR structure of MCL-1 (grey) complexed with PUMA BH3 (yellow) (PDB: 2roc) [26]. (b) Phylogeny of ten species whose MCL-1 homologs were selected for this study. The phylogeny and the corresponding evolutionary divergence time in million years (Ma) were taken from TimeTree5 and cover the current state of science (July 2023) [5]. (c) Sequence identity of all investigated MCL-1 homologs (compared to _H. sapiens_) against evolutionary divergence time of the corresponding species. (d,e) Structural similarity between MCL-1 homologs (compared to _H. sapiens_), predicted with AlphaFold [27] (blue) and RosettaFold [28] (red). (f) MCL-1 homolog binding free energy for the PUMA BH3 peptide, plotted against the evolutionary divergence time. Yellow, linear fit \(\pm\) standard deviation. The Pearson correlation coefficient \(r=0.39\) indicates that the binding free energy correlates weakly with evolutionary divergence time. 
## Conservation of protein dynamics To examine the impact of extremely slow evolutionary processes on the fast-paced protein dynamics of MCL-1, we used transient infrared spectroscopy in combination with a photoswitchable azobenzene moiety that is covalently bound to the PUMA BH3 ligand (Fig. 2a, Methods). 
In its _cis_-state, the crosslinked photoswitch additionally stabilizes the ligand inside the binding pocket (Extended Data Fig. 2m). Conversely, the light-induced transition from the _cis_- to the _trans_ configuration leads to a reduction in \(\alpha\)-helicity (Extended Data Fig. 2n), indicating a destabilization of PUMA BH3. Considering a time frame from pico- to microseconds [25], we studied the protein dynamics in a pump-probe experiment where the _cis_-to-_trans_ isomerization of the photoswitch is triggered by an ultrashort UV/VIS laser pulse at 420 nm and the protein vibrational spectrum is probed in the mid-infrared region around 1650 cm\({}^{-1}\) (Fig. 2b). In this spectral region, C=O stretch vibrations of the protein backbone can be observed. Negative (blue) and positive (red) absorption changes serve as indicators of structural alterations [46]. We obtained homolog-specific kinetic footprints for the ten investigated species (Extended Data Fig. 3, exemplified for _M. musculus_ in Fig. 2c). Analogous to fossil footprints - their paleontological counterpart - the kinetic footprints display comparable elements. All of them are similarly shaped, displaying a blue shift of a band at 1645 cm\({}^{-1}\), which manifests as a negative bleach towards a new (positive) band at 1675 cm\({}^{-1}\) (Fig. 2c, triangle). The signal appears within the low nanosecond time frame for all of the homologs and can be attributed to \(\alpha\)-helix unfolding [7; 9; 47]. More strikingly, the kinetic footprints exhibit diverging, species-specific details, which are particularly well visible for the spectral feature between 1655 cm\({}^{-1}\) and 1685 cm\({}^{-1}\) (Fig. 2c, circle), and a late negative feature at 1620 cm\({}^{-1}\) forming at around 100 ns (Fig. 2c, square). The first-mentioned feature (circle) is especially pronounced for mammalian/avian/reptilian homologs (Extended Data Fig. 
3a-e), but loses its distinct appearance more and more for species with higher evolutionary divergence (_P. marinus_, _L. unguis_, _H. vulgaris_; Extended Data Fig. 3h-j), displaying a solitary, less emphasized maximum at 1660 cm\({}^{-1}\). Furthermore, the kinetic footprints of the non-_Gnathostomata_ (_P. marinus_, _L. unguis_, and _H. vulgaris_) lack the late negative feature at 1620 cm\({}^{-1}\). All kinetic footprints are dominated by three phases of dynamic activity, an early-, mid-, and late phase, where the intensity of spectral features grows or decreases significantly (exemplified for _M. musculus_ in Fig. 2d, for all other species in Extended Data Fig. 4). To fathom these three dynamic processes and their corresponding time constants, we analyzed the kinetic footprints with global multiexponential fitting (Fig. 2d, details in Methods section) [44; 45; 7; 48]. Our analysis demonstrates that there are four states of molecular rearrangement upon photo-perturbation, populated with time constants \(\tau_{early}\), \(\tau_{mid}\), and \(\tau_{late}\). The time intervals in which the three observed processes take place are very narrow for the ten homologs we investigated, evidencing that not only the structure and function of MCL-1 are conserved across a wide and diverse range of today's living animals (Fig. 1) but also the underlying protein dynamics (Fig. 2e). This stands in stark contrast to the significant alterations that we observe for the primary structure of the protein homologs (Fig. 1c). When we plot the time constants against an evolutionary time scale (Fig. 2f), we find that the processes populated with \(\tau_{mid}\) correlate with the evolutionary divergence. In contrast, we did not detect similar protein dynamic drifts for the other two time constants, showing an absence of correlation of the early and late protein response with evolutionary divergence. 
On the other hand, if the time constants are plotted as a function of the experimental binding affinities, it becomes evident that the processes populated with \(\tau_{late}\) are strongly correlated with the protein's affinity (Fig. 2g). We specified which parts of the protein complex contribute to which process by recording kinetic footprints for \({}^{13}\)C-\({}^{15}\)N-labelled MCL-1 of _M. musculus_ in complex with non-labelled PUMA BH3 (Extended Data Fig. 5). Separating in this way the protein from the peptide response, we found that time constant \(\tau_{early}\) (= 0.9-3.5 ns) can be attributed to the \(\alpha\)-helical unfolding of the PUMA BH3 peptide (Fig. 2h). The time constant \(\tau_{mid}\) (= 21-50 ns) corresponds to spectral features which shift \(\approx\) 50 cm\({}^{-1}\) for isotope-labelled MCL-1 (Extended Data Fig. 5), and can thus be traced back to an initial response of MCL-1 that potentially allows it to rearrange and cope with the conformational destabilization originating from the binding pocket. Apparently, mammalian homologs exhibit an earlier MCL-1 adaptation upon destabilization than non-mammalian vertebrate species, which in turn respond earlier than non-_Vertebrata_ (Fig. 2f, red). Finally, the terminal time constant \(\tau_{late}\) (= 0.7-3.6 \(\mu\)s) corresponds to mutual rearrangements in the whole complex. The results are in line with previous observations for the isotope-labelled human MCL-1/BIM complex [7]. From our results, one might speculate whether MCL-1's initial response (\(\tau_{mid}\)) has met with less selective pressure in the past, causing it to drift. This hypothesis is supported by the absence of any discernible correlation between \(\tau_{mid}\) and the protein affinity (Fig. 2g), implying that the function of the protein is seemingly not entangled with this dynamic process. In contrast, a robust correlation between \(\tau_{late}\) and the \(K_{D}\) (Fig. 2g) indicates that the late mutual rearrangements of MCL-1 and PUMA BH3 (in the microsecond regime) are connected to the function of the protein. From our observations, it seems that the relationship between the late dynamic response and the protein affinity is conserved and cannot be inferred from the evolutionary separation of species; other factors must be at play. 
Figure 2: The conservation of ultrafast protein dynamics in MCL-1. (a) The protein MCL-1 in complex with the photoswitchable PUMA BH3 peptide. (b) Transient infrared spectroscopy of the photo-perturbed MCL-1/PUMA BH3 complex results in kinetic footprints for all homologs, exemplarily displayed in (c) for _Mus musculus_. The symbols serve as reference points for explanations in the main text. (d) Three dominating phases of increased dynamic activity are assessed (early-, mid-, late phase; dashed lines). Global multiexponential fitting with three time constants yields fits (red/blue) that cover the raw data (grey) well. Evolution-associated difference spectra [44; 45] (lower panel) were calculated for state S\({}_{1}\) (red), S\({}_{2}\) (yellow), S\({}_{3}\) (green), and for S\({}_{t}\) (blue) with time constants \(\tau_{early}\), \(\tau_{mid}\), and \(\tau_{late}\). (e) The difference spectra of all homologs display a high degree of similarity. (f) Time constants of increased dynamic activity \(\tau_{early}\), \(\tau_{mid}\), and \(\tau_{late}\) against evolutionary divergence in million years, Ma. (g) Time constants \(\tau_{early}\), \(\tau_{mid}\), and \(\tau_{late}\) against MCL-1’s affinity for PUMA BH3. Data in (f) and (g) are displayed with linear fits \(\pm\) standard deviation (yellow) and correlation coefficients \(r\) (Pearson). (h) Isotope labeling (Extended Data Fig. 5) helped to separate the signal contribution of MCL-1 and PUMA BH3 spatially and temporally. The time constants were assigned to dynamic processes in the protein complex (schematic overview). 
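The Pearson correlation coefficients quoted in this section (e.g. \(r=0.39\) for binding free energy versus divergence time) come from a standard computation. The sketch below shows that computation on made-up illustrative numbers, not on the measured data; the values for `divergence` and `tau_mid` are assumptions for demonstration only.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Illustrative (not measured) values: divergence times in Ma and
# hypothetical tau_mid values in ns for six homologs
divergence = [90, 300, 320, 350, 430, 700]
tau_mid    = [21, 28, 30, 33, 41, 50]
print(f"r = {pearson_r(divergence, tau_mid):.2f}")
```

A strongly monotonic pairing like the one above yields \(r\) close to 1; uncorrelated quantities yield \(r\) near 0, which is how the absence of a drift for \(\tau_{early}\) and \(\tau_{late}\) is quantified.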
Irrespective of how to evaluate the given correlations, what remains truly remarkable is that our results provide an unprecedented opportunity to gain insights into the speed and extent of the impact of evolution on dynamical processes. ## Conclusion MCL-1 is a critical player in apoptosis [29], not only in human beings but also in a great variety of animals [30]. By experimentally studying ten MCL-1 homologs and their interactions with a photo-switchable ligand PUMA BH3, we gained valuable insights into the dynamics of the proteins on a broader evolutionary time scale. Using time-resolved infrared spectroscopy, we successfully recorded the kinetic footprints of the MCL-1/PUMA BH3 complex and analytically compared them - similar to bones, skulls, and footprints in the classic field of paleontology [49; 50], or protein structures and genetic information in its molecular form [51; 52; 53]. Our findings reveal a remarkable degree of conservation for the protein dynamics across the homologs, highlighting the importance of these processes in preserving their anti-apoptotic function over a span of nearly a billion years. Of particular interest is the correlation we observed between one of these ultrafast processes and the evolutionary divergence among the protein homologs, a drift in protein dynamics in the nanosecond range. This discovery challenges the prevailing focus on resolving protein structures [22] and dynamics in equilibrium [16] or analyzing genomic data [53] to understand evolution. Instead, our work highlights the importance of considering nanosecond protein dynamics as a crucial factor in unraveling the evolutionary history of these proteins. With this approach, we build a bridge between the shortest (1 ns = 10\({}^{-9}\) s) and the largest conceivable timescales in living matter (300 Ma \(\approx\) 10\({}^{16}\) s). Overall, our study defines a starting point for exploring the dynamics of countless other proteins with varying degrees of conservation. 
By investigating different systems that are more or less conserved, we can gain valuable insights into the extent of evolution's impact on nanosecond processes, and how these rapid processes translate to slow-paced protein function. ## Methods ### Phylogeny and Bioinformatics From the countless species in the tree of life we chose ten MCL-1 homolog sequences (Fig. 1b). Alongside the _Homo sapiens_ homolog, our selection encompasses a variety of species, including mammalian (_Mus musculus_, _Bos taurus_), avian (_Gallus gallus_), reptile (_Alligator mississippiensis_), amphibian (_Xenopus laevis_), bony fish (_Danio rerio_), and other more distantly related eumetazoan homologs (_Petromyzon marinus_, _Lingula unguis_). Notably, we also incorporated a homolog from _Hydra vulgaris_, one of the most distantly related organisms known to exhibit BCL-2 regulated apoptosis [35]. The curated selection represents species whose last common ancestors existed at quasi-equidistant intervals spanning nearly a billion years of evolutionary history. To assemble our dataset, we accessed amino acid sequences from the Uniprot database. All entries shared the common identifier "MCL-1" in their title or description. The amino acid sequence for _P. marinus_ was added to the selection with the help of Jeramiah Smith (gene on Chr52: 9161036..9167581, + strand; annotated: PMZ_0059412-RA) [38]. The sequences were aligned to the human variant (soluble domain, \(\Delta\)N-\(\Delta\)C aa 171-327 [33]) and harmonized in length (\(\approx\)150-160 aa) (Extended Data Fig. 1a). The chosen sequences (refer to Extended Data Tab. 1) were initially validated by predicting their structure using AlphaFold and RosettaFold (see below) and aligning them with experimental structures from _Homo sapiens_ and _Mus musculus_. From the sequences, we generated a multiple sequence alignment using Clustal Omega (EMBL-EBI) [54] (Fig. 1a). The phylogeny in Fig. 
1b was obtained from the _TimeTree_ database ([http://timetree.org](http://timetree.org)) [5]. It was not computed from the investigated MCL-1 sequences. Instead, the given phylogeny was constructed from median and adjusted divergence times which were estimated by _TimeTree_ based on values from an abundance of published studies. The divergence times, always related to _H. sapiens_ and tabulated in Extended Data Tab. 1 alongside their corresponding confidence intervals, reflect the most current scientific understanding (June 2023). In figures Fig. 1c-f and Fig. 2f, the divergence time is given in million years, Ma. The experimental structures of MCL-1/PUMA were retrieved from the PDB (_Mus musculus_: 2ROC; _Homo sapiens_: 6QFM). In addition, we predicted the structures of all MCL-1 homologs with AlphaFold [27] and RosettaFold [28]. We used ColabFold [55] to generate AlphaFold-predicted structures, and the Robetta server [28] for RosettaFold-predicted structures, both with default parameters. To estimate the structural similarity between all protein pairs, we performed an all-against-all alignment of the predicted structures and computed the TM score and root-mean-square deviation (RMSD) of each protein pair using TM-align [41]. For both AlphaFold and RosettaFold, we selected the top-ranked structure out of the five predictions for downstream analyses. We evaluated the quality of the predicted structures using the AlphaFold predicted local distance difference test (pLDDT), a per-residue confidence metric which estimates how well the predicted structure would match an experimental one and which has been shown to be well-calibrated [27]. All our predicted structures have high average pLDDT values, ranging from 0.83 to 0.93, indicating good quality predictions. ### Protein preparation To examine protein function and dynamics, we expressed ten different MCL-1 homologs using an _Escherichia coli_ BL21 expression strain (Fig. 2a). 
Initially, the bacterial cells were transformed with a pET-30a(+) plasmid containing the corresponding MCL-1 homolog gene, using electroporation. Positive clones were selected through Kanamycin resistance. For standard expression, bacterial cultures were cultivated in lysogeny broth medium until reaching an optical density of OD\({}_{600}=0.6\). The expression was induced by adding 700 \(\mu\)M IPTG, followed by incubation at 30 \({}^{\circ}\)C for 20 hours. Cell harvest was carried out through centrifugation (3000 x\(g\)). In order to generate heavy, uniformly \({}^{13}\)C\({}^{15}\)N-labeled MCL-1, bacterial cultures were grown in minimal medium supplemented with solely heavy carbon and nitrogen sources. The cells were cultivated to an OD\({}_{600}\) of 0.6, induced with 1 mM IPTG, and then further incubated at 30 \({}^{\circ}\)C. The expression was stopped after 4 hours with cell harvest as described above. Cell lysis was achieved by subjecting the harvested cells to sonication (20 kHz, 4 x 1 min pulses). The lysed cell suspension was purified using Ni-affinity chromatography and a His\({}_{6}\)-Tag located at the N-terminus of the protein. Purification was carried out under native conditions. The N-terminal His\({}_{6}\)-Tag was removed by 3C protease cleavage. Throughout this study, all analytical procedures were performed in a sample buffer composed of 50 mM Tris (pH 8) and 125 mM NaCl. Mass spectrometry was used to assess the protein's integrity and sample purity. For long-term storage, the samples were kept at -80 \({}^{\circ}\)C. In total, we could express the homologs of ten species given in the main text (Extended Data Fig. 6). Under identical conditions, however, we could not express _Ornithorhynchus anatinus_, _Orchesella cincta_, and _Acanthaster planci_ homologs at adequate concentrations. 
### Peptide preparation PUMA BH3 (EEQWAREIGAQLRCMADDLNCQYERV) was synthesized using solid-phase peptide synthesis on a Liberty 1 peptide synthesizer (CEM corporation, Matthews, NC, USA). In this study, the peptide was deliberately modified by introducing two mutations - replacing Arg143 and Ala150 with Cys residues - compared to the native mammalian version. These Cys residues were incorporated distal to the hydrophobic binding interface, to enable the covalent linkage of a photoswitchable azobenzene moiety. To achieve this linkage, the water-soluble photoswitch (3,3'-bis(sulfonato)-4,4'-bis(chloroacetamido)azobenzene) [56] and the peptide with reduced Cys residues were incubated together in 20 mM Tris (pH 8.5) at a temperature of 50 \({}^{\circ}\)C, under continuous stirring for a duration of 20 hours. Hereafter, the reaction product underwent purification using both anion exchange and reversed-phase chromatography (C18 10\(\mu\)m) to isolate the successfully linked peptide. For final preparation, the buffer of the isolated linked peptide was exchanged through dialysis against the sample buffer (50 mM Tris pH 8, 125 mM NaCl). The linkage's success, as well as the peptide's purity and integrity, were verified via mass spectrometry. ### Circular dichroism spectroscopy The expressed MCL-1 homologs have in common that they contain eight \(\alpha\)-helical elements [33], and exhibit a circular dichroism spectrum that is typical for \(\alpha\)-helical structures (Extended Data Fig. 2b, yellow). In contrast, their peptide ligand PUMA BH3 is intrinsically disordered in isolation [57] (Extended Data Fig. 2b, grey). When in complex with MCL-1, PUMA BH3 assumes an \(\alpha\)-helical shape (Extended Data Fig. 2b, black). 
We utilize circular dichroism spectroscopy to accomplish two distinct objectives: (i) to evaluate the \(\alpha\)-helical content of the MCL-1 and PUMA BH3 complex at a constant concentration, thereby assessing whether they are correctly folded, and (ii) to generate binding curves and determine dissociation constants (\(K_{D}\)) for all analyzed MCL-1 homologs. To record binding curves and assess the \(K_{D}\) values, we exploited the nature of PUMA BH3, which is intrinsically disordered in solution and only exhibits an \(\alpha\)-helical secondary structure when bound by MCL-1's binding groove. Hence, for an increasing concentration of bound PUMA BH3 and a constant concentration of MCL-1, the \(\alpha\)-helical content added by titration reflects the fraction of bound peptide. For the first aspect (i), a quartz glass cuvette with a 1 mm path length was employed, and the sample concentration was maintained at 20 \(\mu\)M. We measured the spectrum between 200-260 nm at room temperature. Hereby, we examined the \(\alpha\)-helical content of the MCL-1 and PUMA BH3 complex, which served as a control for their correct structural conformation (displayed in Extended Data Fig. 2b). For the second aspect (ii), MCL-1 was brought to a concentration of 2 \(\mu\)M. A quartz glass cuvette with a path length of 1 cm was used, and continuous stirring was maintained during the spectroscopic measurements at room temperature. To record the binding curves, we titrated both the linked and unlinked forms of the PUMA BH3 peptide to the MCL-1 homolog, offering a comprehensive understanding of the binding affinity of photoswitchable and non-photoswitchable complexes. The circular dichroism was recorded at 222 nm as a function of increasing PUMA BH3 concentration. In both scenarios (i) and (ii), measurements involving the photoswitchable PUMA BH3 were conducted for both the _cis_-state (achieved through illumination with a 375 nm laser) and the dark-adapted _trans_-state. 
By recording the \(\alpha\)-helical content at 222 nm as a function of increasing PUMA BH3 concentration, we obtained binding curves for all MCL-1 homologs. In order to calculate the dissociation constant \(K_{D}\), we fitted the data to a two-state binding equilibrium [58; 59]: \[K_{D}=\frac{([M]-[MP])\times([P]-[MP])}{[MP]} \tag{1}\] where [M] is the initial concentration of MCL-1, [P] is the initial concentration of PUMA BH3 given to the solution, and [MP] is the concentration of the protein-peptide complex. For a constant [M] = 2 \(\mu\)M and a variable [P], the fraction of bound peptide can be understood as: \[\text{Fraction bound}=\frac{([\text{M}]+[\text{P}]+\text{K}_{D})-\sqrt{([ \text{M}]+[\text{P}]+\text{K}_{D})^{2}-4\times[\text{M}]\times[\text{P}]}}{2 \times[\text{M}]} \tag{2}\] The covalently bound photoswitch in the _cis_-state stabilizes PUMA BH3 inside the binding pocket (Fig. 1a), with significantly lower \(K_{D}\) values for all of the homologs (Extended Data Fig. 2m). For PUMA BH3 in the _cis_ state, _Homo sapiens_, _Bos taurus_, and _Alligator mississippiensis_ homologs showed the highest affinities, with \(K_{D}\) values in the low nanomolar regime (\(<\)10 nM), a region that was classified as physiological for wild-type PUMA [31]. Switching the photoswitch from its _cis_- to its _trans_ configuration results in a loss of \(\alpha\)-helicity (Extended Data Fig. 2n) and in the destabilization of PUMA BH3. ### Transient infrared spectroscopy MCL-1 and PUMA BH3 were mixed in a 1:1 ratio prior to the spectroscopic experiment. To ensure high signal strength in transient infrared spectroscopy, both the protein and peptide were brought to concentrations of 600 \(\mu\)M each in the final sample. The overall sample volume was 800 \(\mu\)L. Considering concentrations \(>\)500 \(\mu\)M, it is expected that the PUMA BH3 peptide will be predominantly bound within MCL-1's binding pocket, as illustrated in Extended Data Fig. 2. 
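Returning briefly to the binding analysis of the previous subsection: the two-state model of equations (1) and (2) can be evaluated and fitted numerically. The sketch below is a minimal, numpy-only illustration on synthetic titration data with an assumed "true" \(K_{D}\); the concentrations, noise level, and grid are chosen for demonstration and do not reproduce the actual fitting software used in the study.

```python
import numpy as np

def fraction_bound(P, M, KD):
    """Fraction of bound peptide for a two-state binding equilibrium
    (cf. equation (2)): total protein M, total peptide P, dissociation
    constant KD, all in the same concentration unit."""
    s = M + P + KD
    return (s - np.sqrt(s**2 - 4.0 * M * P)) / (2.0 * M)

M = 2.0                          # constant MCL-1 concentration, uM
P = np.linspace(0.1, 10.0, 25)   # titrated PUMA BH3 concentrations, uM

# Synthetic "binding curve" for an assumed true KD of 0.48 uM plus noise
rng = np.random.default_rng(0)
data = fraction_bound(P, M, 0.48) + rng.normal(0.0, 0.005, P.size)

# Least-squares fit of KD by a simple grid search
grid = np.linspace(0.01, 5.0, 5000)
residuals = [np.sum((data - fraction_bound(P, M, kd))**2) for kd in grid]
kd_fit = grid[int(np.argmin(residuals))]
print(f"fitted KD = {kd_fit:.2f} uM")
```

The discriminant \((M+P+K_{D})^{2}-4MP=(M-P)^{2}+K_{D}^{2}+2K_{D}(M+P)\) is always positive, so the square root is well defined for any physical concentrations.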
To exclude H\({}_{2}\)O from spectroscopic experiments, the employed buffer was exchanged for a corresponding buffer containing D\({}_{2}\)O. Stringent precautions were taken to avert H\({}_{2}\)O contamination by preserving the sample within a nitrogen atmosphere devoid of water vapor. For pump-probe measurements, a pair of electronically synchronized 2.5 kHz Ti:sapphire oscillator/regenerative amplifier femtosecond laser systems (Spectra Physics) were employed, offering a maximal delay of 45 \(\mu\)s [25; 60]. One of these laser systems, featuring frequency-doubled pulses (420 nm, 3 \(\mu\)J per pulse, focused to an approximate beam diameter of 140 \(\mu\)m within the sample, and stretched to \(\sim\)60 ps to minimize sample deposition on the sample cell windows), was used to induce the _cis_- to _trans_-isomerization of the photoswitch. The second laser system was applied to generate infrared probe pulses via an optical parametric amplifier (100 fs, spot size 110 \(\mu\)m, center wavenumber 1660 cm\({}^{-1}\)). To ensure a consistent sample environment, the sample was continuously circulated within a closed-cycle flow cell system. This system consisted of a reservoir and a CaF\({}_{2}\) cell featuring a 50 \(\mu\)m optical path length. Before entering the measurement cell, the sample was irradiated in a pre-illumination step using a 375 nm continuous wave diode laser (90 mW, CrystaLaser), in order to optimally prepare the sample with \(>\)85% in the _cis_-state. ### Data analysis From time-resolved infrared measurements, we obtained kinetic footprints in the form of 2D data sets \(d(\omega_{i},t_{j})\) as a function of probe frequency \(\omega_{i}\) and pump-probe delay time \(t_{j}\) (Fig. 2c, Extended Data Fig. 3). For each homolog, we subjected the 2D dataset to a global multiexponential fitting [45], operating under the premise that the investigated system can be understood as interconverting discrete states with time-invariant spectra. 
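A global multiexponential fit of this kind (the model is given below as equation (3)) can be sketched with a variable-projection-style approach: for any trial set of time constants, the amplitudes \(a(\omega_{i},\tau_{k})\) enter the model linearly and can be solved by ordinary least squares. The following numpy-only snippet is a minimal illustration on synthetic two-exponential data; the time constants, channel count, grid, and noise are assumed for demonstration and do not reproduce the actual analysis pipeline.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
t = np.logspace(-0.5, 4.0, 60)           # delay times, ns
w = np.arange(10)                        # 10 probe-frequency channels

def design(taus):
    """Columns of the linear model: constant offset a0 and
    -exp(-t/tau) for each time constant (cf. equation (3))."""
    return np.column_stack([np.ones_like(t)] +
                           [-np.exp(-t / tau) for tau in taus])

# Synthetic data: offset minus two exponential phases, plus noise
true_taus = (3.0, 800.0)                 # ns, assumed
A_true = rng.normal(size=(len(w), 3))    # offset + 2 amplitude spectra
data = design(true_taus) @ A_true.T
data += rng.normal(0.0, 0.01, data.shape)

# Variable projection: grid-search the taus, solve amplitudes linearly
grid = np.logspace(0.0, 3.5, 40)
best = None
for taus in product(grid, grid):
    if taus[0] >= taus[1]:
        continue                         # keep time constants ordered
    X = design(taus)
    A, *_ = np.linalg.lstsq(X, data, rcond=None)
    cost = np.sum((data - X @ A)**2)
    if best is None or cost < best[0]:
        best = (cost, taus)
print("fitted time constants (ns):", best[1])
```

In the real analysis the number of exponential terms is kept minimal and the time constants are refined continuously rather than on a grid, but the linear-in-amplitudes structure exploited here is the same.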
We employed multiexponential functions with amplitudes \(a(\omega_{i},\tau_{k})\) and a global set of time constants \(\tau_{k}\) for fitting the experimental data [61; 48; 62]: \[d(\omega_{i},t_{j})=a_{0}(\omega_{i})-\sum_{k}a(\omega_{i},\tau_{k})e^{-t_{j}/ \tau_{k}}. \tag{3}\] The time constants \(\tau_{k}\) were treated as free fitting parameters, with the constraint of a minimal number of exponential terms. Based on observations with similar systems [7; 32], we excluded data before 300 ps to prevent the influence of the pump pulse (60 ps pulse length), which results in a strong "heat signal" at 100 ps, induced by azobenzene photoisomerization, and which can be universally observed for this kind of experiment [7; 24; 32]. Three time constants \(\tau_{early}\), \(\tau_{mid}\), and \(\tau_{late}\) were needed to adequately fit the data, dissecting the dynamic response into an early-, mid-, and late phase (Extended Data Tab. 2). The one exception is _D. rerio_, which required a fourth time constant \(\tau_{D.rerio}\) = 300 ns to adequately fit the data. Under the assumption of a sequential, unidirectional process with four states S\({}_{1}\), S\({}_{2}\), S\({}_{3}\), S\({}_{t}\) and three time constants \(\tau_{early}\), \(\tau_{mid}\), \(\tau_{late}\) connecting them, we calculated concentration profiles for each state as well as the corresponding evolution-associated difference spectra [44], which are depicted in Fig. 2d,e. Commencing with state S\({}_{1}\), all subsequent evolution-associated difference spectra exhibited a blue shift from 1645 cm\({}^{-1}\) to around 1675 cm\({}^{-1}\). In line with our observations for the raw data, a very distinct positive feature at 1660 cm\({}^{-1}\) was detected in the evolution-associated difference spectra of the latest two states (Fig. 2d,e, green and blue). With the help of isotope labeling [63] (Fig. 
5), we could assign this distinct spectral maximum, populated with \(\tau_{mid}\), to the initial response of MCL-1 upon photo-destabilization of its ligand. The early response at \(\tau_{early}\) exclusively originates from the unfolding of PUMA BH3. The terminal, late response at \(\tau_{late}\) results from mutual, heterogeneous rearrangements in MCL-1 and PUMA BH3. ## Abbreviations BH3, BCL-2 Homology Domain 3; MCL-1, Myeloid Cell Leukemia 1. ## End Notes **Acknowledgements** We thank Markus B. Glutz for the synthesis of the peptide and Serge Chesnov from the Functional Genomics Center Zurich for their work on the mass spectrometry and amino acid analysis. We thank Jeramiah Smith, University of Kentucky, who provided us with the sequence of the _Petromyzon marinus_ homolog. The work has been supported by the Swiss National Science Foundation (SNF) through the Sinergia grant CRSII5_213507. **Author contributions** P.J.H. conceived the study. P.J.H. selected homologs and gathered sequence information, performed microbiological work, protein expression and purification, azobenzene crosslinking, and the purification of the crosslinked peptide. P.J.H. performed binding studies with circular dichroism spectroscopy. J.R. and P.J.H. performed time-resolved infrared spectroscopic measurements. C.R. performed structure predictions and computed structural similarity for the homologs. P.H. conceived and built the experimental pump-probe setup. P.H. developed the analysis tools for the experimental data evaluation. P.J.H. analysed the experimental data, with input from P.H. and J.R. P.H. supervised the project and acquired the funding. P.J.H. prepared the figures. P.J.H. wrote the manuscript with strong contribution from P.H. and minor contributions from all other authors. **Competing interests** The authors declare no competing financial interest. **Additional Information** **Correspondence and requests for materials** should be addressed to Philipp J. Heckmeier. 
**Data Availability:** The data that support the findings of this study are openly available in Zenodo (the link will be provided at the proof stage.)
2309.11928
Video Scene Location Recognition with Neural Networks
This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approach is to select a set of frames from each scene, transform them by a pre-trained single-image pre-processing convolutional network, and classify the scene location with subsequent layers of the neural network. The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series. We have investigated different neural network layers to combine individual frames, particularly AveragePooling, MaxPooling, Product, Flatten, LSTM, and Bidirectional LSTM layers. We have observed that only some of the approaches are suitable for the task at hand.
Lukáš Korel, Petr Pulc, Jiří Tumpach, Martin Holeňa
2023-09-21T09:42:39Z
http://arxiv.org/abs/2309.11928v1
# Video Scene Location Recognition with Neural Networks ###### Abstract This paper provides an insight into the possibility of scene recognition from a video sequence with a small set of repeated shooting locations (such as in television series) using artificial neural networks. The basic idea of the presented approach is to select a set of frames from each scene, transform them by a pre-trained single-image pre-processing convolutional network, and classify the scene location with subsequent layers of the neural network. The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series. We have investigated different neural network layers to combine individual frames, particularly AveragePooling, MaxPooling, Product, Flatten, LSTM, and Bidirectional LSTM layers. We have observed that only some of the approaches are suitable for the task at hand. + Footnote †: Copyright ©2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). ## 1 Introduction People watching videos are able to recognize where the current scene is located. When watching some film or serial, they are able to recognize that a new scene is set in a place they have already seen. Finally, people are able to understand scene hierarchy. All this supports human comprehensibility of videos. The role of location identification in scene recognition by humans motivated our research into scene location classification by artificial neural networks (ANNs). A more ambitious goal would be to make a system able to remember unknown video locations and, using this data, to identify video scenes located in those locations and mark them with the same label. This paper reports a work in progress in that direction. It describes the employed methodology and presents the first experimental results obtained with six kinds of neural networks. The rest of the paper is organized as follows.
The next section is about existing approaches to solve this problem. Section 3 is divided into two parts. The first one is about data preparation before their usage in ANNs. The second one is about the design of the ANNs in our experiments. Finally, Section 4, the last section before the conclusion, shows the results of our experiments with these ANNs. ## 2 ANN-Based Scene Classification The problem of scene classification has been studied for many years. There are many approaches based on neural networks, where an ANN, using a huge amount of images, learns to recognize the type of a given scene (for example, a kitchen, a bedroom, etc.). For this case several datasets are available. One example is [11], but it does not specify locations, so this and similar datasets are not usable for our task. However, our classification problem is different. We want to train an ANN able to recognize a particular location (for example "Springfield-EverGreenTerrace-742-floor2-bathroom"), which can be recorded by a camera from many angles (typically, some object can be occluded by other objects from some angles). One approach using an ANN to solve this task is described in [1], where convolutional networks were used. The difference to our approach is on the one hand in the extraction and usage of video images, on the other hand in the types of ANN layers. Another approach is described in [4]. The authors propose a high-level image representation, called Object Bank, where an image is represented as a scale-invariant response map of a large number of pre-trained generic object detectors. Leveraging the Object Bank representation, good performances on high-level visual recognition tasks can be achieved with simple off-the-shelf classifiers such as logistic regression and linear SVM. ## 3 Methodology ### Data Preparation Video data consists of large video files. Therefore, the first task of video data preparation consists in loading the data that is currently needed.
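This just-in-time loading can be organized as a Python generator; the sketch below is our own minimal illustration (the scene-table layout and the `load_frame` callback are assumptions — the real pipeline reads video files with OpenCV):

```python
def scene_generator(scene_table, load_frame):
    """Yield (frames, label) for one scene at a time; frames are loaded
    lazily and can be released as soon as the consumer is done with them."""
    for row in scene_table:
        frames = [load_frame(row["path"], row["first"] + i)
                  for i in range(row["count"])]
        yield frames, row["label"]

# Usage with a dummy loader that just records which frames were requested:
table = [{"path": "ep01.mkv", "first": 0, "count": 3, "label": "kitchen"}]
frames, label = next(scene_generator(table, lambda path, idx: (path, idx)))
```

Because the generator yields one scene at a time, memory for a scene's frames can be reclaimed before the next scene is loaded.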
We have evaluated the distribution of the data used for ANN training. We have found there are some scenes with low occurrence, whereas others occur up to 30 times more frequently compared to them. Hence, the second task of video data preparation is to increase the uniformity of their distribution, to prevent biasing the ANN to the most frequent classes. This is achieved by undersampling the frequent classes in the training data. The input consists of video files and a text file. The video files are divided into independent episodes. The text file describes the scenes; every row contains information about one scene. A scene is understood as a sequence of frames that is not interrupted by frames with a different scene location label. Every row contains a relative path to the source video file, the frame number where the scene begins and the count of its frames. Figure 1 outlines how frames are extracted and prepared for the ANNs. For ANN training, we select from each target scene a constant count of 20 frames (denoted # frames in Figure 1). To get the most informative representation of the considered scene, frames for sampling are taken from the whole length of the scene. This, in particular, prevents selecting frames only within a short time interval. Each scene has its own frame distance computed from its frame count: \[SL=\frac{SF}{F}\] where SF is the count of scene frames, F is the considered constant count of selected frames and SL is the distance between two selected frames in the scene. After frame extraction, every frame is reshaped to an input 3D matrix for the ANN. Finally, the reshaped frames are merged to one input matrix for the neural network. ### Used Neural Networks and Their Design Our first idea was to create a complex neural network based on different layers. However, there were too many parameters to train in view of the amount of data that we had. Therefore, we have decided to use transfer learning from some pretrained network.
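The frame-selection rule \(SL=SF/F\) described in the data-preparation step can be sketched as follows (a minimal illustration; the function name and scene representation are our own):

```python
def select_frames(scene_first, scene_frame_count, n_selected=20):
    """Spread n_selected frame indices over the whole scene, with spacing
    SL = SF / F between consecutive selected frames."""
    sl = scene_frame_count / n_selected   # SL = SF / F
    return [scene_first + int(k * sl) for k in range(n_selected)]

# A 600-frame scene starting at frame 1000 yields every 30th frame:
picked = select_frames(scene_first=1000, scene_frame_count=600)
```

Sampling across the whole scene in this way avoids concentrating all selected frames in one short time interval.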
Because our data are actually images, we considered only ANNs pretrained on image datasets, in particular ResNet50 [9], ResNet101 [9] and VGGnet [2]. Finally, we have decided to use VGGnet due to its small size. Hence, the ANNs which we trained on our data are composed of two parts. The first part, depicted in Figure 2, is based on the VGGnet. At the input, we have 20 frames (resolution \(224\times 224\), BGR colors) from one scene. This is processed by a pretrained VGG19 neural network without the two top layers. The two top layers were removed due to transfer learning. Its output is a vector with size 4096. For the 20 input frames we have 20 vectors with size 4096. These vectors are merged to a 2D matrix with size \(20\times 4096\). For the second part, forming the upper layers of the final network, we have considered six possibilities: a product layer, a flatten layer, an average pooling layer, a max pooling layer, an LSTM layer and a bidirectional LSTM layer. All of them, as well as the VGGnet, will be described below. Each of the listed layers is preceded by a Dense layer. The Dense layer returns a matrix of size \(20\times 12\), where 12 is equal to the number of classes. Each model processes this output differently. VGGnet. The VGGNets [2] were originally developed for object recognition and detection. They have deep convolutional architectures with smaller sizes of convolutional kernel \((3\times 3)\), stride \((1\times 1)\), and pooling window \((2\times 2)\). There are different network structures, ranging from 11 layers to 19 layers. The model capability increases when the network is deeper, but this imposes a heavier computational cost. We have used the VGG19 model (VGG network with 19 layers) from the Keras library in our case. This model [3] won the 1st and 2nd place in the 2014 ImageNet Large Scale Visual Recognition Challenge in the 2 categories called **object localization** and **image classification**, respectively.
It achieves a 92.7% top-5 test accuracy on the ImageNet dataset, which contains 14 million images belonging to 1000 classes. The architecture of the VGG19 model is depicted in Figure 3. #### 3.2.1 Product array In this approach, we apply a product array layer to all output vectors from the dense layer. A Product array layer computes the product of all values in a chosen dimension of an n-dimensional array and returns an n-1-dimensional array. A model with a product layer is outlined in Figure 4. Figure 1: Input preparation for a neural network Figure 2: First, untrainable part of our neural network, where the Input Layer represents a frame with resolution \(224\times 224\) in BGR colors and the output is a vector of length 4096, i.e., the output of the VGG19 network without its last two layers The output from a Product layer is one number for each class, i.e. scene location, so our result is a vector with 12 numbers. It returns a probability distribution over the set of scene locations. #### 3.2.2 Flatten In this approach, we apply a flatten layer to all output vectors from the dense layer. A Flatten layer creates one long vector from a matrix, such that all its rows are concatenated in sequence. A model with a flatten layer is outlined in Figure 5. After the input and a dense layer, a flatten layer follows, which returns a long vector with \(12*20\) numbers in this case. It is followed by a second dense layer. Its output has again a dimension equal to the number of classes and it returns a probability distribution over the set of scene locations. Figure 4: Trainable part of the neural network based on a product layer Figure 5: Trainable part of the neural network based on a flatten layer Figure 3: Architecture of the used VGG19 model [10]; in our network it is used without the FC1, FC2 and Softmax layers #### 3.2.3 Average Pooling In this approach, we apply average pooling to all output vectors from the dense layer part of the network (Figure 6).
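As a shapes-only numpy illustration (our own sketch, not the authors' code) of how these combination layers act on the \(20\times 12\) matrix of per-frame class scores produced by the dense layer — max pooling is included for comparison, random numbers stand in for real scores, and the softmax normalization of the final dense layers is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((20, 12))   # 20 frames x 12 scene-location classes

prod = scores.prod(axis=0)      # product layer: one value per class
avg = scores.mean(axis=0)       # average pooling over the 20 frames
mx = scores.max(axis=0)         # max pooling over the 20 frames
flat = scores.reshape(-1)       # flatten: 20*12 = 240 values, fed to a
                                # second dense layer in the real model
```

The product, average and max variants all collapse the frame dimension directly to one score per class, while flatten keeps all \(240\) values and leaves the combination to a second trainable dense layer.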
An average-pooling layer computes the average of values assigned to subsets of its preceding layer such that: * they partition the preceding layer, i.e., that layer equals their union and they are mutually disjoint; * they are identically sized. Taking into account these two conditions, the size \(p_{1}\times\ldots\times p_{D}\) of the preceding layer and the size \(r_{1}\times\ldots\times r_{D}\) of the sets forming its partition determine the size of the average-pooling layer. In this case, the sets forming the average-pooling layer have size \(20\times 1\). Using this size in the average-pooling layer, we again get one number for each class, which yields a probability distribution over the set of scene locations. Apart from average pooling, we have also tried max pooling. However, it led to substantially worse results. Its classification of the scene location was typically based on people or items in the foreground, not on the scene as a whole. Although using the average-pooling layer is easy, it gives acceptable results. The number of trainable parameters of the network is then low, which makes it suitable for our comparatively small dataset. #### 3.2.4 Long Short Term Memory An LSTM layer is used for classification of sequences of feature vectors, or equivalently, multidimensional time series with discrete time. Alternatively, that layer can be also employed to obtain sequences of such classifications, i.e., in situations when the neural network input is a sequence of feature vectors and its output is a sequence of classes, in our case of scene locations. LSTM layers are intended for recurrent signal propagation, and differently from other commonly encountered layers, they consist not of simple neurons, but of units with their own inner structure. Several variants of such a structure have been proposed (e.g., [5, 8]), but all of them include at least the following four components: * _Memory cells_ can store values, aka cell states, for an arbitrary time.
They have no activation function, thus their output is actually a biased linear combination of unit inputs and of the values coming through recurrent connections. * _Input gate_ controls the extent to which values from the previous unit within the layer or from the preceding layer influence the value stored in the memory cell. It has a sigmoidal activation function, which is applied to a biased linear combination of the input and recurrent connections, though its bias and synaptic weights are specific and in general different from the bias and synaptic weights of the memory cell. * _Forget gate_ controls the extent to which the memory cell state is suppressed. It again has a sigmoidal activation function, which is applied to a specific biased linear combination of input and recurrent connections. * _Output gate_ controls the extent to which the memory cell state influences the unit output. Also this gate has a sigmoidal activation function, which is applied to a specific biased linear combination of input and recurrent connections, and subsequently composed either directly with the cell state or with its sigmoidal transformation, using a different sigmoid than is used by the gates. Hence, using LSTM layers is a more sophisticated approach compared to simple average pooling. An LSTM layer can keep a hidden state through time with information about previous frames. Figure 7 shows that the input to an LSTM layer is a 2D matrix. Its rows are ordered by the time of frames from the input scene. Every input frame in the network is represented by one vector. The output from the LSTM layer is a vector of the same size as in previous approaches, which returns a probability distribution over the set of scene locations. #### 3.2.5 Bidirectional Long Short Term Memory An LSTM, due to its hidden state, preserves information from inputs that have already passed through it. A unidirectional LSTM only preserves information from the past because the only inputs it has seen are from the past.
A Bidirectional LSTM runs inputs in two ways, one from the past to the future and one from the future to the past. To this end, it combines two hidden states, one for each direction. Figure 8 shows that the input to a bidirectional LSTM layer is the same as the input to an LSTM layer. Every input frame in the network is represented by one vector. The output from the Bidirectional LSTM layer is a vector of the same size as in previous approaches, which returns a probability distribution over the set of scene locations. ## 4 Experiments ### Experimental Setup The ANNs for scene location classification were implemented in the Python language using the TensorFlow and Keras libraries. Neural network training was accelerated using an NVIDIA GPU. The versions of the employed hardware and software are listed in Table 1. For image preparation, OpenCV and Numpy were used. The routine for preparing frames is a generator. It has lower capacity requirements, because data are loaded just in time when they are needed and memory is released after the data have been used for the ANN. All non-image information about inputs (video location, scene information, etc.) is processed in text format by Pandas. We have 17 independent datasets prepared by ourselves from proprietary videos of The Big Bang Theory series, thus the datasets cannot be made public. Each dataset originates from one episode of the series. Each experiment was trained with one dataset, so the results are independent as well; thus we can compare the behavior of the models on different datasets. Our algorithm to select data in the training routine is based on oversampling. It randomly selects a target class and then randomly selects, with replacement, a source scene of that class from the whole training dataset. This algorithm is applied due to an unbalanced proportion of different target classes.
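The oversampling step just described can be sketched as follows (a minimal illustration with invented class and scene names; the real sampler draws from the parsed scene table):

```python
import random

def balanced_scene_sampler(scenes_by_class, seed=0):
    """Endlessly yield (class, scene) pairs: first draw a target class
    uniformly, then draw a scene of that class with replacement."""
    rng = random.Random(seed)
    classes = sorted(scenes_by_class)
    while True:
        cls = rng.choice(classes)
        yield cls, rng.choice(scenes_by_class[cls])

# Even though "hall" has one scene and "kitchen" three, both classes are
# drawn about equally often:
data = {"kitchen": ["s1", "s2", "s3"], "hall": ["s4"]}
sampler = balanced_scene_sampler(data)
draws = [next(sampler)[0] for _ in range(1000)]
```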
Thanks to this method, all targets are distributed equally and the network does not overfit a highly represented class. ### Results The differences between the models considered in the second, trained part of the network were tested for significance by the Friedman test. The basic null hypothesis that the mean classification accuracy for all 6 models coincides was strongly rejected, with the achieved significance \(p=2.8\times 10^{-13}\). For the post-hoc analysis, we employed the Wilcoxon signed rank test with two-sided alternative for all 15 pairs of the considered models, because of the inconsistency of the more commonly used mean-ranks post-hoc test, which was recently pointed out by Benavoli et al. [6]. For correction to multiple hypotheses testing, we used the Holm method [7]. The results of the comparison between the models are included in Table 2. Summary statistics of the predictive accuracy of classification over all 17 episode datasets are in Table 3. Every experiment was performed on every dataset at least 7 times. The table is complemented with results for individual episodes, depicted in box plots. The model with a max-pooling layer had the worst results (Figure 12) of all experiments. Its overall mean accuracy was around 10 %. This is only slightly higher than the random choice, which is \(1/12\). The model was not able to achieve better accuracy than 20 %. Its results were stable and the standard deviation was very low. \begin{table} \begin{tabular}{|l|l|} \hline CPU cores & 2 \\ \hline GPU compute capability & 3.5 and higher \\ \hline OS & Linux 5.4.0 \\ \hline CUDA & 11.3 \\ \hline Python & 3.8.6 \\ \hline TensorFlow & 2.3.1 \\ \hline Keras & 2.4.0 \\ \hline OpenCV & 4.5.2 \\ \hline \end{tabular} \end{table} Table 1: Versions of the employed hardware and software Figure 8: Trainable part of the neural network based on a bidirectional LSTM layer Figure 7: Trainable part of the neural network based on an LSTM layer
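The post-hoc procedure described above (pairwise two-sided Wilcoxon signed-rank tests with Holm correction) can be sketched with scipy; the per-dataset accuracies below are synthetic stand-ins for three of the six models, not the study's actual numbers:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
acc = {                      # synthetic accuracies on 17 datasets
    "Max": rng.normal(0.09, 0.03, 17),
    "Average": rng.normal(0.32, 0.08, 17),
    "BiLSTM": rng.normal(0.48, 0.25, 17),
}
pairs = [("Max", "Average"), ("Max", "BiLSTM"), ("Average", "BiLSTM")]
pvals = [wilcoxon(acc[a], acc[b], alternative="two-sided").pvalue
         for a, b in pairs]

# Holm step-down correction: multiply the i-th smallest p-value by
# (m - i), enforce monotonicity, and clip at 1.
m = len(pvals)
adj = np.empty(m)
running = 0.0
for rank, idx in enumerate(np.argsort(pvals)):
    running = max(running, (m - rank) * pvals[idx])
    adj[idx] = min(1.0, running)
```

The Holm-adjusted p-values in `adj` are then compared against the chosen significance level.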
The model with a flatten layer had slightly better results (Figure 10); it was sometimes able to achieve a high accuracy, but its standard deviation was very high. On the other hand, results for some other episodes were not better than those of the max-pooling model. A better solution is the product model, whose predictive accuracy (Figure 9) was for several episodes higher than 80 %. On the other hand, other episodes had only slightly better results than the flatten model. And it had the highest standard deviation among all considered models. The model based on an average-pooling layer had the most stable results (Figure 11) with good accuracy. Its mean accuracy was 32 %, and for no episode was the accuracy substantially different. The model with a unidirectional LSTM layer had the second highest mean accuracy among the considered models (Figure 13). Its internal memory brings an advantage compared to the previous approaches: its mean accuracy was over 40 %, though with a comparatively high standard deviation. The model with a bidirectional LSTM layer had the highest mean accuracy (Figure 14). It had a similar standard deviation as the one with a unidirectional LSTM, but a mean accuracy of nearly 50 %. ## 5 Conclusion and Future Research This paper provided an insight into the possibility of using artificial neural networks for scene location recognition from a video sequence with a small set of repeated shooting locations (such as in television series). Our idea was to select more than one frame from each scene and classify the scene using that sequence of frames. We used a pretrained VGG19 network without its last two layers. Its outputs were used as an input to the trainable part of our neural network architecture. We have designed six neural network models with different layer types. We have investigated different neural network layers to combine video frames, in particular average-pooling, max-pooling, product, flatten, LSTM, and bidirectional LSTM layers.
The considered networks have been tested and compared on a dataset obtained from The Big Bang Theory television series. The model with a max-pooling layer was not successful; its accuracy was the lowest of all models. The models with a flatten or product layer were very unstable; their standard deviation was very large. The most stable among all models was the one with an average-pooling layer. The models with unidirectional LSTM and bidirectional LSTM had a similar standard deviation of the accuracy. The model with a bidirectional LSTM had the highest accuracy among all considered models. In our opinion, this is because its internal memory cells preserve information in both directions. These results show that models with internal memory are able to classify with a higher accuracy than models without internal memory. Our method may have limitations due to the chosen pretrained ANN and the low dimension of some neural layer parts. In future research, it is desirable to achieve higher accuracy in scene location recognition. This task may also require modifying model parameters or using other architectures, as well as other pretrained models or a combination of several pretrained models. It is also desirable that, if the ANN detects an unknown scene, it remembers it and properly recognizes a scene from the same location next time. ## Acknowledgments The research reported in this paper has been supported by the Czech Science Foundation (GACR) grant 18-18080S. Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA LM2018140) provided within the program Projects of Large Research, Development and Innovations Infrastructures. Computational resources were provided by the ELIXIR-CZ project (LM2018131), part of the international ELIXIR infrastructure.
\begin{table} \begin{tabular}{l r r r r r r r} \hline \hline & Product & Flatten & Average & Max & LSTM & BidirectionalLSTM & SummaryScore \\ \hline Product & X & **16** & \(6\) & **16** & \(5\) & 1 & 44 \\ Flatten & 1 & X & 0 & _10_ & 0 & 0 & 11 \\ Average & _11_ & **17** & X & **17** & 3 & 1 & 49 \\ Max & 1 & \(6\) & 0 & X & 0 & 0 & 7 \\ LSTM & _12_ & **17** & 14 & **17** & X & 3 & 63 \\ BidirectionalLSTM & **16** & **17** & **15** & **17** & _14_ & X & 79 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of accuracy results on all 17 episode datasets. The values in the table are counts of datasets in which the model in the row has higher accuracy than the model in the column. If the difference is not significant in the Wilcoxon test, then the count is in italics. If the difference is significant, then the higher count is in bold. Figure 11: Box plot with results obtained using the average-pooling model \begin{table} \begin{tabular}{l r r r r r} \hline \hline model & mean & std & 25\% & 50\% & 75\% \\ \hline Product & 43.7 & 38.4 & 4.6 & 32.4 & 85.2 \\ Flatten & 23.6 & 30.8 & 1.0 & 5.1 & 39.6 \\ Average & 32.2 & 8.1 & 26.5 & 31.5 & 37.1 \\ Max & 9.3 & 2.9 & 8.1 & 9.3 & 10.9 \\ LSTM & 40.7 & 25.2 & 19.7 & 39.9 & 59.4 \\ BidirectionalLSTM & 47.8 & 25.1 & 29.6 & 50.5 & 67.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Aggregated predictive accuracy over all 17 datasets [%] Figure 10: Box plot with results obtained using the flatten model Figure 9: Box plot with results obtained using the product model Figure 14: Box plot with results obtained using the bidirectional LSTM model Figure 12: Box plot with results obtained using the max-pooling model Figure 13: Box plot with results obtained using the LSTM model
2306.00078
Fundamental bound on topological gap
We provide a universal tight bound on the energy gap of topological insulators by exploring relationships between topology, quantum geometry, and optical absorption. Applications of our theory to infrared absorption near topological band inversion, magnetic circular dichroism in Chern insulators, and topological gap in moir\'e materials are demonstrated.
Yugo Onishi, Liang Fu
2023-05-31T18:00:33Z
http://arxiv.org/abs/2306.00078v3
# Fundamental bound on topological gap ###### Abstract We provide a universal tight bound on the energy gap of topological insulators by exploring relationships between topology, quantum geometry, and optical absorption. Applications of our theory to infrared absorption near topological band inversion, magnetic circular dichroism in Chern insulators, and topological gap in moire materials are demonstrated. A fundamental property of all insulating states of matter is the presence of an energy gap, the minimum amount of energy that can be absorbed by the system. Insulating states are not all equivalent, and can be distinguished by the topological property of the ground state wavefunction. In particular, topologically nontrivial insulators, such as Chern insulators [1; 2], cannot be smoothly connected to trivial atomic insulators without closing the energy gap. In this work, we ask the question: is there a fundamental bound on the energy gap of topological insulators? And we provide an affirmative answer. A general tight bound on the energy gap of topological insulators having finite (spin) Chern numbers is derived by exploring relationships between topology, quantum geometry, and optical absorption. We begin by presenting a new sum rule for optical absorption and magnetic circular dichroism in insulators. Unlike the standard \(f\)-sum rule that relates the optical spectral weight to the charge stiffness [3], our sum rule relates a generalized optical weight--defined as the first negative moment of the absorptive part of optical conductivity-- to the quantum geometry of occupied and excited bands, see Sec. I and II. When the full frequency range \(0<\omega<\infty\) is considered, the generalized optical weight is shown to be a ground state property, and the real part of our sum rule recovers a relation previously derived in the study of electronic polarization in insulators [4]. 
The quantum geometric description of optical conductivity enables us to connect topological invariant and the energy gap. Using the standard and generalized optical sum rules, we discover a general tight bound on the energy gap of topological insulators (Sec. IV and V). Applications of our theory to infrared absorption near topological band inversion (Sec. III), magnetic circular dichroism in Chern insulators (Sec. IV), and topological gap in moire bands in twisted semiconductor bilayers (Sec. VI) are demonstrated. ## I Generalized optical weight With the Kubo formula, we can calculate \(\sigma_{\mu\nu}(\omega)\) for general noninteracting electronic systems. In the present work, we shall mostly consider the optical conductivity of insulators. Then, the optical conductivity \(\sigma_{\mu\nu}(\omega)\) at zero temperature is determined by interband transitions as given by: \[\sigma_{\mu\nu}(\omega)=\frac{e^{2}}{\hbar}\int[\mathrm{d}\mathbf{k}]\sum_{a,b} \frac{-i\varepsilon_{ab}A^{\mu}_{ab}A^{\nu}_{ba}}{\hbar\omega+\varepsilon_{ab} +i\delta}f_{ab}, \tag{1}\] where \(a,b\) are indices for the bands, \(\varepsilon_{ab}=\varepsilon_{a}(\mathbf{k})-\varepsilon_{b}(\mathbf{k})\), \(\varepsilon_{a}(\mathbf{k})\) is the band dispersion. \(f_{ab}=f_{a}-f_{b}\) with \(f_{a}=\Theta(\mu-\varepsilon_{a}(\mathbf{k}))\) the Fermi distribution function at zero temperature with the chemical potential \(\mu\). \(e(<0)\) and \(\hbar\) are the charge of electrons and the Planck constant. The integral is over the Brillouin zone, and \([\mathrm{d}\mathbf{k}]\) is shorthand for \(\mathrm{d}^{d}\mathbf{k}/(2\pi)^{d}\) with the spatial dimension \(d\). \(\delta\) is an infinitesimal positive quantity appearing in the Kubo formula. 
\(\mathbf{A}_{ab}\) is the interband Berry connection defined as \[\mathbf{A}_{ab}=\left\langle u_{\mathbf{k}a}|i\nabla_{\mathbf{k}}|u_{\mathbf{k}b}\right\rangle, \tag{2}\] where \(|u_{\mathbf{k}a}\rangle\) is the cell-periodic part of Bloch wavefunction of the band \(a\) at wave vector \(\mathbf{k}\). In deriving Eq. (1), we neglect the wavevector of light and coupling to the magnetic field. The optical conductivity (1) can be separated into the symmetric part (\(\sigma^{L}\)) and the antisymmetric part (\(\sigma^{H}\)) with respect to spatial indices: \[\sigma^{L,H}_{\mu\nu}(\omega)=\frac{\sigma_{\mu\nu}(\omega)\pm\sigma_{\nu\mu} (\omega)}{2}. \tag{3}\] From now on, we will refer to the symmetric part \(\sigma^{L}\) as longitudinal optical conductivity and the antisymmetric part \(\sigma^{H}\) as optical Hall conductivity. The real part of the longitudinal optical conductivity \(\mathrm{Re}\,\sigma^{L}\) determines the absorption of linearly polarized light, while the imaginary part of the optical Hall conductivity \(\mathrm{Im}\,\sigma^{H}\) represents the differential absorption of left- and right-handed circularly polarized light, known as magnetic circular dichroism. These two components together constitute the absorptive (Hermitian) part of optical conductivity related to energy dissipation: \(\sigma^{\mathrm{abs}}\equiv\mathrm{Re}\,\sigma^{L}+i\,\mathrm{Im}\,\sigma^{H }=(\sigma+\sigma^{\dagger})/2\). \(\sigma^{\mathrm{abs}}\) in insulators is given by: \[\sigma^{\mathrm{abs}}_{\mu\nu}(\omega)=\pi\omega e^{2}\int[\mathrm{d}\mathbf{k}] \sum_{a,b}\delta(\varepsilon_{ba}-\hbar\omega)A^{\mu}_{ab}A^{\nu}_{ba}f_{ab}. \tag{4}\] To establish a direct connection between optical absorption and quantum geometry, we introduce a generalized optical weight, \[W^{n}_{\mu\nu}(\Omega)\ \equiv\ \int_{0}^{\Omega}\mathrm{d}\omega\frac{ \sigma^{\mathrm{abs}}_{\mu\nu}(\omega)}{\omega^{n}}, \tag{5}\] where \(n\geq 0\) is an integer. 
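As a concrete numerical illustration of the interband Berry connection of Eq. (2) (our own toy example, not part of the paper), one can use the standard identities \(A^{\mu}_{ab}=i\,\langle u_{a}|\partial_{\mu}H|u_{b}\rangle/(\varepsilon_{b}-\varepsilon_{a})\) for \(a\neq b\) and, for a two-band model with valence-band projector \(P\), \(|A^{x}_{vc}|^{2}=\tfrac{1}{2}\operatorname{Tr}(\partial_{x}P\,\partial_{x}P)\); the latter is gauge invariant and can therefore be checked by finite differences without fixing eigenvector phases:

```python
import numpy as np

# Two-band toy model H(k) = d(k)·σ; the model and k-point are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, m=0.5):
    return np.sin(kx) * sx + np.sin(ky) * sy + (m - np.cos(kx) - np.cos(ky)) * sz

kx, ky, h = 0.7, 0.3, 1e-5
e, U = np.linalg.eigh(H(kx, ky))
uv, uc = U[:, 0], U[:, 1]                 # valence / conduction eigenvectors

dHx = np.cos(kx) * sx + np.sin(kx) * sz   # analytic dH/dkx
A_vc = 1j * (uv.conj() @ dHx @ uc) / (e[1] - e[0])   # A^x_vc from Eq. (2)

def P(kx, ky):                            # valence-band projector (gauge invariant)
    _, V = np.linalg.eigh(H(kx, ky))
    return np.outer(V[:, 0], V[:, 0].conj())

dP = (P(kx + h, ky) - P(kx - h, ky)) / (2 * h)
metric_fd = 0.5 * np.trace(dP @ dP).real  # equals |A^x_vc|^2 for two bands
```

Working with the projector sidesteps the arbitrary phase of numerical eigenvectors, which would otherwise spoil a direct finite-difference evaluation of \(\langle u_{v}|i\nabla_{\mathbf{k}}|u_{c}\rangle\).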
\(W^{0}(\Omega)\) is the standard optical spectral weight below a cutoff frequency \(\Omega\), with \(\Omega=\infty\) corresponding to the full spectral weight. \(W^{n\geq 1}\) can be regarded as the \(n\)-th negative moment of \(\sigma^{\rm abs}\) in frequency domain. For insulators, since optical absorption only occurs at frequencies above the gap, the integral in Eq. (5) is convergent and the generalized spectral weight \(W^{n}\) is finite for all \(n\). For any given \(n\), \(W^{n}(\Omega)\) as a function of the cutoff frequency \(\Omega\) contains the full information of optical conductivity, as \(\sigma^{\rm abs}(\omega)=\omega^{n}\ {\rm d}W^{n}/{\rm d}\omega\) and the reactive (anti-Hermitian) part of the optical conductivity \(\sigma^{\rm rea}(\omega)={\rm Re}\,\sigma^{H}+i\,{\rm Im}\,\sigma^{L}\) can be further obtained from \(\sigma^{\rm abs}\) through the Kramers-Kronig relation. In this work, we focus on the first negative moment of the absorptive part of optical conductivity \(W^{1}(\Omega)\). From Eq.(4) we have \[W^{1}_{\mu\nu}(\Omega) \equiv \int_{0}^{\Omega}{\rm d}\omega\frac{\sigma^{\rm abs}_{\mu\nu}( \omega)}{\omega} \tag{6}\] \[= \frac{\pi e^{2}}{\hbar}\sum_{c,v}\int_{\varepsilon_{cv}\leq \Omega}[{\rm d}\mathbf{k}]A^{\mu}_{vc}A^{\nu}_{cv}.\] where \(c,v\) run over the conduction bands and the valence bands respectively, with \(f_{v}=1\) and \(f_{c}=0\). For a given pair of conduction and valence bands, the \(\mathbf{k}\) integral extends over the region where the energy gap \(\varepsilon_{cv}\) is below \(\hbar\Omega\). Since \({\rm Re}\,\sigma^{\rm abs}(\omega)/\omega=\epsilon^{\prime\prime}(\omega)\) is the imaginary part of the complex dielectric constant known as the loss factor, the real part of \(W^{1}(\Omega)\) represents the integrated dielectric loss below frequency \(\Omega\). In the limit \(\Omega\to\infty\), \(W^{1}(\infty)\) receives contributions from all interband transitions. Then, the \(\mathbf{k}\) integral in Eq. 
(6) extends over the whole Brillouin zone, and \(c,v\) run through all conduction and valence bands respectively. In this case, the right-hand side of Eq. (6) only depends on the interband Berry connection, which suggests a quantum geometric origin of the generalized spectral weight integrated over all frequencies \(W^{1}(\infty)\). More broadly, optical absorption below a given cutoff frequency \(\Omega\) involves a finite number of bands around the Fermi level. For the moment, suppose that the energy gaps between the \(m\) highest valence bands and the \(n\) lowest conduction bands, \(\varepsilon_{cv}(\mathbf{k})\), are smaller than \(\hbar\Omega\) at all \(\mathbf{k}\), while the energy gap to any higher conduction band exceeds \(\hbar\Omega\). Then, optical absorption below frequency \(\Omega\) comes entirely from interband transitions among these \(m+n\) bands over the whole Brillouin zone, and we have \[\int_{0}^{\Omega}{\rm d}\omega\frac{\sigma^{\rm abs}_{\mu\nu}(\omega)}{\omega}=\frac{\pi e^{2}}{\hbar}\sum_{c=1}^{n}\sum_{v=1}^{m}\int[{\rm d}\mathbf{k}]A^{\mu}_{vc}A^{\nu}_{cv}. \tag{7}\] This expression only involves the interband Berry connection as well, calling for a quantum geometric understanding. ## II Quantum geometry and generalized \(f\)-sum rule To develop a quantum geometric theory of interband optical conductivity, it is necessary to consider a multi-band manifold over \(\mathbf{k}\) space. A set of bands \(\{\ket{u_{\mathbf{k}1}},...,\ket{u_{\mathbf{k}r}}\}\) in \(\mathbf{k}\)-space defines a family of \(r\)-dimensional Hilbert subspaces parameterized by the wavevector \(\mathbf{k}\), i.e., a vector bundle of rank \(r\) over the Brillouin zone. 
The geometry of this multi-band manifold can be characterized by a quantum geometric tensor, an \(r\times r\) matrix \(\underline{\mathcal{Q}}^{\mu\nu}\) with matrix elements defined by (see for example [5]) \[\underline{\mathcal{Q}}^{\mu\nu}_{ij}=\,\langle\partial_{\mu}u_{i}|(1-P)|\partial_{\nu}u_{j}\rangle\quad\text{with }i,j=1,...,r, \tag{8}\] where \(P=\sum_{i=1}^{r}\ket{u_{i}}\bra{u_{i}}\) is the projection operator associated with the \(r\)-dimensional subspace spanned by the bands of interest. In the single-band case (\(r=1\)), \(\underline{\mathcal{Q}}^{\mu\nu}\) reduces to a scalar--the Abelian quantum geometric tensor, whose symmetric part and anti-symmetric part with respect to the spatial indices \(\mu,\nu\) are known as the quantum metric and the Berry curvature respectively. In the case of a multi-band manifold (\(r>1\)), \(\underline{\mathcal{Q}}^{\mu\nu}\) is an \(r\times r\) matrix--the non-Abelian quantum geometric tensor. We can readily show \((\underline{\mathcal{Q}}^{\mu\nu})^{\dagger}=\underline{\mathcal{Q}}^{\nu\mu}\). Its symmetric part and anti-symmetric part define the non-Abelian quantum metric and the non-Abelian Berry curvature respectively: \(\underline{\mathcal{Q}}^{\mu\nu}=\underline{G}^{\mu\nu}-\frac{i}{2}\underline{F}^{\mu\nu}\), both of which are \(r\times r\) Hermitian matrices. Armed with the notion of the quantum geometric tensor for a multi-band manifold, we now establish a direct connection between optical absorption and quantum geometry. First, note that \[\sum_{c,v}A^{\mu}_{vc}A^{\nu}_{cv} = \sum_{v}\,\left\langle\partial_{\mu}u_{v}|P_{c}|\partial_{\nu}u_{v}\right\rangle\] \[= \sum_{v}\,\left\langle\partial_{\mu}u_{v}|(1-P_{v})-(1-P_{v}-P_{c})|\partial_{\nu}u_{v}\right\rangle\] where \(P_{v,c}\) is the projection operator onto the valence/conduction band manifold. Using this identity, we can rewrite Eq. 
(6): \[\int_{0}^{\Omega}{\rm d}\omega\frac{\sigma^{\rm abs}_{\mu\nu}(\omega)}{\omega}=\frac{\pi e^{2}}{\hbar}\int[{\rm d}\mathbf{k}]\left({\rm Tr}\,\underline{\mathcal{Q}}^{\mu\nu}_{v}-{\rm Tr}_{v}\,\underline{\mathcal{Q}}^{\mu\nu}_{0}\right), \tag{9}\] where \(\underline{\mathcal{Q}}_{v},\underline{\mathcal{Q}}_{0}\) are non-Abelian quantum geometric tensors associated with the \(m\)-dimensional valence band manifold and the \((m+n)\)-dimensional manifold of combined valence and conduction bands, respectively. \({\rm Tr}_{v}(\dots)\) is the partial trace over the valence bands, namely, \({\rm Tr}_{v}\,O\equiv\sum_{v}O_{vv}\). Eq. (9) shows that the optical absorption corresponds to the change of the quantum geometry of the subspace spanned by the \(m\) valence bands when the \(n\) conduction bands are added. If there is no optical transition allowed between the valence and conduction bands due to, e.g., a symmetry-based selection rule, the quantum geometry of the valence bands is unchanged when the Hilbert subspace is enlarged to include conduction bands: \(A^{\mu}_{vc}=0\) leads to \(\operatorname{Tr}\underline{\mathcal{Q}}^{\mu\nu}_{v}=\operatorname{Tr}_{v}\underline{\mathcal{Q}}^{\mu\nu}_{0}\). We now further show that the real part of Eq. 
(9)--the integral of \(\operatorname{Re}\sigma^{L}_{\mu\nu}(\omega)/\omega\)--can be expressed in an elegant form using the quantum metric tensors alone: \[\int_{0}^{\Omega}\mathrm{d}\omega\,\frac{\operatorname{Re}\sigma^{L}_{\mu\nu}(\omega)}{\omega}\] \[=\frac{\pi e^{2}}{2\hbar}\int[\mathrm{d}\mathbf{k}]\!\left(\sum_{v=1}^{m}\,\left\langle\partial_{\mu}u_{v}|P_{c}|\partial_{\nu}u_{v}\right\rangle+\sum_{c=1}^{n}\,\left\langle\partial_{\mu}u_{c}|P_{v}|\partial_{\nu}u_{c}\right\rangle\right)\] \[=\frac{\pi e^{2}}{2\hbar}\int[\mathrm{d}\mathbf{k}]\left(\operatorname{Tr}\underline{G}^{\mu\nu}_{v}+\operatorname{Tr}\underline{G}^{\mu\nu}_{c}-\operatorname{Tr}\underline{G}^{\mu\nu}_{0}\right). \tag{10}\] Here \(\underline{G}_{v},\underline{G}_{c},\underline{G}_{0}\) are the quantum metric tensors for the manifold of valence bands, conduction bands, and these bands combined, respectively. Eq. (10) is the first main result of this work. It directly relates the optical absorption to the trace of the non-Abelian quantum metric tensors. For a multi-band manifold, \(\operatorname{Tr}\underline{G}^{\mu\nu}\) is a positive semi-definite \(d\times d\) matrix that is independent of the choice of basis states. It measures how the multi-band subspace changes with \(\mathbf{k}\), which can be seen from the squared norm of the change of the projection operator \(P\) to second order in \(\mathbf{k}\): \[\operatorname{Tr}\!\left[(\delta P)^{2}\right]=\sum_{\mu,\nu}2\operatorname{Tr}\underline{G}^{\mu\nu}\delta k_{\mu}\delta k_{\nu}. \tag{11}\] Note that the quantum metric of a manifold is _not_ the sum of the ones for its subspaces. In fact, Eq. (10) shows that the generalized optical weight precisely measures the difference between the two. As an example, let us consider a two-dimensional free electron gas under a magnetic field, which exhibits Landau levels equally spaced by the cyclotron energy \(\hbar\omega_{c}\) with \(\omega_{c}=|eB|/m\). 
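Before turning to the Landau-level example, the trace identity underlying Eq. (10) can be checked numerically. The sketch below uses a toy 6-band Hamiltonian of our own choosing, linear in \(k_{x}\); it computes \(\operatorname{Tr}G\) for the valence, conduction, and combined manifolds from finite differences of the gauge-invariant projectors via Eq. (11), and compares with the interband matrix elements written in projector form:

```python
import numpy as np

rng = np.random.default_rng(0)

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H0, Hx = herm(6), herm(6)      # toy 6-band Bloch Hamiltonian H(kx) = H0 + kx*Hx

def projectors(kx):
    """Projectors onto 2 'valence' and 2 'conduction' bands (2 bands left out)."""
    _, U = np.linalg.eigh(H0 + kx * Hx)
    proj = lambda cols: cols @ cols.conj().T
    return proj(U[:, :2]), proj(U[:, 2:4])

kx, eps = 0.3, 1e-5
(Pvp, Pcp), (Pvm, Pcm) = projectors(kx + eps), projectors(kx - eps)
Pv, Pc = projectors(kx)
dPv, dPc = (Pvp - Pvm) / (2 * eps), (Pcp - Pcm) / (2 * eps)

trG = lambda dP: 0.5 * np.trace(dP @ dP).real   # Tr G^{xx} via Eq. (11)
lhs = trG(dPv) + trG(dPc) - trG(dPv + dPc)

# sum_v <d_x u_v|P_c|d_x u_v> + sum_c <d_x u_c|P_v|d_x u_c> in projector form
fro2 = lambda A: np.trace(A.conj().T @ A).real
rhs = fro2(Pc @ dPv @ Pv) + fro2(Pv @ dPc @ Pc)

assert abs(lhs - rhs) < 1e-6
```

Working with projectors rather than eigenvectors sidesteps the arbitrary phases of numerically diagonalized Bloch states.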
Optical transition only occurs between two adjacent Landau levels \(n-1\leftrightarrow n\) due to the angular momentum selection rule. Based on Eq. (10), we can use the quantum metric tensors to calculate the generalized optical weight \(W^{1}\) associated with this inter-Landau-level transition. It is straightforward to show that the Abelian quantum metric tensor for the \(n\)-th Landau level is constant in \(\mathbf{k}\) space: \(g^{\mu\nu}_{n}=(\hbar/|eB|)(n+1/2)\delta^{\mu\nu}\), while the trace of the non-Abelian quantum metric tensor for the two-Landau-level manifold is given by \(\operatorname{Tr}\underline{G}^{\mu\nu}_{0}=(\hbar/|eB|)n\delta^{\mu\nu}\). Then, for the integer quantum Hall state at filling factor \(\nu\), we have \[\int_{0}^{\Omega}\mathrm{d}\omega\,\frac{\operatorname{Re}\sigma_{ii}(\omega)}{\omega}=\frac{e^{2}}{\hbar}\frac{\nu}{4}\text{ for }\Omega>\omega_{c}. \tag{12}\] This result agrees with a direct calculation of optical conductivity. Returning to the discussion on general systems, in the limit of the cutoff frequency \(\Omega\to\infty\), interband optical transition is allowed between any pair of occupied and unoccupied bands. In this case, our general formulas Eq. (9) and Eq. (10) can be further simplified. Since the complete set of bands spans the entire Hilbert space, the corresponding quantum geometric tensor \(\underline{\mathcal{Q}}^{\mu\nu}_{0}=0\), so that Eq. (9) reduces to \[\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\sigma^{\mathrm{abs}}_{\mu\nu}(\omega)}{\omega}=\frac{\pi e^{2}}{\hbar}\int[\mathrm{d}\mathbf{k}]\operatorname{Tr}\underline{\mathcal{Q}}^{\mu\nu}. \tag{13}\] This elegant formula directly relates the first negative moment of optical conductivity \(W^{1}(\infty)\) to the quantum geometry of the ground state wavefunction. We regard Eq. (13) as a generalized optical sum rule complementary to the standard \(f\)-sum rule, which relates the optical spectral weight \(W^{0}(\infty)\) to the charge stiffness (Drude weight). 
We now discuss the real and imaginary parts of Eq. (13) separately. The imaginary part relates magnetic circular dichroism \(\sigma^{H}\) to the Chern invariant of the ground state: \[\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\operatorname{Im}\sigma^{H}_{\mu\nu}(\omega)}{\omega}=-\frac{e^{2}}{4\hbar}C^{\mu\nu}, \tag{14}\] where the Chern invariant \(C^{\mu\nu}\equiv 2\pi\int[\mathrm{d}\mathbf{k}]\sum_{v}\Omega^{\mu\nu}_{v}\) is a topological invariant defined by the integral of the total Berry curvature of occupied bands over the Brillouin zone. In two dimensions, \(C^{\mu\nu}\equiv\epsilon^{\mu\nu}C\) where \(C\) is an integer. In three dimensions, \(C^{\mu\nu}\equiv\epsilon^{\mu\nu\lambda}C_{\lambda}\) where \(C_{\lambda}\) is a reciprocal lattice vector. It is well known since the work of Thouless-Kohmoto-Nightingale-den Nijs (TKNN) that the Chern invariant of an insulating state manifests in quantized dc Hall conductance: \(\sigma^{H}=Ce^{2}/h\) in two dimensions and \(\sigma^{H}_{\mu\nu}=\epsilon_{\mu\nu\lambda}C_{\lambda}e^{2}/h\) in three dimensions. The quantized Hall effect is a dissipationless transport phenomenon accompanied by zero longitudinal resistivity. Alternatively, Eq. (14) shows that the Chern invariant can be measured directly by magnetic circular dichroism, the difference in optical absorption of left- and right-handed circularly polarized light. It is remarkable that the Chern invariant can be measured through both dc Hall transport and optical magnetic circular dichroism. This is not a coincidence. Eq. (14) and the TKNN formula are directly related through the Kramers-Kronig relation between the absorptive and reactive parts of optical conductivity: \(\frac{i}{\pi}P\int_{-\infty}^{\infty}\mathrm{d}\omega\,\sigma^{\mathrm{abs}}_{\mu\nu}(\omega)/(\omega-\omega^{\prime})=\sigma^{\mathrm{rea}}_{\mu\nu}(\omega^{\prime})\), where \(\sigma^{\mathrm{rea}}(\omega)=\operatorname{Re}\sigma^{H}+i\operatorname{Im}\sigma^{L}\). 
By setting \(\omega^{\prime}=0\) and using the general property \(\sigma^{\mathrm{abs}}(-\omega)=(\sigma^{\mathrm{abs}}(\omega))^{*}\), the Kramers-Kronig relation connects magnetic circular dichroism to dc Hall conductivity: \(\int_{0}^{\infty}\mathrm{d}\omega\operatorname{Im}\sigma^{H}_{\mu\nu}(\omega)/\omega=-\frac{\pi}{2}\sigma^{H}_{\mu\nu}(0)\). From now on, we focus on the real part of Eq. (13): \[\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\operatorname{Re}\sigma^{L}_{\mu\nu}(\omega)}{\omega}=\frac{\pi e^{2}}{\hbar}\int[\mathrm{d}\mathbf{k}]g^{\mu\nu}, \tag{15}\] where \(g^{\mu\nu}\equiv\operatorname{Tr}\underline{G}^{\mu\nu}\) is the trace of the non-Abelian quantum metric tensor of the occupied band manifold. Equivalently, \(g^{\mu\nu}\) is the Abelian quantum metric tensor of the Slater determinant state made of all occupied bands: \(|u_{\mathbf{k}1}u_{\mathbf{k}2}...u_{\mathbf{k}r}|\). We note that Eq. (15) was first derived by a different method in the early seminal work of Souza, Wilkens and Martin on electronic polarization and localization in insulators [4]. There, the quantum metric is defined for the many-body ground state with a twisted boundary condition. In the case of noninteracting band insulators, their result reduces to Eq. (15). We also note that the integral of the trace of the quantum metric is directly related to the spread of the Wannier function in real space, as shown in the pioneering work by Marzari and Vanderbilt [6]. Although this relation has been known for a quarter of a century, its implications for optical conductivity and quantum geometry have not been adequately explored. Some studies in this direction can be found in [7; 8; 9] and references therein. We hold the view that Eq. (15) is a fundamental relation between the optical absorption and the ground state property, which is universally applicable to _all_ insulators. The right-hand side of Eq. 
(15)--the integral of the quantum metric of occupied bands \(g^{\mu\nu}\) over the Brillouin zone--is a quantum property of insulating ground states. Because of its relation to the generalized optical weight, we call it the "quantum weight": \[K^{\mu\nu}\equiv 2\pi\int[\mathrm{d}\mathbf{k}]g^{\mu\nu}. \tag{16}\] As we shall demonstrate below, the quantum weight is a central quantity that links together band topology, optical absorption and the insulating gap. ## III Infrared absorption near topological band inversion The quantum weight provides a quantitative measure of the degree of "quantumness" in the insulating state. To illustrate this point, we consider two distinct types of insulators: atomic insulators and topological insulators, which have small and large quantum weight respectively. In an atomic insulator, electrons occupy highly-localized atomic orbitals \(\phi_{n}(\mathbf{r}-\mathbf{R})\) located at lattice sites \(\mathbf{R}\). The characteristic size of these orbitals \(\xi\) is small compared to the lattice constant \(a\), hence there is no hopping between sites. In this case, the Bloch wavefunction \(\psi_{n\mathbf{k}}(\mathbf{r})\) is given by: \(\psi_{n\mathbf{k}}(\mathbf{r})=(1/\sqrt{N})\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\phi_{n}(\mathbf{r}-\mathbf{R})\). Assuming that the spatial overlap between atomic orbitals on different sites is negligible, it is straightforward to show that the quantum geometric tensor of the occupied band manifold is related to the matrix elements of the position operator between atomic orbitals on the same site: \(\underline{\mathcal{Q}}^{\mu\nu}_{ij}=\underline{G}^{\mu\nu}_{ij}=\sum_{n}\left\langle\phi_{i}|r^{\mu}|\phi_{n}\right\rangle\left\langle\phi_{n}|r^{\nu}|\phi_{j}\right\rangle\), where \(i,j\) belong to occupied orbitals, and \(n\) runs over unoccupied orbitals. Therefore the quantum weight \(K\sim(\xi/a)^{2}a^{2-d}\) (\(d\) is the spatial dimension) is also small, resulting in weak optical absorption. 
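The on-site matrix-element formula above can be made concrete with a minimal stand-in for an atomic orbital: a particle in a harmonic well, where the occupied orbital is the ground state and \(Q^{xx}\) reduces to the position variance \(\sim\xi^{2}\). The units \(\hbar=m=\omega=1\) are our choice:

```python
import numpy as np

# Position operator in the harmonic-oscillator basis (hbar = m = omega = 1):
# x = (a + a^dagger)/sqrt(2), so <n|x|n+1> = sqrt((n+1)/2).
N = 60
x = np.zeros((N, N))
for n_ in range(N - 1):
    x[n_, n_ + 1] = x[n_ + 1, n_] = np.sqrt((n_ + 1) / 2)

phi0 = np.zeros(N); phi0[0] = 1.0   # the single occupied "atomic orbital"

# Q^{xx} = sum over unoccupied n of <phi_0|x|phi_n><phi_n|x|phi_0>
#        = <phi_0| x (1 - |phi_0><phi_0|) x |phi_0>, the position variance
Q_xx = phi0 @ x @ (np.eye(N) - np.outer(phi0, phi0)) @ x @ phi0

assert abs(Q_xx - 0.5) < 1e-12      # <x^2> - <x>^2 = 1/2 = hbar/(2 m omega)
```

The sum over unoccupied orbitals is exhausted by a single term here (\(x\) only connects adjacent oscillator levels), which is the extreme version of a tightly confined orbital with small \(\xi\) and hence small quantum weight.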
The opposite case of large quantum weight can be found in narrow gap insulators near topological band inversion. When inversion symmetry is present, the effective Hamiltonian \(H(\mathbf{k})\) for low-energy states generally takes the form of a massive Dirac fermion [10]: \[H(\mathbf{k})=\Delta\Gamma_{0}+v\sum_{\mu=1}^{d}k_{\mu}\Gamma_{\mu} \tag{17}\] where \(\Gamma_{0},...,\Gamma_{d}\) are \(4\times 4\) Dirac Gamma matrices satisfying \(\{\Gamma_{i},\Gamma_{j}\}=2\delta_{ij}\). Tuning \(\Delta\) from positive to negative induces a band inversion at \(\mathbf{k}=0\) and results in a phase transition between topologically distinct insulators. At the critical point \(\Delta=0\), the low-energy spectrum is described by massless Dirac fermions. We now calculate the quantum metric \(g\) and quantum weight \(K\) for this system, and evaluate the generalized optical weight \(W^{1}\). When \(H(\mathbf{k})\) takes the form \[H(\mathbf{k})=E_{\mathbf{k}}\sum_{\lambda=0}^{d}n_{\mathbf{k}}^{\lambda}\Gamma_{\lambda} \tag{18}\] where \(E_{\mathbf{k}}>0\) and \(\mathbf{n}\) is a unit vector, the projection operator for the occupied bands can be written as \(P_{\mathbf{k}}=\frac{1}{2}(1-\sum_{\lambda}n_{\mathbf{k}}^{\lambda}\Gamma_{\lambda})\). Using Eq. (11), we find the quantum metric tensor is \[g^{\mu\nu}=\frac{1}{2}(\partial_{\mu}\mathbf{n})\cdot(\partial_{\nu}\mathbf{n}). \tag{19}\] Provided that the system has an insulating gap, \(\mathbf{n}\) is well defined in \(\mathbf{k}\) space, and the quantum metric \(g^{\mu\nu}\) is finite. Near the band inversion transition, however, \(\mathbf{n}\) changes rapidly around \(\mathbf{k}=0\) where the gap is small, leading to a large \(g^{\mu\nu}\) that may give the dominant contribution to the quantum weight. Indeed, for the Dirac Hamiltonian Eq. (17), we find \(\mathbf{n_{k}}=(\Delta,v\mathbf{k})/\sqrt{\Delta^{2}+v^{2}k^{2}}\). 
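Given \(\mathbf{n_{k}}\) above, the metric follows from Eq. (19) and the quantum weight can be integrated numerically. The sketch below treats the 2D case with \(v=1\) and an ultraviolet cutoff imposed by hand (both our own illustrative choices); it checks the trace of the metric against the closed form of Eq. (20) and exhibits the logarithmic growth of \(K\) as the gap closes:

```python
import numpy as np

def trace_g(k, delta, v=1.0):
    """Trace of the quantum metric from Eq. (19) for n_k = (Delta, v*k)/|(Delta, v*k)|,
    evaluated at (kx, ky) = (k, 0); rotational symmetry makes this sufficient."""
    z = np.zeros_like(k)
    n = np.array([delta + z, v * k, z])
    norm = np.sqrt((n**2).sum(0))
    nh = n / norm
    dxn = np.array([z, v + z, z])
    dyn = np.array([z, z, v + z])
    dxh = (dxn - nh * (nh * dxn).sum(0)) / norm   # tangent-space projection
    dyh = (dyn - nh * (nh * dyn).sum(0)) / norm
    return 0.5 * ((dxh**2).sum(0) + (dyh**2).sum(0))

def quantum_weight(delta, cutoff=100.0, npts=400000):
    """K = 2*pi * int d^2k/(2*pi)^2 tr g = int_0^Lambda k tr(g) dk, on a log grid."""
    lk = np.linspace(np.log(1e-8), np.log(cutoff), npts)
    k = np.exp(lk)
    y = k**2 * trace_g(k, delta)            # dk = k d(log k)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lk)))

# Cross-check Eq. (19) against the closed form of Eq. (20) with d = 2, v = 1:
k, delta = np.array([0.3, 1.0, 3.0]), 0.1
assert np.allclose(trace_g(k, delta),
                   0.5 * (k**2 + 2 * delta**2) / (k**2 + delta**2) ** 2)

# Logarithmic divergence: K grows by ln(10)/2 per decade of gap reduction (d = 2).
dK = quantum_weight(0.01) - quantum_weight(0.1)
assert abs(dK - np.log(10) / 2) < 1e-3
```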
The trace of the quantum metric tensor is \[g_{\mathbf{k}}\equiv\sum_{\mu}g^{\mu\mu}_{\mathbf{k}}=\frac{1}{2}\frac{(d-1)v^{2}k^{2}+d\Delta^{2}}{(v^{2}k^{2}+\Delta^{2})^{2}}. \tag{20}\] As \(\Delta\to 0\), \(g\) diverges as \(1/k^{2}\) near \(\mathbf{k}=0\). Therefore, near the topological phase transition, the quantum weight has a non-analytic dependence on the insulating gap \(\Delta\) due to the contribution from low-energy states: \(K^{\mu\nu}=(K/d)\delta_{\mu\nu}\), where the asymptotic form of \(K\) at small \(|\Delta|\) is given by \[K\sim\begin{cases}|\Delta|&(d=3)\\ \log(|\Delta|)&(d=2)\\ 1/|\Delta|&(d=1)\end{cases} \tag{21}\] Importantly, the quantum weight exhibits a logarithmic divergence in two dimensions and a power-law divergence in one dimension. By the general relation Eq. (15) between quantum weight and generalized optical weight, a divergent quantum weight necessarily implies strong optical absorption at low photon energy. Indeed, a well-known example is two-dimensional massless Dirac fermion systems such as graphene. Here, the real part of optical conductivity takes the universal value \(\operatorname{Re}\sigma=\pi e^{2}/(2h)\) over a broad range of frequencies. Therefore, the first negative moment of optical conductivity, \(\int d\omega\operatorname{Re}\sigma/\omega\), has a logarithmic divergence at low energy, which leads to the \(\log|\Delta|\) dependence in the presence of a Dirac mass gap, matching the quantum weight shown by Eq. (21). Unlike the standard spectral weight \(\int d\omega\operatorname{Re}\sigma\), which gives equal weight to low and high frequency, the generalized optical weight \(\int d\omega\operatorname{Re}\sigma/\omega\) gives large weight to optical conductivity at low frequency. By the \(f\)-sum rule, the full spectral weight only depends on the electron density and is therefore insensitive to any details of the system. In contrast, the generalized optical weight is directly connected to the ground state wavefunction as shown by Eq. 
(15), and therefore provides a powerful tool for studying topological phase transitions involving a dramatic change in the ground state, such as the topological band inversion discussed above. ## IV Quantum weight, topological invariant, and 100% magnetic circular dichroism. We have so far established a direct relation between optical absorption and quantum geometry. In particular, the generalized optical weight integrated over all frequencies is given by the quantum weight of occupied bands. In this section, we consider the quantum weight of topological bands. We mainly focus on Chern insulators in two dimensions, noting that the generalization to three dimensions and to quantum spin Hall insulators with conserved spin \(U(1)\) symmetry is straightforward. The quantum weight in two-dimensional systems is a dimensionless quantity. Interestingly, the quantum weight is lower bounded by the topological Chern number. This lower bound naturally follows from an inequality between the quantum metric and the Berry curvature: \(\sqrt{\det g}=\sqrt{g_{xx}g_{yy}-g_{xy}g_{yx}}\geq|\Omega^{xy}|/2\). This inequality was first derived for the single-band case by Roy (1984) and later generalized to multi-band cases (Roy _et al._, 2009). Noting that \(\operatorname{tr}g=(g_{xx}+g_{yy})\geq 2\sqrt{\det g}\) and integrating the inequality over the Brillouin zone, we find the bound on \(K\): \[K\equiv\sum_{i=x,y}K^{ii}\geq|C|, \tag{22}\] where \(C\equiv 2\pi\int[\mathrm{d}\mathbf{k}]\Omega^{xy}\) is the Chern number of the ground state. Note that the quantum metric \(g\) and the Berry curvature \(\Omega\) used here are for the Slater determinant state of the entire occupied band manifold, \(|u_{\mathbf{k}1}u_{\mathbf{k}2}...u_{\mathbf{k}r}|\), which applies to an arbitrary number of occupied bands. While the Chern number is additive, i.e., the Chern number \(C\) for the occupied band manifold is the sum of the Chern number for each band, the quantum weight \(K\) is not. 
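The pointwise inequality \(\sqrt{\det g}\geq|\Omega^{xy}|/2\) follows from positive semi-definiteness of the quantum geometric tensor and can be spot-checked numerically. A sketch for the single occupied band of a toy three-band Hamiltonian (our own construction), with \(g\) and \(\Omega\) extracted from projector finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)

def herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

H0, Hx, Hy = herm(3), herm(3), herm(3)

def lowest(kx, ky):
    """Projector and state for the lowest band of H(k) = H0 + kx*Hx + ky*Hy."""
    _, U = np.linalg.eigh(H0 + kx * Hx + ky * Hy)
    u = U[:, :1]
    return u @ u.conj().T, u

(kx, ky), eps = (0.2, -0.3), 1e-5
P, u = lowest(kx, ky)
dPx = (lowest(kx + eps, ky)[0] - lowest(kx - eps, ky)[0]) / (2 * eps)
dPy = (lowest(kx, ky + eps)[0] - lowest(kx, ky - eps)[0]) / (2 * eps)

Qm = np.eye(3) - P
Q = lambda dA, dB: (u.conj().T @ dA @ Qm @ dB @ u)[0, 0]   # Q^{mu nu}, Eq. (8)
gxx, gyy = Q(dPx, dPx).real, Q(dPy, dPy).real
gxy = Q(dPx, dPy).real             # quantum metric (symmetric part)
Om = -2 * Q(dPx, dPy).imag         # Berry curvature Omega^{xy}

# det g >= (Omega^{xy}/2)^2, i.e. sqrt(det g) >= |Omega^{xy}|/2
assert gxx * gyy - gxy**2 + 1e-8 >= (Om / 2) ** 2
```

The small tolerance absorbs the finite-difference error; for a two-band model the inequality would be saturated identically, which is why a third band is included here.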
As an example, consider a two-band Hamiltonian of the form shown in Eq. (18), with \(d=2\) and Gamma matrices replaced by \(2\times 2\) Pauli matrices \(\sigma_{x},\sigma_{y},\sigma_{z}\). Then, the unit vector \(\mathbf{n_{k}}\) in \(\mathbf{k}\) space defines a mapping from the two-dimensional Brillouin zone (which is a torus) to the Bloch sphere. The Chern number \(C\) and quantum weight \(K\) are given by: \(C=\frac{1}{2}\int[\mathrm{d}\mathbf{k}]\ \mathbf{n}\cdot(\partial_{x}\mathbf{n}\times\partial_{y}\mathbf{n})\), \(K=\frac{1}{4}\int[\mathrm{d}\mathbf{k}]\ \sum_{\mu=x,y}(\partial_{\mu}\mathbf{n})^{2}\), respectively. Then, from the inequality \((1/2)\sum_{\mu=x,y}(\partial_{\mu}\mathbf{n})^{2}\geq|\mathbf{n}\cdot(\partial_{x}\mathbf{n}\times\partial_{y}\mathbf{n})|\), the bound \(K\geq|C|\) follows immediately. This bound is saturated when \(\mathbf{n_{k}}\) takes special instanton configurations (Roy _et al._, 2009). We may call Chern insulators having the minimum quantum weight \(K=|C|\) "minimal Chern insulators". An example is the integer quantum Hall state in Landau level systems: for _any_ integer filling \(\nu\geq 1\), it can be shown that the quantum weight of the occupied Landau level manifold is \(K=\nu=|C|\). It should be noted that the concept of minimal Chern insulator applies to systems with an arbitrary number of occupied bands. In multiband cases, \(C\) and \(K\) are defined through the non-Abelian quantum geometric tensor. In the special case of a single occupied band where \(C\) and \(K\) are defined by the Abelian quantum geometric tensor, the condition for a minimal Chern insulator \(K=|C|\) implies two conditions: (1) the so-called trace condition for the Chern band, \(\operatorname{tr}g=|\Omega^{xy}|\), is satisfied at every \(\mathbf{k}\) point, and (2) either \(\Omega^{xy}\) or \(-\Omega^{xy}\) is positive semi-definite over the entire Brillouin zone (Kane and Mele, 1982; Mele and Sachdev, 1984; Mele and Sachdev, 1985). 
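For a concrete two-band check of \(K\geq|C|\), one can lattice-regularize the \(\mathbf{n_{k}}\) mapping. The sketch below uses \(\mathbf{n_{k}}=(\sin k_{x},\sin k_{y},m+\cos k_{x}+\cos k_{y})\) — our own illustrative choice, with \(|C|=1\) for \(0<|m|<2\) — and evaluates \(C\) and \(K\) in the standard degree-of-mapping normalization for a single occupied band, \(C=\frac{1}{4\pi}\int d^{2}k\,\hat{\mathbf{n}}\cdot(\partial_{x}\hat{\mathbf{n}}\times\partial_{y}\hat{\mathbf{n}})\) and \(K=\frac{1}{8\pi}\int d^{2}k\sum_{\mu}(\partial_{\mu}\hat{\mathbf{n}})^{2}\):

```python
import numpy as np

m, N = -1.0, 400
ks = 2 * np.pi * np.arange(N) / N
kx, ky = np.meshgrid(ks, ks, indexing="ij")

n = np.stack([np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)])
dxn = np.stack([np.cos(kx), np.zeros_like(kx), -np.sin(kx)])
dyn = np.stack([np.zeros_like(ky), np.cos(ky), -np.sin(ky)])

# Normalize and project derivatives onto the tangent space of the Bloch sphere.
norm = np.sqrt((n**2).sum(0))
nh = n / norm
dxh = (dxn - nh * (nh * dxn).sum(0)) / norm
dyh = (dyn - nh * (nh * dyn).sum(0)) / norm

dk2 = (2 * np.pi / N) ** 2
C = (nh * np.cross(dxh, dyh, axis=0)).sum() * dk2 / (4 * np.pi)
K = ((dxh**2).sum(0) + (dyh**2).sum(0)).sum() * dk2 / (8 * np.pi)

assert abs(abs(C) - 1) < 1e-3     # |C| = 1 for 0 < |m| < 2
assert K >= abs(C)                # quantum weight bound, Eq. (22)
```

Because a lattice \(\mathbf{n_{k}}\) is not an instanton configuration, \(K\) exceeds \(|C|\) by a finite margin here; the bound is saturated only for the special configurations noted above.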
Remarkably, we now show that the mathematical inequality relating quantum weight and topological invariant, Eq. (22), can be alternatively derived and understood from the viewpoint of optical absorption. Let us consider circularly polarized light at frequency \(\omega\): \(E_{x}(t)=E\cos(\omega t)\) and \(E_{y}=\pm E\sin(\omega t)\), or equivalently, \(\mathbf{E}_{\omega}=E(\hat{x}\pm i\hat{y})\), with \(\pm\) corresponding to left and right handedness. The induced current is given by \(\mathbf{j}_{\omega}=(\sigma_{xx}\pm i\sigma_{xy},\sigma_{yx}\pm i\sigma_{yy})E\). The absorbed power is thus \[\operatorname{Re}(\mathbf{j}_{\omega}^{*}\cdot\mathbf{E}_{\omega})=\left(\operatorname{Re}(\sigma_{xx}+\sigma_{yy})\pm 2\operatorname{Im}\sigma_{xy}^{H}\right)E^{2}. \tag{23}\] Since the absorbed power must be non-negative at every frequency, we must have \[\operatorname{Re}(\sigma_{xx}+\sigma_{yy})\geq 2\big{|}\operatorname{Im}\sigma_{xy}^{H}\big{|}. \tag{24}\] Dividing by \(\omega^{n}\) and integrating over frequency, we obtain a general inequality between the moments of \(\sigma^{\rm abs}\): \[W_{xx}^{n}(\Omega)+W_{yy}^{n}(\Omega)\geq 2\big{|}W_{xy}^{n}(\Omega)\big{|}. \tag{25}\] In particular, for \(n=1\) and \(\Omega=\infty\), rewriting both sides in terms of the quantum weight \(K\) and the Chern number \(C\) using Eqs. (13) and (14) recovers the inequality (22) between \(K\) and \(C\). It is remarkable that the mathematical relation (22) is closely linked to the fact that the optical absorption of circularly polarized light must always be non-negative. Furthermore, it is clear from our derivation above that when the quantum weight of the occupied band manifold saturates the Chern number bound \(K=|C|\), the equality in Eq. (24) must also be satisfied at all frequencies--that is to say, the system absorbs circularly polarized light of one handedness only, and not the other at all, namely, it exhibits \(100\,\%\) magnetic circular dichroism. 
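The algebra behind Eqs. (23) and (24) is easy to verify directly: for any conductivity tensor and unit field amplitude, the powers \(\operatorname{Re}(\mathbf{j}^{*}\cdot\mathbf{E})\) absorbed from the two circular polarizations differ by \(4\operatorname{Im}\sigma^{H}_{xy}\). Which sign attaches to which handedness depends on the time convention, so the sketch below simply checks both branches for a randomly chosen tensor:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # toy sigma at one frequency

def absorbed(E):
    """Time-averaged absorbed power density, up to a constant factor:
    Re(j* . E) with j = sigma E."""
    j = sigma @ E
    return float(np.real(np.conj(j) @ E))

sigma_H_xy = (sigma[0, 1] - sigma[1, 0]) / 2
base = float(np.real(sigma[0, 0] + sigma[1, 1]))
mcd = 2 * float(np.imag(sigma_H_xy))

# The two circular polarizations absorb base +/- mcd, reproducing Eq. (23).
p1, p2 = absorbed(np.array([1.0, -1.0j])), absorbed(np.array([1.0, 1.0j]))
assert abs(p1 - (base + mcd)) < 1e-9 and abs(p2 - (base - mcd)) < 1e-9
```

Requiring both `p1` and `p2` to be non-negative for a physical (passive) medium is exactly the statement of Eq. (24).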
In time-reversal-invariant systems with spin-orbit coupling and spin \(s_{z}\) conservation, the Chern number of all occupied bands must be zero, but occupied spin-\(\uparrow\) bands and spin-\(\downarrow\) bands can have equal and opposite Chern numbers: \(C_{\uparrow}=-C_{\downarrow}\equiv C_{s}\). In this case, the quantum weight is bounded by twice the spin Chern number: \(K=K_{\uparrow}+K_{\downarrow}\geq|C_{\uparrow}|+|C_{\downarrow}|=2|C_{s}|\). ## V Quantum weight and topological gap Next, we establish a general upper bound on the quantum weight of real materials: \[K\leq\frac{2\pi\hbar^{2}n}{mE_{g}}, \tag{26}\] with \(n\) the electron density, \(m\) the electron mass, and \(E_{g}\) the energy gap of the insulator (the precise definition of \(E_{g}\) will be discussed below). This inequality applies very broadly to systems whose Hamiltonian takes the form \[H=\frac{\mathbf{p}^{2}}{2m}+V(\mathbf{r})+\mathbf{p}\cdot\mathbf{A}(\mathbf{r})+\mathbf{A}(\mathbf{r})\cdot\mathbf{p}, \tag{27}\] where \(V\) and \(\mathbf{A}\) can be any function of the particle coordinate. Moreover, our result still holds when \(V(\mathbf{r})\) and \(\mathbf{A}(\mathbf{r})\) are spin (or pseudospin) dependent matrices. We now derive the inequality relating quantum weight and energy gap, Eq. (26), from the perspective of optical absorption. We first note that optical absorption in insulators only occurs at frequencies above a threshold \(\omega>E_{g}/\hbar\), where \(E_{g}\) is the minimum energy required to optically excite the system, called the optical gap. \(E_{g}\) can be no smaller than the gap in the energy spectrum. For noninteracting insulators, \(E_{g}\) is the minimum direct band gap at which the optical transition is allowed. 
Since the real part of the optical conductivity \(\operatorname{Re}\sigma_{ii}(\omega)\) onsets above \(E_{g}\) and is always positive, the first negative moment of optical conductivity has an upper bound \[\int_{0}^{\infty}d\omega\frac{\operatorname{Re}\sigma_{ii}(\omega)}{\omega}\leq\frac{\int_{0}^{\infty}d\omega\operatorname{Re}\sigma_{ii}(\omega)}{E_{g}/\hbar}. \tag{28}\] By the standard \(f\)-sum rule [3], when the Hamiltonian takes the form of Eq. (27), the full optical spectral weight is given by the charge stiffness: \[\int_{0}^{\infty}d\omega\operatorname{Re}\sigma_{ii}(\omega)=\frac{\pi}{2}\frac{ne^{2}}{m}, \tag{29}\] which is independent of any details of the system. Combining Eqs. (28), (29) and Eq. (15) immediately yields the upper bound on the quantum weight, Eq. (26). We further offer a heuristic argument for the inequality between quantum weight and energy gap of an insulator, Eq. (26). As shown in [4; 17; 14] (and references therein), the quantum weight \(K\) is directly related to the electronic localization length \(\xi\) in the insulating ground state: \(K\sim(\xi/a)^{2}\) where \(a\) is the lattice constant. On the other hand, the Heisenberg uncertainty principle dictates that for a given confinement energy \(E_{g}\), the localization length \(\xi\) cannot be smaller than \(\hbar/\sqrt{mE_{g}}\), as in a harmonic oscillator. This leads to the inequality between the quantum weight and the energy gap for a filled lowest band, which corresponds to the filling factor \(\nu=1\) or density \(n\sim 1/a^{2}\). As our upper bound for the quantum weight Eq. (26) assumes a Hamiltonian of the form Eq. (27), we now discuss its applicability to real materials. As a matter of fact, the _microscopic_ Hamiltonian for _all_ solids takes the form of Eq. (27), with \(m\) the bare electron mass and \(V\) the periodic potential of the ions. 
Moreover, the external magnetic field and the microscopic spin-orbit interaction \(\hbar/(4m^{2}c^{2})\mathbf{s}\cdot\nabla V\times\mathbf{p}\) can be captured by Eq. (27) with spin-independent and spin-dependent vector potentials respectively. Therefore, the inequality Eq. (26) constitutes a fundamental relation applicable to all real materials. Applied in this way, the mass and the carrier density in Eq. (26) should be the bare electron mass \(m_{0}\) and the total density of electrons including core electrons, in the same spirit that the optical spectral weight integrated over all frequencies counts all electrons. For practical purposes, we often use an effective Hamiltonian \(H_{\text{eff}}\), in the form of continuum or tight-binding model, to describe the low-energy degrees of freedom that are well separated from high-energy ones. For example, the effective theory of doped semiconductors is based on \(k\cdot p\) continuum Hamiltonian of doped electrons or holes with an effective mass. As another example, for tight-binding models, the effective Hamiltonian is described by a \(k\)-dependent matrix. In these cases, the \(f\)-sum rule (29) is modified as [3; 18]: \(\int\mathrm{d}\omega\,\sigma_{\mu\nu}^{L}(\omega)=\frac{\pi}{2}ne^{2}(m_{*}^{ -1})_{\mu\nu}\), where the effective mass is given by \[(m_{*}^{-1})_{\mu\nu}=\frac{1}{n}\int[\mathrm{d}\mathbf{k}]\sum_{v}\,\langle u_{v }|(\partial_{\mu}\partial_{\nu}H_{\text{eff},\mathbf{k}})|u_{v}\rangle\,, \tag{30}\] and \(n\) is the carrier density in the effective theory. Putting together the lower and upper bounds on the quantum weight, Eq. (22) and (26), we arrive at a remarkable relation \[|C|\leq K\leq\frac{2\pi\hbar^{2}n}{mE_{g}}. \tag{31}\] Eq. (31) is a key result of our work. In one stroke, it links together band topology, quantum geometry, and energy gap of insulating states. 
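Both ends of the chain Eq. (31) can be evaluated in closed form for Landau levels; the short arithmetic check below confirms that the upper bound collapses to the filling factor. The field strength and filling are arbitrary choices; the constants are standard SI values:

```python
import numpy as np

hbar = 1.054571817e-34      # J s
e = 1.602176634e-19         # C
h = 2 * np.pi * hbar
m = 9.1093837015e-31        # kg, electron mass
B, nu = 10.0, 3             # field in tesla and filling factor (arbitrary choices)

E_g = hbar * e * B / m      # cyclotron gap, hbar * omega_c
n = nu * B / (h / e)        # density: nu electrons per flux quantum h/e

upper = 2 * np.pi * hbar**2 * n / (m * E_g)
assert abs(upper - nu) < 1e-9   # the upper bound equals the Chern number C = nu
```

Algebraically, \(2\pi\hbar^{2}n/(mE_{g})=2\pi\hbar\nu/h=\nu\): every factor of \(B\) and \(m\) cancels, which is why the bound is saturated at any field strength.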
Interestingly, both lower and upper bounds on quantum weight are saturated simultaneously in the integer quantum Hall state of Landau level systems. Here, the energy gap is \(E_{g}=\hbar eB/m\) and the density is \(n=\nu B/\Phi_{0}\) where \(\nu\) is the filling factor and \(\Phi_{0}=h/e\) is the flux quantum. Then, the upper bound \(2\pi\hbar^{2}n/(mE_{g})=\nu\) is equal to the lower bound--the Chern number \(C=\nu\). Then, the quantum weight must be \(K=\nu\), matching our earlier result, Eq. (12). That both bounds on the quantum weight are saturated is a special feature of Landau level systems. The Landau level example shows that the lower and upper bounds on the quantum weight of insulators, Eq. (31), are both tight for the general case. ## VI Topological gap bound Eq. (31) also implies a remarkable relation of the topological gap and the Chern number: \[E_{g}\leq\frac{2\pi\hbar^{2}n}{m|C|}, \tag{32}\] which provides an upper bound to the energy gap in Chern insulators. Note that the right-hand side depends only on the carrier density, mass and the Chern number of the ground state, and is independent of any details of the system. While the band topology describes the properties of the wavefunction rather than the energy dispersion, the relation (32) shows that in real materials described by the Hamiltonian Eq. (27), the energy dispersion and the band topology cannot be completely independent. Our bound on the topological gap is saturated in Landau level systems as shown above, and therefore is tight for the general case. It is remarkable that our study of optical absorption from the perspective of quantum geometry has led to the discovery of the topological gap bound, a relation between band topology and energy spectrum which makes no reference to optical properties. Nonetheless, its connection to optical absorption is clear and direct. 
The upper bound on the topological gap can be reached if and only if optical absorption occurs at a single frequency \(\omega=E_{g}/\hbar\), as shown by Eq. (28). Our bound (32) is especially useful when applied to low carrier density systems with nontrivial band topology. As an example, we consider moire superlattices formed from two-dimensional semiconductors. Recent theoretical calculations [19; 20] have shown that twisted homobilayer transition metal dichalcogenides (TMD) \(t\)MoTe\({}_{2}\) and \(t\)WSe\({}_{2}\) host topological bands over a broad range of twist angles. In particular, the topmost moire valence bands in valley \(K\) and \(K^{\prime}\), which carry opposite spins, have equal and opposite Chern numbers \(C_{\uparrow}=-C_{\downarrow}=1\), resulting in a quantum spin Hall insulator at the filling of two holes per unit cell. Band topology is also manifested at the filling of one hole per unit cell, where Coulomb interaction can induce spontaneous full spin/valley polarization and drive the system into a quantum anomalous Hall insulator [20]. This state has been experimentally observed [21; 22; 23]. The moire bands of \(t\)MoTe\({}_{2}\) or \(t\)WSe\({}_{2}\) are formed from the parabolic band of the monolayer by the presence of interlayer tunneling and a layer-dependent potential that are spatially modulated by the moire superlattice. Thus, the continuum model of moire bands fits into the general form of Eq. (27). This allows us to apply Eq. (32) to obtain an upper bound on the gap between the topmost and second moire valence bands: \[E_{g}^{\rm max}=\frac{2\pi\nu\hbar^{2}}{m^{*}A_{\theta}}, \tag{33}\] with \(\nu=1\) the filling factor for this case, \(A_{\theta}=\sqrt{3}a_{0}^{2}/(2\theta^{2})\) the area of the moire unit cell, \(a_{0}\) the monolayer lattice constant and \(m^{*}\) the effective mass of holes in MoTe\({}_{2}\) or WSe\({}_{2}\). 
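Eq. (33) is straightforward to evaluate. The sketch below reproduces the bound values quoted in the text for \(t\)MoTe\({}_{2}\); the effective mass \(m^{*}=0.62m_{0}\) is quoted in the text, while the monolayer lattice constant \(a_{0}\approx 3.52\) Å is our assumed input:

```python
import numpy as np

hbar = 1.054571817e-34        # J s
m0 = 9.1093837015e-31         # kg, bare electron mass
m_star = 0.62 * m0            # effective hole mass quoted for MoTe2
a0 = 3.52e-10                 # monolayer lattice constant of MoTe2 (assumed), in m
eV = 1.602176634e-19

def Eg_max_meV(theta_deg, nu=1):
    """Topological gap bound of Eq. (33) in meV at twist angle theta (degrees)."""
    theta = np.deg2rad(theta_deg)
    A_theta = np.sqrt(3) * a0**2 / (2 * theta**2)   # moire unit-cell area
    return 2 * np.pi * nu * hbar**2 / (m_star * A_theta) / eV * 1e3

assert abs(Eg_max_meV(2.1) - 9.7) < 0.1    # quoted bound at theta = 2.1 deg
assert abs(Eg_max_meV(4.0) - 35) < 0.5     # quoted bound at theta = 4 deg
```

Note the \(\theta^{2}\) scaling: halving the twist angle cuts the maximum possible topological gap by a factor of four, independently of any details of the moire potential.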
We emphasize that this bound is completely independent of the form or strength of interlayer tunneling or superlattice potential. We compare the topological gap bound \(E_{g}^{\rm max}\) with the minimum direct gap calculated from the continuum model of Ref. [19], using model parameters fitted to first-principles band structures for \(t\)MoTe\({}_{2}\)[24]. Remarkably, our bound is fairly tight at small twist angles as shown in Fig. 1. At \(\theta=2.1^{\circ}\), the upper bound \(E_{g}^{\rm max}=9.7\,\)meV is only \(1.67\) times the calculated gap of \(5.8\,\)meV. We further calculate the quantum weight \(K\) using the Bloch wavefunction of the continuum model. Indeed, as shown in Fig. 1, \(K\) is found to be lower bounded by the Chern number \(|C|=1\) and upper bounded by \(E_{g}^{\rm max}/E_{g}\) where \(E_{g}\) is the minimum direct gap in the continuum model. Remarkably, the quantum weight is very close to the Chern number throughout the twist angles shown here. On the other hand, its upper bound \(E_{g}^{\rm max}/E_{g}\) shows a deep minimum at \(\theta=2.1^{\circ}\). Our topological gap bound, Eq. (33), provides a key guiding principle for searching large gap topological insulators in two-dimensional electron systems. It is particularly useful for moire materials, where first principles band structure calculation is challenging due to the large unit cell and strong lattice relaxation involved. Without relying on any microscopic details, we have shown that the topological gap is fundamentally limited by the average kinetic energy of charge carriers \(2\pi\hbar^{2}n/m^{*}\). This upper bound can only be increased by increasing the filling factor, reducing the moire period, or choosing materials with smaller effective mass. For \(t\)MoTe\({}_{2}\) with \(m^{*}=0.62m_{0}\), the topological gap between the first and second moire bands cannot exceed \(35\,\)meV at \(\theta=4^{\circ}\). Before concluding this section, we emphasize that Eq. 
(32) is quite general and holds even for interacting systems, as we will show in the next section. ## VII Generalization to interacting systems In this section, we show that all main results of this work apply to interacting systems as well. The key point is that both the standard \(f\)-sum rule Eq. (29) and our generalized sum rule Eq. (13) can be formulated for an interacting system in terms of the dependence of its ground state energy and wavefunction on twisted boundary condition. Let us first generalize the quantum geometric tensor to the interacting case. Here we assume that the ground state is unique, leaving the degenerate case to a separate work. We denote the ground state of the \(N\)-electron system under twisted boundary condition by \(\ket{\Psi_{\mathbf{\kappa}}}\), which satisfies \[\Psi_{\mathbf{\kappa}}(\mathbf{r}_{1},\ldots,\mathbf{r}_{i}+\mathbf{L}_{\mu}, \ldots,\mathbf{r}_{N})\] \[=e^{i\mathbf{\kappa}\cdot\mathbf{L}_{\mu}}\Psi_{\mathbf{\kappa}}(\mathbf{r}_{1}, \ldots,\mathbf{r}_{i},\ldots,\mathbf{r}_{N}), \tag{34}\] where \(\mathbf{L}_{\mu}=(0,\ldots,L_{\mu},\ldots,0)\) specifies the system size in the \(\mu\)-direction \(L_{\mu}\), and the vector \(\mathbf{\kappa}\) specifies the twisted boundary condition. Then, the quantum geometric tensor \(Q^{\mu\nu}\) for these many-body ground states is defined as \[Q^{\mu\nu}=\bra{\partial_{\mu}\Psi_{\mathbf{\kappa}}}\ket{\partial_{\nu}\Psi_{\mathbf{\kappa}}}-\bra{\partial_{\mu}\Psi_{\mathbf{\kappa}}}\ket{\Psi_{\mathbf{\kappa}}}\bra{\Psi_{\mathbf{\kappa}}}\ket{\partial_{\nu}\Psi_{\mathbf{\kappa}}}, \tag{35}\] where \(\partial_{\mu}\) refers to the derivative with respect to \(\kappa_{\mu}\). The symmetric and antisymmetric components of \(Q^{\mu\nu}\) define the quantum metric and Berry curvature for the many-body states \(\ket{\Psi_{\mathbf{\kappa}}}\) respectively: \(Q^{\mu\nu}=g^{\mu\nu}-i\Omega^{\mu\nu}/2\). 
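For a single filled band the many-body definition above reduces to the familiar band quantum geometric tensor, which can be computed numerically. The sketch below is an assumed single-particle illustration (the Qi-Wu-Zhang lattice model is used as a stand-in Chern insulator, not a model from this work), using the gauge-invariant projector form \(Q^{\mu\nu}=\mathrm{Tr}[\partial_{\mu}P\,(1-P)\,\partial_{\nu}P\,P]\) with finite-difference derivatives:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, u=-1.0):
    """Qi-Wu-Zhang two-band model, gapped for u = -1 with |C| = 1."""
    return np.sin(kx) * sx + np.sin(ky) * sy + (u + np.cos(kx) + np.cos(ky)) * sz

def projector(kx, ky):
    """Projector onto the lower band (phase-invariant, so no gauge fixing needed)."""
    w, v = np.linalg.eigh(H(kx, ky))
    psi = v[:, 0]
    return np.outer(psi, psi.conj())

def qgt(kx, ky, d=1e-5):
    """Quantum geometric tensor Q^{mu nu} = Tr[dP_mu (1-P) dP_nu P] by central differences."""
    P = projector(kx, ky)
    dPx = (projector(kx + d, ky) - projector(kx - d, ky)) / (2 * d)
    dPy = (projector(kx, ky + d) - projector(kx, ky - d)) / (2 * d)
    I = np.eye(2)
    Q = np.empty((2, 2), dtype=complex)
    for i, dPi in enumerate((dPx, dPy)):
        for j, dPj in enumerate((dPx, dPy)):
            Q[i, j] = np.trace(dPi @ (I - P) @ dPj @ P)
    return Q

# Q = g - i*Omega/2, so the Berry curvature is Omega^{xy} = -2 Im Q^{xy};
# integrating it over the Brillouin zone gives the Chern number.
N = 40
ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
C = 0.0
for kx in ks:
    for ky in ks:
        C += -2 * qgt(kx, ky).imag[0, 1] * (2 * np.pi / N) ** 2
C /= 2 * np.pi
print(round(abs(C), 2))   # close to 1
```

The decomposition \(Q^{\mu\nu}=g^{\mu\nu}-i\Omega^{\mu\nu}/2\) is read off directly from the real and imaginary parts of the computed tensor.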
The optical conductivity \(\sigma(\omega;\mathbf{\kappa})\) for many-body systems can be calculated with the Kubo formula as a function of \(\mathbf{\kappa}\). Under the assumption that the system is gapped, the optical conductivity does not depend on \(\mathbf{\kappa}\) in the thermodynamic limit. Therefore, we can identify the optical conductivity of the system with the one averaged over \(\mathbf{\kappa}\), \(\bar{\sigma}(\omega)\), as done in Ref. [25] for the dc Hall conductivity. \(\bar{\sigma}(\omega)\) can be further rewritten with the many-body quantum geometric tensor, and we can derive the bounds (31), (32) for many-body systems. These bounds also hold for effective theory by replacing the carrier density and the mass with the appropriate effective quantities. The bounds for many-body systems reduce to the inequalities (31), (32) in noninteracting systems. Since the bounds for noninteracting systems are tight, the bounds for many-body systems are also tight in general. In conclusion, we establish direct relations between three fundamental properties of insulators--the energy gap, quantum geometry, and topology--through the consideration of optical conductivity of solids. Our work opens new directions of research. ###### Acknowledgements. This work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, under Collaborative Agreement Number W911NF-18-2-0048. YO is grateful for the support provided by the Funai Overseas Scholarship. LF was partly supported by the David and Lucile Packard Foundation.
2309.07192
The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection
Machine Learning (ML) has emerged as a promising approach in healthcare, outperforming traditional statistical techniques. However, to establish ML as a reliable tool in clinical practice, adherence to best practices regarding data handling, experimental design, and model evaluation is crucial. This work summarizes and strictly observes such practices to ensure reproducible and reliable ML. Specifically, we focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare. We investigate the impact of different data augmentation techniques and model complexity on the overall performance. We consider MRI data from the ADNI dataset to address a classification problem employing a 3D Convolutional Neural Network (CNN). The experiments are designed to compensate for data scarcity and initial random parameters by utilizing cross-validation and multiple training trials. Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures, each varying in the number of convolutional layers. Specifically, the augmentation strategies are based on affine transformations, such as zoom, shift, and rotation, applied concurrently or separately. The combined effect of data augmentation and model complexity leads to a variation in prediction performance of up to 10% in accuracy. When affine transformations are applied separately, the model is more accurate, independently of the adopted architecture. For all strategies, the model accuracy followed a concave behavior at an increasing number of convolutional layers, peaking at an intermediate value of layers. The best model (8 CL, (B)) is the most stable across cross-validation folds and training trials, reaching excellent performance both on the testing set and on an external test set.
Rosanna Turrisi, Alessandro Verri, Annalisa Barla
2023-09-13T10:40:41Z
http://arxiv.org/abs/2309.07192v1
# The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection ###### Abstract Background and objectives: Machine Learning (ML) has emerged as a promising approach in healthcare, outperforming traditional statistical techniques. However, to establish ML as a reliable tool in clinical practice, adherence to best practices regarding data handling, experimental design, and model evaluation is crucial. This work summarizes and strictly observes such practices to ensure reproducible and reliable ML. Specifically, we focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare. We investigate the impact of different data augmentation techniques and model complexity on the overall performance. ## Methods We consider Magnetic Resonance Imaging (MRI) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) to address a classification problem employing a 3D Convolutional Neural Network (CNN). The experiments are designed to compensate for data scarcity and initial random parameters by utilizing cross-validation and multiple training trials. Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures, each varying in the number of convolutional layers. Specifically, the augmentation strategies are based on affine transformations, such as _zoom_, _shift_, and _rotation_, applied concurrently or separately. ## Results The combined effect of data augmentation and model complexity leads to a variation in prediction performance of up to 10% in accuracy. When affine transformations are applied separately, the model is more accurate, independently of the adopted architecture. For all strategies, the model accuracy followed a concave behavior at an increasing number of convolutional layers, peaking at an intermediate value of layers. 
The best model (**8 CL**, (B)) is the most stable across cross-validation folds and training trials, reaching excellent performance both on the testing set and on an external test set. ## Conclusions Our results emphasize several important insights in ML for AD diagnosis. Firstly, we demonstrate that the choice of data augmentation strategy plays a significant role in improving the performance of the models. Secondly, we highlight the importance of investigating the depth of the model architecture, as it has a measurable impact on the final performance. Lastly, our findings underscore the necessity of adhering to rigorous experimental practices in the field of ML applied to healthcare. _Keywords:_ Deep Learning, Alzheimer's Disease, Data Augmentation, Model Depth, Reproducibility ## 1 Introduction Advanced Machine Learning (ML) techniques have proven to be highly effective in healthcare applications, such as cancer detection and prognosis [1, 2, 3, 4, 5] and heart disease prediction [6, 7]. However, it is still premature to assert that ML has been widely accepted as a standard in clinical practice. For instance, in [8] the authors reviewed thousands of papers on the use of ML to detect COVID-19, revealing that none of them achieved the required level of robustness and reproducibility necessary for their use in the medical field. The ML community is rightly taking action to solve this issue, by establishing best practices [9, 10, 11] that meet the essential criteria of the scientific method [12, 13] for producing high-quality publications and defining new medical protocols. ### Our contribution To begin, we summarize the general guidelines for reproducible ML pertaining to two key aspects: _data_ and _experimental design and model assessment_. #### Data * Data collection/selection should align with the scientific problem at hand, avoiding bias and information leakage (e.g., utilizing cross-sectional data for diagnostic confirmation or longitudinal data for prognostic purposes) [14]. 
* Data quality should be assessed by identifying missing values and inconsistencies, and improved by applying appropriate imputation and cleaning methods [15]. * Data harmonization can be used to compensate for heterogeneous data from different acquisition techniques [16]. * Data augmentation can be employed as a solution for small sample size or unbalanced samples per class, a common case in the biomedical field. * The whole data handling process should be described in detail in order to ensure reproducibility. #### Experimental design and model assessment * The versioned code used for conducting the experiments should be publicly shared to ensure transparency and reproducibility. * Every decision in the design of the predictive model should be justified, with recognition of uncontrollable factors [17]. * Details about the samples used in the training/testing split should be disclosed to guarantee benchmarking. * A well-designed experiment should avoid assessing results on a non-representative testing set. To this aim, resampling strategies [18] such as k-fold cross-validation or boosting can be utilized to comprehensively assess the model's performance. Further, models based on random weight initialization should be trained for multiple trials in order to assess their stability. * The performance metrics should be chosen according to the specific scientific objectives of the study [19, 20]. * Testing the model on external datasets is ideal to evaluate its generalization properties [21]. Following these criteria, we took Alzheimer's Disease (AD) as a prominent example of a complex disorder and we proposed a Deep Learning (DL) experiment to investigate the impact of data augmentation and model depth in a classification setting. We addressed the problem of discriminating AD subjects from cognitively normal (CN) individuals by using low-resolution (1.5T) MRI scans. 
We adopted a 3D-Convolutional Neural Network (CNN) [22], eliminating the need for feature engineering processes like ROI selection [23]. This setup is fairly ambitious due to the vast number of possible architectures and training strategies, but it eliminates the need for supervision by domain experts. A total of 15 DL models are compared, showing significant variations in prediction accuracy of up to 10%. One augmentation strategy consistently outperforms the others. Model accuracy varies as a concave curve at increasing model depth values, peaking at an intermediate number of layers. The best model showed excellent accuracy on the testing set and good generalization properties on an external dataset. It is worth noting that the proposed approach can be readily extended to other similar contexts beyond AD. The paper is structured as follows. The Background section introduces the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset as the standard for AD research and mentions relevant state-of-the-art ML papers. The Materials and Methods section details data, methods, and experimental setup, including challenges and choices made. The Results section compares augmentation strategies and architectures. The Discussion section relates findings to state-of-the-art studies, and the Conclusion section illustrates future perspectives. ## 2 Background Many recent studies exploited ML methods to elucidate AD pathophysiological processes, employing data from ADNI [24, 21, 25, 26, 27, 28, 29, 30, 31, 32, 33]. ADNI comprises heterogeneous datasets collected during different temporal phases (ADNI1, ADNI-GO, ADNI2, and ADNI3), each characterized by varying MRI acquisition protocols, see Fig. 1. ADNI1 includes longitudinal acquisitions on 1.5T and 3T scanners with T1- and T2-weighted sequences; ADNI-GO/ADNI2 involves imaging data acquired at 3T with similar T1-weighted parameters to ADNI1; ADNI3 exclusively utilizes MRI obtained from 3T scanners. 
ADNI heterogeneity allowed for many different experimental setups in the literature, with results depending on sample size (ranging from hundreds [23, 34, 35, 36] to thousands [21, 37]), image resolution or sequence type. However, this flexibility and the lack of a universally recognized benchmark hampered a fair comparison among models' performance. Likewise, the absence of a standardized protocol for data handling, including dataset splitting and pre-processing, prevents the development of models transferable to clinical practice. Despite having potential implications for future clinical applications, published results in this context often deviate from established principles of scientific methodology. In the remainder of this section, we outline significant findings in MRI-based classification on ADNI and discuss their experimental approach in relation to the criteria outlined in the previous section. In [37], the authors propose a 2D-CNN model to discriminate among CN, Mild Cognitive Impaired (MCI), and AD subjects, reaching an accuracy of 99.3%. Despite their model providing excellent results, the performed pipeline does not satisfy many data and experimental design criteria. The authors state that they adopt ADNI3 MRI for a total number of 6625 images. The Data Acquisition section reports that a total number of 7635 images is used in the work. In the same section, the authors illustrate the data acquisition procedure in which they download 1290 MRI images from ADNI1 Annual 2 Yr 3T and ADNI1 Baseline 3T. This makes it unclear which data has been selected for the experiments. Figure 1: **MRI collection in ADNI dataset.** Schema representing ADNI phases (ADNI1, ADNI GO, ADNI2, ADNI3). Different phases include a variable proportion of subjects: circles represent CN subjects, triangles represent Mild Cognitive Impaired (MCI), early MCI (EMCI) or late MCI (LMCI) subjects and squares depict AD patients (picture inspired by the data samples schema in the ADNI website). 
Further, ADNI1 Baseline and ADNI1 Annual 2 Yr contain MRI exams from the same subjects at the baseline and after two years. We assume that, in two years, the MRI scan will not considerably change both for CN subjects and for some stable patients. As a consequence, the model may have been trained and tested on very similar data. This clearly increases the performance results in terms of accuracy which, however, does not correspond to a real model improvement. We point out that longitudinal acquisitions at different pathological stages should be used only when modeling disease progression and outcome over time. Moreover, although the experimental design and the model are well described, the performance is only evaluated in terms of accuracy, disregarding other important measures, and on one trial only, discarding the weight robustness assessment. Finally, training/testing identifiers and the Python code are not publicly available. In [36], two different 3D-CNN architectures are proposed for performing the AD/CN binary classification task, as well as other related tasks (e.g., AD vs early MCI (EMCI) subjects). The experiments are carried out by running 5-fold cross-validation. The best architecture reaches 80% average accuracy in discriminating between AD and CN. The ROC AUC is also reported. Both data and code are available on GitHub. Nonetheless, the proposed pipeline neglects some important aspects. The authors claim that, in order to prevent information leakage, they only select the first image for each subject within the ADNI dataset, for a total number of 231 images. However, it is not clear if they mixed data from different ADNI phases, which would correspond to using heterogeneous images. Further, image processing consists of cropping images to \(110\times 110\times 110\) size, an arbitrary choice that is not discussed. All experiments are run only once, without assessing the weight robustness. 
In [21], a CNN model takes 3T MRI exams from ADNI Baseline as input to perform AD binary classification. Despite the promising result of 99.2% accuracy in differentiating between AD and CN, the model is tested only on a set of 65 samples, which may not be large enough to be representative. In [38] the authors investigate the use of three popular pre-trained CNN models and their fine-tuning on 3T T1-weighted MRI from ADNI. Two tasks are performed: i) binary classification between CN and diseased subjects (including progressive MCI (pMCI), stable MCI (sMCI) and AD); ii) multi-class classification among CN, sMCI, pMCI, AD. Some essential details are missing about the preprocessing procedure, including how 2D images are obtained from volumetric MRI and how many synthetic samples are obtained by data augmentation. Results show that transfer learning always improves the classification performance on both tasks. The AUC curve is also reported. However, the reported results may be unstable or not fully reliable as the authors do not adopt any resampling strategy and the testing set only contains 32 samples. Remarkably, the authors also test their models on two external datasets reaching high accuracy. Nevertheless, both datasets have a very limited sample size (30 and 60 samples, respectively). ## 3 Materials and Methods ### Data For our experiments, we adopt the ADNI dataset [24] considering only the ADNI1 data collected during screening, which is the baseline exam. This includes 550 1.5T MRI exams from 307 CN subjects and 243 AD patients. We use an additional set of 3T MRI exams to test the best model in a _domain shift_ setting [39]. All data was preprocessed by ADNI experts (more information in [37]). #### Data processing We recall that MRI exams are three-dimensional data describing the structure of the brain. Fig. 
2 displays the 2D projection of brain images captured from a CN subject (first row) and an AD patient (second row) on the _sagittal, coronal,_ and _axial_ planes. As already noted, ADNI images were collected with different protocols and scanning systems, hence they are very heterogeneous in size, see Table 1. To enable the use of ML methods, it is necessary to select a common volume size. This choice, often unexplained in the literature, defines fundamental characteristics of the pipeline, such as the amount of information contained in the image and the input space dimension, on which model choice and computational burden depend. In our experiments, images are downsized to \(96\times 96\times 73\). The principle guiding this choice derives from computational issues. We first reduced the image dimension, rescaling the image by 50% along all dimensions, and we then resized images to match the smallest one. An alternative strategy may be zero-padding to match the biggest image, but this would increase memory requirements. Finally, intensity normalization was applied, omitting the zero-intensity voxels from the calculation of the mean. This procedure yields homogeneous data with a fixed size. #### Data augmentation Data augmentation is a common procedure that simultaneously addresses data scarcity and creates a model invariant to a given set of transformations [40]. Different augmentation strategies can result in varied training sets, affecting model performance and computational cost. In this study, the original set is augmented by applying separately or altogether _zoom_, _shift_, and _rotation_ transformations, as shown in Fig. 3 (see 3 for details). We devised the following strategies to compare the effect of different transformations and sample amounts: * **Strategy (A)**. To each image, we simultaneously apply all the transformations. The size of the augmented data will match the number of training samples \(\mathsf{N}\). * **Strategy (B)**. 
To each image, we separately apply each transformation, generating three different distorted images. The size of the augmented data will be three times the number of training samples, 3N. * **Strategy (C)**. To each image, we simultaneously apply all the transformations, as in strategy A. We repeat the process three times so that the number of augmented samples matches that of strategy B (3N). Therefore, strategies **(A)** and **(C)** rely on the same procedure, while strategies **(B)** and **(C)** generate the same number of samples. We note that data augmentation is performed only on the training set, leaving validation and testing sets at the original sample size. \begin{table} \begin{tabular}{c|c|c|c} \multicolumn{1}{c|}{**MRI size**} & \multicolumn{1}{c|}{**CN**} & \multicolumn{1}{c|}{**AD**} & \multicolumn{1}{c}{**Total**} \\ \hline \(256\times 256\times 184\) & 8 & 8 & 16 \\ \hline \(256\times 256\times 170\) & 40 & 34 & 74 \\ \hline \(256\times 256\times 160\) & 4 & 0 & 4 \\ \hline \(256\times 256\times 166\) & 97 & 82 & 179 \\ \hline \(256\times 256\times 162\) & 0 & 1 & 1 \\ \hline \(192\times 192\times 160\) & 117 & 86 & 203 \\ \hline \(256\times 256\times 146\) & 1 & 0 & 1 \\ \hline \(256\times 256\times 161\) & 2 & 0 & 2 \\ \hline \(256\times 256\times 180\) & 38 & 32 & 70 \\ \end{tabular} \end{table} Table 1: Baseline 1.5T ADNI1 dataset. Number of CN and AD MRI scans grouped by size. Figure 2: 2D visualization of 3D MRI scans. Axial, coronal and sagittal planes of two brain images from the ADNI dataset. ### Experimental setup #### 3.2.1 _Guide to the model choice_ Choosing the optimal DL model is not straightforward, as the vast number of network and training parameters makes the brute-force approach unfeasible. Here we illustrate the model choices made a priori based on the issues posed by the addressed task. **Type of data.** Working with 3D images presents computational and memory challenges. 
As a solution, several studies in the literature adopt three 2D projections of the MRI. Nevertheless, this approach requires three separate models, leading to increased overall wall-clock time. Moreover, extracting features from the 2D projections may result in the loss of crucial volumetric information and a simplified representation of the studied phenomenon. In this work, we adopted a 3D CNN that directly extracts volumetric features. **Limited amount of data.** To overcome the limited dataset size, we implemented the following strategies aimed at controlling model complexity and preventing overfitting: data augmentation; adding an \(\ell_{2}\) penalty; and limiting the number of filters per layer. The latter method resulted in a substantial parameter reduction across the network. For instance, in a 2-layer CNN with 32 and 64 \(3\times 3\times 3\) filters, reducing the number of filters to 8 and 16 (25% of the initial values) leads to a significant reduction of 93% in the number of learnable parameters (from 56256 to 3696). **Memory capacity.** 3D models usually require a huge amount of memory capacity, which depends both on the input dimension and the model size. To reduce the required memory: i) we re-scaled the images to halve the data dimension; ii) we chose a batch size that balances the memory cost while retaining a representative subset; iii) we balanced the number of filters and the batch size to reduce the computational burden of the activation layer. #### 3.2.2 _Model details_ We report experiments on the CN/AD binary classification. A preliminary analysis performed with a standard training/validation/test split (75%/15%/10%) denoted a very high variance due to the limited sample size of the testing set. For this reason, to guarantee a correct assessment of model performance and stability, we set up a stratified K-fold cross-validation loop. We set K=7, from Fold 0 to Fold 6 (training/validation/test, with a proportion of 70%/15%/15%), which ensures having enough data for the learning phase. All folds are fully balanced, with the exception of Fold 6, which has an unbalanced ratio between AD and CN samples as the total amounts of samples per class do not match exactly. Figure 3: **Original and transformed MRI image.** 2D projections of the original MRI image (first row) and the augmented image obtained by applying _zoom_ (second row), _shift_ (third row), and _rotation_ (last row) transformations. We adopted as baseline network an architecture with 4 Convolutional Layers (CL) followed by a fully-connected layer, as depicted in Fig. 4. We will refer to this architecture as the **4 CL** model. To investigate the optimal CNN depth, we inserted additional convolutional layers without pooling operations so that the number of layers is the only factor impacting the model. Specifically, we added 2, 4, 6 and 8 convolutional layers in correspondence to the arrows of Fig. 4. We refer to these models as **6 CL**, **8 CL**, **10 CL**, and **12 CL**. For instance, in the **10 CL** architecture 6 convolutional layers are added to the **4 CL** baseline: two layers are inserted in correspondence of the first and second arrows, and one layer in correspondence of the third and fourth arrows. Additional details on network and training parameters can be found in **??**. In order to test model stability with respect to initial random weights, each model has been run 10 times at fixed parameters. All the experiments have been implemented using the Python programming language and performed on a Tesla K40c GPU. Sample identifiers and the Python code to reproduce the experiments are available on GitHub. ## 4 Results We compare 15 models obtained by combining different augmentation strategies with varying network depths, then we illustrate in detail the results of the best model. Results based on non-augmented data are not reported, as they were substantially worse than the ones obtained by using augmentation. 
Figure 4: **3D-CNN Architecture.** Architecture of the **4 CL** baseline network, composed of four blocks of convolutional and pooling layers, followed by a fully connected (FC) layer. The total number of features (\(8\cdot i\)) in the \(i\)-th convolutional layer is marked above each layer, whereas the filter dimension is reported below. In the experiments, we consider four other extended versions of the baseline architecture, duplicating the convolutional layer preceding the arrows. ### _Architecture and augmentation choice_ Fig. 5 shows the accuracy on the validation set. As expected, Strategy (A) (in yellow) significantly under-performs the other augmentation types. This is due to the lower number of samples in the augmented data. Although strategies (B) (in green) and (C) (in fuchsia) generate the same amount of data, (B) outperforms (C) in all models, suggesting that applying the transformations separately significantly improves the CNN model. These outcomes hold independently of the adopted CNN architecture. Moreover, the accuracy curves for all augmentation methods show a similar pattern: the best results are obtained for intermediate numbers of layers, while accuracy decreases for higher numbers of convolutional layers. The same behavior can be observed in Fig. 6, where we report for each cross-validation fold the distribution of accuracy over the 10 trials. The **8 CL** model with strategy (B) emerges as the best-performing combination, also exhibiting more stability compared to the other combinations. ### _Best model performance and insight_ The combination of a CNN with 8 convolutional layers and the (B) augmentation strategy (**8 CL**, (B)) turned out to be the best model, reaching an accuracy of \(87.21\pm 0.88\%\) on the validation set and \(81.95\pm 1.26\%\) on the testing set. A complete evaluation of this model is reported in Fig. 
7: panel (a) reports the mean and standard deviation of Precision, Recall, F1-score, AUC and AUCPRC for the CN and AD classes over the 7 folds; panel (b) shows the confusion matrix obtained by counting True Positive, True Negative, False Positive, and False Negative scores over the 7 folds. Fig. 8 gives insight into the layers' behaviour and how they learn the optimal model. Panel (a) displays the learned filters of every convolutional layer for one AD patient on the three considered median planes, i.e. _sagittal, coronal_ and _axial_. It is clear that the filters capture more abstract features at increasing depth values. Panel (b) presents, for each convolutional layer, the layer outputs (_embeddings_) of training and test samples projected on a two-dimensional plane through t-SNE [41]. Both projections show that the embeddings are more clustered as the number of layers increases. To further understand the properties and limits of the (**8 CL**, (B)) model, we assessed the effect of dropout, finding that it does not improve its performance (details in the Appendix). Also, we tested the model on an external dataset (described in the Appendix) of 3T MRI scans, obtaining an accuracy of 71% and an AUC of 0.76 (a complete evaluation can be found in the Appendix). ## 5 Discussion We analysed the impact of data augmentation strategies and of the number of convolutional layers in CNN models, considering a total of 15 combinations. Independently of the adopted architecture, Strategy (B) always outperforms the others. As strategies (B) and (C) leverage the same amount of training samples, these results suggest that applying the affine transformations separately may be more effective than combining them simultaneously. Figure 5: **Model accuracy for varying architecture depth and augmentation strategies. Comparison among the proposed CNN-based architectures with the three augmentation strategies, in terms of median accuracy on the validation set. The \(y\)-axis reports the model accuracy distribution on the 10 trials (%) and the x-axis presents the augmentation strategies (A), (B), and (C) in 5 blocks - one for each CNN architecture.** We showed that models' performance can differ by up to 10% in average accuracy, highlighting the importance of correctly investigating the model depth and the set of data transformations. For all augmentation approaches, we found that the curve of the model accuracy at increasing depths tends to be a concave function reaching its maximum at an intermediate depth value. Despite the widespread notion that deeper neural networks generalize better, this result is in line with other studies [42, 43] in which the authors showed that smaller models perform better when only a limited amount of data is available, as they avoid overfitting. The best model we identified is the combination of a CNN with 8 convolutional layers and the (B) augmentation strategy (**8 CL**, (B)). The model accuracy in validation and testing is \(87.21\pm 0.88\%\) and \(81.95\pm 1.26\%\), respectively, which is a 4.2% increase in accuracy with respect to the (**4 CL**, (B)) model. Also, Fig. 6 shows how (**8 CL**, (B)) is more stable than all other models with respect to both cross-validation folds and training trials. Figure 6: **Model performance and stability across folds.** _Small multiple_ plot for the comparison of the validation accuracy for all architectures and augmentation strategies on all K-fold splits. On all subplots, the y-axis reports the model accuracy distribution on the 10 trials (%) for each split (x-axis). Columns and rows display augmentation strategies and CNN architectures, respectively. The best combination (**8 CL**, (B)) is highlighted with a red border. Figure 7: Evaluation of the (**8 CL**, (B)) model on the testing set.
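Since per-fold variability is central to the discussion (results are reported as mean ± standard deviation over folds and trials), the aggregation itself is simple to make explicit. The sketch below uses hypothetical fold accuracies, not the paper's values, and assumes the population standard deviation, since the paper does not state which variant it reports:

```python
import statistics

def summarize(fold_accuracies):
    """Aggregate per-fold accuracies (in %) into a (mean, std) pair.

    Assumption: population standard deviation (statistics.pstdev); the
    paper does not specify whether its reported std is sample-based.
    """
    return statistics.fmean(fold_accuracies), statistics.pstdev(fold_accuracies)

# Hypothetical accuracies for a 7-fold cross-validation (illustrative only).
folds = [85.0, 88.0, 87.0, 86.0, 89.0, 86.5, 87.5]
mean_acc, std_acc = summarize(folds)
```

Reporting both numbers, rather than the accuracy of a single trial, is exactly the variability estimate the discussion argues is often missing in the literature.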
Although these results appear in line with current state-of-the-art studies, we argue that a true comparison is not completely feasible as other works employ different datasets and data types, the number of samples varies dramatically both in training and testing sets, experimental designs are very heterogeneous and, most importantly, performance is often assessed on a single trial, without any variability estimation. As additional evaluation, we tested the best model in a _domain shift_ context, i.e. on 3T MRI data, reaching 71% accuracy. We remark that this is a very challenging task as the image resolution differs substantially from that of the training set. ## 6 Conclusion This paper proposes an experimental pipeline for MRI-based binary classification of AD vs CN subjects, emphasizing key criteria for robustness and reproducibility. The experiments have been conducted on a pre-processed subset of the ADNI dataset that includes 1.5T MRI scans collected during the ADNI screening phase. This selection ensures high data quality and harmonization, preventing any potential data leakage. The list of selected samples was made publicly available to enable benchmarking in further studies. Although the dataset is balanced, its sample size is limited. To address potential overfitting and ensure reliable results, resampling, data augmentation, and model complexity reduction strategies were employed. The first solution exploits K-fold cross-validation in order to provide a measure of model variability and robustness in terms of standard deviation. The second approach augments the number of training samples by applying affine transformations to the original images, leading to a final sample size that depends on how and how many transformations are applied. The third strategy defines the model architecture in order to reduce the number of learnable parameters.
In particular, the last two solutions require selecting some parameters following empirical criteria that are often insufficiently discussed in the literature. Figure 8: (a) Illustration of the filters learned by the best model for one of the AD samples. Columns show filters for the three median planes and rows show the filters for the input (raw data) and the convolutional layers at increasing depth. (b) Training and test embeddings for each convolutional layer of the (**8 CL**, (B)) model projected by t-SNE. For increasing depth, AD (green) and CN (yellow) samples are better clustered. We believe that if artificial intelligence aims to give a real contribution to daily clinical practice, ML methods should be designed and implemented following homogeneous and shared data acquisition protocols and benchmarks, standardized strategies for parameter selection and good practices that ensure robust and reproducible results. To the best of our knowledge, this is the first work in the AD domain that digs into these experimental aspects and quantifies the impact on performance estimation. Future work will extend this analysis to other architectures, such as transformers [44], additional affine transformations, different amounts of augmented samples, and, possibly, a multi-class classification setting that includes MCI subjects. ## Acknowledgments Rosanna Turrisi was supported by a research fellowship funded by the DECIPHER-ASL - Bando PRIN 2017 grant (20175NW5MB - Ministry of University and Research, Italy). Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: ADNI Acknowledgment List. ## References * [1] Joseph A Cruz and David S Wishart.
Applications of machine learning in cancer prediction and prognosis. _Cancer informatics_, 2:117693510600200030, 2006. * [2] Paul Sajda. Machine learning for detection and diagnosis of disease. _Annu. Rev. Biomed. Eng._, 8:537-565, 2006. * [3] Konstantina Kourou, Themis P. Exarchos, Konstantinos P. Exarchos, Michalis V. Karamouzis, and Dimitrios I. Fotiadis. Machine learning applications in cancer prognosis and prediction. _Computational and Structural Biotechnology Journal_, 13:8-17, 2015. * [4] Li Shen, Laurie R Margolies, Joseph H Rothstein, Eugene Fluder, Russell McBride, and Weiva Sieh. Deep learning to improve breast cancer detection on screening mammography. _Scientific reports_, 9(1):1-12, 2019. * [5] Tafadzwa L Chaunzwa, Ahmed Hosny, Yiwen Xu, Andrea Shafer, Nancy Diao, Michael Lanuti, David C Christiani, Raymond H Mak, and Hugo JWL Aerts. Deep learning classification of lung cancer histology using ct images. _Scientific reports_, 11(1):1-12, 2021. * [6] Senthilkumar Mohan, Chandrasegar Thirumalai, and Gautam Srivastava. Effective heart disease prediction using hybrid machine learning techniques. _IEEE access_, 7:81542-81554, 2019. * [7] Sellappan Palaniappan and Rafiah Awang. Intelligent heart disease prediction system using data mining techniques. In _2008 IEEE/ACS international conference on computer systems and applications_, pages 108-115. IEEE, 2008. * [8] Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for covid-19 using chest radiographs and ct scans. _Nature Machine Intelligence_, 3(3):199-217, 2021. * [9] Benjamin J Heil, Michael M Hoffman, Florian Markowetz, Su-In Lee, Casey S Greene, and Stephanie C Hicks. Reproducibility standards for machine learning in the life sciences. _Nature Methods_, 18(10):1132-1135, 2021. 
* [10] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Lariviere, Alina Beygelzimer, Florence d'Alche Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machine learning research: a report from the neurips 2019 reproducibility program. _Journal of Machine Learning Research_, 22, 2021. * [11] Andrew L Beam, Arjun K Manrai, and Marzyeh Ghassemi. Challenges to the reproducibility of machine learning models in health care. _Jama_, 323(4):305-306, 2020. * [12] John PA Ioannidis. Why most published research findings are false. _PLoS medicine_, 2(8):e124, 2005. * [13] Aaron Stupple, David Singerman, and Leo Anthony Celi. The reproducibility crisis in the age of digital medicine. _NPJ digital medicine_, 2(1):1-3, 2019. * [14] Balajee JM et al. Data wrangling and data leakage in machine learning for healthcare. 2018. * [15] Wei-Chao Lin and Chih-Fong Tsai. Missing value imputation: a review and analysis of the literature (2006-2017). _Artificial Intelligence Review_, 53:1487-1509, 2020. * [16] Konstadina D Kourou, Vasileios C Pezoulas, Eleni I Georga, Themis P Exarchos, Panayiotis Tsanakas, Manolis Tsiknakis, Theodora Varvarigou, Salvatore De Vita, Athanasios Tzioufas, and Dimitrios I Fotiadis. Cohort harmonization and integrative analysis from a biomedical engineering perspective. _IEEE reviews in biomedical engineering_, 12:303-318, 2018. * [17] Benjamin Haibe-Kains, George Alexandru Adam, Ahmed Hosny, Farnoosh Khodakarami, Massive Analysis Quality Control (MAQC) Society Board of Directors, Levi Waldron, Bo Wang, Chris McIntosh, Anna Goldenberg, Anshul Kundaje, et al. Transparency and reproducibility in artificial intelligence. _Nature_, 586(7829):E14-E16, 2020. * [18] Gustavo EAPA Batista, Ronaldo C Prati, and Maria Carolina Monard.
A study of the behavior of several methods for balancing machine learning training data. _ACM SIGKDD explorations newsletter_, 6(1):20-29, 2004. * [19] Marina Sokolova and Guy Lapalme. A systematic analysis of performance measures for classification tasks. _Information processing & management_, 45(4):427-437, 2009. * [20] Davide Chicco and Giuseppe Jurman. The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. _BMC genomics_, 21:1-13, 2020. * [21] Silvia Basaia, Federica Agosta, Luca Wagner, Elisa Canu, Giuseppe Magnani, Roberto Santangelo, Massimo Filippi, Alzheimer's Disease Neuroimaging Initiative, et al. Automated classification of alzheimer's disease and mild cognitive impairment using a single mri and deep neural networks. _NeuroImage: Clinical_, 21:101645, 2019. * [22] Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. _The handbook of brain theory and neural networks_, 3361(10):1995, 1995. * [23] Xiaojing Long, Lifang Chen, Chunxiang Jiang, Lijuan Zhang, and Alzheimer's Disease Neuroimaging Initiative. Prediction and classification of alzheimer disease based on quantification of mri deformation. _PloS one_, 12(3):e0173372, 2017. * [24] Susanne G Mueller, Michael W Weiner, Leon J Thal, Ronald C Petersen, Clifford Jack, William Jagust, John Q Trojanowski, Arthur W Toga, and Laurel Beckett. The alzheimer's disease neuroimaging initiative. _Neuroimaging Clinics of North America_, 15(4):869, 2005. * [25] Margherita Squillario, Giulia Abate, Federico Tomasi, Veronica Tozzo, Annalisa Barla, and Daniela Uberti. A telescope gwas analysis strategy, based on snps-genes-pathways ensamble and on multivariate algorithms, to characterize late onset alzheimer's disease. _Scientific reports_, 10(1):1-12, 2020. * [26] Marlena Osipowicz, Bartek Wilczynski, Magdalena A Machnicka, and for the Alzheimer's Disease Neuroimaging Initiative. 
Careful feature selection is key in classification of Alzheimer's disease patients based on whole-genome sequencing data. _NAR Genomics and Bioinformatics_, 3(3), 07 2021. lqab069. * [27] Natalia Briones and Valentin Dinu. Data mining of high density genomic variant data for prediction of alzheimer's disease risk. _BMC medical genetics_, 13(1):1-12, 2012. * [28] Matthew E Stokes, M Michael Barmada, M Ilyas Kamboh, and Shyam Visweswaran. The application of network label propagation to rank biomarkers in genome-wide alzheimer's data. _BMC genomics_, 15(1):1-13, 2014. * [29] Xia Jiang, Binghaming Cai, Diyang Xue, Xinghua Lu, Gregory F Cooper, and Richard E Neapolitan. A comparative analysis of methods for predicting clinical outcomes using high-dimensional genomic datasets. _Journal of the American Medical Informatics Association_, 21(e2):e312-e319, 04 2014. * [30] Daoqiang Zhang, Yaping Wang, Luping Zhou, Hong Yuan, and Dinggang Shen. Multimodal classification of alzheimer's disease and mild cognitive impairment. _NeuroImage_, 55(3):856-867, 2011. * [31] Chunfeng Lian, Mingxia Liu, Jun Zhang, and Dinggang Shen. Hierarchical fully convolutional network for joint atrophy localization and alzheimer's disease diagnosis using structural mri. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 42(4):886-893, 2020. * [32] Janani Venugopalan, Li Tong, Hamid Reza Hassanzadeh, and May D Wang. Multimodal deep learning models for early detection of alzheimer's disease stage. _Scientific reports_, 11(1):1-13, 2021. * [33] Scott E Counts, Milos D Ikonomovic, Natasha Mercado, Irving E Vega, and Elliott J Mufson. Biomarkers for the early detection and progression of alzheimer's disease. _Neurotherapeutics_, 14(1):35-53, 2017. * [34] Siqi Liu, Sidong Liu, Weidong Cai, Sonia Pujol, Ron Kikinis, and Dagan Feng. Early diagnosis of alzheimer's disease with deep learning. In _2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI)_, pages 1015-1018, 2014.
* [35] Sadiq Alinsaif and Jochen Lang. 3d shearlet-based descriptors combined with deep features for the classification of alzheimer's disease based on mri data. _Computers in Biology and Medicine_, 138:104879, 2021. * [36] Sergey Korolev, Amir Safiullin, Mikhail Belyaev, and Yulia Dodonova. Residual and plain convolutional neural networks for 3d brain mri classification. In _2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)_, pages 835-838, 2017. * [37] Ahmad Waleed Salehi, Preety Baglat, Brij Bhushan Sharma, Gaurav Gupta, and Ankita Upadhya. A cnn model: Earlier diagnosis and classification of alzheimer disease using mri. In _2020 International Conference on Smart Electronics and Communication (ICOSEC)_, pages 156-161, 2020. * [38] Hamed Ghaffari, Hassan Tavakoli, and Gila Pirzad Jahromi. Deep transfer learning-based fully automated detection and classification of alzheimer's disease on brain mri. _The British Journal of Radiology_, 95(1136):20211253, 2022. * [39] Colin R Buchanan, Susana Munoz Maniega, Maria C Valdes Hernandez, Lucia Ballerini, Gayle Barclay, Adele M Taylor, Tom C Russ, Elliot M Tucker-Drob, Joanna M Wardlaw, Ian J Deary, et al. Comparison of structural mri brain measures between 1.5 and 3 t: Data from the lothian birth cohort 1936. _Human Brain Mapping_, 42(12):3905-3921, 2021. * [40] Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. _Journal of big data_, 6(1):1-48, 2019. * [41] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. _Journal of machine learning research_, 9(11), 2008. * [42] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. _Communications of the ACM_, 64(3):107-115, 2021. * [43] Davide Del Vento and Alessandro Fanfarillo. Traps, pitfalls and misconceptions of machine learning applied to scientific disciplines. 
In _Proceedings of the practice and experience in advanced research computing on rise of the machines (learning)_, pages 1-8. 2019. * [44] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. _Advances in neural information processing systems_, 28, 2015. # The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection Rosanna Turrisi\({}^{\star}\), Alessandro Verri & Annalisa Barla\({}^{1,2}\) for the Alzheimer's Disease Neuroimaging Initiative\({}^{\S}\) **Appendix** ## Appendix A Data ### The ADNI dataset The ADNI cohort is a longitudinal multicenter study that aims at developing clinical, genetic and biomedical biomarkers for AD early detection. ADNI started collecting data in the early 2000s and has gone through four different phases, including over 2000 subjects affected by different degrees of cognitive impairment. However, different data modalities are not available for all subjects and therefore some scientific questions are still out of reach due to data scarcity. In this work, we considered a pre-processed subset of 550 1.5T T1-weighted MRI scans from ADNI1. This includes 307 CN subjects and 243 AD patients. Further, to evaluate the model's ability to generalize to new datasets in the context of _domain shift_, we utilized 3T T1-weighted MRI scans as a dataset for external validation. Specifically, this consists of 80 pre-processed images from ADNI1, including 47 healthy subjects and 33 patients with AD. ### Data pre-processing ADNI experts pre-processed the MRI exams in order to correct the image geometry distortion (by Gradwarp) and the image intensity non-uniformity (by B1 calibration and N3). Finally, the images have been scaled for gradient drift using phantom data as detailed on the ADNI website. Note that not all these techniques have been simultaneously applied to each image, as the preprocessing procedure varies based on the acquisition system.
It is worthwhile to mention that the ADNI screening folder contains images with a second type of scaling, referred to as _scaled_2_, which we used in place of the _scaled_ version when the latter is not available. The ADNI consortium did not report details on differences between the two scaling methods. ### Data augmentation In this work, we studied the effect of various augmentation strategies in which we differently combined three affine transformations, described in the following. * _Zoom_. The in/out zoom is applied by randomly generating the zoom percentage, in a range from 0 to 20%. * _Shift_. The shift is generated separately for each image dimension, so that the shift is \(<0.4\). * _Rotation_. The rotation is defined by randomly generating a rotation angle, between -5 and 5 degrees, for each image dimension. It is clear that there is not a golden rule for picking the optimal transformation parameters and that any value is somewhat arbitrary. Here we assume that brain image acquisitions differ from one another only by small variations. This assumption motivated our choices. ## Appendix B Experimental setup We adopted as baseline network an architecture with 4 Convolutional Layers (CL) followed by a fully-connected layer (**4 CL** model). The number of filters in the i-th layer was set to \(8*\mathrm{i}\). All convolutional layers have filters with a \(3\times 3\times 3\) kernel. Padding is performed so that the original image and the feature map have the same size. We applied batch normalization to each convolutional layer. Successively, pooling was applied with decreasing size (i.e., \(4\times 4\times 4\), \(3\times 3\times 3\), \(2\times 2\times 2\), \(2\times 2\times 2\)). The pooling size was chosen in order to decrease the layer size and, consequently, the computational cost. To investigate the optimal CNN depth, we inserted additional convolutional layers without pooling operations so that the number of layers is the only factor impacting the model.
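The transformation parameter ranges listed in Appendix A.3 above can be made concrete with a small sampler. This is a sketch only: the per-axis interpretation of the shift bound and the handling of the zoom direction are assumptions, not details given by the paper.

```python
import random

def sample_augmentation_params(rng):
    """Draw one set of affine-transformation parameters using the ranges
    stated in Appendix A.3: zoom percentage in [0, 20%], per-axis shift
    with |shift| < 0.4, and per-axis rotation angle in [-5, 5] degrees."""
    zoom = rng.uniform(0.0, 0.20)                        # zoom percentage
    shift = [rng.uniform(-0.4, 0.4) for _ in range(3)]   # one value per image axis
    angles = [rng.uniform(-5.0, 5.0) for _ in range(3)]  # degrees, per image axis
    return zoom, shift, angles
```

Under strategy (B) each of the three transformations would be applied on its own to produce a separate augmented sample, while under strategy (C) one combined transformation would be built from all three parameter sets at once.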
Specifically, we added 2, 4, 6 and 8 convolutional layers, obtaining five models with an increasing number of layers: 4, 6, 8, 10 and 12. Each model was trained using the Adam optimizer [1] with a learning rate set to 0.001. We trained the network to minimize the cross-entropy loss function with \(\ell_{2}\)-penalty weighted by 0.01. We allowed a maximum of 200 epochs, using early stopping if the performance does not increase after 20 epochs (patience). The batch size was 50. The choice of the described parameters was guided by the criteria discussed in the Materials and Methods section and an exploratory analysis on the smallest model. Figure 1: **Validation and test accuracy of best model.** Evaluation of the (**8 CL**, (B)) model on the validation and testing sets in terms of percentage accuracy distribution on each fold (in silver) of the cross-validation and on average (in green), for all trials. Figure 2: **Evaluation of the best model during training.** Four plots presenting, at evolving epochs, (1) the cross-entropy loss function values, (2) the probability distribution of training outputs, (3) training accuracy and (4) validation accuracy of the (**8 CL**, (B)) model. ## Appendix C Results ### Best model performance and insight Fig. 1 shows the model performance distribution on the validation and testing sets for all trials. Note that performance on Fold 6 of the testing set is associated with a very high variability due to the unbalanced AD/CN ratio, as anticipated in the Materials and Methods section. To better describe the behavior of the best model, we explored how some key characteristics change with evolving epochs, see the plots in Fig. 2. In subplot 1, we can look at the cross-entropy loss function to assess proper model convergence as it decreases over time (epochs). In subplot 2, we can observe the training probability distribution of the predicted class, which is represented by the output of the last feed-forward layer.
Here we verified that the learning process was not happening in an overfitting regime, as the training probability improves in terms of median and variability, but never reaches 100%. Subplots 3 and 4 display the training and validation accuracy across the epochs: their behavior is comparable, confirming that the model shows very good generalization properties. We noticed that both accuracy curves do not show a step-wise behavior. Hence, in further studies, one could reduce the patience (currently, 20) in the early stopping criterion and stop the training when the performance does not increase after a few epochs (e.g. 5 epochs). This would significantly reduce the computational cost and the training time of the experiments. ### Ablation study on Dropout Table 1 reports the classification accuracy on the validation and testing sets and the number of training epochs for the (**8 CL**, (B)) model in which dropout is applied with dropping probability ranging from 0 (i.e., no dropout) to 0.5. All the results are averaged over 10 trials. All models are comparable in terms of computational cost, whereas the (**8 CL**, (B)) model without dropout slightly outperforms the others. ### Generalization across image resolution We evaluated the prediction ability of the (**8 CL**, (B)) model on an unseen dataset that presents a domain shift. Specifically, we tested the generalization across image resolution. Results provided 71% accuracy and an AUC of 0.76. Fig. 3 reports the AUC curve (left) and the confusion matrix (right).
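The early-stopping criterion described in Appendix B (up to 200 epochs, stop when validation performance has not improved for 20 consecutive epochs), as well as the reduced-patience variant suggested above, reduces to a small monitor like the following. This is a generic sketch, not the authors' implementation:

```python
class EarlyStopping:
    """Stop training once the monitored metric has not improved for
    `patience` consecutive epochs (20 in the paper's setup)."""

    def __init__(self, patience=20):
        self.patience = patience
        self.best = float("-inf")
        self.stale = 0

    def step(self, val_accuracy):
        """Record one epoch's validation accuracy; True means stop."""
        if val_accuracy > self.best:
            self.best = val_accuracy
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

With `patience=5`, as suggested above, training would stop after five consecutive non-improving epochs instead of twenty, trading a small risk of premature stopping for a shorter training time.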
2305.00528
ICQ: A Quantization Scheme for Best-Arm Identification Over Bit-Constrained Channels
We study the problem of best-arm identification in a distributed variant of the multi-armed bandit setting, with a central learner and multiple agents. Each agent is associated with an arm of the bandit, generating stochastic rewards following an unknown distribution. Further, each agent can communicate the observed rewards with the learner over a bit-constrained channel. We propose a novel quantization scheme called Inflating Confidence for Quantization (ICQ) that can be applied to existing confidence-bound based learning algorithms such as Successive Elimination. We analyze the performance of ICQ applied to Successive Elimination and show that the overall algorithm, named ICQ-SE, has the order-optimal sample complexity as that of the (unquantized) SE algorithm. Moreover, it requires only an exponentially sparse frequency of communication between the learner and the agents, thus requiring considerably fewer bits than existing quantization schemes to successfully identify the best arm. We validate the performance improvement offered by ICQ with other quantization methods through numerical experiments.
Fathima Zarin Faizal, Adway Girish, Manjesh Kumar Hanawal, Nikhil Karamchandani
2023-04-30T17:00:03Z
http://arxiv.org/abs/2305.00528v1
# ICQ: A Quantization Scheme for Best-Arm Identification Over Bit-Constrained Channels ###### Abstract We study the problem of best-arm identification in a distributed variant of the multi-armed bandit setting, with a central learner and multiple agents. Each agent is associated with an arm of the bandit, generating stochastic rewards following an unknown distribution. Further, each agent can communicate the observed rewards with the learner over a bit-constrained channel. We propose a novel quantization scheme called Inflating Confidence for Quantization (ICQ) that can be applied to existing confidence-bound based learning algorithms such as Successive Elimination. We analyze the performance of ICQ applied to Successive Elimination and show that the overall algorithm, named ICQ-SE, has the order-optimal sample complexity as that of the (unquantized) SE algorithm. Moreover, it requires only an exponentially sparse frequency of communication between the learner and the agents, thus requiring considerably fewer bits than existing quantization schemes to successfully identify the best arm. We validate the performance improvement offered by ICQ with other quantization methods through numerical experiments. ###### Contents * 1 Introduction * 2 Related work * 3 Problem setup * 4 Proposed quantization scheme and algorithm * 5 Analysis of the ICQ-SE algorithm * 6 Numerical experiments * 7 Conclusion * A Proofs ## 1 Introduction The _multi-armed bandit_ (MAB) problem is a sequential decision-making model involving a learner and an environment, where, in each round, the learner chooses from \(K\) actions or arms. Each arm is associated with a reward distribution that is _a priori_ unknown to the learner. The learner then receives a random reward drawn from the distribution of the chosen arm.
We consider the _pure exploration_ variant [1] of this problem, where the learner is required to identify the best arm, i.e., the arm with the highest mean reward, with accuracy better than a prescribed confidence level. The learner is evaluated based on the number of samples (_sample complexity_) it requires to identify the best arm. Thus a 'desirable' algorithm is one that identifies the best arm with a smaller sample complexity, for a given confidence level. The pure exploration setting is well-studied [2, 3, 4] under the assumption that the learner gets to observe the reward values with full precision. We consider a distributed variant of the pure exploration MAB setup where the learner cannot observe the reward samples directly, but through intermediate agents that act as an interface for each arm. Unlike the traditional pure exploration MAB setup, the learner no longer has access to the rewards with full precision, i.e., the agents observe the rewards obtained and communicate aggregated information to the learner over noiseless, bit-constrained channels. A key point is that each agent 'represents' a single arm, i.e., each agent pulls and observes rewards from only one fixed arm associated with it. **Motivation.** Such distributed learning setups with limited communication arise in many real-world systems. For example, in wireless networks with bandwidth constraints involving remote and low-complexity agents, the cost of communicating the rewards could become a performance bottleneck. Reducing the number of bits transmitted would result in lower power consumption and wireless interference. This is particularly significant in IoT networks where devices are typically resource-constrained and battery-powered [5, 6, 7, 8]. It may then be easier to have the agents process/compress the observations locally, before passing on information in a more condensed format.
Finally, when learning from privacy-sensitive data, quantization can be used to hide the exact rewards obtained in such a way that the specific contents remain unintelligible to the learner, but there is still enough information to carry out the overall learning task, as in [9]. **Contributions.** To overcome constraints on the precision with which information can be sent from the agents to the learner, we develop a quantization scheme called _Inflating Confidence for Quantization_ (ICQ) for the best-arm identification problem. Our quantization scheme allows agents to communicate the reward information with fewer bits while still allowing the learner to extract enough information to identify the best arm. The key idea behind the scheme is to generate a high-probability range for the mean reward estimator at the agent that is smaller than the actual range of the rewards, reducing the range over which quantization needs to be done. This is done using appropriately defined confidence intervals for the quantized values. While we build our scheme on top of the successive elimination framework proposed for the standard best-arm identification problem [1], and develop an algorithm called ICQ-SE, a key feature of our proposed quantization strategy is that it can be used in conjunction with a broad class of alternate schemes (such as LUCB [10] and lil'UCB [11], to obtain corresponding algorithms ICQ-LUCB and ICQ-lil'UCB, and so on). This 'universality' is clearly desirable and draws inspiration from [12] where the proposed quantization strategy had a similar feature in relation to a broad class of regret-minimization algorithms. Our algorithm ICQ-SE has the following features (shown in Section 5): 1. the learner needs to communicate with the agents only exponentially sparsely; 2. requires only \(B\geq 1\) bits for each round of communication for bounded rewards; 3.
ensures order-optimal sample complexity compared to the distributed setup with no bit constraints; and 4. can be easily modified to be used with other confidence bound based algorithms (see Remark 1). In Section 6, through simulations, we show that this scheme performs better than other quantization schemes in the MAB literature, for both bounded and unbounded rewards. ## 2 Related work MAB problems are well explored in the literature in various settings like expected regret minimization, simple regret minimization, and Best Arm Identification (BAI) [13]. For BAI problems [14, 10, 11, 2, 3, 4, 1], the goal is to identify the best arm either with high confidence within a given budget (_fixed budget_) or with as few samples as possible for a given threshold on the probability of making a mistake (_fixed confidence_). The algorithms developed for these settings assume that the learner has access to samples from the reward distributions with full precision. However, this assumption need not hold in federated setups [15] where the learners and the agents need to exchange information, and any bottleneck in communication needs to be taken into account. Some recent works have addressed such issues by modeling communication bottlenecks as capacity constraints [16, 12, 17] or as a limited resource that comes at an additional cost [18]. Our work is closer to [16, 12, 17], which propose quantization methods to improve the performance of learning algorithms under channel capacity constraints. In [12], the authors propose a quantization scheme, named QuBan, that can be used over a large class of MAB algorithms in the regret minimization setting to achieve order-optimal regret. QuBan differs from our method as it does not make use of confidence bounds for the mean reward estimators.
In [16], the authors propose an adaptive quantization scheme and a decision-making policy for the Linear Stochastic Bandit setting and show that \(B=\mathcal{O}(d)\) bits guarantees order-optimal regret, where \(d\) is the dimension of the arm set. Their scheme is the closest to ours, where confidence bounds for the mean reward estimators are used to find a smaller range to quantize on, albeit for the regret minimization setting. Moreover, similar to us, they show that using 1 bit for each transmission from the agent to the learner is sufficient to ensure order-optimal regret compared to the unquantized setting when the reward distributions have bounded support. In [17], a pure exploration setting is considered, where a learner and multiple clients identify the best arm together, with each client being allotted a disjoint subset of the arms. Like us, they also propose a quantization scheme for communicating rewards between the clients and the learner; the difference being that their scheme only works with bounded rewards. Also, it is hard to control the number of bits used by their algorithm, whereas our proposed scheme ICQ-SE provably allows for a trade-off between the amount of communication and performance. See Section 6 for a more detailed comparison. ## 3 Problem setup In this section, we define notation that will be used throughout the paper and formalize our system model. The overall setup has been illustrated in Figure 1. Broadly, we study a distributed multi-armed bandit (MAB) problem where the decision-making and observing entities are separated. They must communicate their 'results' (decisions or observations) to each other over a noiseless channel using a finite number of bits, with the overall goal of performing a learning task. Figure 1: Block diagram illustrating the overall setup, shown here for the case with 3 agents, i.e., \(K=3\).
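Before the formal setup, it helps to fix the primitive that a bit-constrained channel imposes: mapping a real-valued statistic into one of \(2^{B}\) messages. The sketch below is a plain uniform \(B\)-bit quantizer over a known interval; it is the baseline primitive only, not the paper's ICQ scheme, which additionally shrinks the interval using confidence bounds before quantizing.

```python
def quantize(x, a, b, num_bits):
    """Uniformly quantize x in [a, b] into 2**num_bits levels.

    Returns (index, reconstruction): `index` is the num_bits-bit message an
    agent would send, and the reconstruction is the midpoint of the chosen
    cell, so the worst-case error is half a step, (b - a) / 2**(num_bits + 1).
    """
    levels = 2 ** num_bits
    step = (b - a) / levels
    idx = min(int((x - a) / step), levels - 1)  # clamp so x == b maps to the top cell
    return idx, a + (idx + 0.5) * step
```

Since the worst-case error scales with the interval length \(b-a\), any scheme that can justify quantizing over a smaller high-probability interval (as a confidence-bound based scheme can) reduces the error for the same number of bits \(B\).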
**The distributed MAB.** There is a central learner, connected to each of the \(K\) distributed agents, via a noiseless channel that is bit-constrained from the agent to the learner. The learner and the agents work together to solve a fixed confidence stochastic MAB problem consisting of \(K\) arms. Agent \(i\in\{1,\ldots,K\}\triangleq[K]\) has access to arm \(i\) of the MAB, which is associated with a reward distribution \(\nu_{i}\). We assume that the distributions \(\{\nu_{i}\}_{i=1}^{K}\) are \(\sigma^{2}\)-subgaussian1 and bounded on \([a,b]\). For \(i\in[K]\), let \(\mu_{i}\) denote the mean of the distribution \(\nu_{i}\) and \(r_{i,t}\) denote the \(t^{\text{th}}\) reward sample drawn from \(\nu_{i}\). For each arm \(i\), \(\{r_{i,t}:t\geq 1\}\) is an i.i.d. process; furthermore the reward samples are independent across different arms. We also assume that arm \(1\) is the best arm for notational convenience, i.e., \(\mu_{1}\geq\mu_{i}\) where \(i\neq 1\). Also, define the suboptimality gaps \(\Delta_{i}=\mu_{1}-\mu_{i}\) for \(i\in[K]\). Footnote 1: A random variable \(X\) is said to be \(\sigma^{2}\)-subgaussian if for any \(t>0\), \(\mathbb{P}\left(|X-\mathbb{E}[X]|>t\right)\leq 2\exp\left(-t^{2}/2\sigma^{2}\right)\) The broad objective for the learner is to identify the arm with the highest mean reward by sequentially selecting arms and sampling from their associated reward distributions. **The communication model.** Our communication model is summarized in Figure 1. Communication between the learner and the agents happens in rounds. At the beginning of _(communication) round_\(i\), based on the information the learner has seen till \(i-1\), it broadcasts an action \(c_{i}\) to all the \(K\) agents, where \(c_{i}\) encodes the information about the actions to be taken by each agent at round \(i\). 
The agents respond to the learner after a fixed synchronized duration and the learner updates its estimate of the best arm based on the information it has received from the agents at round \(i\). This constitutes one round. Each agent communicates at most once in each round. We assume that each agent is only capable of collecting samples from its associated arm reward distribution, aggregating the information from the samples it has seen so far and transmitting it to the central learner. The agents cannot share information between each other and can communicate only with the central learner. This is commonly the case with low-complexity and resource-constrained devices such as drones and sensors. Moreover, they are only allowed to use a finite number of bits for each transmission to the learner. We assume that these transmissions are completed without any errors or erasures. Note that we do not assume that communication from the learner to the agents is bit-constrained and also do not put any computational restrictions on it; this is true for several application settings and similar assumptions are also common in the literature [16, 17, 12]. **Performance metrics.** We consider the _fixed confidence_ BAI problem for the distributed MAB setting described above. The goal here is to find _sound_ strategies that find the optimal arm in a finite number of rounds for a given confidence level \(\delta\). Formally, if the strategy stops after using \(\tau_{\delta}\) samples and outputs arm \(J_{\tau_{\delta}}\), a sound strategy ensures that \(\mathbb{P}(\tau_{\delta}<\infty,J_{\tau_{\delta}}\neq 1)\leq\delta\). Since we are dealing with a communication-constrained setting, we also use the metrics of _(communication) round complexity_\(\tau_{r,\delta}\) (the number of communication rounds needed) and _communication complexity_\(B_{\delta}\) (the total number of bits used by the algorithm) to study the performance of any sound strategy. 
An online learning algorithm for this distributed MAB setup has the following components: (1) a _sampling rule_ at the learner that at each time considers the history of communicated messages received from the agents thus far, and prescribes the arm(s) to be pulled in the next round; (2) a _communication rule_ at each agent that prescribes the rounds at which the agent will communicate with the learner and also specifies the content of the message; (3) a _stopping rule_ at the learner that specifies when the learner will stop sampling arms any further; and (4) a _recommendation rule_ at the learner that specifies its estimate for the best arm after the algorithm terminates. **Objective.** We develop learning algorithms for the distributed MAB setting outlined above where the agents need to limit the amount of communication with the learner. This is done via restricting both the frequency of communication (in terms of rounds) as well as the sizes of the messages exchanged (in terms of bits). Using too little communication can of course lead to a large penalty in terms of the sample complexity, and therein lies the main technical challenge of designing communication and quantization strategies which can effectively navigate this trade-off. In the following sections, we propose a class of policies parameterized by the frequency of communication and the number of bits used in each message, and then explicitly characterize the impact of these parameters on the sample complexity.

## 4 Proposed quantization scheme and algorithm

This section is devoted to the description of our proposed quantization scheme _Inflating Confidence for Quantization_ (ICQ) and its application to Successive Elimination as ICQ-SE. Algorithm 1 describes the actions to be taken by the learner while Algorithm 2 describes the agent operation (definitions of the additional notation involved can be found in (1), (4) and (5)). We provide more details below.
```
1:  procedure ICQ-SE-learner(K, δ, B, {b_i})
2:    S ← {1, ..., K}
3:    for 1 ≤ j ≤ K, let μ̃_{j,0} be sampled uniformly from [a, b]
4:    for 1 ≤ i < ∞ do
5:      for j ∈ S do
6:        instruct agent j to sample b_i times
7:        receive quantized value s_{j,i} from agent j
8:        L_{i,j} ← [LCB(j, i−1, δ) − U′(i, δ), UCB(j, i−1, δ) + U′(i, δ)]
9:        decode μ̃_{j,i} = dec(s_{j,i}, B, L_{i,j})
10:     end for
11:     S ← S \ {m ∈ S : max_{j∈[K]} LCB(j, i, δ) ≥ UCB(m, i, δ)}
12:     STOP if |S| = 1
13:   end for
14:   return only element in S
15: end procedure
```

**Algorithm 1** ICQ-SE algorithm (learner-side)

**Successive Elimination.** The broad idea behind the _Successive Elimination_ (SE) framework [1] for a classical MAB setting (where there is only a learner observing full-precision rewards and no intermediate agents) is to characterize high-confidence bounds for the means of the distributions of each arm. One derives confidence widths \(U^{\prime}(i,\delta)\) such that the actual mean \(\mu_{j}\) lies in the interval \([\hat{\mu}_{j,i}-U^{\prime}(i,\delta),\hat{\mu}_{j,i}+U^{\prime}(i,\delta)]\) around the empirical mean \(\hat{\mu}_{j,i}\) of arm \(j\) at round \(i\) with a 'high' probability (we make this formal later). The upper limit is called the Upper Confidence Bound (UCB) and the lower limit is called the Lower Confidence Bound (LCB). The learner constantly keeps track of a set \(S\) of _active arms_, i.e., the set of arms still in contention to be the best arm. The set \(S\) is initialized to be the set of all arms \([K]\). At the end of a round, if the UCB of any arm \(k\) lies below the LCB of any other arm \(j\), then arm \(k\) is removed from the active set.
Thus, under the high probability event that these confidence bounds contain the actual reward means, removing arms whose UCBs lie below the LCB of some other arm would guarantee that the algorithm is removing only suboptimal arms. The algorithm makes a mistake only when these high probability events do not occur. **High-level description of the algorithm.** In our setting, the learner no longer has access to full-precision rewards, and instead receives quantized estimates from the agents associated with each arm. In line with the SE framework, we also refer to the agents corresponding to the active arms as _active agents_. As we would like to reduce the number of bits used, it is inefficient for the agent to communicate each sample that it sees. We thus consider a batched approach that ensures that communication happens in a sparse manner. The learner pulls active arms (through the agents) in batches; in particular, during (communication) round \(i\), the agents pull and observe rewards from their associated arms \(b_{i}\) times before sending (a summary of) the results to the learner. We also define \(t_{i}\) to be the cumulative sum of arm pulls for each active arm till round \(i\), i.e., \(t_{i}=\sum_{j=1}^{i}b_{j}\). We show our results for an _exponentially-sparse_ communication framework similar to [18], i.e., \(t_{i}=\alpha^{i}\) for some \(\alpha>1\). At the beginning of round \(i\), the learner instructs each agent in \(S\), the set of active agents, to sample from their associated reward distributions \(b_{i}\) times. At the end of round \(i\), each active agent must have made a total of \(t_{i}\) cumulative arm pulls and sends to the learner a quantized estimate of the empirical mean of the rewards obtained. The learner first decodes the quantized estimate, then decides which arms will remain in the active set using the SE framework. This update requires defining new confidence intervals that account for quantization; details will be provided later. 
This marks the end of a communication round. The algorithm terminates when there is only one arm left in the active set, which is the recommended arm. Before describing the working of the algorithm in more detail, it is instructive to look at the quantization part separately. **Quantization scheme.** Each agent calculates the empirical mean of the observed rewards, which must first be quantized and encoded into a bit string to be sent over the bit-constrained channel. Similarly, at the learner, we must be able to obtain a decoded estimate of the empirical mean from the encoded bit string. This is achieved as follows. We first fix an interval that we 'expect' the empirical mean to belong to with high probability (this will become clear later), divide it into \(2^{B}\) equal bins, then transmit a bit string that will be decoded at the learner as the midpoint of the bin. We formalize this below. Let \([\alpha,\beta]\) be the 'expected' real interval as described above. First, divide \([\alpha,\beta]\) into \(2^{B}\) bins of equal width \(\frac{\beta-\alpha}{2^{B}}\), and associate with the midpoint of each bin, a \(B\)-length bit string. Then, the encoder \(\mathsf{enc}(x,B,[\alpha,\beta])\) returns the bit string \(\mathbf{s}\) associated with its nearest bin midpoint (even if \(x\notin[\alpha,\beta]\)), and the decoder \(\mathsf{dec}(\mathbf{s},B,[\alpha,\beta])\) returns the midpoint (a real number in \([\alpha,\beta]\)) corresponding to the bit string \(\mathbf{s}\). We may simply refer to them as \(\mathsf{enc}(x)\) and \(\mathsf{dec}(\mathbf{s})\) when \(B\) and the interval are clear from context. The quantization scheme is summarized in Figure 2. Note that this quantization scheme uses \(B\) bits for each transmission. 
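The bin-midpoint quantizer described above can be sketched in Python as follows. The function names `enc`/`dec` mirror the text, but the code itself is our illustrative sketch, not the authors' reference implementation:

```python
def enc(x, B, interval):
    """Encode x as the B-bit index of the bin whose midpoint is nearest to x.

    Out-of-range values are clamped to the first or last bin, matching the
    convention that enc returns the nearest midpoint even if x lies outside
    the interval.
    """
    lo, hi = interval
    width = (hi - lo) / 2 ** B              # width of each of the 2^B bins
    idx = int((x - lo) // width)            # index of the bin containing x
    idx = min(max(idx, 0), 2 ** B - 1)      # clamp to the valid bin range
    return format(idx, "0{}b".format(B))    # B-length bit string

def dec(s, B, interval):
    """Decode a B-bit string back to the midpoint of its bin."""
    lo, hi = interval
    width = (hi - lo) / 2 ** B
    return lo + (int(s, 2) + 0.5) * width
```

For example, with \(B=3\) over \([0,1]\), `dec(enc(0.3, 3, (0.0, 1.0)), 3, (0.0, 1.0))` returns \(0.3125\), within the guaranteed error of \((\beta-\alpha)/2^{B+1}=1/16\).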
Another point to note is that if \(x\in[\alpha,\beta]\), the quantization error between \(x\) and the decoded quantized value \(\mathsf{dec}(\mathsf{enc}(x))\) is at most \(\frac{\beta-\alpha}{2\cdot 2^{B}}\), i.e., if \(x\) is known to lie within the interval, then the quantization error is at most a factor of \(\frac{1}{2^{B+1}}\) times the width of the interval. **Details of the algorithm.** At round \(i\), consider the agent \(j\in[K]\) making its \(k^{\text{th}}\) cumulative arm pull, where \(1\leq k\leq t_{i}\). It observes a reward \(r_{j,k}\). At the end of this round, it calculates the empirical mean of the rewards from arm \(j\) observed over all rounds up to and including round \(i\), \(\hat{\mu}_{j,i}=\frac{1}{t_{i}}\sum_{k=1}^{t_{i}}r_{j,k}\). By defining the confidence width \[U^{\prime}(i,\delta)=\sigma\sqrt{\frac{2\log(4Kt_{i}^{2}/\delta)}{t_{i}}}, \tag{1}\] for each round \(i\) and arbitrary \(\delta>0\), it can be shown (as in the proof of Lemma 3) from the subgaussian concentration inequality [13] and a union bound that \[\mathbb{P}\left(\cup_{i\geq 1}\cup_{j\in[K]}|\hat{\mu}_{j,i}-\mu_{j}|>U^{\prime}(i,\delta)\right)\leq\delta. \tag{2}\] Thus, at any round \(i\), the actual mean of arm \(j\) lies in the interval \([\hat{\mu}_{j,i}-U^{\prime}(i,\delta),\hat{\mu}_{j,i}+U^{\prime}(i,\delta)]\) w.h.p. However, since the communication channel is bit-constrained, the agents cannot simply transmit the infinite-precision real number \(\hat{\mu}_{j,i}\) as is -- they instead transmit a quantized version of \(\hat{\mu}_{j,i}\) as described above. Let \(\tilde{\mu}_{j,i}\) be the decoded estimate of the mean of arm \(j\) that the learner recovers at the end of round \(i\), i.e., \(\tilde{\mu}_{j,i}=\mathsf{dec}(\mathsf{enc}(\hat{\mu}_{j,i}))\).
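To make the width (1) concrete, the following Monte Carlo sketch estimates how often the empirical mean escapes its \(U^{\prime}(i,\delta)\) interval for a single arm and round. The values \(\sigma=0.5\), \(K=5\), \(\alpha=2\), and \(i=8\) are our illustrative assumptions, and note that (2) is a stronger uniform bound over all arms and rounds:

```python
import math
import random

def U_prime(i, delta, sigma=0.5, K=5, alpha=2):
    """Confidence width from (1), with t_i = alpha^i cumulative pulls."""
    t_i = alpha ** i
    return sigma * math.sqrt(2 * math.log(4 * K * t_i ** 2 / delta) / t_i)

random.seed(0)
sigma, delta, i, alpha, mu = 0.5, 0.05, 8, 2, 0.3
t_i = alpha ** i
violations = 0
for _ in range(2000):
    # empirical mean of t_i Gaussian (hence sigma^2-subgaussian) rewards
    mu_hat = sum(random.gauss(mu, sigma) for _ in range(t_i)) / t_i
    if abs(mu_hat - mu) > U_prime(i, delta, sigma):
        violations += 1
print(violations / 2000)  # empirically far below delta = 0.05
```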
Figure 2: Illustration of the quantization scheme when \(B=3\) — the blue lines given by \(\ell_{i}\) mark the separation between the \(2^{B}\) equal bins, the red points denote the midpoints of these bins, and the green ‘\(\times\)’s (the values to be quantized) get mapped to their nearest midpoints.

To account for the potential increase in error due to the quantization necessitated by the bit-constrained channel, we introduce a slack in the confidence interval through a different confidence width \(U(i,\delta)\). The goal is to obtain a concentration bound for \(\tilde{\mu}_{j,i}-\mu_{j}\) in terms of \(U(i,\delta)\), in a form similar to (2). We now provide an intuitive explanation to motivate an expression of \(U(i,\delta)\) that achieves exactly this. (Note that this is not meant to be a proof; that this does indeed work will be shown in Section 5.) First note that a straightforward application of the triangle inequality gives \[|\tilde{\mu}_{j,i}-\mu_{j}|\leq|\tilde{\mu}_{j,i}-\hat{\mu}_{j,i}|+|\hat{\mu}_{j,i}-\mu_{j}|. \tag{3}\] The first term corresponds to the quantization error and the second term corresponds to the error in the empirical mean itself. An interval in which the latter lies w.h.p. is taken care of by the bound (2), so it is enough to establish such an interval for the quantization error. Recall from the ‘Quantization scheme’ description before Figure 2 that the quantization error is at most \(\frac{1}{2^{B+1}}\) times the width of the interval if the empirical mean \(\hat{\mu}_{j,i}\) is known to originally lie in the interval. Thus, our task is to find an appropriate interval in which \(\hat{\mu}_{j,i}\) lies w.h.p. to perform the quantization. As the latest estimate of the mean that the learner has access to at the beginning of round \(i\) is the quantized estimate at round \(i-1\), i.e., \(\tilde{\mu}_{j,i-1}\), we construct the interval to be centered around \(\tilde{\mu}_{j,i-1}\).
We also want that this interval contain the new empirical mean at round \(i\), i.e., \(\hat{\mu}_{j,i}\), w.h.p. We now find a recursive expression for \(U(i,\delta)\) that inductively satisfies these properties. We first make the inductive assumption that \(\tilde{\mu}_{j,i-1}\) lies in \([\mu_{j}-U(i-1,\delta),\mu_{j}+U(i-1,\delta)]\) w.h.p. Combined with the knowledge that \(\hat{\mu}_{j,i}\) lies in \([\mu_{j}-U^{\prime}(i,\delta),\mu_{j}+U^{\prime}(i,\delta)]\) w.h.p. from (2), we have that the interval \([\tilde{\mu}_{j,i-1}-U^{\prime}(i,\delta)-U(i-1,\delta),\tilde{\mu}_{j,i-1}+ U^{\prime}(i,\delta)+U(i-1,\delta)]\) contains \(\hat{\mu}_{j,i}\) w.h.p. This is illustrated in Figure 3. Note that we have found an interval that we 'expect' the empirical mean to belong to, as promised when defining the quantization scheme relative to an interval. Thus, we perform the quantization over an interval of width \(2[U^{\prime}(i,\delta)+U(i-1,\delta)]\) centered at \(\tilde{\mu}_{j,i-1}\), and hence, the quantization error is at most \(\frac{1}{2^{B}}[U^{\prime}(i,\delta)+U(i-1,\delta)]\). Motivated by the above and (3), define, for \(i\geq 1,j\in[K]\), \[U(i,\delta)=\frac{1}{2^{B}}\left[U^{\prime}(i,\delta)+U(i-1,\delta)\right]+U^ {\prime}(i,\delta), \tag{4}\] where we set \(U(0,\delta)=b-a\) to ensure that the induction hypothesis (namely, that \(\tilde{\mu}_{j,i-1}\) satisfies its confidence bounds) holds for the index \(i-1=0\). 
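A minimal sketch of the recursion (4), using (1) for \(U^{\prime}\), is given below. The parameters \(\sigma=0.5\), \(K=5\), \(\alpha=2\), \(B=3\), and rewards bounded on \([0,1]\) are all illustrative assumptions:

```python
import math

SIGMA, K, ALPHA, B, B_MINUS_A = 0.5, 5, 2, 3, 1.0  # illustrative assumptions

def U_prime(i, delta):
    """Confidence width from (1), with t_i = ALPHA^i."""
    t_i = ALPHA ** i
    return SIGMA * math.sqrt(2 * math.log(4 * K * t_i ** 2 / delta) / t_i)

def U(i, delta):
    """Inflated confidence width from (4), with base case U(0, delta) = b - a."""
    if i == 0:
        return B_MINUS_A
    return (U_prime(i, delta) + U(i - 1, delta)) / 2 ** B + U_prime(i, delta)

ratios = [U(i, 0.05) / U_prime(i, 0.05) for i in range(1, 9)]
# the inflation factor stays bounded, consistent with Lemma 5's U <= 2c U'
```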
Now that we have an appropriate confidence width \(U(i,\delta)\) that we expect (heuristically, from the above discussion; this will be proved in Section 5) to provide a similar confidence bound for \(\tilde{\mu}_{j,i}\) as (2) does for \(\hat{\mu}_{j,i}\), we define the appropriate LCB and UCB for our algorithm, as \[\begin{split}\text{LCB}(j,i,\delta)&=\tilde{\mu}_{j,i}-U(i,\delta),\\ \text{UCB}(j,i,\delta)&=\tilde{\mu}_{j,i}+U(i,\delta).\end{split} \tag{5}\]

Figure 3: Motivation for defining \(U(i,\delta)\) as (4) – Start with \(\tilde{\mu}_{j,i-1}\); then by the inductive hypothesis, \(\mu_{j}\) should lie within \(U(i-1,\delta)\) of \(\tilde{\mu}_{j,i-1}\) (blue interval). From (2), \(\hat{\mu}_{j,i}\) lies within \(U^{\prime}(i,\delta)\) of \(\mu_{j}\) (red interval). Putting them together, \(\hat{\mu}_{j,i}\) lies within \(U(i-1,\delta)+U^{\prime}(i,\delta)\) of \(\tilde{\mu}_{j,i-1}\).

## 5 Analysis of the ICQ-SE algorithm

The following result can be shown for the sample complexity of SE run on the distributed MAB setup outlined above if the channel from the agent to the learner is not bit-constrained:

**Theorem 1**.: _SE is a sound algorithm. Moreover, with probability at least \(1-\delta\), it successfully identifies the best arm using at most_ \[\mathcal{O}\left(\sum_{j\neq 1}\frac{102\alpha\sigma^{2}}{\Delta_{j}^{2}}\ln\left(\frac{64\sigma^{2}\sqrt{4K/\delta}}{\Delta_{j}^{2}}\right)+1\right)\] _samples._

We now show that ICQ-SE is a sound algorithm and also analyze its sample complexity. We restrict our attention to ICQ-SE just to provide concrete results, but a similar analysis can be carried out for other confidence bound-based algorithms as well (see Remark 1). We do so by a sequence of lemmas and theorems, similar to a standard analysis of Successive Elimination-type algorithms, such as in [1] and [18].
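The elimination step (line 11 of Algorithm 1) using the LCB/UCB of (5) can be sketched as below; `mu_tilde` holds the decoded means \(\tilde{\mu}_{j,i}\) and `U_i` the common width \(U(i,\delta)\). The function name and example values are ours, and the maximum is taken over the currently active arms:

```python
def eliminate(active, mu_tilde, U_i):
    """Remove every arm whose UCB is at most the largest LCB among active arms."""
    best_lcb = max(mu_tilde[j] - U_i for j in active)
    return {j for j in active if mu_tilde[j] + U_i > best_lcb}

# Example: arm 2's UCB (0.6) falls below arm 1's LCB (0.8), so arm 2 is dropped.
print(eliminate({1, 2, 3}, {1: 0.9, 2: 0.5, 3: 0.85}, 0.1))  # {1, 3}
```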
The main novelties in our work are Lemma 2 and Theorem 6, where we relate the confidence intervals \(U(i,\delta)\) and \(U^{\prime}(i,\delta)\) at the agents and the learner respectively, thereby allowing us to prove similar results as for vanilla Successive Elimination. Proofs of all lemmas and theorems stated below can be found in the Appendix. Recall that \(\mu_{j}\) is the mean of arm \(j\in[K]\), \(\hat{\mu}_{j,i}\) is the empirical mean of arm \(j\) at round \(i\) at the agent (which will then be encoded and sent to the learner), and \(\tilde{\mu}_{j,i}\) is the decoded estimate of the mean of arm \(j\) at round \(i\) at the learner. Note that the only concentration bound we know about these quantities _a priori_ is (2), which relates \(\hat{\mu}_{j,i}\) and \(U^{\prime}(i,\delta)\). The learner, however, observes \(\tilde{\mu}_{j,i}\) and constructs confidence intervals of width \(U(i,\delta)\). Our goal is for the learner to be able to identify w.h.p. the best arm in \([K]\). To this end, it is desirable to have a concentration bound on the quantities available at the learner, namely, \(\tilde{\mu}_{j,i}\) and \(U(i,\delta)\). Lemma 2 below relates the following -- (1) the event that in some round \(i\) (hence the union over rounds), the estimated mean of arm \(j\), \(\hat{\mu}_{j,i}\), falls outside the confidence interval of width \(U^{\prime}(i,\delta)\) centered about the true mean \(\mu_{j}\), and (2) the identical event at the learner, except with the decoded mean \(\tilde{\mu}_{j,i}\) and confidence width \(U(i,\delta)\). **Lemma 2**.: _For all arms \(1\leq j\leq K\), and any \(\delta>0\),_ \[\bigcup_{i=1}^{\infty}\left\{|\tilde{\mu}_{j,i}-\mu_{j}|>U(i,\delta)\right\} \subseteq\bigcup_{i=1}^{\infty}\left\{|\hat{\mu}_{j,i}-\mu_{j}|>U^{\prime}(i, \delta)\right\}.\] Using Lemma 2, we obtain the desired confidence bound on the decoded means at the learner, stated formally below. 
**Lemma 3**.: _For any \(\delta>0\), define the event_ \[\mathcal{E}:=\bigcup_{j=1}^{K}\bigcup_{i=1}^{\infty}\left\{|\tilde{\mu}_{j,i}-\mu_{j}|>U(i,\delta)\right\},\] _then \(\mathbb{P}\left(\mathcal{E}\right)\leq\delta\)._ The above lemma establishes that the event \(\mathcal{E}^{c}\), wherein for each arm \(j\) and at every round \(i\), the decoded estimate at the learner \(\tilde{\mu}_{j,i}\) is sufficiently close to the actual mean \(\mu_{j}\), occurs with high probability. It then follows that with high probability, any arm that is eliminated during the successive elimination procedure must be suboptimal, as stated in Theorem 4. We now show that ICQ-SE is a sound algorithm and provide an upper bound for its sample complexity in the exponentially sparse regime (i.e., with \(t_{i}=\alpha^{i}\)) in Theorem 6. **Theorem 4**.: _With probability \(\geq 1-\delta\), the best arm remains in the active set \(S\) until termination._ To prove our main result in Theorem 6, we require a technical lemma that relates the confidence widths \(U(i,\delta)\) and \(U^{\prime}(i,\delta)\). **Lemma 5**.: _Consider \(B\geq 1\) and let \(t_{i}=\alpha^{i}\) where \(\alpha\in\mathbb{N}\) such that \(\alpha<2^{2B}\). Then for \(i\geq 1\),_ \[U(i,\delta)\leq 2c\,U^{\prime}(i,\delta),\] _where_ \[c=\left(1+\frac{2}{2^{B}}\right)\frac{2^{B}}{2^{B}-\sqrt{\alpha}}.\] **Theorem 6**.: _Consider \(B\geq 1\) and let \(t_{i}=\alpha^{i}\) where \(\alpha\in\mathbb{N}\) and \(1<\alpha<2^{2B}\).
With probability at least \(1-\delta\), ICQ-SE will terminate and successfully identify the best arm after using_ \[\mathcal{O}\left(\sum_{j\neq 1}\frac{410\alpha c^{2}\sigma^{2}}{\Delta_{j}^{2}}\ln\left(\frac{256c^{2}\sigma^{2}\sqrt{4K/\delta}}{\Delta_{j}^{2}}\right)+1\right)\] _samples,_ \[\mathcal{O}\left(\sum_{j\neq 1}\log_{\alpha}\left(\frac{410\alpha c^{2}\sigma^{2}}{\Delta_{j}^{2}}\ln\left(\frac{256c^{2}\sigma^{2}\sqrt{4K/\delta}}{\Delta_{j}^{2}}\right)+1\right)\right)\] _rounds, and_ \[\mathcal{O}\left(B\sum_{j\neq 1}\log_{\alpha}\left(\frac{410\alpha c^{2}\sigma^{2}}{\Delta_{j}^{2}}\ln\left(\frac{256c^{2}\sigma^{2}\sqrt{4K/\delta}}{\Delta_{j}^{2}}\right)+1\right)\right)\] _bits, where \(c\) is as defined in Lemma 5._ Comparing Theorem 6 with the equivalent result for Successive Elimination with no quantization in Theorem 1, we see that the upper bound for the sample complexity is worse only by a constant factor, i.e., we have order-optimal performance. Additionally, note that \(c\) depends on the choice of \(B\) and \(\alpha\); in particular, it decreases as \(B\) increases (for a fixed \(\alpha\)). Combined with the upper bound on sample complexity in Theorem 6, we thus see a trade-off between the performance of the algorithm and the number of bits that it is allowed to use. In addition to the above theoretical result, we also investigate this trade-off via numerical simulations in Section 6. **Remark 1**.: _The quantization scheme ICQ can in fact be used for any other algorithm that uses confidence bounds, such as LUCB [10] and lil'UCB [11], to obtain algorithms ICQ-LUCB and ICQ-lil'UCB. This is because the crux of this scheme is the recursive definition for \(U(\cdot,\cdot)\) that we obtain by separating the quantization error and the error in the empirical mean itself, via (3).
A similar analysis can be carried out for ICQ-LUCB and ICQ-lil'UCB too, which we omit here for brevity._ **Remark 2**.: _We use the boundedness of the rewards only in the definition of \(U(0,\delta)\). Recall that the algorithm proceeds with an initial random guess for the mean of each arm \(\{\tilde{\mu}_{j,0}\}_{j=1}^{K}\). As \(U(i,\delta)\) is defined in a recursive fashion, for the induction in the proof of Lemma 2 to hold, \(U(0,\delta)\) needs to be such that \(\{\tilde{\mu}_{j,0}\}_{j=1}^{K}\) are good guesses for the actual means with high probability satisfying Lemma 2. For \([a,b]\)-bounded rewards, this holds trivially by taking \(U(0,\delta)=(b-a)\), as \(\{|\tilde{\mu}_{j,0}-\mu_{j}|>U(0,\delta)\}=\{|\tilde{\mu}_{j,0}-\mu_{j}|>(b-a)\}=\emptyset\). For unbounded rewards, however, we can no longer use a constant number of bits in each round. Specifically, in round 1, it is not possible to obtain a high-probability guess for the mean by using a bounded number of bits. Nonetheless, we can use a different quantization scheme just for the first round to ensure that the quantization error is bounded. The problem is then reduced to that with bounded rewards from round 2, whence we can use ICQ-SE as is. In Section 6, we demonstrate this on Gaussian rewards by running QuBan [12] (designed for unbounded rewards) in the first round._

## 6 Numerical experiments

In this section, we present results of numerical experiments comparing the performance of ICQ-SE with other quantization algorithms proposed for multi-armed bandits in the literature. In addition to the unquantized setting as a baseline, we compare ICQ-SE with the quantization schemes QuBan [12] and Fed-SEL [17]. Each of these schemes is implemented on top of the same batched Successive Elimination algorithm that ICQ-SE uses to highlight the difference between the quantization schemes.
The algorithms are compared based on their (expected) sample complexity \(E\left[\tau_{\delta}\right]\), (expected) round complexity \(E\left[\tau_{r,\delta}\right]\) and (expected) communication complexity \(E\left[B_{\delta}\right]\). In all our experiments, the performance of each algorithm was averaged over 4000 iterations. QuBan [12] was proposed for the regret minimization setting where each sample that the agent observes is quantized and sent to the learner. A key feature of QuBan is that the agent uses shorter codewords to quantize samples close to the current estimate of the mean at the learner, while reward samples which are farther away are assigned longer codewords. This helps to minimize the expected number of bits used at each round. While this is a sound approach, it could result in a higher number of bits being used unnecessarily for our framework. QuBan has a parameter \(\epsilon>0\) that provides a trade-off between the number of bits used and the performance of the algorithm (a smaller value of \(\epsilon\) provides a smaller regret using a higher number of bits). In Fed-SEL [17], in each round \(i\), the entire interval \([a,b]\) is divided into bins of length \(U^{\prime}(i,\delta)\) and the empirical mean is quantized to the midpoint of one of these bins. The main drawback of this approach is that the number of bits used at each round is inversely proportional to the confidence bound at each round, i.e., the number of bits used per round grows with the number of rounds, making it hard to control the cumulative number of bits that the algorithm uses. The first set of experiments in Figures 4(a), 4(b), 4(c), 4(d) and 4(e) analyze the dependence of the performance of ICQ-SE on the parameters \(\alpha\) (controlling the sparsity of communication) and \(B\) (the number of bits in each transmission). 
We consider a five-armed multi-armed bandit instance where each arm is associated with a \(\text{Beta}(\gamma,1-\gamma)\) distribution, with \(\gamma\) generated uniformly at random from \([0,1]\). In Figures 4(a) and 4(b), we observe that the number of communication rounds used by ICQ-SE to converge decreases with \(\alpha\) while the number of samples used increases with \(\alpha\). This is expected because increasing \(\alpha\) results in sparser communication between the agents and the learner, reducing the round complexity while the number of samples used increases. Figure 4(c) shows the dependence of the cumulative number of bits used by the algorithm to converge (which we call _communication complexity_) on \(\alpha\). The decrease in the communication complexity with \(\alpha\) is a natural consequence of the decrease in the communication round complexity. Figures 3(a), 3(b), and 3(c) compare the performance of ICQ-SE, QuBan and Fed-SEL. We consider a five-armed multi-armed bandit instance where each arm is associated with a bounded-support reward distribution, in particular the \(\text{Beta}(\gamma,1-\gamma)\) distribution with \(\gamma\) generated uniformly at random from \([0,1]\). We observe that ICQ-SE with \(B=3\) performs comparably with QuBan (\(\epsilon=0.5\)), and better than QuBan (\(\epsilon=2\)) and Fed-SEL in terms of sample and round complexity, while using far fewer bits than all of them. In Figures 3(d), 3(e), and 3(f), we compare the performance of ICQ-SE and QuBan when the reward distributions associated with the arms are Gaussian (and hence unbounded; recall Remark 2). We consider a five-armed multi-armed bandit instance where each arm is associated with a Gaussian reward distribution of standard deviation \(0.125\) whose means are generated uniformly at random from the interval \([0,N]\), where \(N\) is a sample drawn from a Gaussian distribution \(\mathcal{N}(0,9)\).
We use QuBan with \(\epsilon=2\) for the first round of ICQ-SE as discussed in Remark 2. We again observe that ICQ-SE with \(B=3\) performs comparably with the others in terms of sample and round complexity while using far fewer bits. We make no comparison with Fed-SEL in this case because it is unclear how to extend the scheme to unbounded rewards, since it starts by dividing the finite-size reward range into bins of length \(U^{\prime}(i,\delta)\). The final set of experiments in Figures 3(g), 3(h) and 3(i) compare the dependence of the performance of ICQ-SE and QuBan on the hardness of the underlying bandit instance when the reward distributions associated with the arms are Gaussian. We consider a five-armed multi-armed bandit instance where each arm is associated with a Gaussian distribution of standard deviation \(0.125\). Four of the arms have mean \(0\) and the remaining arm has mean \(\Delta\in[0,1]\). A lower \(\Delta\) implies that the mean of the optimal arm is closer to that of the non-optimal arms, resulting in a harder instance. As expected, the performance of all the algorithms improves as the hardness of the instance decreases. Moreover, we see the same trend with ICQ-SE (\(B=3\) as earlier), especially on the harder instances. Finally, in Figure 5, we also numerically analyze the impact of varying \(\alpha\) (controlling the sparsity of communication) and \(B\) (the number of bits in each transmission) on the performance of ICQ-SE.

## 7 Conclusion

We propose ICQ, a novel quantization scheme for the distributed best-arm identification problem where the learner does not have access to full-precision rewards, and analyze ICQ-SE, which is the application of ICQ to the Successive Elimination algorithm for this setting.
Future lines of work include: (1) using a variable-length and adaptive quantization scheme in each round to reduce the communication complexity; for example, a Lloyd-Max quantizer based on the empirical distribution, (2) characterizing a lower bound on the communication complexity required to ensure a certain sample/round complexity, and (3) developing quantization schemes for the fixed budget variant of the best-arm identification problem.

Figure 4: Figures 4(a), 4(b) and 4(c) demonstrate the dependence of the performance of ICQ-SE on \(\alpha\). Figures 4(d) and 4(e) demonstrate the dependence on \(B\). Figures 4(a), 4(b) and 4(c) compare ICQ-SE with QuBan [12] and Fed-SEL [17] for bounded rewards while Figures 4(d), 4(e) and 4(f) compare ICQ-SE with QuBan for unbounded rewards. Finally, Figures 4(g), 4(h) and 4(i) compare the dependence of ICQ-SE and QuBan on the hardness of the underlying instance.
2307.16533
Fight or Flight: Cosmic Ray-Induced Phonons and the Quantum Surface Code
Recent work has identified cosmic ray events as an error source limiting the lifetime of quantum data. These errors are correlated and affect a large number of qubits, leading to the loss of data across a quantum chip. Previous works attempting to address the problem in hardware or by building distributed systems still have limitations. We approach the problem from a different perspective, developing a new hybrid hardware-software-based strategy based on the 2-D surface code, assuming the parallel development of a hardware strategy that limits the phonon propagation radius. We propose to flee the area: move the logical qubits far enough away from the strike's epicenter to maintain our logical information. Specifically, we: (1) establish the minimum hardware requirements needed for our approach; (2) propose a mapping for moving logical qubits; and (3) evaluate the possible choice of the code distance. Our analysis considers two possible cosmic ray events: those far from both ``holes'' in the surface code and those near or overlapping a hole. We show that the probability that the logical qubit will be destroyed can be reduced from 100% to the range 4% to 15% depending on the time required to move the logical qubit.
Bernard Ousmane Sane, Rodney Van Meter, Michal Hajdušek
2023-07-31T09:56:33Z
http://arxiv.org/abs/2307.16533v1
# Fight or Flight: Cosmic Ray-Induced Phonons and the Quantum Surface Code ###### Abstract Recent work has identified cosmic ray events as an error source limiting the lifetime of quantum data. These errors are correlated and affect a large number of qubits, leading to the loss of data across a quantum chip. Previous works attempting to address the problem in hardware or by building distributed systems still have limitations. We approach the problem from a different perspective, developing a new hybrid hardware-software-based strategy based on the 2-D surface code, assuming the parallel development of a hardware strategy that limits the phonon propagation radius. We propose to flee the area: move the logical qubits far enough away from the strike's epicenter to maintain our logical information. Specifically, we: (1) establish the minimum hardware requirements needed for our approach; (2) propose a mapping for moving logical qubits; and (3) evaluate the possible choice of the code distance. Our analysis considers two possible cosmic ray events: those far from both "holes" in the surface code and those near or overlapping a hole. We show that the probability that the logical qubit will be destroyed can be reduced from 100% to the range 4% to 15% depending on the time required to move the logical qubit. Quantum computer, Quantum error correction, Cosmic ray This work is supported by JST Moonshot R&D Grant (JPMIMS2061). ## I Introduction Quantum computers promise a change in the range of problems that can be solved via digital computation but face the crucial challenge of error handling. Error correction can be achieved through coding theory, building a logical qubit, which is a grouping of several (or many) physical qubits, to reduce the probability of errors. These error correction techniques have been proven to work in quantum systems where errors are uncorrelated [1, 2, 3, 4, 5]. 
Despite this, many worry about cosmic rays impinging on quantum chips since a cosmic ray event (CRE) can produce phonons that induce correlated errors that affect multiple qubits at the same time [6, 7, 8]. A phonon is a quasiparticle, an emergent phenomenon that behaves like an independent quantum particle but isn't a fundamental one. A phonon is the quantum unit of vibration in a material (usually a crystalline lattice). We don't notice them at room temperature because everything constantly vibrates, but phonons become important for several critical quantum technologies at millikelvin operating temperatures. In superconducting wires, phonons can break apart the pairs of electrons known as Cooper pairs that are central to the operation of superconducting qubits [9, 10, 11]. Quantum error correction would be hindered by these correlated errors [12, 13]. With important computations on fault-tolerant quantum computers expected to take days or even months [14, 15], CREs occurring at the rate of several per minute will impose an unacceptable upper bound on the length of computations that can be successfully executed. Transmon superconducting qubits use two slightly different energy levels [16, 17]. By convention, the \(\ket{1}\) state is the higher energy state, and the \(\ket{0}\) is the ground state. In this scenario, the cosmic ray strike causes a lot of phonon vibrations in the chip substrate, causing \(\ket{1}\) to decay to \(\ket{0}\), but not \(\ket{0}\) to excite to \(\ket{1}\). This asymmetry is one of the signatures that error detection can use to detect strikes. McEwen _et al._[6] set up the system in a state of all ones and looked for correlated decays to zeroes. Hence, they show how the errors spread from the strike location and the damage they can cause across the chip. They determined that in their system, the effects take \(25\) milliseconds or so to fade away, a very long time compared to solid-state qubit lifetimes. 
As a solution, the authors in [18] (including two of the authors of this paper) developed a distributed quantum error correction scheme with two levels of encoding (an intra-chip surface code concatenated with an inter-chip CSS code such as the Steane code). They showed that their proposal reduces the rate of data loss from CREs from the physical event rate of once every \(10\) seconds to \(1\) loss per month. Suzuki _et al._ propose a fault-tolerant architecture based on superconducting and surface code, in which errors are avoided by dynamically increasing the code distance [19]. The effect of cosmic rays can also be reduced through engineering approaches that involve changes in hardware [20, 21, 22]. Through changes on the material level, researchers aim to minimize the surface impact of the cosmic ray by trapping quasiparticles or channeling the cosmic ray's energy [20, 21, 22]. To tackle CREs, most of the mitigation strategies try to encompass one or more of the following goals: 1. Reduce the incidence of strikes, 2. Reduce the range or rate of propagation, 3. Reduce the impact on logical states when a strike happens. However, these approaches still have their limitations. In silicon, sound propagates anisotropically, but we can use 2.5km/sec., or 2.5mm/\(\mu\)sec, as a reasonable value [6]. The measured lifetime of phonons in superconducting chips corresponds to propagation distances greater than \(60\) meters before the phonons dissipate. This distance is unlimited compared to the size of superconducting chips. Unfortunately, quantum error correcting codes use physical qubits that are physically close together, presenting a challenge. Thus, we must begin by assuming hardware improvements and then ask how software can contribute to the solution. The first approach above involves reducing the cross section or adding shielding if ambient radioactivity is involved to minimize the strike probability. 
The second approach is to change the structure of the chip substrate, attempting to dampen the vibrations and reduce their propagation. The third approach involves correcting errors so rapidly that they don't propagate or encode states in long-range correlations so that local destruction isn't a problem. This paper focuses on software-based strategies for mitigating the effects of cosmic ray hits on systems using the 2-D surface code. Our proposed approach is to flee the area: move logical qubits far enough from the strike's epicenter to preserve our logical information. Hence, the hybrid solution we propose begins with the hardware strategy, which limits the radius of phonon propagation, and ends with software strategy-based surface code. ### Contributions Our main contributions are: * The use of 2-D array surface codes in mapping against cosmic ray impacts: we map the qubits so that whenever a cosmic ray occurs, we can minimize the time steps that affect the moving of potentially vulnerable qubits. * To establish goals for the hardware parameters needed to use our approach We know that a fast-moving state depends on the position of the escapement (the nearest open trajectory) and the density of our mapping. Hence, we propose this new hybrid hardware-software-based strategy based on 2-D surface code. The manuscript is organized as follows: Surface code is presented in Section II. Then, we describe the hardware and software strategies to fight against the cosmic ray in Section III. We present the proposed strategy to flee the strike's epicenter in Section IV. We describe our solution for multiple logical qubits in section V. We analyze our proposals in section VI. We conclude this paper with a discussion in Section VII. ## II Surface code A surface code is a topological code in which syndromes are measured locally, and fault tolerance can be achieved. The qubits are arranged on a lattice to facilitate interactions between neighboring qubits as illustrated in Fig. 
1. The 2-D surface code encodes logical qubits in the relationship between boundaries on a specialized, 2-D lattice cluster state [4, 23, 24]. This relationship can take several forms: a single, independent block (in which logical gates can be executed either transversally or using lattice surgery [25]), a block with a single "hole" cut into it (made by simply not measuring the stabilizers inside of the area, creating a new boundary for the surface), or by using pairs of holes, making for a flexible arrangement and allowing many qubits on a single, large surface. This paper focuses on the Raussendorf two-hole form [26]. Roughly half of the qubits are data qubits, and half are syndrome qubits; stabilizers for groups of four qubits (or three along the edges) are all measured simultaneously for a set of X stabilizers and then for a set of Z stabilizers, as shown in Fig. 2. Lattice cycle time \(t_{c}\) refers to the time it takes to measure the \(X\) and \(Z\) stabilizers and correct any detected errors. Across the appropriate region of the lattice, \(N\) data qubits will have \(N-1\) stabilizers, leaving the single degree of freedom that becomes our logical qubit. Because the state is encoded in the relationship between two boundaries, a single-qubit logical gate is executed by flipping a string of data qubits connecting the two boundaries. Consequently, logical errors also involve flipping such a string; the code distance \(d\) is the number of data qubits in the shortest such string. Because of the structure of the stabilizer measurements and the possibility of errors in stabilizer measurement, this code distance extends Fig. 1: In the 2-D surface code, qubits are coupled only to their neighbors. Half of the qubits (blue) hold an entangled data state. One-quarter of the qubits are used for X stabilizer measurements and the remaining one-quarter for Z stabilizer measurements (red). Fig. 
2: The circuits that perform the surface code Z stabilizer measurement and the X stabilizer measurement can be partially overlapped in execution. not only in the spatial dimension but also in the temporal one, creating a 3-D space-time structure of rectilinear prisms. A hole can be moved from one place to another simply by changing the set of stabilizers on the surface measured in each cycle, extending and shrinking the hole, as in Fig. 3. Two-qubit gates (e.g., CNOT) are executed by _braiding_ two pairs of holes by moving the holes on the surface. The time required for this braiding operation depends on the code distance \(d\). ## III Fight: Distance and Delocalization ### _Hardware Strategies_ Hybrid strategies must be able to physically limit the maximum radius of phonon propagation (which we will call \(r_{\text{max}}\)), or any software strategy will inevitably be overcome. Hardware techniques come with tradeoffs we must evaluate. If length \(l\) is the physical qubit lattice spacing (e.g., marked as 1mm in Fig. 4), increasing \(l\) can slow the effective rate at which lattice cell-to-lattice cell propagation occurs, providing more time for software-based strategies to work and more time and distance for the phonons to dissipate. However, the desired physical interaction determines the preferred value of \(l\). If time \(t_{c}\) is the lattice cycle time, a shorter \(t_{c}\) allows logical qubits to be moved more quickly, benefiting the software flight strategies below. Fig. 4: With a two-hole logical qubit and a cosmic ray hit in between, both holes must move on clear, open rectilinear trajectories to flee the effects. Fig. 3: A hole can flee vertically or horizontally any lattice distance along an available open trajectory (blue diagonally shaded areas) in a fixed amount of time (determined by the code distance \(d\)). Suppose the two holes forming a logical qubit are laid out horizontally. 
In that case, horizontal movement (above) requires twice as long as vertical movement (below) because one hole blocks the movement of the other, preventing both holes from moving simultaneously. The arrows indicate the inter-hole relationship that expresses the logical qubit. However, the time required for the circuit in Fig. 2 is determined by the two-qubit gate times, which are determined by the relative qubit interaction frequency and strength and the measurement time, where longer measurement times may be higher fidelity. Higher-fidelity physical operations reduce the required code distance, reducing the physical area of a logical qubit and hence the cross-section presented to potential incoming cosmic rays and shortening the hole move time. However, this reduction in code distance naturally reduces the physical distance between holes, and this tradeoff is investigated below. ### _Software strategies_ Once the hardware requirements are met, the next step is software. As mentioned earlier, the idea is to move the logical qubit far enough to prevent the CRE from destroying it. In this paper, as in our previous work [18], we only consider errors caused by cosmic rays. We assume that the code distance has been set to suppress logical state errors due to ordinary decoherence thoroughly enough that we can ignore them. We must be able to detect a cosmic ray strike as quickly as possible. While the details of such a determination remain to be determined, we can describe the general outline. As noted earlier, CREs preferentially result in \(|1\rangle\rightarrow|0\rangle\) decay. If all of the qubits in a Z stabilizer decohere simultaneously, the syndrome qubit will still be measured as 0, indicating no error. However, the X stabilizer will result in 50/50 measurements of 0 and 1. This sudden asymmetry in syndrome values is a marker of CREs, especially if seen growing in a ring. In Fig. 
5 and other places, we will designate the time at which a CRE is unequivocally detected either \(\Delta\) or Delta. Complementing the hardware strategies above, software (including the flexible elements of error correction) can contribute to solving the problem in ways we study in several sections. The simplest solution is to use a code distance that results in physical hole separation exceeding the dissipation radius \(r_{\text{max}}\). However, the hole must also have a radius \(>r_{\text{max}}\), or the phonon front can consume a single hole. Such a large physical radius is likely impractical, so we investigate a more dynamic approach in the next section. ## IV Flight: Detection and Escape Our proposed strategy is to _flee the area_: move logical qubits far enough from the strike's epicenter to preserve our logical information. This is more easily accomplished using the 2-D surface code than other codes. In addition, as the state remains vulnerable during the move or while we maneuver other holes to allow us to move a particular hole, we must examine how well we can protect the logical qubit state in these cases. Implementing this strategy will involve several phases. With time measured in units of the surface code cycle time \(t_{c}\), assume detection of a hit in time \(t=\Delta\) and an immediate and accurate response after that time. With phonon propagation velocity \(v_{p}\), the phonon front has advanced to a ring of radius \(r_{\Delta}=\Delta v_{p}\) before software mitigation begins. The steps above are illustrated in Figs. 4 and 5. First, consider the unconstrained movement of a single hole. The complete sequence of events is as follows: 1. \(t=0\): The cosmic ray event occurs. 2. \(t=\Delta\): The strike is detected via a combination of hardware and software. 3. \(t=\Delta+1\): The move of the hole(s) begins. 4. \(t=\Delta+1+d\): The move completes. 5. 
\(t=\frac{r_{\text{max}}}{v_{p}}\): The logical data (hole) waits out the storm as the phonons dissipate. With appropriate management of resources, continuing the planned computation during the storm should be possible. Fig. 5: The hole can flee any distance along an available rectilinear path in a fixed amount of time. Max size at \(t=r_{\text{max}}/v_{p}\), move begins at \(t=\Delta+1\), detection at \(t=\Delta\), event at \(t=0\). Fig. 6: An example of multi-qubit mapping using 2-D array surface codes. In this mapping, the detection time determines the qubits’ movement order. Wherever the CRE is detected, it enables easy qubit moves and limits time steps for moving our logical qubits. Figure 7: Examples of logical qubits moving and mapping a large number of qubits 6. The hole is returned to its original location, or other preparations are made for continuing the computation and weathering the next strike. During the movement of the hole (from \(t=\Delta\) to \(t=\Delta+1+d\)), the phonon radius continues to grow, and the state is vulnerable until the move completes and the hole is far enough away for the phonons to dissipate. Our approach must meet several constraints for this to work reliably: 1. Phonon propagation must not overwhelm an entire logical qubit in less than \(t=\Delta+1+d\). 2. There must be somewhere for the logical qubit to move a sufficient distance away. The first constraint dictates the size of a logical qubit which depends on the speed of detection and movement. The lower the displacement speed, the greater the required length of the logical qubit. The second constraint dictates the radius \(r_{\text{max}}\) that must be achieved in hardware and the density and placement of other logical qubits (discussed more in Sec. V) used in the software. 
Hence we make the following assumptions: #### IV-B1 Assumptions We assume the following conditions: \[v_{p}(\Delta+1)<(x_{0}-v_{p}(\Delta+1))+l(d-1) \tag{1a}\] \[r_{\text{max}}<(x_{0}-v_{p}(\Delta+1))+dl+l(d-1) \tag{1b}\] where \(r_{\text{max}}\) is the maximum radius of the phonon ring, \(dl\) is the distance traveled by the logical qubit between \(t=\Delta+1\) and \(t=\Delta+1+d\) (the time the move is completed), \(x_{0}\) is the distance from the cosmic ray location to the hole position at \(t=0\), \((x_{0}-v_{p}(\Delta+1))\) is the distance between the hole and the cosmic ray position when we start the move (\(t=\Delta+1\)), \(l\) is the physical qubit lattice spacing, and \(d\) is the code distance (distance between the holes). The first condition (1a) ensures that the logical qubit is not entirely compromised before the displacement process is triggered. The second condition (1b) guarantees that, whenever the ring of phonons reaches its maximum radius, the holes are at a sufficient distance or the phonon ring has consumed fewer than \(d-1\) qubits. Hence, ((1a) & (1b)) should be satisfied for our mitigation technique to work. ## V Multiple Logical Qubits As mentioned in the previous section, in a multi-qubit environment, our ability to move a hole quickly depends on the position of the hole relative to the nearest open trajectory and the density of qubits we have mapped onto the surface. Hence, we propose an approach based on the mapping. This approach uses 2-D array surface codes in mapping against cosmic ray impacts. If the logical qubits are closely packed, they can only be moved one at a time, increasing the time for the qubit nearest the strike site to flee by the number of qubits between it and the most immediate open space. In essence, we map the qubits so that whenever a CRE occurs, we can minimize the time steps needed to move potentially vulnerable qubits. Figure 6 illustrates a basic surface code mapping for several logical qubits. 
There is a displacement priority for each logical qubit based on the position and detection order of the CRE. Depending on this mapping, no matter where the CRE is detected, the number of time steps in our surface code array does not exceed \(3\) as shown in Fig. 7(a). Using this simple mapping as a starting point, Figure 7(b) proposes a generalization. The latter is nothing more than a concatenation (left and right) and layering (top and bottom) of Fig. 6. Moreover, it can be extended continuously. ## VI Evaluation On the one hand, the main objective of this analysis is to show that it is possible to satisfy the necessary conditions ((1a) & (1b)) of our technique for mitigation of CREs. On the other hand, we illustrate how the code distance \(d\) varies based on the space between qubits \(l\), the maximum radius of the phonon ring \(r_{\text{max}}\), and the CRE detection time \(\Delta\). We consider two basic CRE cases: * far from both holes (exactly halfway between): \(x_{0}=\frac{d}{2}\) * close to or overlapping with one hole: \(x_{0}=0\) ### _Simulation settings_ We used a Python constraint-programming tool (CpModel) for the simulation, which efficiently solves and evaluates problems with constraints. We consider the following parameters: * When we evaluate \(d\) depending on the value of \(l\): \[r_{\text{max}}=63\text{mm},\ \Delta=[1,25],\ dl=[1,1000000],\ \text{and}\ l=[1,60]\] (2) * When we evaluate \(d\) depending on the value of \(r_{\text{max}}\): \[r_{\text{max}}=[1,100],\ \Delta=[1,25],\ dl=[1,1000000],\ \text{and}\ l=1\text{mm},\ 5\text{mm},\ \text{and}\ 10\text{mm}\] (3) * When we evaluate \(d\) depending on the value of \(\Delta\): \[r_{\text{max}}=63\text{mm},\ \Delta=[1,25],\ dl=[1,1000000],\ \text{and}\ l=1\text{mm},\ 5\text{mm},\ \text{and}\ 10\text{mm}\] (4) We chose \(\Delta\) and \(dl\) randomly from the intervals defined in (2), (3), and (4). 
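As a complement to the CpModel-based simulation described above, the feasibility conditions (1a) and (1b) can also be explored with a simple brute-force sketch. The code below is an assumption-laden stand-in, not the paper's simulator: units are mm and \(\mu\)sec, the flight distance \(dl\) is interpreted as \(d\cdot l\), and \(x_{0}\) is taken as \(d\cdot l/2\) for a mid-point strike or \(0\) for a strike at a hole.

```python
# Brute-force stand-in for the paper's CpModel search: find the smallest
# code distance d satisfying conditions (1a) and (1b) of Sec. IV.
# Assumed units: mm and microseconds.

V_P = 2.5      # phonon propagation speed, mm/us (silicon, Sec. VI-A)
R_MAX = 63.0   # assumed maximum phonon ring radius, mm

def conditions_hold(d, l, delta, x0):
    """Check feasibility conditions (1a) and (1b) for one parameter set."""
    reach = V_P * (delta + 1)   # phonon front radius when the move starts
    gap = x0 - reach            # hole-to-front distance at t = delta + 1
    cond_1a = reach < gap + l * (d - 1)
    cond_1b = R_MAX < gap + d * l + l * (d - 1)
    return cond_1a and cond_1b

def min_distance(l, delta, near_hole=False, d_max=200):
    """Smallest d in [3, d_max] satisfying both conditions, else None."""
    for d in range(3, d_max + 1):
        # Assumed strike positions: x0 = 0 near a hole, x0 = d*l/2 midway.
        x0 = 0.0 if near_hole else d * l / 2
        if conditions_hold(d, l, delta, x0):
            return d
    return None

if __name__ == "__main__":
    for l in (1, 5, 10):
        print(f"l = {l:>2}mm: midway d = {min_distance(l, delta=1)}, "
              f"near-hole d = {min_distance(l, delta=1, near_hole=True)}")
```

Consistent with Fig. 8, the required distance shrinks quickly as the spacing \(l\) grows, and the near-hole case always demands a larger \(d\) than the mid-point case.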
The speed of the phonons propagating through the chip varies depending on the material and ranges between about \(1-8\) km/s [11]. We use \(2.5\text{mm}/\mu\text{sec}\) in our simulations, as appropriate for silicon, the most popular material for substrates. We also assume that with a hardware strategy, the phonon propagation radius will be limited to \(r_{\text{max}}=63\) mm. In addition, we have selected \(l=1\text{mm}\), as in the Google Sycamore quantum processor [6]. ### _Evaluation and Discussions_ Based on the assumptions ((1a) & (1b)), after determining how long it takes to detect a CRE and flee from it (\(\Delta+1\)), we choose the code distance \(d\) such that we can always get away from the CRE and never lose a logical qubit. Figures 8, 9, and 10 illustrate the possible choices of the distance \(d\) under several scenarios. As can be observed in Fig. 8, the minimum necessary code distance \(d\) varies inversely with the separation between the qubits \(l\). When the inter-qubit spacing is small, the required code distance \(d\) becomes large. This value decreases considerably when the spacing increases. This corresponds to our expectations. Indeed, when there is enough space between the qubits, the probability of compromise of the data becomes low, allowing the choice of a small code distance \(d\). Figure 8 compares the variation of the code distance when the cosmic ray hits in between the holes and near a hole. We observe that for a small spacing, the required distance is larger when the cosmic ray strikes close to the hole than when it occurs in the middle. However, as the spacing increases, the distance tends to be the same in both cases. In other words, depending on the spacing between the qubits, we can set a distance that protects the logical qubit regardless of where the cosmic ray occurs. In Figs. 
9(a), 9(b), and 9(c), we can see the variation of the code distance \(d\) depending on the maximum radius of the phonon ring \(r_{\max}\), where \(l=1\)mm, \(l=5\)mm, and \(l=10\)mm, respectively, when the cosmic ray hits halfway between the holes. Figs. 9(d), 9(e), and 9(f) show the variation of the code distance \(d\) when the cosmic ray strikes near one hole, based on the maximum radius of the phonon ring \(r_{\max}\), where \(l=1\)mm, \(l=5\)mm, and \(l=10\)mm, respectively. As the maximum radius of the phonon ring increases, the required distance \(d\) increases linearly, as illustrated in Figs. 9(a), 9(b), 9(c), 9(d), 9(e), and 9(f). In other words, the more the phonon propagation can be limited, the lower the required code distance. However, we also observe that this distance decreases when we gradually increase the spacing between the qubits. In fact, as the spacing \(l\) varies from \(1\)mm to \(5\)mm, the distance range changes from \([10,70]\) to \([4,20]\). It changes from \([10,70]\) to \([3,11]\) as the spacing goes from \(1\)mm to \(10\)mm. So once we master the maximum radius of the phonon ring on the hardware side, we can define an appropriate distance by considering the spacing. A larger spacing also allows the same distance to protect our logical qubit for various maximum phonon ring radii. This justifies the horizontal lines that widen more and more. Moreover, the case where the cosmic ray hits near one hole requires a larger distance than when it occurs halfway between the holes, as illustrated in Figs. 9(g), 9(h), and 9(i). When the spacing between qubits increases, this difference becomes less important, so the curves tend to overlap. ### _Evaluation of the probability of failure_ Given our system's mapping, the logical qubit remains vulnerable when a CRE occurs inside one of its holes. Let's call this event \(\mathcal{A}\). We denote by \(\mathcal{E}\) the event of a CRE affecting fewer than \(d-1\) qubits. 
Thus the evaluation of the probability of the event (\(\mathcal{E}\wedge\overline{\mathcal{A}}\)) allows us to estimate the probability of success of our mitigation technique. (\(\mathcal{E}\wedge\overline{\mathcal{A}}\)) means that the CRE occurs somewhere other than inside a logical qubit's hole and affects fewer than \(d-1\) of its qubits. Since \(\mathcal{E}\) and \(\overline{\mathcal{A}}\) are two independent events, \(\mathcal{P}(\mathcal{E}\wedge\overline{\mathcal{A}})=\mathcal{P}(\mathcal{E})\times\mathcal{P}(\overline{\mathcal{A}})\), where \(\mathcal{P}(\overline{\mathcal{A}})=1-\mathcal{P}(\mathcal{A})\). Suppose, for instance, we have the mapping in Fig. 11, where the logical qubits are separated by \(d\), and the dashed line designates a region where a logical qubit might be accommodated. \((\frac{10d}{4}\times\frac{5d}{4})/(\frac{d}{4}\times\frac{d}{4})=50\) is the maximum number of holes (each square \(d/4\) by \(d/4\)) we can have in this frame. Consequently, the probability of a CRE occurring inside a hole is equal to \(\mathcal{P}(\mathcal{A})=\frac{2}{50}\) (a logical qubit consists of two holes). Let us evaluate \(\mathcal{P}(\mathcal{E})\). As in our previous paper [18], we consider that the cosmic ray event distribution follows the Poisson distribution. Therefore \(\mathcal{N}(t)\sim\mathcal{P}(\lambda\tau)\) and \[\mathcal{P}(\mathcal{E})=\mathcal{P}[\mathcal{N}(t)<d-1]=e^{-\lambda\tau}\sum_{k=0}^{d-2}\frac{(\lambda\tau)^{k}}{k!},\] where \(\lambda\) represents the chip's CRE rate, and \(\tau\) is the time needed to move our logical qubit to a safe location. Fig. 8: Variation of the necessary _code distance_ \(d\) when the cosmic ray events hit exactly halfway between the holes (\(x_{0}=d/2\)) and close to one hole (\(x_{0}=0\)) depending on the physical distance (\(l\)) between qubits where \(r_{\max}=63\)mm, \(\Delta=1\mu\)sec, \(dl=1\)mm. 
Hence \[\mathcal{P}(\mathcal{E}\wedge\overline{\mathcal{A}})=(1-\frac{2}{50})\times(e ^{-\lambda\tau}\sum_{k=0}^{d-2}\frac{(\lambda\tau)^{k}}{k!})\] So, we can suppress the failure rate caused by a CRE in the chip to \[1-\mathcal{P}(\mathcal{E}\wedge\overline{\mathcal{A}})=1-(1-\frac{2}{50}) \times(e^{-\lambda\tau}\sum_{k=0}^{d-2}\frac{(\lambda\tau)^{k}}{k!})\] The estimation of this probability (\(1-\mathcal{P}(\mathcal{E}\wedge\overline{\mathcal{A}})\)) is illustrated in Fig. 12 where \(\lambda=1/10\), and \(\tau\in[10^{-4},1s]\), \(\tau\) in second (\(s\)). Fig. 9: Variation of the _distance_\(d\) when the cosmic ray events hit exactly halfway between the holes and close to one hole depending on the maximum radius of the phonon ring \(r_{\max}\) where \(\Delta=1\mu\)sec, \(r_{\max}=[1,100]\), and \(dl=1\)mm ## VII Discussions and Limitations Quantum computers are expected to have a wide range of applications, which is why they generate much interest. However, their alleged Achilles heel of error correction seems vulnerable to a type of error called a cosmic ray event (CRE). Unlike classical errors, CREs produce correlated errors that can destroy the data held in several qubits at the same time. Hardware or distributed systems developed recently to counter this error are limited. Hence, this paper presents a different perspective by developing a hybrid hardware/software-based strategy based on the 2-D surface code with a hardware strategy that limits the phonon propagation radius. The software strategy we propose is to flee the area: move the logical qubits far enough away from the strike's epicenter to maintain our logical information. We provide the necessary specifications on the hardware side to support this approach on the software side easily. We propose a mapping based on surface code that enables easy qubit moves and limits time steps for moving our logical qubit to a safe location regardless of the CRE position. 
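For concreteness, the suppressed failure rate \(1-\mathcal{P}(\mathcal{E}\wedge\overline{\mathcal{A}})\) derived in Sec. VI can be evaluated directly. The sketch below is illustrative only: it assumes \(\lambda=1/10\) CREs per second (as in Fig. 12) and the \(\mathcal{P}(\mathcal{A})=2/50\) hole-occupancy probability from the example mapping; the code distance \(d=15\) is a hypothetical choice.

```python
import math

P_HOLE = 2 / 50  # P(A): probability a CRE lands inside one of the two holes

def failure_rate(d, lam, tau):
    """Probability 1 - P(E and not-A) that the logical qubit is destroyed:
    CREs arrive as a Poisson process of rate lam (per second) and the
    qubit needs tau seconds to flee; E = fewer than d-1 qubits affected."""
    # P(E) = P[N(tau) < d-1] = e^{-lam*tau} * sum_{k=0}^{d-2} (lam*tau)^k / k!
    p_e = math.exp(-lam * tau) * sum(
        (lam * tau) ** k / math.factorial(k) for k in range(d - 1)
    )
    return 1 - (1 - P_HOLE) * p_e

if __name__ == "__main__":
    for tau in (1e-4, 1e-2, 1.0, 100.0):
        print(f"tau = {tau:>8}s  failure rate = {failure_rate(15, 1/10, tau):.4f}")
```

With a fast escape (small \(\tau\)) the failure rate approaches the \(2/50=4\%\) floor set by direct hole hits, and it grows with the flight time, consistent with the 4% to 15% range quoted in the abstract.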
In addition to cosmic rays, ambient radioactivity can also produce correlated errors with high frequency. The ambient radioactivity produces phonons at a rate of about \(20\) mHz, while cosmic rays contribute at a rate of \(10\) mHz, in a typical \(1\) cm\({}^{2}\) chip [27]. In light of the fact that ambient radioactivity is involved, we can minimize the strike probability (because these particles have low energy) by reducing the cross-section or by adding shielding. Fig. 10: Variation of the _distance_ \(d\) when the cosmic ray events hit exactly halfway in between the holes and close to one hole depending on the detection time \(\Delta\), where \(r_{\max}=63\)mm and \(dl=1\)mm. Our technique is effective for both cosmic ray phonons and ambient radioactive phonons. This paper has focused only on superconducting quantum processors, which require electrons to stay in pairs, and those pairs can be split apart by energy much less than a single eV [6]. Quantum dots and other semiconductor-based approaches are likely susceptible to similar processes but with different constant rates. Ion traps and similar individual-atoms-in-a-vacuum methods (neutral atoms, Rydberg atoms) probably are not susceptible; not only is the probability of an individual atom getting hit vanishingly small, but also the mechanism by which the errors could propagate is missing entirely. In an ion trap, an individual atom would get kicked out of the system entirely, and the system would have to recognize that and recover. Quantum error correction can handle qubit loss quite well, but the engineering involved in keeping the rest of the atoms in place, inserting a new atom where the old one was lost, and rebuilding the code is a lot of work. Ion trap engineers are working on such approaches because their biggest problem is an imperfect vacuum, so stray atoms flying around occasionally collide with their data atoms. 
Photonic quantum computers certainly would not be susceptible to cosmic rays, except in the necessary photon detectors, where their impact is well understood. However, any scientific work has its limits. As you can see, our solution establishes hardware requirements that cannot yet be met. Hopefully, this work will give hardware engineers a boost of motivation to look in this direction. However, even optimistically, our logical qubit remains vulnerable during the move, and this work only considers one quantum chip. As part of our future work, we will study the combination of this work and distributed, fault-tolerant protection using the same hybrid hardware-software strategy [18].
2310.20328
ChiSCor: A Corpus of Freely Told Fantasy Stories by Dutch Children for Computational Linguistics and Cognitive Science
In this resource paper we release ChiSCor, a new corpus containing 619 fantasy stories, told freely by 442 Dutch children aged 4-12. ChiSCor was compiled for studying how children render character perspectives, and unravelling language and cognition in development, with computational tools. Unlike existing resources, ChiSCor's stories were produced in natural contexts, in line with recent calls for more ecologically valid datasets. ChiSCor hosts text, audio, and annotations for character complexity and linguistic complexity. Additional metadata (e.g. education of caregivers) is available for one third of the Dutch children. ChiSCor also includes a small set of 62 English stories. This paper details how ChiSCor was compiled and shows its potential for future work with three brief case studies: i) we show that the syntactic complexity of stories is strikingly stable across children's ages; ii) we extend work on Zipfian distributions in free speech and show that ChiSCor obeys Zipf's law closely, reflecting its social context; iii) we show that even though ChiSCor is relatively small, the corpus is rich enough to train informative lemma vectors that allow us to analyse children's language use. We end with a reflection on the value of narrative datasets in computational linguistics.
Bram M. A. van Dijk, Max J. van Duijn, Suzan Verberne, Marco R. Spruit
2023-10-31T10:15:20Z
http://arxiv.org/abs/2310.20328v1
ChiSCor: A Corpus of Freely Told Fantasy Stories by Dutch Children for Computational Linguistics and Cognitive Science ###### Abstract In this resource paper we release ChiSCor, a new corpus containing 619 fantasy stories, told freely by 442 Dutch children aged 4-12. ChiSCor was compiled for studying how children render character perspectives, and unravelling language and cognition in development, with computational tools. Unlike existing resources, ChiSCor's stories were produced in natural contexts, in line with recent calls for more ecologically valid datasets. ChiSCor hosts text, audio, and annotations for character complexity and linguistic complexity. Additional metadata (e.g. education of caregivers) is available for one third of the Dutch children. ChiSCor also includes a small set of 62 English stories. This paper details how ChiSCor was compiled and shows its potential for future work with three brief case studies: i) we show that the syntactic complexity of stories is strikingly stable across children's ages; ii) we extend work on Zipfian distributions in free speech and show that ChiSCor obeys Zipf's law closely, reflecting its social context; iii) we show that even though ChiSCor is relatively small, the corpus is rich enough to train informative lemma vectors that allow us to analyse children's language use. We end with a reflection on the value of narrative datasets in computational linguistics. ## 1 Introduction All of us tell stories on a daily basis: to share experiences, contextualise emotions, exchange jokes, and so on. There is a rich tradition of research into how such storytelling develops during infancy, and its relations with various aspects of children's linguistic and cognitive development (for an overview see Cremin et al., 2016). 
ChiSCor (**Chi**ldren's **S**tory **Cor**pus) was compiled to give a unique impulse to this tradition: it allows for (computationally) studying how children render character perspectives such as perceptions, emotions, and mental states throughout their cognitive and linguistic development. Existing research connecting language and cognition has largely relied on standardised tests (for review see Milligan et al., 2007). Yet, recently researchers across fields have urged for data reflecting the phenomena they study in their natural context. For instance, computational linguists call for better-curated and more representative language datasets (Bender et al., 2021; Paullada et al., 2021), language pathologists question whether standardised linguistic tests capture children's actual linguistic skills (Ebert and Scott, 2014), and cognitive scientists call for more naturalistic measures of socio-cognitive competences (Beauchamp, 2017; Nicolopoulou and Ünlütabak, 2017; Rubio-Fernandez, 2021). Following these considerations, ChiSCor has three key features: it contains fantasy stories that were told _freely_, within children's _social_ classroom environments, and stories are supplemented with relevant _metadata_. As such, ChiSCor documents a low-resource language phenomenon, i.e. freely produced and socially embedded child language. This paper makes the following contributions. First, we release ChiSCor and describe its compilation, data, and annotations in detail (Sections 2 and 3). Second, we show how ChiSCor fuels future work on the intersection of language, cognition, and computation, with three brief case studies (Section 4). We explore the Dependency Length Minimization hypothesis (Futrell et al., 2015) with ChiSCor's language features and show that the syntactic complexity in children's stories is strikingly stable over different age groups. Also, we extend emerging work on Zipf's law in speech (e.g. 
Lavi-Rotbain and Arnon, 2023; Linders and Louwerse, 2023) and find that ChiSCor's token distribution approximates Zipf better than a reference corpus consisting of language written by children, which we explain by appealing to the Principle of Least Effort. Furthermore, we show that ChiSCor, as a small corpus, is rich enough to be used with NLP-tools traditionally thought to require large datasets. We train informative lemma vectors with ChiSCor that can be used to analyse how coherently children use specific lemmas of interest, and potential bias in their language use. Together, our case studies demonstrate that even though storytelling is a cognitively challenging task, the language children employ is no less sophisticated (an observation also supported by Van Dijk and Van Duijn, 2021; van Dijk et al., 2023). And although corpora of narratives are often smaller, we show that we can (and should) leverage NLP-tools to unravel linguistic and cognitive mechanisms at work in children's language productions. As discussed in Section 5, we see this as an important stepping stone towards building more ecologically valid language models. ## 2 Background and relevance Various resources of Dutch child language exist. Before the 2000s, corpora typically consisted of child speech gathered in unstructured home settings involving smaller numbers of younger children (e.g. Schlichting, 1996; Wijnen and Verrips, 1998). Later, more structured language elicitation (e.g. with picture books) from larger samples of children became more common (e.g. Kuijper et al., 2015), and recently we have seen large corpora documenting thousands of essays in school settings (Tellings et al., 2018), and many hours of speech recordings in human-machine interaction contexts (Cucchiarini and Van hamme, 2013). 
Although these resources are valuable, what is currently lacking is a corpus of speech samples that are i) produced freely in natural social settings, while being ii) sufficiently independent or 'decontextualised' to be a good reflection of children's capacities, and iii) accompanied by metadata about children's backgrounds. The rest of this section will discuss these three characteristics, on the basis of which ChiSCor was compiled. **i)** The stories in ChiSCor were collected on a large scale in natural settings, because language as a social phenomenon is highly context-sensitive. The corpora mentioned above that include such settings are often limited in scale, whereas the newer corpora are large-scale but cover language produced for a machine interface or in a school assignment context, and thus are not socially embedded. **ii)** The stories in ChiSCor concern a special form of _decontextualized_ language use, in which children cannot draw on cues (like picture books), feedback from interlocutors (as they could in a conversation), or much shared background knowledge with the audience (that hears a new fantasy story). Thus, the cognitive demands in producing decontextualized language are high, since children have to simultaneously plan the story, monitor their language use, and make sure the audience can follow the plot (Nicolopoulou, 2019). As such, eliciting freely-told narratives is an acknowledged method for sampling an individual child's language skills on phonological, lexical, syntactic, and pragmatic levels (Southwood and Russell, 2004; Ebert and Scott, 2014; Nicolopoulou et al., 2015), as well as for assessing cognitive abilities, including memorizing, planning, and organizing world knowledge (McKeough and Genereux, 2003), and Theory of Mind (Nicolopoulou, 1993). Furthermore, proficiency in decontextualized language is known to be a good predictor of literacy and academic achievement (Snow and Dickinson, 1991). 
As far as we know, no larger-scale corpora of decontextualized Dutch child speech exist, and in the international context such corpora are also rare. **iii)** Existing resources often contain data on children's age and gender, but not on their backgrounds such as the educational levels of parents, which ChiSCor does contain (see Section 3). Metadata on subjects included in datasets becomes increasingly important, e.g. for gauging how representative language samples are (Bender et al., 2021), but also for follow-up work where e.g. partitioning the dataset is desired. ## 3 Corpus compilation ### Data collection We contacted primary schools, a day care and a community center in the South and South-West of The Netherlands to offer storytelling workshops, in the period 2020-2023. Workshops generally consisted of three stages: first, we openly brainstormed with children about what stories are, without enforcing our own ideas (e.g. what is a story, where can you find stories, what do you like about stories); second, we invited children to freely fill in the details of a fantasy story initiated by us as experimenters (e.g. filling in names, settings, events in a variation on the King Midas avarice myth); third and most importantly, we challenged children to individually make up and tell a fantasy story to their class peers, which we recorded. Our storytelling workshop was inspired by the Story Telling Story Acting (STSA) paradigm, originally developed by Paley (1990) and used as a framework in empirical studies by Nicolopoulou and Richner (2007), Nicolopoulou et al. (2015) and Nicolopoulou et al. (2022). Work by Nicolopoulou generally targets younger children using a longitudinal research practice integrated in the school curriculum, which involves both telling stories and acting them out. Our approach differs in that we included all primary school age groups (4-12y), but focused on storytelling only. 
Like in the STSA paradigm, children told stories live to an audience of peers, which comes close to narration in everyday social life: children explored themes like friendship and conflict, excitement over real and imagined events, and storytelling was interactive in the sense that their class peers reacted with laughter, disbelief, and so on. High-quality recordings were made with a Zoom H5 recorder. Recordings were manually transcribed into verbatim and normalised versions. In the normalised stories employed in the case studies (Section 4), noise such as false starts and broken-off words was manually corrected with as little impact on semantics and syntax as possible. Our project was approved by the Leiden University Science Ethics Committee (ref. 2021-18). Caregivers were informed beforehand and could optionally provide additional metadata, which ~33% (148) did. Our corpus, metadata, and code are available on OSF.1 See Table 1 for more details on the data and Table 2 for sample stories. Footnote 1: [https://shorturl.at/bv60X](https://shorturl.at/bv60X). ### Metadata Here we highlight two variables from the metadata we collected: children's age and the educational levels of caregivers. Most ages are well-represented (Figure 1), but older children (ages 10-12) are under-represented; fewer teachers from older age groups signed up for the workshop. For educational levels, we see that ~53% of the children have two highly educated caregivers (in the Dutch system, a higher degree equals a minimum of 15 years of education), while ~24% have caregivers with two vocational (or lower) degrees (a vocational degree equals a maximum of 12 years of education) (Van Elk et al., 2012). Thus, in the part of our sample for which extra metadata is available, children of caregivers with higher socioeconomic status (SES) are over-represented. 
Yet, selection bias is higher in the metadata than in the language samples in ChiSCor as a whole: while we were able to include stories told by children from schools in more challenged neighbourhoods in ChiSCor, metadata depended on caregivers filling out forms, which caregivers with higher SES did more often.

| **Type** | **Quantity** | **Details** |
| --- | --- | --- |
| Audio | ~11.5 hours | 619 44.1kHz .wav files |
| Text | 619 stories | ~74k words, verbatim and normalised .txt files |
| Metadata | All 442 children | School grade (reflecting age group) |
| Extra metadata | 148 children | Exact age, reading time, education parents, no. of siblings, gender, lang. disorder (y/n), home language Dutch (y/n) |
| Linguistic features | All 619 stories | E.g. vocabulary perplexity, vocabulary diversity, syntactic tree depth, words before root verb, syntactic dependency distance |
| Annotations | All 619 stories | Character complexity (see Section 3.3) |

Table 1: Details on ChiSCor's data. Besides the Dutch stories, ChiSCor also features an additional set of 62 English stories, for which audio, text, (extra) metadata, linguistic features and annotations are also available.

| **Level** | **Example** | **ID** |
| --- | --- | --- |
| Actor | _Once upon a time there was a castle. There stood a throne in the castle and a princess sat on the throne. And the princess had a unicorn._ | 093101 |
| Agent | _Once upon a time there was a prince and he saw a villain. And then he called the police. And then the police came. And then he was caught. The end._ | 023101 |
| Person | _Once upon a time there was a girl. She really wanted to play outside. Her mother did not allow it. She went outside anyway and her mother asked where are you going? And the girl said I am going outside. The end._ | 010101 |

Table 2: Translated stories from ChiSCor, traceable with ID. Underscoring shows the character the label is based on.

### Annotations Here we highlight two types of annotations available in ChiSCor: socio-cognitive annotations in the form of character complexity annotations, and linguistic annotations in the form of automatically extracted features. Regarding **social cognition**, ChiSCor provides character complexity annotations that involve one label per story indicating the 'depth' of the most complex character encountered in a story (examples in Table 2). Character depth can be used as a window into the socio-cognitive skills of storytellers and was adapted from Nicolopoulou and Richner (2007) and Nicolopoulou (2016). The scale ranges from 'flat' _Actors_ merely undergoing or performing simple actions, to _Agents_ having basic perceptive, emotional, and intentional capacities, possibly in response to their environments, to 'fully-blown' _Persons_ with (complex) intentional states that are explicitly coordinated with the storyworld. Labelling was done with CATMA 6 (Gius et al., 2021) and in-text annotations are available on OSF. Labelling character depth requires expert annotation, given that children's stories often progress in non-obvious ways. Interrater agreement was obtained in two rounds. Two experts A and B first labelled a random subset of 8% of stories, yielding moderate agreement (Cohen's \(\kappa\) = .62). After calibration (discussing disagreements to consensus), A labelled the rest of the corpus, and B labelled another random 8% as a second check, for which Cohen's \(\kappa\) = .84 was obtained, indicating almost perfect agreement (Landis and Koch, 1977). 
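Cohen's \(\kappa\) as used for these agreement checks can be computed as follows (a standard implementation sketch with hypothetical rater labels, not ChiSCor's actual annotation data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), chance-corrected agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items the two raters label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical raters labelling six stories with character-depth levels.
rater_a = ["Actor", "Actor", "Agent", "Agent", "Person", "Person"]
rater_b = ["Actor", "Agent", "Agent", "Agent", "Person", "Person"]
print(cohens_kappa(rater_a, rater_b))  # 0.75
```

With five of six labels matching and an expected chance agreement of 1/3, this toy example yields \(\kappa\) = 0.75; the same formula underlies the .62 and .84 values reported above.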
Regarding **linguistic features**, we extracted mean dependency distance between syntactic heads and dependents as a measure of syntactic complexity with spaCy 3.5 (Honnibal and Johnson, 2015). We follow Liu (2008) and Liu et al. (2017) and calculated mean dependency distance as \(DD(S)=\frac{1}{n-s}\sum_{i=1}^{n-s}|DD_{i}|\), where \(DD_{i}\) is the absolute distance in number of words for the \(i\)-th dependency link, \(s\) the number of sentences, and \(n\) the number of words in a story, so that \(n-s\) is the number of dependency links. Language employing larger dependency distances is more demanding for working memory and thus harder to process (Grodner and Gibson, 2005; Futrell et al., 2015). We further elaborate on dependency distance in a case study in Section 4.1. We emphasise that many more linguistic features are included on OSF than we can discuss here, e.g. lexical perplexity and syntactic tree depth as common measures of linguistic proficiency and development (e.g. McNamara et al., 2014; Kyle, 2016; Van Dijk and Van Duijn, 2021). ## 4 Case studies with ChiSCor We conduct three small case studies to illustrate ChiSCor's potential. Since we aim to show ChiSCor's versatility to the broader community, we draw in Study 1 (Section 4.1) on ChiSCor's own linguistic annotations and metadata; in Study 2 we leverage ChiSCor in a corpus linguistics-style analysis of Zipf's law in child speech (Section 4.2); and in Study 3 we show the feasibility of using ChiSCor with NLP-tools that are traditionally thought to require larger corpora (Section 4.3). ### Case study 1: Syntactic Complexity The Dependency Length Minimization (DLM) hypothesis states that languages have evolved to keep syntactically related heads and dependents close together (such as an article modifying a noun), so that anticipation of a noun after an article is not stretched over many intervening words, which increases cognitive load and/or working memory costs (Futrell et al., 2015). Although DLM has been observed for various languages in various studies (e.g. 
Gildea and Temperley, 2010; Futrell et al., 2015), as far as we know, DLM for child speech has not been explored. ChiSCor concerns live storytelling, which is known to be a cognitively intense language phenomenon (see Section 2), which makes the DLM interesting to explore in ChiSCor's context. It is intuitive to expect that children employ smaller dependency distances to reduce cognitive load.

Figure 1: Ages of 148 children and educational levels of their caregivers. Bars in each plot stack up to 100%.

We leverage ChiSCor's linguistic features (dependency distance as explained in Section 3.3) and metadata (age groups) to analyse the developmental trend under the DLM. Especially for younger children (e.g. 4-6y), DLM could be expected to be more pronounced, given that they are arguably less proficient language users with little formal language training in school. Our modelling approach was as follows. In a linear model we included contrast-coded predictors, such that each predictor indicated the mean dependency distance difference with the previous grade ('backwards difference coding'), to model a trend over age groups. Dependency distance conditioned on age is plotted in Figure 2 for 442 stories of 442 children, and coefficients of the model are given in Table 3. Note that for those children who told multiple stories, we included only the first story to maximize independence of observations. Dependency distance appeared to be surprisingly stable across age groups: no single predictor significantly predicted dependency distance (Table 3, all \(p>.05\)), nor did all predictors together (\(F_{6,435}=1.078,p=.38,R_{adj}^{2}<.01\)). Contrary to expectations, it was not the case that younger children, as less proficient language users, employ shorter dependency distances, nor do children employ longer dependency distances as they grow older. 
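As a rough illustration of the dependency-distance measure used here (a sketch only; in the paper the dependency links come from spaCy parses, not hand-coded head positions), mean dependency distance over a story can be computed from per-token head positions as follows:

```python
def mean_dependency_distance(sentences):
    """DD(S) = 1/(n-s) * sum(|DD_i|) over all n-s dependency links.

    Each sentence is a list of 1-based head positions, one per token,
    with 0 marking the root (which contributes no dependency link).
    """
    total_links = 0
    total_distance = 0
    n = 0  # total number of words
    for heads in sentences:
        n += len(heads)
        for pos, head in enumerate(heads, start=1):
            if head == 0:  # root token: no incoming link
                continue
            total_distance += abs(head - pos)
            total_links += 1
    s = len(sentences)
    assert total_links == n - s  # exactly one link per non-root token
    return total_distance / (n - s)

# Toy parses: "the cat slept" (the->cat, cat->slept, root) and "dogs bark".
story = [[2, 3, 0], [2, 0]]
print(mean_dependency_distance(story))  # 1.0
```

Here \(n=5\) words and \(s=2\) sentences give three links of distance 1 each, so \(DD(S)=3/3=1.0\).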
Interestingly, in backwards difference coding, the intercept is the grand mean of dependency distance of all groups (2.66), which is close to the mean dependency distance (2.52) found for Dutch written by adults and reported by Liu (2008). We make a start with trying to explain why, in storytelling by younger children (4-6y), we find higher dependency distances than expected. Manual examination of narratives from this group showed that children often use syntactically complex constructions to refer to past events, even when simpler alternatives are available or preferred. The typical tense for narrative contexts is the Simple Past (SP) for many languages (Zeman, 2016), and SP can be used for completed and ongoing events in the past (Boogaart, 1999) in the storyworld. SP is syntactically simple; it requires only a single inflected verb. Young children, however, often use Present/Past Perfect (PrP/PaP) and Past Progressive (PP) constructions. These forms are used to indicate ongoing (PrP/PP) and completed (PaP) events in the past, and are syntactically similar in that they all involve an auxiliary depending on a (past) participle (PrP/PaP) or infinitive (PP) that is typically at utterance-final position, thus creating complex syntax. Figure 3 provides an illustration from our data of a child narrating a completed past event in PaP, which pushes dependency distance well beyond the average reported by Liu (2008), although the more efficient option would be SP. Although it is known that young children in experimental contexts also refer to past events with PrP and PP constructions instead of SP (Schaerlaekens and Gillis, 1993; Van Koert et al., 2010), in the context of decontextualized language use and the DLM our finding was unexpected.

Figure 3: Top: original utterance from story 033201 in PaP with mean dep. dist. = 3.2. Bottom: paraphrase in SP with mean dep. dist. = 2.

We find a 
possible explanation in work by Van Koert et al. (2010): separating tense (auxiliary) from lexical information (verb) yields more complex syntax on the one hand, but makes processing easier for an audience on the other hand. After all, the audience does not have to decode different types of information packed in a single inflected verb. The trade-off between syntactic simplicity and ease of processing could indeed explain why ChiSCor's spoken narratives, produced live in front of an audience of peers, contain relatively high proportions of PrP and PP. Follow-up work would be needed to further substantiate this idea.

| **Predictor** | \(\beta\) | SE | \(p\) |
| --- | --- | --- | --- |
| _Intercept_ | 2.66 | .02 | .00 |
| Diff. 6-7/4-6 | -.09 | .07 | .20 |
| Diff. 7-8/6-7 | .11 | .07 | .13 |
| Diff. 8-9/7-8 | -.09 | .06 | .16 |
| Diff. 9-10/8-9 | .12 | .07 | .08 |
| Diff. 10-11/9-10 | .01 | .10 | .91 |
| Diff. 11-12/10-11 | -.03 | .12 | .81 |

Table 3: Coefficients of the linear model. Each predictor indicates the difference in DD with the previous age group.

Figure 2: Dependency Distance (DD) conditioned on age groups as customary in Dutch primary education. Dashed line indicates mean DD reported by Liu (2008). Stars indicate means.

### Case study 2: Zipf distributions Zipf distributions, where token frequencies are proportional to their rank \(r\) according to \(f(r)\propto\frac{1}{r^{\alpha}}\) with \(\alpha=1\) (Zipf, 1932), were found for many language samples (Xiao, 2008; Ferrer i Cancho, 2005; Yu et al., 2018; Smith, 2007; Tellings et al., 2014; Lavi-Rotbain and Arnon, 2023), but are also subject to debate (for review see Piantadosi, 2014); is Zipf a trivial mathematical artefact or a fundamental property of human cognition and language? 
As Linders and Louwerse (2023) note, to answer this question we should analyze Zipf in more natural forms of communication, such as speech instead of written language, and invoke cognitive mechanisms underlying Zipf, such as the Principle of Least Effort (PLE). The PLE assumes that senders prefer efficient communication using frequent, hence often shorter and ambiguous words, whereas receivers prefer larger vocabularies of longer, infrequent words to more easily decode messages. Zipf distributions are considered the balanced trade-off between sender and receiver needs (Ferrer i Cancho and Solé, 2003). The PLE is salient in ChiSCor's context: since live storytelling is a cognitively intense form of decontextualized language use (Section 2), this could lead to a bias in storytellers towards frequent tokens, to alleviate cognitive load, a prediction made by Linders and Louwerse (2023). Yet, at the same time, if receiver needs are neglected, receivers cannot follow along; they cannot ask for clarification during storytelling as would be possible in e.g. normal conversations, which is something senders take into account to prevent losing their audience, which equals losing the point of storytelling. This balance is arguably less pronounced in written discourse, where there is opportunity to reconsider earlier parts, and no immediate interaction, thus less pressing receiver needs. Here we pit the token distribution of ChiSCor against that of BasiScript, a corpus of _written_ child language (subsection 'free essays', ~3.4M tokens from thousands of Dutch children of 7-12 years (Tellings et al., 2018)), to compare Zipfian distributions in speech to the written domain. We followed Piantadosi (2014) in performing a binomial split on the observed frequency of each token to avoid estimating frequency and rank on the same sample. We used Zipf's original formula introduced above rather than derivations to model token distributions, following Linders and Louwerse (2023). 
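The binomial-split-and-fit procedure can be sketched as follows (synthetic token counts stand in for the ChiSCor/BasiScript vocabularies; the least-squares fit in log-log space mirrors the linear model described in this section):

```python
import math
import random

random.seed(0)

def zipf_slope(counts):
    """Binomial split (Piantadosi, 2014): rank tokens on one half of each
    count, estimate frequency from the other half, then fit
    log10(freq) = log10(intercept) + slope * log10(rank) by least squares."""
    half_a, half_b = [], []
    for c in counts:
        a = sum(random.random() < 0.5 for _ in range(c))  # Binomial(c, 0.5)
        half_a.append(a)
        half_b.append(c - a)
    # Rank by the first half; read frequencies from the independent second half.
    order = sorted(range(len(counts)), key=lambda i: -half_a[i])
    pts = [(math.log10(rank + 1), math.log10(half_b[i]))
           for rank, i in enumerate(order) if half_b[i] > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic Zipfian counts f(r) ~ 1/r for 500 token types.
counts = [round(50000 / r) for r in range(1, 501)]
print(zipf_slope(counts))  # close to -1 for a Zipfian sample
```

On such a perfectly Zipfian sample the fitted slope lands near \(-1\); the paper's comparison rests on how far each corpus's fitted slope deviates from this value.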
We log-transformed (base 10) token rank and frequency to model Zipf linearly as \(\log(\textit{frequency})=\log(\textit{intercept})+\textit{slope}\cdot\log(\textit{rank})\). We see in Figure 4 that both corpora approximate the plotted Zipf lines with good model fits (\(R^{2}\geq.90\)). Yet, ChiSCor approximates the Zipf line more closely than BasiScript, with a slope closer to \(-1\), supporting the idea that in live storytelling, balancing sender _and_ receiver needs is more pressing than in written language, even though in live storytelling a bias towards frequent tokens seems intuitive. The larger negative slope (-1.13) fitted for BasiScript indicates that senders rely more on frequent tokens and employ fewer infrequent tokens, which confirms the prediction that in written discourse, receiver needs are less pressing. Follow-up work could investigate Zipf distributions in both corpora beyond tokens, e.g. on parts-of-speech or utterance segments (Lavi-Rotbain and Arnon, 2023; Linders and Louwerse, 2023).

Figure 4: Rank-frequency plots of ChiSCor and BasiScript. Dashed lines indicate Zipf's law with \(\alpha=1\), blue/orange lines indicate model fits.

### Case study 3: Lexical Semantics with Word2Vec The third case study demonstrates the usability of ChiSCor as a relatively small corpus with common NLP-tools. We use a Word2Vec model (Mikolov et al., 2013) to visualize lexico-semantic differences in children's language use in ChiSCor and BasiScript. It is commonly assumed that training high-quality word vectors requires large corpora (>100 million tokens) (Mikolov et al., 2013; Altszyler et al., 2016); ChiSCor and BasiScript are much smaller with ~74k and ~3.4M tokens respectively. Still, it is worthwhile to see how well ChiSCor allows a computer to infer lexico-semantic information, since vector representations are the starting point for many downstream NLP tasks, and research in computational and cognitive linguistics (e.g. Beekhuizen et al., 2021; Samir et al., 2021). 
We obtained lemma vectors from both ChiSCor and BasiScript (introduced in Section 4.2) with Word2Vec as implemented in Gensim 4.1.2 (Rehurek and Sojka, 2010). For ChiSCor, the CBOW algorithm yielded the best result; for BasiScript this was Skip-gram. Vector quality was evaluated visually during training with reduced-dimensionality plots of a set of 35 common nouns, verbs, connectives, etc. that occur proportionally in both corpora. The end results are given in Figure 5. Here we see that overall, vectors from both corpora allow intuitive syntactic groupings (e.g. conjunctions 'but'/'because', and verbs 'to think'/'to know'), and semantic groupings (e.g. 'mommy'/'daddy', 'not'/'none'). To verify this quantitatively, we computed cosine similarities between the 595 possible pairs of the 35 lemmas plotted in Figure 5 with \(\cos(\mathbf{v},\mathbf{w})=\frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{v}\|\,\|\mathbf{w}\|}\), where \(\mathbf{v}\) and \(\mathbf{w}\) are two lemmas from one corpus, and correlated the similarities across the two corpora. We found a fair correlation (\(\rho(595)=.45\), \(p<.01\); Akoglu, 2018), which is salient: it shows that from ChiSCor as a relatively small corpus, rich lexico-semantic information can be learned as effectively as from BasiScript, which is 46 times larger. Lemma vectors also allow us to analyze how children use particular lemmas of interest. There is some nuance in the groupings in Figure 5: for ChiSCor, especially the verbs referring to cognitive states ('to think', 'to know', 'to wish', 'to want') and perceptual states ('to hear', 'to see') are more clearly grouped and positioned compared to BasiScript (where e.g. 'to wish', 'to see', and 'to want' have less obvious positions). Since these lemmas have about equal relative frequencies in both corpora, it is likely that for these verbs, the lemma _context_ is semantically more clear and coherent in ChiSCor compared to BasiScript. 
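The cosine measure underlying this comparison can be sketched directly (toy three-dimensional vectors for illustration; the actual analysis uses the trained 100-dimensional Word2Vec lemma vectors):

```python
import math

def cosine(v, w):
    """cos(v, w) = (v . w) / (||v|| * ||w||)"""
    dot = sum(a * b for a, b in zip(v, w))
    norm_v = math.sqrt(sum(a * a for a in v))
    norm_w = math.sqrt(sum(b * b for b in w))
    return dot / (norm_v * norm_w)

# Hypothetical lemma vectors (illustration only, not trained vectors).
v_think = [0.9, 0.1, 0.3]
v_know = [0.8, 0.2, 0.4]
v_but = [0.1, 0.9, 0.1]

print(cosine(v_think, v_know))  # near 1: semantically close lemmas
print(cosine(v_think, v_but))   # lower: less related lemmas
```

Computing this for all 595 pairs in each corpus and then correlating the two similarity profiles gives the \(\rho\) statistic reported above.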
On the other hand, conjunctions ('but', 'because', 'therefore') are more coherently grouped in BasiScript compared to ChiSCor (where 'therefore' has a less obvious position). Apparently, children use verbs referring to cognitive/perceptual states more coherently in ChiSCor, while conjunctions are more coherently used in BasiScript. In live storytelling, communicating clearly and coherently what was thought and/or perceived seems more critical than in written storytelling, as the audience cannot access earlier information as they could in a written story, and this information is critical for understanding and relating to narratives more generally (Zunshine, 2006). On the other hand, in written stories, children have more time to reflect on, and, if necessary, correct their use of conjunctions to link clauses, making the context more clear and coherent. This example shows that ChiSCor is usable with common NLP-tools to unravel children's language use in detail, even though it is relatively small.

Figure 5: t-SNE projections (van der Maaten and Hinton, 2008) of the latent Word2Vec space of 100-dimensional lemma vectors of ChiSCor (left) and BasiScript (right). Lemma positions should not be compared _between_ but _within_ plots, as the axes of the plots have no explicit interpretation.

Lemma vectors can also reveal bias in children's speech. A well-known gender bias in language is the women-home/man-work stereotype (Bolukbasi et al., 2016; Wevers, 2019), which in ChiSCor and BasiScript can be investigated with the gendered categories 'mommy' and 'daddy', and the attributes 'home' and 'to work'. As we see in Figure 5, 'mommy' and 'daddy' occupy similar positions, so initially we do not expect much difference in their cosine similarity with 'home' and 'to work'. A standard approach to verify this is to compute the difference in cosine similarity of an attribute with one category versus another, e.g. 'home' with 'mommy' vs. 'daddy'. 
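This difference-score check can be sketched as follows (again with hypothetical stand-in vectors; a sketch of the approach, not the paper's exact computation on trained embeddings):

```python
import math

def cosine(v, w):
    dot = sum(a * b for a, b in zip(v, w))
    return dot / (math.sqrt(sum(a * a for a in v)) *
                  math.sqrt(sum(b * b for b in w)))

def bias_score(attribute, category_a, category_b):
    """Difference in cosine similarity of an attribute with two categories;
    values near 0 mean neither category is more strongly associated."""
    return cosine(attribute, category_a) - cosine(attribute, category_b)

# Hypothetical lemma vectors standing in for trained Word2Vec vectors.
home, work = [0.7, 0.3], [0.3, 0.7]
mommy, daddy = [0.6, 0.4], [0.5, 0.5]

print(bias_score(home, mommy, daddy))  # small value: little association bias
```

A score near zero, as in this toy example, is the pattern reported for both corpora below.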
For ChiSCor, difference scores were small: for 'home' and 'mommy' vs. 'daddy' .031; for 'to work' and 'mommy' vs. 'daddy' .076. The difference scores were comparably small for BasiScript: .049 and .001 respectively. These small scores indicate that neither gender is much more strongly associated with one attribute than the other, suggesting little gender bias in the corpora, contra earlier work on bias in child language (e.g. Charlesworth et al., 2021). Still, future work should leverage ChiSCor and incorporate more gendered categories (e.g. 'she', 'he'), more attributes (e.g. 'baby', 'office'), average these vectors, and apply more advanced vector arithmetic to put this initially surprising result to the test. ## 5 Discussion Storytelling datasets are relatively scarce, which is a shortcoming in existing resources, given that live storytelling challenges children to leverage linguistic, cognitive, and social competences to tell a story that engages an audience. These competences can be analysed through stories, manually or with computational tools, to learn more about child development. We demonstrated that ChiSCor has properties that other established language samples also have, such as a Zipfian token distribution. Moreover, ChiSCor's close fit to the Zipfian curve testifies to the _social context_ of the language contained in it and the Principle of Least Effort that is likely at work there (Section 4.2). In addition, even though storytelling is a cognitively demanding task, we demonstrated that the stories in ChiSCor are syntactically surprisingly complex, and we offered a tentative explanation why especially younger children may employ complex syntax, which could be related to ChiSCor's context of live storytelling in front of an audience (Section 4.1). Lastly, we have shown that ChiSCor can be used to learn a semantic vector space that is as intuitive as the semantic space of a much larger reference corpus (Section 4.3).
This opens up possibilities for using ChiSCor with tools that are traditionally deemed fit only for much larger corpora, to assess the coherence of contexts in which children use particular words of interest. For example, we found that words detailing cognitive and perceptual states were more clearly differentiated in ChiSCor compared to BasiScript as a corpus of written child language. Such words concern information that is critical to understand a plot and that cannot be consulted again in live storytelling, possibly leading children to use these words more carefully and coherently. The social context of ChiSCor's narratives and its influence on language production invite us to reflect on a more general issue: the dominance of written (web) text in computational linguistics and NLP. Researchers increasingly question the practice of scraping together ever larger uncurated and undocumented resources (Bender et al., 2021; Paullada et al., 2021), that is, datasets without metadata, and it is subject to debate how helpful such large-scale written datasets are in e.g. understanding language acquisition and modelling cognition (e.g. Warstadt and Bowman, 2022; Mahowald et al., 2023). Indeed, spoken language is different from written language in many ways, as Linders and Louwerse (2023) note: it is mainly acquired naturally (unlike writing) and predates writing in both the evolutionary and developmental sense. Most critically, speech is typically situated in a social setting with other language users, evanescent, spontaneous, and grounded in a particular context, to mention just a few out of many defining characteristics. Still, with Large Language Models (LLMs) as the prime current example of the reliance on large written datasets, such datasets have helped disclose what is _in principle_ learnable from word co-occurrence statistics and a simple word prediction training objective, such as the capacity to represent language input hierarchically (Manning et al., 2020).
Although we should take LLMs seriously as the current best, yet data-hungry, distributional learners we have (Contreras Kallens et al., 2023; Van Dijk et al., 2023), the next challenge is to achieve the same performance with more ecologically valid, smaller datasets and smaller neural architectures; here, corpora like ChiSCor could be part of the solution. Since ChiSCor has information on the age groups of the children who produced the language, future work could, for example, partition ChiSCor to employ train and/or test sets that more realistically model children's language use at different stages of their development. And since ChiSCor covers language from the speech domain, it provides an interesting opportunity to explore training language models on language with a different nature. Still, we do not mean to claim that ChiSCor solves all issues regarding LLMs and training data, but we hope to contribute a dataset that can be part of the move towards better datasets for computational linguistics, a dataset that, in the words of Bender et al. (2021), 'is only as large as can be sufficiently documented'. Lastly, we would like to emphasize that since ChiSCor features high-quality audio besides text, it naturally opens directions for multi-modal research. For example, research on detecting characters' emotions will benefit from adding information on prosody. Also, research aimed at improving speech-to-text models will benefit from the voices of 442 unique children of different ages, and accompanying transcripts, that can be used for fine-tuning existing speech-to-text models. ## 6 Conclusion This paper introduced ChiSCor as a versatile resource for computational work on the intersection of child language and cognition. ChiSCor is a new corpus of Dutch fantasy stories told freely by children aged 4-12 years, containing high-quality language samples that reflect the social settings in which they were recorded in many details.
We provided three case studies as examples of how ChiSCor can fuel future work: studying language development with ChiSCor's out-of-the-box age metadata and linguistic features, modelling Zipf distributions with ChiSCor, and linking ChiSCor to common NLP-tools to study children's language in action. Besides verbatim and normalised texts, ChiSCor comes with 442 high-quality audio samples of 442 children, metadata on the backgrounds of 148 children, annotations of character complexity, and extracted linguistic features that will be useful for a variety of researchers. In addition to Dutch stories, ChiSCor comes with a small additional set of 62 English stories with the same additional metadata and annotations as for the Dutch stories. Four years have passed since we started compiling ChiSCor. We look back on many great moments with the children who were happy to share their fantasies and cleverly constructed plots with us. We encourage readers of this paper to have a look at the corpus--both for research purposes and for fun. ## Limitations Within the subset of our corpus that contains extra metadata (Section 3.2), older children and children from lower socioeconomic backgrounds are underrepresented. This may limit the generalizability of future work done with ChiSCor. This is partly due to a bias resulting from the way our metadata was obtained; the larger set of 619 stories is likely more balanced. A second limitation concerns character depth annotations: a large part of the character depth labels depends on one expert. A third limitation is that for BasiScript, a license has to be signed before one can use it. Thus, we cannot provide its lexicon or the corpus on OSF, which makes parts of our study less directly reproducible. ## Ethics statement In compiling this corpus, the researchers were frequently in touch with school principals, teachers, children and parents to find an appropriate way to collect, store and analyse the stories and metadata.
Our study was reviewed and approved by the Leiden University Science Ethics Committee (ref. 2021-18). Regarding model efficiency, the spaCy models used to extract linguistic information are pre-trained and easy to use, and extraction of lexical and syntactic information did not take more than a couple of minutes. Further, the Gensim models used to train word vectors are also lightweight, easy to use, and equally efficient in terms of training time. ## Acknowledgements This research was done in the context of Max van Duijn's research project 'A Telling Story' (with project number VI.Veni.191C.051), which is financed by the Dutch Research Council (NWO). Besides the children, their teachers, and caregivers, we thank Isabelle Blok, Yasemin Tunbul, Nikita Ham, Iris Jansen, Werner de Valk, and Lola Vandame for help with data collection. We further thank Ageliki Nicolopoulou, Arie Verhagen, and Tom Breedveld for feedback on our annotations and analyses. We thank Li Kloostra for extensive comments on the final version of this paper. Lastly, we thank three anonymous reviewers for their constructive feedback.
2304.00167
Towards "Anytime, Anywhere" Community Learning and Engagement around the Design of Public Sector AI
Data-driven algorithmic and AI systems are increasingly being deployed to automate or augment decision processes across a wide range of public service settings. Yet community members are often unaware of the presence, operation, and impacts of these systems on their lives. With the shift towards algorithmic decision-making in public services, technology developers increasingly assume the role of de-facto policymakers, and opportunities for democratic participation are foreclosed. In this position paper, we articulate an early vision around the design of ubiquitous infrastructure for public learning and engagement around civic AI technologies. Building on this vision, we provide a list of questions that we hope can prompt stimulating conversations among the HCI community.
Wesley Hanwen Deng, Motahhare Eslami, Kenneth Holstein
2023-03-31T23:12:56Z
http://arxiv.org/abs/2304.00167v2
# Towards "Anytime, Anywhere" Community Learning and Engagement around the Design of Public Sector AI ###### Abstract. Data-driven algorithmic and AI systems are increasingly being deployed to automate or augment decision processes across a wide range of public service settings. Yet community members are often unaware of the presence, operation, and impacts of these systems on their lives. With the shift towards algorithmic decision-making in public services, technology developers increasingly assume the role of de-facto policymakers, and opportunities for democratic participation are foreclosed. In this position paper, we articulate an early vision around the design of ubiquitous infrastructure for public learning and engagement around civic AI technologies. Building on this vision, we provide a list of questions that we hope can prompt stimulating conversations among the HCI community. Wesley Hanwen Deng, Motahhare Eslami, and Kenneth Holstein (2023). Towards "Anytime, Anywhere" Community Learning and Engagement around the Design of Public Sector AI 1 (April 2023), 4 pages. [https://doi.org/10.1145/nnnnnn.nnnnnn](https://doi.org/10.1145/nnnnnn.nnnnnn) ## 1. Background Data-driven algorithmic and AI systems are increasingly deployed to augment or automate public sector decision-making in high-stakes settings such as child welfare (Beng et al., 2020), recidivism prediction (Beng et al., 2020), and public health care (Hansen et al., 2020). Yet, community members are often unaware of the presence, operation, and impacts of these systems on their lives (Hansen et al., 2020). With this shift towards algorithmic decision-making, technology developers increasingly assume the role of de-facto policymakers (Beng et al., 2020). Community members whose lives are directly impacted by these technologies are typically excluded from decision-making around their design and development.
As a consequence, AI systems are often designed in ways that inadvertently amplify historical inequities, disproportionately harming the most marginalized members of our communities (Hansen et al., 2020; Deng et al., 2020; Lee et al., 2020). There is a great need to empower community members, especially those with lower technology literacy, to learn about and help to shape how AI technologies affect their communities (Beng et al., 2020; Lee et al., 2020; Lee et al., 2020; Lee et al., 2020). Recent HCI research has focused on enhancing AI literacy and engagement around the design and oversight of civic AI technologies. For example, Long et al. co-designed a series of exhibits with community members aiming to enable informal learning experiences around AI technologies in public spaces like museums (Lee et al., 2020; Lee et al., 2020). Lee et al. designed the "WeBuildAI" framework with the goal of enabling community members and relevant stakeholders with low AI literacy to "build AI systems that represented their own beliefs" (Hernandez et al., 2017). More recently, Alfrink et al. designed and implemented a "contestable AI" framework as an initial step towards urban infrastructure that allows community members to contest the design and use of camera cars (Alfranco et al., 2016; Alfranco et al., 2016). However, **we lack methods that can support sustained and continuous community learning and civic engagement** around public sector AI systems (cf. (Alfranco et al., 2016; Alfranco et al., 2017; Alfranco et al., 2018)). In addition, existing approaches often **fail to reach the most marginalized community members**, falling prey to broader challenges in fostering civic engagement (Alfranco et al., 2017; Alfranco et al., 2018; Alfranco et al., 2018). ## 2.
Research Vision and Open Questions We envision a future in which diverse community voices are empowered to shape decisions around the design, development and oversight of public sector AI technologies that impact their communities. To this end, we invite the HCI community to collectively explore ways to support "anytime, anywhere" learning and engagement around the design of public sector AI technologies (Figure 1). For example, how might public and community spaces (e.g., bus stops, parks, public libraries) be reimagined as sites for learning and civic engagement around the design of public sector technologies? Holding public conversations and events in accessible public places can help ensure that a wide range of people have the opportunity to participate and share their perspectives (Alfranco et al., 2018). It is particularly important to engage with those who may be disproportionately impacted by AI, including communities that have historically been underrepresented or marginalized in the technology design. How can we design to empower diverse community members, spanning a broad range of backgrounds and relevant literacies, to articulate their desires for new technologies that can better address their actual needs, as well as their concerns about technologies currently in use? How can we equip children and young adults with the informal learning opportunities and skills necessary for advocacy around public sector AI systems, including the ability to propose better alternatives to existing systems? As AI practitioners continue to recognize the importance of engaging users in the design and development of their AI systems (Alfranco et al., 2018; Alfranco et al., 2018), how might we develop tools and guidelines to facilitate meaningful collaboration between AI practitioners and community members? 
By exploring such questions as a community, we hope to contribute towards a future in which government decision-makers, researchers, and technology developers recognize that, when properly empowered to do so, community members can be truly valuable collaborators in design and decision-making around public sector AI technologies. In our envisioned future, the excuse that community members are "not technical enough" to meaningfully engage around AI system design would be used less and less often as a justification against community involvement around impactful policy decisions that are disguised as purely "technical" decisions. We believe that achieving this vision requires innovation on local infrastructure for community learning and engagement, and that HCI researchers will have a critical role to play. ## 3. Discussion In the workshop, we hope to open up conversations around how we might reimagine public spaces as environments for lifelong learning and civic engagement, and prepare community members to critically and constructively engage in a world where invisible, imperfect algorithms increasingly shape major aspects of their lives. In particular, we hope to spark discussion around the following three questions: * How might we empower **diverse** community members to engage as learners, co-designers, and overseers of AI technologies that are intended to benefit their communities? * How can we provide opportunities for community members to engage at **a range of levels**, offering a "low floor" of brief, informal learning and design engagements, and a "high ceiling" of opportunities for longer-term civic engagement? * How might we enable civic learning and engagement as **mutual, bi-directional** processes in which community members and public sector decision-makers and technology designers continuously communicate with and learn from each other?
2309.08048
Padding Aware Neurons
Convolutional layers are a fundamental component of most image-related models. These layers often implement by default a static padding policy (e.g. zero padding), to control the scale of the internal representations, and to allow kernel activations centered on the border regions. In this work we identify Padding Aware Neurons (PANs), a type of filter that is found in most (if not all) convolutional models trained with static padding. PANs focus on the characterization and recognition of input border location, introducing a spatial inductive bias into the model (e.g., how close to the input's border a pattern typically is). We propose a method to identify PANs through their activations, and explore their presence in several popular pre-trained models, finding PANs on all models explored, from dozens to hundreds. We discuss and illustrate different types of PANs, their kernels and behaviour. To understand their relevance, we test their impact on model performance, and find padding and PANs to induce strong and characteristic biases in the data. Finally, we discuss whether or not PANs are desirable, as well as the potential side effects of their presence in the context of model performance, generalisation, efficiency and safety.
Dario Garcia-Gasulla, Victor Gimenez-Abalos, Pablo Martin-Torres
2023-09-14T22:20:57Z
http://arxiv.org/abs/2309.08048v1
# Padding Aware Neurons ###### Abstract Convolutional layers are a fundamental component of most image-related models. These layers often implement by default a static padding policy (_e.g._, zero padding), to control the scale of the internal representations, and to allow kernel activations centered on the border regions. In this work we identify Padding Aware Neurons (PANs), a type of filter that is found in most (if not all) convolutional models trained with static padding. PANs focus on the characterization and recognition of input border location, introducing a spatial inductive bias into the model (_e.g._, how close to the input's border a pattern typically is). We propose a method to identify PANs through their activations, and explore their presence in several popular pre-trained models, finding PANs on all models explored, from dozens to hundreds. We discuss and illustrate different types of PANs, their kernels and behaviour. To understand their relevance, we test their impact on model performance, and find padding and PANs to induce strong and characteristic biases in the data. Finally, we discuss whether or not PANs are desirable, as well as the potential side effects of their presence in the context of model performance, generalisation, efficiency and safety. ## 1 Introduction Convolution has passed the test of time. Older than its competitors [7], convolutional neurons have been successfully integrated with memory-based models (_e.g._, LSTM [13], GRU [27]), attention-based architectures [25] and generative tasks [19]. However, convolution has an undesired side-effect: the implicit reduction of internal representations [1] caused by the impossibility of applying the convolved filter on border locations. To avoid this reduction, the most frequently used technique is _padding_, adding synthetic data around the border of the input, so that kernels can activate there, and produce an output for every input. 
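The size-preserving role of padding described above can be illustrated with a naive pure-numpy convolution; this is a minimal sketch (the paper itself uses PyTorch), where the averaging kernel and 6x6 input are arbitrary toy choices.

```python
import numpy as np

def conv2d_valid(x, k):
    # Naive 'valid' cross-correlation: the output shrinks by
    # (kh - 1, kw - 1) because the kernel cannot be centred on borders.
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 input
k = np.ones((3, 3)) / 9.0                     # 3x3 averaging kernel

shrunk = conv2d_valid(x, k)               # 4x4: representation shrinks
padded = np.pad(x, 1, mode="constant")    # static zero padding of width 1
same = conv2d_valid(padded, k)            # 6x6: spatial size preserved
```

The interior of the padded output matches the unpadded one; only the border rows and columns see the synthetic zeros, which is exactly the signal PANs latch onto.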
The most popular padding type is, by far and wide, zero-padding (adding zeros to the input border). That is, a static padding, the same for every sample and location. Previous works noticed this constant signal adds a bias that reduces generalisation [1, 2, 14, 17], and several dynamic padding methods have been proposed to prevent it [12, 17, 23, 27], with very limited adoption 1. The reason for this popularity is simple: models obtain better top-of-the-line metrics with static padding, when trained and tested on data from the same source. So far, the padding bias has been excused. Footnote 1: [https://pytorch.org/vision/stable/models.html](https://pytorch.org/vision/stable/models.html) In this work we dig deeper into how padding influences models. To do so, we provide evidence on how much model complexity is dedicated to the data edge bias (between 1% and 3%), and the magnitude of this shortcut in the model's outcome. This is characterized by the presence of _padding aware neurons_ (PANs), a symptom of padding bias. Our work shows how PANs are likely present in the vast majority of models trained with static padding, and proposes a diagnosis methodology which allows us to locate them through their activation patterns. ## 2 Setting This work has been implemented using PyTorch 1.12.0 [18], torchvision 0.13.0 [16], numpy 1.23.1 [9] and scipy 1.8.1 [22], the latter for Kolmogorov-Smirnov statistics. All models are provided pre-trained by PyTorch. These are: * ResNet-50 [10], trained on ILSVRC2012 [20], named _ResNet50_Weights.IMAGENET1K_V2_ in torchvision. * MobileNetV3 [11], trained on ILSVRC2012, named _MobileNet_V3_Large_Weights.IMAGENET1K_V2_ in torchvision. * GoogLeNet [21], trained on ILSVRC2012, named _GoogLeNet_Weights.IMAGENET1K_V1_ in torchvision. For each of these models, we analyse all convolutional layers with kernels bigger than 1x1. Notice these pre-trained models are frequently used as a source for fine-tuning other models.
We use a random batch from Caltech101 [6] in §3, for generating activations. In §4 we use the validation split of ILSVRC2012 for assessing bias. The code necessary to reproduce the experiments of this work can be found in [https://gitlab.com/paper14/padding-aware-neurons](https://gitlab.com/paper14/padding-aware-neurons). ## 3 Definition & Analysis _Padding aware neurons_, or PANs for short, are convolutional filters that learn to recognise the padding added to the input by some layers (_e.g._, a convolutional layer). PANs pass information on border location through the network, introducing a spatial bias into the model which may or may not be desirable, depending on the domain of application [2]. Padding is often implemented as a vertical or horizontal edge (_e.g._, zero padding), which makes PANs a type of edge detector. Edge detectors are fundamental vision kernels. The most popular ones include Prewitt, Sobel and the Laplacian of Gaussian (shown in Figure 2). These kernels look for value contrasts anywhere in the input [15, 24], but are maximised when the value contrast is centred on the kernel (_e.g._, the centre square of a 3x3). This is visible in the symmetry exhibited by the filters of Figure 2. On the edges defined by padding, which are never centred on the kernel, edge detectors still activate moderately. In contrast to a regular edge detector, a PAN would maximize its output when the edge is located at the border of the filter, in order to discriminate the padding edges from other edges in the input. An example of one such kernel is shown in Figure 1. We hypothesise the existence of two types of PANs: nascent and downstream. Nascent PANs react when directly exposed to a padding area of the inputs of their layer, while downstream PANs react to the presence of padding as conveyed by PANs in previous layers (_i.e._, they do not directly perceive padded values).
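The contrast between a symmetric edge detector and a PAN-like filter can be sketched numerically. The `pan_like` kernel below is a hypothetical construction for illustration (it is NOT the kernel from the paper's Figure 1): it weights the left column negatively, so it fires maximally when that column is zero padding, whereas Sobel cannot tell a padding edge from an off-centre data edge.

```python
import numpy as np

# Classic symmetric edge detectors (vertical-edge variants, cf. Figure 2).
sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

# Hypothetical PAN-like kernel: strong negative weight on the left column.
pan_like = np.array([[-2, 1, 1]] * 3, dtype=float)

def response(patch, kernel):
    # Activation of a 3x3 kernel centred on a 3x3 patch.
    return float(np.sum(patch * kernel))

flat = np.ones((3, 3))                           # interior, no edge
data_edge = np.array([[0, 0, 1]] * 3, float)     # data edge inside the image
pad_edge = np.array([[0, 1, 1]] * 3, float)      # zero padding in left column

# Sobel responds identically to both edges (its middle column has weight 0)...
sobel_pad, sobel_data = response(pad_edge, sobel), response(data_edge, sobel)
# ...while the PAN-like kernel responds most strongly to the padding edge.
pan_pad, pan_data = response(pad_edge, pan_like), response(data_edge, pan_like)
```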
In this work we focus on nascent PANs, which may have a configuration analogous to the kernel shown in Figure 1. Beyond these toy examples, we consider any neuron that activates distinctively - be it strongly or weakly - on padded areas as a PAN. Notice a PAN can react to one _or more_ borders of the input. These include the top row (T), bottom row (B), left-most column (L) and right-most column (R), but also any combination of these (_i.e._, T, B, L, R, TB, TL, TR, BL, BR, LR, TBL, TBR, BLR and TBLR) in their non-overlapping definition (_e.g._, T \(\cap\) TB = \(\emptyset\)). ### Finding Edge Detectors Considering the complexities of characterising PANs through their high dimensional kernels [3, 8], we decide to use their activations instead. Next, we propose a method to identify nascent PANs by looking at the activations they produce on a padded input sampling. To be precise, we consider four padding regions of the input (\(top\) and \(bottom\) rows, \(left\) and \(right\) columns, all with corner overlap) of size one pixel on the short axis2, and the remaining of the input (\(centre\), with no overlap). We record the activations a given neuron produces on those five regions while processing a batch of in-distribution data. Footnote 2: Only the first/last row/column of the input guarantees the receptive field of the kernel covers the entire padded area, regardless of kernel size. From these activations, we obtain five empirical probability density functions (PDF) per neuron (\(A_{top}\), \(A_{bottom}\), \(A_{left}\), \(A_{right}\), \(A_{centre}\)). By comparing every border PDF against \(A_{centre}\) we obtain four Kolmogorov-Smirnov tests (KS), which measure how distinct padding activations are for a given neuron. At this point it is important to notice the sample size difference between border and centre activations. \(A_{top}\), \(A_{bottom}\), \(A_{left}\), \(A_{right}\) all include the same number of values, \(N\).
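The five-region split described above can be sketched as follows; this is a minimal single-map version (no batch dimension), where the 7x7 toy activation map stands in for a real feature map.

```python
import numpy as np

def border_centre_regions(act):
    # Split an N x N activation map into the four one-pixel border
    # regions (with corner overlap, so N values each) and the centre
    # region, which holds the remaining (N - 2)^2 values.
    top, bottom = act[0, :], act[-1, :]
    left, right = act[:, 0], act[:, -1]
    centre = act[1:-1, 1:-1].ravel()
    return top, bottom, left, right, centre

act = np.arange(49, dtype=float).reshape(7, 7)  # toy activation map, N = 7
top, bottom, left, right, centre = border_centre_regions(act)
```

Collecting these five arrays over a batch of inputs yields the empirical PDFs \(A_{top}\), \(A_{bottom}\), \(A_{left}\), \(A_{right}\), \(A_{centre}\) used by the method.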
\(A_{centre}\) on the other hand includes \((N-2)^{2}\) activations, which grow quadratically _w.r.t._ \(N\) assuming a stride of one. There is another difference between border and central activations. While border regions are entirely composed of edge data (the one defined by padding), central areas are only partly so. While \(A_{top}\), \(A_{bottom}\), \(A_{left}\) and \(A_{right}\) contain only edge activations, \(A_{centre}\) contains a majority of non-edge activations and a few data-driven edge activations. This skews the centre PDF _w.r.t._ the border ones, and turns the KS statistic into a measure of how distinctive edge activations are. A sort of _padding-like edge detector_. Notice this method cannot find edge detectors which are not straight vertical or horizontal. Figure 2: Traditional edge detector filters. Prewitt (1st col.), Sobel (2nd col.) and Laplacian of Gaussian (3rd col.). Figure 3: \(A_{top}\), \(A_{bottom}\), \(A_{left}\), \(A_{right}\) and \(A_{centre}\) PDFs for two convolutional neurons of the ResNet50. Legend shows KS value of centre against every border region. Top plot: Neuron 51 from layer _conv_1_, an edge detector. Bottom plot: Neuron 101 from layer conv2_2, a regular neuron. Figure 3 shows an example of border and centre PDFs for two neurons, together with the corresponding \(KS\) values while using the two-sided KS, where the null hypothesis is that the two distributions are identical. Computing the KS values for all neurons in a model shows the overall activation divergence between centre and border locations.
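The border-vs-centre comparison can be sketched as below. The paper uses scipy's two-sample KS test; here the statistic (the maximum absolute difference between the two empirical CDFs) is implemented directly in numpy as a stand-in, and the activation samples are synthetic Gaussians, not real neuron activations.

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    # difference between the two empirical CDFs, evaluated at every
    # jump point of either sample.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(1)
centre = rng.normal(0.0, 1.0, size=2500)       # stand-in centre activations
border_same = rng.normal(0.0, 1.0, size=100)   # ordinary neuron: same law
border_pan = rng.normal(-6.0, 1.0, size=100)   # PAN-like: shifted border law

ks_ordinary = ks_statistic(border_same, centre)  # low: no discrimination
ks_pan = ks_statistic(border_pan, centre)        # near 1: border stands out
```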
Notice each neuron contributes with 4 values to each plot of Figure 4 (\(KS(A_{top},A_{centre})\), \(KS(A_{bottom},A_{centre})\), \(KS(A_{left},A_{centre})\) and \(KS(A_{right},A_{centre})\)), which causes more KS values to be close to zero (_e.g._, a vertical edge detector will most often generate low KS values for the top and bottom PDFs). Overall, results that indicate potential edge detector and PAN neurons (those with high KS values) are a minority found in most layers, regardless of depth. ### Finding PANs A KS test between the complete \(A_{centre}\) and a border PDF cannot properly discriminate between PANs and the rest of edge detectors, as the presence of non-edge activations in \(A_{centre}\) dominates its PDF. To discriminate PANs from regular edge detectors using the KS test, we need a distribution of \(A_{centre}\) PDF which is comparable to border PDFs, that is, one which contains only edge activations. To that end, we define a simple hypothesis: the centre region of an input (of size \((N-1)^{2}\)) will include _at least_ as many edges as a padded border (of size \(N\)). Notice this hypothesis, as well as the PDF reliability, grows weaker with the reduced input sizes typical of deeper layers. Leveraging this hypothesis we define an heuristic: we truncate \(A_{centre}\) by keeping only the \(k\) highest (\(A_{centre}^{+}\)) and \(k\) lowest (\(A_{centre}^{-}\)) values of \(A_{centre}\), where \(k\) is the number of values in a padded border. We keep both the highest and lowest, since a PAN may detect padding by activating particularly strongly or weakly on it. For \(A_{centre}^{+}\) we use the KS-test with the _less_ hypothesis (\(KS^{+}\)), _i.e._, : \(A_{centre}^{+}\) distribution is less than that of a margin (top, down, left or right), and for \(A_{centre}^{-}\), we use the _greater_ hypothesis (\(KS^{-}\)), _i.e._, as before but comparing with \(A_{centre}^{-}\) instead. The effect of using the truncated _centre_ PDF, is shown in Figure 5. 
The plot shows a neuron with negative activations for the top border, with the rest of activations being closer to zero. The computed \(KS(A_{top},A_{centre})\) is 0.53. These results indicate this neuron is a vertical edge detector. However, when compared with the truncated \(A_{centre}^{-}\), the same \(A_{top}\) is no longer distinctive (\(KS^{-}(A_{top},A_{centre}^{-})=0.0\)), which indicates this neuron is not a PAN. Figure 4: Stacked distribution of KS distances for the first and last four convolutional 3x3 layers of the ResNet50. Notice each neuron contributes with four values to each plot, \(KS(A_{top},A_{centre})\), \(KS(A_{bottom},A_{centre})\), \(KS(A_{left},A_{centre})\) and \(KS(A_{right},A_{centre})\). Figure 5: Histogram of neuron activations on the border regions, the center (purple) and the center truncated on the minus side (brown). Legend shows two Kolmogorov-Smirnov tests. \(KS\) corresponds to border vs center. \(KS^{-}\) corresponds to border vs truncated center. Model: ResNet50. Layer: Conv3_2. Neuron idx: 46. Given these insights, we label as PANs neurons which hold (1) a high \(KS(A_{top|bottom|left|right},A_{centre})\) and (2) a high \(KS^{+}(A_{top|bottom|left|right},A_{centre}^{+})\) or a high \(KS^{-}(A_{top|bottom|left|right},A_{centre}^{-})\). We set a threshold \(\theta=0.5\) in the rest of the paper. \(\theta\) can be modified to reduce or increase the requirements needed for PAN detection. The distribution of PANs identified using this methodology with \(\theta=0.5\) is shown in Table 1. On the models considered and with \(\theta=0.5\), PANs represent roughly 2% of all convolutional filters, and can be found at different depths. This may be caused by the explicit information about the presence of padding being lost or integrated (thus mixing with other activations) into other neurons after going through several layers.
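The two-condition labelling rule with threshold \(\theta=0.5\) can be sketched as a simple predicate; the numeric inputs below are illustrative KS values, not measurements from the paper.

```python
def is_pan(ks, ks_plus, ks_minus, theta=0.5):
    # Flag a neuron as a PAN for a given border when (1) the plain KS
    # against the full centre is high AND (2) at least one one-sided KS
    # against the truncated centre (KS+ or KS-) is also high.
    return ks > theta and (ks_plus > theta or ks_minus > theta)

# The edge detector discussed above: KS = 0.53 but KS- = 0.0, so not a PAN.
edge_detector = is_pan(0.53, 0.0, 0.0)
# A neuron whose border activations stand out even against the truncated
# centre would be flagged.
strong_pan = is_pan(0.9, 0.0, 0.8)
```

Lowering \(\theta\) (e.g. to 0.4) relaxes both conditions at once, which is how the paper roughly doubles the number of detected PANs.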
The disappearance of explicit padding information, however, does not preclude the information being used by the model; it may instead motivate the model to periodically re-locate explicit padding so that the next few layers can more easily use that information. Later layers seem to include a remarkable number of PANs, likely influenced by the large number of neurons found there. This could be influenced by the reduced reliability of the KS method when applied on inputs with small width and height, but it could also indicate padding location plays an important role in the final prediction. Overall, applying the methodology to _thousands_ of filters yields _hundreds_ of edge detectors and _dozens_ of PANs per model. By slightly weakening the restrictions required to be labelled as a PAN, their number can easily be doubled (_e.g._, ResNet includes 193 PANs when using \(\theta=0.4\)).

### PAN exploration

Let us analyse neurons identified as PANs by the previously proposed method. For each neuron we look at its histogram of activations for the centre (complete and truncated PDFs) and border regions. We also show these same plots when inference is made replacing the zero padding policy by a reflect padding policy. Finally, we show activation maps for a couple of samples to understand the neuron's spatial response. The top plot of Figure 6 shows a PAN with distinctively low activation values on all four borders, even when compared against the lowest values produced within the larger central area (_i.e._, \(A_{centre}^{+}\), in pink). With \(\theta=0.5\), the PAN is detected as TBLR. An inspection of the activations produced by the kernel on two inputs (bottom plot of Figure 6) shows how this PAN has a preference for the bottom and top padding, which is consistent with \(KS^{+}(A_{left|right},A_{centre}^{+})<KS^{+}(A_{top|bottom},A_{centre}^{+})\) (as shown in the top plot). Notice \(A_{left}\) and \(A_{right}\) have a bimodal distribution, peaking both at -10 and at -4.
This is caused by particularly strong activations on corner positions, which are high even within \(A_{top}\) and \(A_{bottom}\). This neuron, beyond being padding aware, is also corner aware, a behaviour found in other neurons (_e.g._, conv1_0, 17; conv3_1, 212; conv4_1, 296; conv4_2, 447). When the padding is changed from _zero_ to _reflect_, as shown in the middle plot of Figure 6, the neuron no longer detects padding: the distributions of activation values for border regions become indistinguishable from the distribution in the centre. Another representative neuron is shown in Figure 7. In this case the PAN activates distinctively high on the left and right padding. Since \(A_{left}\) is significantly higher than \(A_{right}\), this may be primarily a LPAN that also detects the right border by complement. This is in fact a behaviour compatible with the kernel shown at the centre of Figure 1. For the top and bottom padding locations, this neuron's activations are indistinguishable from those on central locations. The long tail of the top and bottom distributions speaks of potential corner-detection capabilities. All this is illustrated by the bottom plot of Figure 7, which shows activations on two inputs. Notice some edges are detected in centre locations, but not as strongly as on the left and right padding. The middle plot of Figure 7 shows the same activations when zero padding is replaced by reflect padding. When this is the case, the neuron no longer detects padding, with \(A_{left}\) and \(A_{right}\) becoming aligned with the rest of the distributions. The last neuron discussed here is the _downstream_ PAN of Figure 8.

\begin{table} \begin{tabular}{l l l l l l l l l l l l l l l l l l l l l l} _Model\(\backslash\)Depth_ & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & _10_ & _11_ & _12_ & _13_ & _14_ & _15_ & _16_ & _17_ & _18_ & _19_ & _20_ & All \\ \hline ResNet & 0 & 4 & 3 & 0 & 0 & **10** & 1 & 3 & 0 & **16** & 1 & 1 & 1 & 1 & 2 & **30** & 8 & - & - & - & 81 \\ & 0\% & **6\%** & 4\% & 0\% & 0\% & **7\%** & 0\% & 2\% & 0\% & **6\%** & 0\% & 0\% & 0\% & 0\% & 0\% & 5\% & 1\% & - & - & - & 2.0\% \\ \hline MobileNet & 0 & 5 & 1 & 5 & 0 & 3 & 2 & 1 & 5 & 6 & 3 & **11** & 6 & 9 & **46** & **35** & - & - & - & - & 138 \\ & 0\% & **31\%** & 1\% & **6\%** & 0\% & 2\% & 1\% & 0\% & 2\% & 3\% & 1\% & 2\% & 0\% & 1\% & **4\%** & 3\% & - & - & - & - & 2.7\% \\ \hline GoogLeNet & 0 & **8** & 2 & 1 & 0 & 0 & 0 & **8** & 1 & 7 & 0 & 2 & 1 & 1 & 0 & **12** & **10** & 5 & 0 & 0 & 58 \\ & 0\% & 4\% & 1\% & 3\% & 0\% & 0\% & 0\% & **16\%** & 0\% & **10\%** & 0\% & 3\% & 0\% & 1\% & 0\% & **9\%** & 3\% & 3\% & 0\% & 1.7\% \\ \end{tabular} \end{table}

Table 1: Number of PANs found in different models, layer-wise. First row is the absolute number of PANs, second row is the percentage of PANs relative to layer size (rounded down). In bold, top three values per model on either category. Only 2D convolutional layers with kernels 3x3 or larger are considered. Computed using \(\theta=0.5\).

Figure 6: Top plot: Activation histogram of a PAN, where all four borders have high KS. Includes distributions for border regions, and central locations (complete and truncated). Legend shows KS confidence _w.r.t._ the truncated distribution (_i.e._, \(KS^{+}(A_{border},A_{centre}^{+})\)). Middle plot: Same as top, using reflect padding. Bottom plots: Activation heatmap on an input with zero and reflect padding. Model: ResNet50. Layer: Conv3_1. Neuron idx: 41.

Figure 7: Top plot: Activation histogram of a PAN, where the left and right borders have high KS. Includes distributions for border regions, and central locations (complete and truncated). Legend shows KS confidence _w.r.t._ the truncated distribution (_i.e._, \(KS^{+}(A_{border},A_{centre}^{+})\)). Middle plot: Same as top, using reflect padding. Bottom plots: Activation heatmap on an input with zero and reflect padding. Model: ResNet50. Layer: Conv2_1. Neuron idx: 67.
Following the proposed methodology, this neuron is detected as a potential edge detector (\(KS(A_{top},A_{centre})=0.66\)), but not as a PAN (\(KS^{+}(A_{top},A_{centre}^{+})=0.0\)) (see top plot). Its spatial activations on two different inputs (bottom plots of Figure 8) indicate this is no regular edge detector. It activates distinctively on the _second_ highest row of the input, as if it were detecting the top padding from afar. This explains the bimodal behaviour of this neuron in the top plot, where the truncated \(A_{centre}^{+}\) distribution (which includes most of the second row) peaks both at around two (activations of the second highest row) and around zero (activations on the rest of the centre). Since the kernel of this neuron is 3x3, it cannot directly detect the padding from this location (_i.e._, for the second-highest-row activations, the kernel is located entirely on the unpadded input). This neuron gets the information about image border location from a previous layer, and turns off (see middle plot of Figure 8) when static padding is removed.

Figure 8: Top plot: Activation histogram of a neuron which is an edge detector candidate and a potential downstream PAN, not detected as a PAN. Includes distributions for border regions, and central locations (complete and truncated). Legend shows KS confidence _w.r.t._ the truncated distribution (_i.e._, \(KS^{+}(A_{border},A_{centre}^{+})\)). Notice the truncated centre distribution on the high side (pink) is bimodal, with one peak around _zero_ and one around _two_. Middle plot: Same as top, using reflect padding instead of zero; the bimodal distribution disappears. Bottom plots: Activation heatmap on an input with zero and reflect padding. Model: ResNet50. Layer: Conv3_2. Neuron idx: 158.

### Nascent PAN types

Through the analysis defined in the previous sections we have characterised and identified several types of nascent PANs, those that directly detect padding in the input. Nascent PANs frequently have a multi-modal behaviour, detecting two or more padding edges. This multi-border detection can be generic (_i.e._, several borders detected indistinguishably), or it can be distinct for different border types. The neuron shown in Figure 6, for example, can discriminate between horizontal borders (top and bottom), vertical borders (left and right) and the rest of the input. But it cannot discriminate among horizontal borders (between top and bottom padding), or among vertical ones (left and right padding). On the other hand, the neuron shown in Figure 7 can discriminate between left and right padding. This latter behaviour is a consequence of the asymmetrical kernels PANs may have, exemplified in the kernels of Figure 1. We identify 14 possible types of nascent PANs based on which padding borders they detect (_i.e._, T, B, L, R, TB, TL, TR, BL, BR, LR, TBL, TBR, BLR and TBLR). We study the distribution of nascent PAN types with the proposed method in Table 2. Single-border detectors (_i.e._, T, B, L, R) are the most frequent types, representing about 75% of all identified PANs. The rest are mostly PANs which can detect complementary borders (_i.e._, TB, LR), or all four borders (_i.e._, TBLR). PANs detecting complementary borders are likely to be mirrored variations of the kernel shown in the middle of Figure 1, while the four-border PANs may be asymmetrical versions of the bottom Laplacian of Gaussian filter shown in Figure 2.

## 4 Performance and Bias

Once we have established the existence and pervasiveness of PANs in models trained with zero padding, let us now assess the role these neurons play in model behaviour. To do so, we study their influence on the network output using four versions of the same pre-trained ResNet50, without fine-tuning:

* The _original_ model, using the default zero-padding.
* The _reflect_ model, where the padding of all convolutional neurons has been changed to PyTorch's reflect.
* The _PAN-reflect_ model, where the padding of the neurons identified as PANs by the previous methodology (for ResNet50, 2.0% of convolutional neurons, 81 overall) has been changed to reflect. The rest of the neurons preserve zero-padding.
* The _RAND-reflect_ model, where the padding of randomly sampled non-PANs has been changed to reflect and the rest preserve zero-padding. The random subset has the same size (2.0% of neurons) and follows the same layer distribution as _PAN-reflect_. This is the control set.

We use the quantitative differences in the outputs of these models to study the impact padding has towards specific classes (_i.e._, the amount of padding bias). Then, we study the influence of PANs in the context of particular data samples.

### Bias influence

To verify to what extent PANs add relative location bias to the model, we compare the soft-max outputs of _original_ with those of _PAN-reflect_. To be precise, we compute the odds of the prediction probability for each class. Assuming samples to be i.i.d., this can be computed as the quotient of the sums of soft-max outputs over all images \(i\) in the dataset:

\[Odds(c)=\frac{P(c|M_{PAN\text{-}reflect})}{P(c|M_{original})}=\frac{\sum_{i}M_{PAN\text{-}reflect}(i)[c]}{\sum_{i}M_{original}(i)[c]}\]

and analogously for _RAND-reflect_. For _PAN-reflect_, odds above \(1\) for a class \(c\) indicate a higher confidence in the prediction of \(c\) in the absence of PANs. This can also be interpreted as padding being used as evidence against that class. Conversely, values below \(1\) imply padding is being used as evidence toward the class. Figure 9 presents the logarithm of the odds per class, computed on the ILSVRC validation set for both _PAN-reflect_ and _RAND-reflect_. All classes are affected, a few severely so. Table 3 lists all classes whose odds change by more than 7%.
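The per-class odds, and the symmetric 7% criterion used for Table 3, can be computed directly from stored soft-max outputs, as in this sketch (the array layout and function names are our assumptions; each row of the inputs is one image's soft-max vector):

```python
import numpy as np

def class_odds(softmax_variant, softmax_original):
    """Odds(c) per class: sum of soft-max outputs of the padding-modified
    model divided by that of the original model, over the same images."""
    sv = np.asarray(softmax_variant, float)
    so = np.asarray(softmax_original, float)
    return sv.sum(axis=0) / so.sum(axis=0)

def log_odds(softmax_variant, softmax_original):
    # Values above 0 mean padding was used as evidence against the class.
    return np.log(class_odds(softmax_variant, softmax_original))

def classes_beyond(odds, threshold=1.07):
    """Indices of classes whose odds change by more than the threshold,
    checking 1/odds for odds below one (symmetric criterion of Table 3)."""
    return [c for c, o in enumerate(odds)
            if o > threshold or (o < 1 and 1.0 / o > threshold)]
```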
We choose a threshold instead of the top-K to illustrate how the odds change in an asymmetrical manner: there are more classes which use padding as evidence toward the class (odds \(<1\)) than those that use it against. Remarkably, classes for which padding is used as evidence against it seem to be mostly fine-grained types (mainly animal species and dogs, with the exception of _sliding door_), which hints at the relevance of padding for overfitting. Conversely, there are no animals among the classes that use padding as positive evidence. Using a 5% threshold yields consistent results: out of the 111 classes with negative log odds, the only animal is the _English Foxhound_, whereas for the 99 classes with positive log odds, there are only five classes which are not fine-grained animals. To verify whether these findings are related to the relevance of padding or to the noise added by the data distribution, let us consider the results obtained using _RAND-reflect_ (orange in Figures 9 and 10). In this case, the distribution of PANs' odds is characteristically different from that of random, similarly-sampled neuron sets. While PANs seem to affect most classes to a large degree, either positively or negatively, the random set's effect on classes is very limited. Only a few classes are affected, with the most common result being no output change. These results indicate PANs strongly and homogeneously alter most classes' prior, whereas an equally sized random subset of neurons does not. Repeating this experiment with the _reflect_ model changes the input distribution of 100% of convolutional layers, whereas the previous two experiments (with _PAN-reflect_ and _RAND-reflect_) changed only 2% of neurons. As a result, the _reflect_ odds suffer more extreme changes than either of the above. No tendency regarding which classes receive positive and which negative log odds was found.
In this particular experiment, we believe the larger odds variance has to do with noise added to the distributions, rather than with some intrinsic quality of how padding is used.

### Sample influence

The previous section shows a clear influence of padding on the overall performance and behaviour of the model. However, the class scale at which the analysis is made means that the effect of PANs on single predictions is aggregated into the mean for each class. To analyse this facet, we look for the individual samples with the largest change in the network's output. We compute this change as the Manhattan distance between the logits of the _original_ and the _PAN-reflect_ model. The number of prediction disagreements between the models is remarkably low (67 images). The limited impact PANs have on samples which are not part of the model training set could also be the result of padding information being used for overfitting particularly hard training samples. When repeating the experiment on _RAND-reflect_, these effects disappear. The sample with the 5000th highest divergence with _PAN-reflect_ has around 4 distance units, whereas for _RAND-reflect_ this distance already occurs at the 13th sample. This alone shows _PAN-reflect_ affects orders of magnitude more samples, and more strongly, than _RAND-reflect_. Of those 13 samples, 12 of them are incorrectly predicted as _tench_, which indicates the preference of these randomly chosen 2% of neurons for this class.

## 5 Discussion

The use of static padding in convolutional layers provides the model with a stable signal of a perceptual edge. That much was known from previous works [17, 1, 2, 14]. This paper reveals the extent of this inductive spatial bias, identifying a set of neurons specialized in locating and exploiting it (what we call PANs), which account for at least 1.5%-3% of all convolutional filters in deep CNNs.
Considering PANs are likely to be inheritable (as long as the fine-tuned model keeps zero padding) and the fact that PANs were found on popular pre-training sources, one can assume PANs are a widespread phenomenon. Experiments indicate padding information is used to change the prior of most classes. PANs seem to be used as evidence against fine-grained classes (_i.e._, animals), and seldom as evidence for them. For the ILSVRC task we derive two different hypotheses to explain this. Either samples from fine-grained classes are generally better framed, which keeps the padding away from the patterns most relevant for the class, resulting in a spatial bias that can be leveraged; or padding is used as a reference to identify arbitrary patterns in particularly hard samples, helping overfit on examples from the long tail [5]. Testing both these hypotheses remains future work, as it requires its own experimental setup. The desirability of PANs in a model depends on the application, and its definition of un/desirable bias.
\begin{table} \begin{tabular}{c|c|c|c} Class & Odds & Class & Odds \\ \hline drum & 0.91 & cheetah & 1.10 \\ muzzle & 0.91 & Norfolk Terrier & 1.09 \\ packet & 0.91 & sliding door & 1.09 \\ sunscreen & 0.92 & Irish Water Spaniel & 1.08 \\ barrette & 0.92 & box turtle & 1.08 \\ tandem bicycle & 0.92 & Dobermann & 1.08 \\ candle & 0.92 & Flat-Coated Retriever & 1.08 \\ tent & 0.92 & Alaskan Malamute & 1.08 \\ tray & 0.92 & gossamer-winged butterfly & 1.07 \\ comic book & 0.93 & West Highland White Terrier & 1.07 \\ Windsor tie & 0.93 & Greater Swiss Mountain Dog & 1.07 \\ tile roof & 0.93 & guenon & 1.07 \\ backpack & 0.93 & & \\ overskirt & 0.93 & & \\ buckle & 0.93 & & \\ lab coat & 0.93 & & \\ shoal & 0.93 & & \\ paper knife & 0.93 & & \\ whistle & 0.93 & & \\ ice pop & 0.93 & & \\ stethoscope & 0.93 & & \\ barbell & 0.93 & & \\ lakeshore & 0.93 & & \\ megalith & 0.93 & & \\ scarf & 0.93 & & \\ \end{tabular} \end{table}

Table 3: Classes with odds changing by more than 7%, computed between the _PAN-reflect_ and the _original_ model. Classes with odds above one, whose confidence increases in the absence of PAN information, are less frequent and are mostly composed of animals. For odds below 1, we check \(\frac{1}{odds(c)}>1.07\).

On tasks with fixed framing (_e.g._, fundus retina images [26], static
2309.17168
Charge-parity switching effects and optimisation of transmon-qubit design parameters
Enhancing the performance of noisy quantum processors requires improving our understanding of error mechanisms and the ways to overcome them. In this study, we identify optimal ranges for qubit design parameters, grounded in comprehensive noise modeling. To this end, we also analyze a previously unexplored error mechanism that can perturb two-qubit gates due to charge-parity switches caused by quasiparticles. Due to the utilization of the higher levels of a transmon, where the charge dispersion is significantly larger, a charge-parity switch will affect the conditional phase of the two-qubit gate. We derive an analytical expression for the infidelity of a diabatic controlled-Z gate and see effects of similar magnitude in adiabatic controlled phase gates in the tunable coupler architecture. Moreover, we show that the effect of a charge-parity switch can be the dominant quasiparticle-related error source of a two-qubit gate. We also demonstrate that charge-parity switches induce a residual longitudinal interaction between qubits in a tunable-coupler circuit. We present a performance metric for quantum circuit execution, encompassing the fidelity and number of single and two-qubit gates in an algorithm, as well as the state preparation fidelity. This comprehensive metric, coupled with a detailed noise model, empowers us to determine an optimal range for the qubit design parameters. Substantiating our findings through exact numerical simulations, we establish that fabricating quantum chips within this optimal parameter range not only augments the performance metric but also ensures its continued improvement with the enhancement of individual qubit coherence properties. Our systematic analysis offers insights and serves as a guiding framework for the development of the next generation of transmon-based quantum processors.
Miha Papič, Jani Tuorila, Adrian Auer, Inés de Vega, Amin Hosseinkhani
2023-09-29T12:05:27Z
http://arxiv.org/abs/2309.17168v4
# Charge-parity switching effects and optimisation of transmon-qubit design parameters

###### Abstract

Enhancing the performance of noisy quantum processors requires improving our understanding of error mechanisms and the ways to overcome them. A judicious selection of qubit design parameters, guided by an accurate error model, plays a pivotal role in improving the performance of quantum processors. In this study, we identify optimal ranges for qubit design parameters, grounded in comprehensive noise modeling. To this end, we commence by analyzing a previously unexplored error mechanism that can perturb diabatic two-qubit gates due to charge-parity switches caused by quasiparticles. We show that such charge-parity switching can be the dominant quasiparticle-related error source in a controlled-Z gate between two qubits. Moreover, we also demonstrate that quasiparticle dynamics, resulting in uncontrolled charge-parity switches, induce a residual longitudinal interaction between qubits in a tunable-coupler circuit. Our analysis of optimal design parameters is based on a performance metric for quantum circuit execution that takes into account the fidelity and frequencies of the appearance of both single and two-qubit gates in the circuit. This performance metric together with a detailed noise model enables us to find an optimal range for the qubit design parameters. Substantiating our findings through exact numerical simulations, we establish that fabricating quantum chips within this optimal parameter range not only augments the performance metric but also ensures its continued improvement with the enhancement of individual qubit coherence properties. Conversely, straying from the optimal parameter range can lead to the saturation of the performance metric. Our systematic analysis offers insights and serves as a guiding framework for the development of the next generation of transmon-based quantum processors.
## Introduction

While quantum processors continue to progress towards practical use, the errors present in current systems are still the most limiting factor. A dominant error in superconductor-based quantum computers is decoherence. There have been several proposals to mitigate it, either by designing new qubit types [1, 2, 3] or by further optimizing the existing designs, typically focusing on increasing the coherence times of the circuit [4, 5, 6]. However, in the latter case, one often encounters trade-offs between different circuit properties. For example, in transmon qubits, which have emerged as the most widely used qubit type in large-scale experiments [7, 8, 9, 10, 11], the suppression of charge noise comes at the cost of low anharmonicity, which sets a lower bound on the duration of single-qubit operations [12]. This illustrates the importance of understanding the different errors affecting the quantum hardware. An informed design of the circuit parameters requires acknowledging a plethora of possible error sources, which are not limited to the coherence properties of the circuit, but also include leakage errors in single-qubit gates due to low anharmonicity, state preparation errors due to finite-temperature heating effects [13, 14, 15], as well as the parity-switching error presented in this manuscript. To elaborate further, if one takes into account only the coherence properties of the transmon and its low anharmonicity, one way to achieve better performance is to increase the transmon anharmonicity (in order to suppress potential leakage errors) while keeping the frequency of the transmon fixed. The latter condition, under the assumption of constant quality factors, ensures the coherence properties of the circuit remain unchanged. However, this inadvertently leads to the regime where the transmon charge dispersion becomes more significant.
A good understanding of charge-dispersion-related errors is therefore needed in order to make informed design choices for the circuit parameters. One error that is exponentially more pronounced in the low-\(E_{J}/E_{C}\) regime is related to the presence of charge-like excitations of the superconducting condensate, referred to as quasiparticles, and to the charge parity of the transmon; this prompts us to analyze these effects further so that a trade-off between the different error sources can be made. Quasiparticles can be created through several mechanisms, and are known to cause different types of incoherent errors in superconducting qubit realizations [16, 17]. Particularly, quasiparticle tunnelling across the Josephson junction results in energy relaxation and dephasing in superconducting qubits [18, 19, 20, 21, 22, 23]. Such detrimental quasiparticle-induced effects have, in turn, motivated research into mitigation strategies such as normal-metal traps [24, 25, 26], band-gap engineering [27, 28, 29] and improved qubit design [30]. There have also been efforts towards designing new types of superconducting qubits that are expected to be intrinsically robust against quasiparticle tunneling [31]. Suppression of the charge-noise susceptibility of the transmon is achieved by adding a large shunt capacitor in parallel with the Josephson junction of a Cooper-pair box [12]. However, the energy levels of the transmon are not completely independent of the offset charge on the transmon island, but instead have a weak \(2e\)-periodic charge dispersion. Since the presence of a quasiparticle shifts the island charge by \(e\), the energy spectrum of the transmon can, thus, be divided into two distinct manifolds based on the parity of the number of quasiparticles on the island, as shown in Fig. 1a.
The switching of the charge parity can occur either due to a preexisting quasiparticle tunneling across the Josephson junction onto the transmon island, or due to photon-assisted breaking of a Cooper pair, as pictured in Fig. 1b [16]. The timescale of these stochastic parity switches is referred to as the parity-switching time and, as we argue in the Results Section, it is typically much shorter than the time needed to obtain meaningful statistics from the quantum computer. Since the difference between the two parity manifolds is strongly suppressed, quasiparticle effects in transmons have mainly been analyzed in the context of quasiparticle-induced decoherence. However, the parity-dependent energy splitting of the higher-excited states is much larger than that of the first-excited and ground states, as shown in Fig. 1a. Therefore, parity switching in the second excited state can potentially become a notable source of error, for example in diabatic CZ (controlled-Z) or, more generally, CPHASE (controlled-phase) gates realized in the tunable-coupler architecture [32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. Besides the ability to perform fast and high-fidelity gate operations, one of the main reasons for the introduction of the tunable coupler is the fact that the static residual-ZZ interaction between the qubits can be completely suppressed by tuning the coupler transmon to a specific frequency [32, 34], thus leading to reduced cross-talk and spectator-qubit-related decoherence [42]. When determining this frequency, it has been shown that the higher levels of the tunable-coupler system are relevant [32, 35], thus implying that quasiparticle dynamics might affect our ability to effectively decouple the qubits. The tunable-coupler setup is currently one of the most promising platforms for large-scale quantum computing [7, 8, 10, 11] and, thus, it is important to study its susceptibility to parity-switching errors during quantum gates and idling.
In this paper, we first develop an analytical theory of parity switches in a tunable-coupler-based architecture, and demonstrate that the effects of a parity switch in a two-qubit gate can be a relevant source of error, even in the transmon regime. Moreover, we show that this previously unidentified error can, in certain regimes, be the dominant quasiparticle-induced error mechanism during a diabatic two-qubit gate, as indicated by a comparison to currently achievable parity-switching rates observed in superconducting circuits. Furthermore, we demonstrate that the inherent stochastic nature of parity-switching events limits the ability to suppress any unwanted longitudinal interactions between the qubits coupled through tunable couplers [43]. We find that the magnitude of the unwanted interactions makes this effect relevant as coherence times advance into the millisecond regime [44, 45]. Secondly, we develop an analysis of the optimal circuit parameters which is based on the accurate modeling of these different error sources as well as on a performance metric related to the gate infidelities of a given quantum circuit. We analyze the performance metric as a function of the transmon-qubit design parameters, and we find an optimal range of the parameters for which the performance metric is maximized. This demonstrates how informed design choices, taking into account the different possible error mechanisms, can aid the improvement of current quantum computers. Therefore, we believe our analysis and results can serve as a guideline when designing transmon-based quantum processors.

## Results

### Modeling the Dynamics

Here, we present how a parity switch affects the tunable-coupler architecture and derive analytical results describing the magnitude of the effect.
The Hamiltonian of an individual transmon, not taking into account potential higher order contributions to the Josephson energy [46], is given by [47, 12] \[\hat{H}=4E_{C}\left(\hat{n}-n_{g}+\frac{P-1}{4}\right)^{2}-E_{J}\cos\hat{\phi}, \tag{1}\] where the operator \(\hat{n}\) represents the dimensionless charge and \(\hat{\phi}\) is the superconducting phase operator across the Josephson junction. The variables \(E_{C}\), \(E_{J}\) and \(n_{g}\) represent the charging energy of an electron (i.e. the energy required to add a single electron of the Cooper-pair to the transmon island), Josephson energy and dimensionless offset charge, respectively. The variables \(\hat{n}\) and \(\hat{\phi}\) are related via the canonical commutation relation \([\hat{\phi},\hat{n}]=i\). Additionally we have included a discrete parity variable \(P\in\{-1,+1\}\), corresponding to the parity of the number of _electrons_ that have tunneled across the junction. We observe that the parity term has the same effect as a shift of the offset charge by \(\Delta n_{g}=1/2\). The eigenenergies of the Hamiltonian in Eq. 1 have been analytically determined already in Ref. [12] and the low-energy manifold of a single transmon can be approximated in the asymptotic limit of \(E_{J}/E_{C}\gg 1\) as \[\hat{H}/\hbar\simeq[\omega+\delta\omega(P,n_{g})]\,\hat{a}^{\dagger}\hat{a}+ \frac{\alpha+\delta\alpha(P,n_{g})}{2}\hat{a}^{\dagger}\hat{a}^{\dagger}\hat{ a}\hat{a}, \tag{2}\] where \(\hat{a}\) are bosonic annihilation operators, and \(\hbar\omega\simeq\sqrt{8E_{J}E_{C}}-E_{C}\) and \(\hbar\alpha\simeq-E_{C}\) are the expressions for the transmon (angular) frequency and the anharmonicity in the asymptotic limit, respectively. Here, we have already taken into account the fact that the different parities have almost identical parameters, and the small differences between them are taken into account with the two parameters depending on the parity, \(\delta\omega(P,n_{g})\) and \(\delta\alpha(P,n_{g})\). 
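As a quick numerical illustration of these asymptotic expressions (the parameter values below are our own example, not taken from the paper):

```python
import math

def transmon_parameters(EJ, EC):
    """Asymptotic transmon parameters implied by Eq. 2:
    hbar*omega ~ sqrt(8*EJ*EC) - EC and hbar*alpha ~ -EC,
    returned in the same (energy) units as EJ and EC."""
    omega = math.sqrt(8.0 * EJ * EC) - EC
    alpha = -EC
    return omega, alpha

# Example: EC/h = 0.25 GHz and EJ/EC = 50 (deep transmon regime)
omega, alpha = transmon_parameters(EJ=12.5, EC=0.25)  # in GHz
# omega = 4.75 GHz, alpha = -0.25 GHz
```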
The parity \(P\) therefore divides the eigenstates of the Hamiltonian in Eq. 1 into two distinct manifolds, as illustrated in Fig. 1a. Denoting the eigenenergies of the original transmon Hamiltonian in Eq. 1 as \(E_{i}\) with \(i\in\{0,1,2,3,...\}\), the difference between the energy levels of the different parity states is asymptotically approximated by [12] \[E_{m}^{+}(n_{g})-E_{m}^{-}(n_{g})\simeq\varepsilon_{m}\cos(2\pi n_{g}), \tag{3}\] where the superscript refers to the parity and the charge dispersion \(\epsilon_{m}\) is given by \[\epsilon_{m}\simeq(-1)^{m}E_{C}\frac{2^{4m+5}}{m!}\sqrt{\frac{2}{\pi}}\left(\frac{E_{J}}{2E_{C}}\right)^{\frac{m}{2}+\frac{3}{4}}e^{-\sqrt{8E_{J}/E_{C}}}. \tag{4}\] While the exponential suppression of the charge dispersion with the ratio \(E_{J}/E_{C}\) is well known and the main reason for the introduction of the transmon, the formula in Eq. 4 also predicts a significant increase in the charge dispersion of the higher excited states. This means that even though the effect of a parity switch may be small in the computational subspace, the effect can be significantly more pronounced if higher-excited states are involved in the operation of two-qubit gates. More specifically, \(|\epsilon_{2}/\epsilon_{1}|\simeq 2^{5/2}\sqrt{E_{J}/E_{C}}\sim 40\) for \(E_{J}/E_{C}\sim 50\). This difference is even more pronounced since certain effects scale with the square of the charge dispersion, as we show in the following. Due to this rapid scaling, we neglect the effect on the first excited state (i.e. \(\delta\omega=0\)) and only focus on the second excited state.

Figure 1: A schematic representation of the effect of a parity switch on a single transmon and on a two-qubit gate. **a** The energy diagram of a single offset-charge-sensitive transmon with \(E_{J}/E_{C}=10\) with two distinct parity manifolds, marked with + and -. While the ground state \(|0\rangle\) also comprises two distinct parity levels, the difference between them is not visible. **b** Illustration of two parity-switching mechanisms. The vertical axis represents the energy, with the left and right regions corresponding to the two sides of the Josephson junction and the middle region corresponding to the insulator. The light grey area corresponds to the density of states of a BCS superconductor on both sides of the junction. _Orange_: A high-energy photon breaks a Cooper pair (dark blue), thus generating two quasiparticles (light blue), with one quasiparticle tunneling across the junction. _Green_: A preexisting quasiparticle tunnels across the junction. **c** The energy-level diagram of the states involved in the operation of a diabatic CPHASE gate. The Rabi oscillation between the levels \(|101\rangle\leftrightarrow|002\rangle\) is marked with blue arrows; however, the larger charge dispersion of the second excited state means that the two parity states of the \(|002\rangle\) level cannot be considered degenerate anymore. **d** The lumped-element model of the tunable-coupler circuit used in the implementation of diabatic CPHASE gates, consisting of two computational transmons (dark and light blue), referred to in the following as Qubits 1 and 2 (with indices \(q_{1}\) and \(q_{2}\)), a flux-tunable coupler (green), denoted with the index \(c\), and capacitive couplings between the transmons (black). The readout resonators and drive lines for the implementation of single-qubit gates are not included in the schematics.

The parameters of the Hamiltonian in Eq. 2 are related to the eigenenergies of the Hamiltonian in Eq.
1 via the relations \[E_{1}^{P}/\hbar =\omega+\delta\omega(P,n_{g}), \tag{5}\] \[E_{2}^{P}/\hbar =2\omega+\alpha+2\,\delta\omega(P,n_{g})+\delta\alpha(P,n_{g}), \tag{6}\] and by additionally defining the parity-averaged frequency and anharmonicity as \(\hbar\omega=[E_{1}^{+}(n_{g})+E_{1}^{-}(n_{g})]/2\) and \(\hbar(2\omega+\alpha)=[E_{2}^{+}(n_{g})+E_{2}^{-}(n_{g})]/2\) [22]. Together with Eq. 3 and by neglecting the first-excited-state charge dispersion (i.e. \(\varepsilon_{2}\gg\varepsilon_{1}\)) due to the aforementioned reasons, we arrive at \(\delta\alpha(P,n_{g})=P\varepsilon_{2}\cos(2\pi n_{g})/2\).

#### The Diabatic CPHASE Gate

We consider a non-adiabatic, i.e. diabatic, CPHASE gate based on the two-qubit gate scheme using tunable couplers that was analyzed in Refs. [32, 33, 34, 35, 36], with similar schemes proposed in Refs. [37, 38, 39, 40]. We show the circuit schematics of the tunable-coupler setup in Fig. 1d. Here, the two computational transmons, which we refer to as Qubits 1 and 2 (\(q_{1,2}\)), are capacitively coupled with each other and to a third, frequency-tunable, transmon which is referred to as the tunable coupler or simply coupler (\(c\)). The introduction of the coupler enables on-demand gate operation between the two computational qubits. Such a system can be modeled with the Hamiltonian [34, 35] \[\hat{H}/\hbar=\sum_{i\in\{q_{1},c,q_{2}\}}\omega_{i}\hat{a}_{i}^{\dagger}\hat{ a}_{i}+\frac{\alpha_{i}}{2}\hat{a}_{i}^{\dagger}\hat{a}_{i}^{\dagger}\hat{a}_{i} \hat{a}_{i}-\sum_{\begin{subarray}{c}i,j\in\{q_{1},c,q_{2}\}\\ i\neq j\end{subarray}}g_{ij}(\hat{a}_{i}^{\dagger}-\hat{a}_{i})(\hat{a}_{j}^{ \dagger}-\hat{a}_{j}). \tag{7}\] The dependence of the transmon parameters on the parity and offset charge is not explicitly shown above in order to simplify the notation. However, we note that the whole system has \(2^{3}=8\) distinct parity states. The main operation principle of the diabatic CPHASE gate is shown in Fig. 1c.
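The ratio \(|\varepsilon_2/\varepsilon_1|\simeq 2^{5/2}\sqrt{E_J/E_C}\) quoted above follows directly from the asymptotic expression for \(\varepsilon_m\) (with exponent \(m/2+3/4\) and suppression factor \(e^{-\sqrt{8E_J/E_C}}\)); a quick numerical check:

```python
import math

def eps_m(m, EJ, EC):
    """Asymptotic charge dispersion of level m (cf. Eq. 4)."""
    return ((-1) ** m * EC * 2 ** (4 * m + 5) / math.factorial(m)
            * math.sqrt(2 / math.pi) * (EJ / (2 * EC)) ** (m / 2 + 3 / 4)
            * math.exp(-math.sqrt(8 * EJ / EC)))

EJ, EC = 50.0, 1.0
ratio = abs(eps_m(2, EJ, EC) / eps_m(1, EJ, EC))
closed_form = 2 ** 2.5 * math.sqrt(EJ / EC)
print(ratio, closed_form)   # both are approximately 40 at EJ/EC = 50
```

The exponential factors cancel in the ratio, so the prefactor growth \(2^{4m+5}/m!\) and the power of \(E_J/2E_C\) fully account for the roughly forty-fold larger dispersion of the second excited state.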
The CPHASE gate is implemented by tuning the frequency of the coupler using a flux pulse closer to the frequency of the computational transmons. The conditional phase is collected during a Rabi oscillation between the \(|101\rangle\) and \(|002\rangle\) states of the system, as illustrated in Fig. 1c. Here, the labeling of the states is defined as \(|q_{1}\,c\,q_{2}\rangle\). Since the second excited state of one of the computational transmons is significantly populated during the gate operation, the non-degeneracy of the two parity levels can have a direct effect, and therefore quasiparticle tunneling and photon-assisted pair breaking in the transmon can become a notable source of error in the gate operation. We emphasize here that the couplings between the transmons \(g_{ij}=\beta_{ij}\sqrt{\omega_{i}\omega_{j}}\) also depend on the frequencies, meaning that while \(g_{q_{1}q_{2}}\) is constant, \(g_{q_{1}c}\) and \(g_{q_{2}c}\) are time dependent. The dimensionless prefactors \(\beta_{ij}\) depend on the coupling capacitances, as well as the self-capacitances of the transmons in the lumped-element circuit model [34]. The computational basis in this scheme is formed by the _eigenstates_ of the Hamiltonian from Eq. 7 in the idling configuration (defined below), rather than the local (uncoupled) transmon states [35]. Since the couplings act only as a perturbation to the uncoupled states, we identify the full Hamiltonian eigenstates corresponding to the uncoupled states. More specifically, the computational state \(|ij\rangle\), \(i,j\in\{0,1\}\), is the eigenstate \(|\psi\rangle\) of the Hamiltonian in Eq. 7 with the maximal overlap \(|\langle\psi|i_{q_{1}}0_{c}j_{q_{2}}\rangle|\). This notation is employed throughout this manuscript and we omit the subscript indices \(q_{1,2}\) and \(c\) from now on. The kets with three indices (e.g. \(|ijk\rangle\)) always denote the local (uncoupled) Fock states of the three-transmon system. The kets with only two indices (e.g.
\(|ij\rangle\)) are used to denote the _eigenstates_ of the whole system that are closest to the local (uncoupled) state \(|i0j\rangle\). We denote with \(\omega_{ij}\) the angular frequency of the computational state \(|ij\rangle\). In general, the state \(|11\rangle\) accumulates conditional phase with the rate \[\zeta_{\text{ZZ}}=\omega_{11}-\omega_{01}-\omega_{10}+\omega_{00}. \tag{8}\] However, since the coupler frequency is tunable, it is typically possible to find one or two frequencies \(\omega_{c}\) for which \(\zeta_{\text{ZZ}}=0\) [32, 34, 37]. These special frequencies are referred to as the coupler idling frequencies and denoted with \(\omega_{c}^{\text{idle}}\). A variety of pulse shapes can be used to implement the gate effectively and without inducing too many unwanted transitions [32, 35, 48]. Even though our analytical analysis makes minimal assumptions about the pulse shape, we need to choose a specific form for the numerical simulations. In our simulations, we use the so-called flattop Gaussian pulse described by the following formula [32, 49] \[f(t)=\frac{1}{2}\left[\mathrm{erf}\left(\frac{t-\tau_{b}}{\sqrt{2}\sigma} \right)-\mathrm{erf}\left(\frac{t-\tau_{b}-\tau_{c}}{\sqrt{2}\sigma}\right) \right]-C. \tag{9}\] The flattop Gaussian is obtained by a convolution of a step function of duration \(\tau_{c}\) with a Gaussian of width \(\sigma\). The reasoning behind this choice is that the convolution of the flattop pulse with a Gaussian strongly suppresses the spectral components of the flattop pulse at higher frequencies, thus reducing the probability of unwanted transitions. An additional rise time \(\tau_{b}\) is also introduced, which we fix to \(\tau_{b}=2\sqrt{2}\sigma\) in the remainder of this paper. Since any gate must have a finite duration, we introduce a cut-off at time \(T=2\tau_{b}+\tau_{c}\). The constant \(C\) is then introduced to correct for the discontinuity at the beginning and the end of a pulse with finite duration.
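The pulse of Eq. 9 can be written down directly; a short sketch (the values of \(\sigma\) and \(\tau_c\) are illustrative), with \(C\) fixed by requiring \(f(0)=0\), which by the symmetry of the pulse also gives \(f(T)=0\):

```python
import math

def flattop_gaussian(t, tau_c, sigma, tau_b):
    """Flattop Gaussian flux pulse of Eq. 9, with C removing the discontinuity."""
    def raw(t):
        return 0.5 * (math.erf((t - tau_b) / (math.sqrt(2) * sigma))
                      - math.erf((t - tau_b - tau_c) / (math.sqrt(2) * sigma)))
    C = raw(0.0)   # value at the cut-off, subtracted so that f(0) = f(T) = 0
    return raw(t) - C

sigma, tau_c = 5.0, 30.0            # ns, illustrative values
tau_b = 2 * math.sqrt(2) * sigma    # rise time used in the text
T = 2 * tau_b + tau_c               # total pulse duration
print(flattop_gaussian(0.0, tau_c, sigma, tau_b))       # vanishes at the start
print(flattop_gaussian(T / 2, tau_c, sigma, tau_b))     # close to 1 on the plateau
```

With \(\tau_b=2\sqrt{2}\sigma\) the error-function tails have decayed to the few-per-mille level at the cut-off, so the correction \(C\) stays small and the plateau value remains close to one.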
Consequently, the coupler frequency during the gate operation is given by \[\omega_{c}=\omega_{c}^{\mathrm{idle}}+Af(t), \tag{10}\] where \(A\) is the pulse amplitude. In the simulations, we optimize the pulse amplitude and duration \(\tau_{c}\) such that the process fidelity of the gate is maximized.

### Effective Model

The tunable coupler circuit Hamiltonian in Eq. 7 is difficult to analyze and we typically rely on numerical studies [35]. It is therefore beneficial to introduce an effective Hamiltonian that can approximate the physics of the system. Similar to Refs. [34, 50], we introduce the Schrieffer-Wolff transformation as a means to decouple the computational states from the coupler, and assume that the decoupled coupler remains in the ground state during the gate operations. Unlike the approach taken in Refs. [34, 50], where the Hilbert space of the local transmons is truncated to the computational subspace, in our case, we must also include the \(|02\rangle\) state to account for the Rabi oscillation used to accumulate the conditional phase. More details of the transformation can be found in the Methods section. Consequently, we consider here the truncated Hilbert space spanned by the states \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle,|02\rangle\}\). In our analytical considerations, we thus neglect the possibility of leakage, which is a good assumption for high-fidelity gates. We emphasize that the leakage effects are included in our numerical simulations, which have been made with the full Hamiltonian defined in Eq. 7. We additionally neglect the counter-rotating terms within the truncated Hilbert space and, thus, obtain the effective Hamiltonian \[\hat{H}_{\mathrm{eff}}/\hbar=\begin{pmatrix}0&0&0&0&0\\ 0&\tilde{\omega}_{q_{2}}&\tilde{g}_{01,10}&0&0\\ 0&\tilde{g}_{01,10}&\tilde{\omega}_{q_{1}}&0&0\\ 0&0&0&\tilde{\omega}_{q_{1}}+\tilde{\omega}_{q_{2}}&\tilde{g}_{11,02}\\ 0&0&0&\tilde{g}_{11,02}&2\tilde{\omega}_{q_{2}}+\tilde{\alpha}_{q_{2}}\end{pmatrix}, \tag{11}\] written in the basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle,|02\rangle\}\), where the tildes denote the qubit frequencies, anharmonicity and coupling strengths renormalized by the Schrieffer-Wolff transformation. Since the gate is operated close to the resonance between the states \(|11\rangle\) and \(|02\rangle\), i.e. \(\tilde{\omega}_{q_{1}}-\tilde{\omega}_{q_{2}}\approx\tilde{\alpha}_{q_{2}}\), the population exchange between the single-excitation states \(|01\rangle\) and \(|10\rangle\) is strongly suppressed because \(|\tilde{\omega}_{q_{1}}-\tilde{\omega}_{q_{2}}|\gg\tilde{g}_{01,10}\). Thus, we can neglect the interaction between the single-excitation states. Since we are interested in the operator acting on the computational subspace of the system, we further truncate the subspace by excluding the non-computational \(|02\rangle\) state. After accounting for the single-qubit phases, which is typically done via virtual \(Z\)-rotations [51], the effective time-evolution operator \(\hat{U}(t)\) is equivalent to \[\hat{U}(t)=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&\sqrt{P_{11}}e^{i\phi(t)}\end{pmatrix}, \tag{12}\] written in the basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\). Here, we have defined the conditional phase \(\phi(t)\) and the population of the \(|11\rangle\) state \(P_{11}\). Note that this operator is not necessarily trace-preserving, as part of the population of the \(|11\rangle\) state might remain in the \(|02\rangle\) state, due to potential calibration errors.
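The block-diagonal structure of the effective Hamiltonian in the five-state subspace can be illustrated numerically. A minimal sketch with hypothetical (placeholder) effective parameters, confirming that when \(\tilde{\omega}_{q_1}-\tilde{\omega}_{q_2}=\tilde{\alpha}_{q_2}\) the \(|11\rangle\leftrightarrow|02\rangle\) block is exactly on resonance and its eigenvalue splitting equals the bare Rabi frequency \(2\tilde{g}_{11,02}\):

```python
import numpy as np

# Hypothetical effective (Schrieffer-Wolff renormalized) parameters, rad/ns
w1, w2 = 2 * np.pi * 4.0, 2 * np.pi * 4.2        # dressed qubit frequencies
a2 = 2 * np.pi * (-0.2)                          # dressed anharmonicity of Qubit 2
g_0110, g_1102 = 2 * np.pi * 0.002, 2 * np.pi * 0.012

# Effective Hamiltonian in the basis {|00>, |01>, |10>, |11>, |02>}
H = np.array([
    [0.0,    0.0,    0.0,    0.0,     0.0],
    [0.0,    w2,     g_0110, 0.0,     0.0],
    [0.0,    g_0110, w1,     0.0,     0.0],
    [0.0,    0.0,    0.0,    w1 + w2, g_1102],
    [0.0,    0.0,    0.0,    g_1102,  2 * w2 + a2],
])

# With w1 - w2 = a2 the two diagonal entries of the |11>/|02> block coincide,
# so the block eigenvalues split by exactly 2 * g_1102.
block = H[3:, 3:]
splitting = np.diff(np.linalg.eigvalsh(block))[0]
print(splitting / (2 * g_1102))   # approximately 1.0 at resonance
```

Away from resonance the same block reproduces the generalized Rabi frequency \(\Omega=\sqrt{(\tilde{\Delta}-\tilde{\alpha}_{q_2})^2+4\tilde{g}_{11,02}^2}\) used below.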
The equation for \(P_{11}\) is derived by recognizing the block-diagonal structure of the effective Hamiltonian in Eq. 11. This simplifies the analysis, reducing it to a standard two-level Rabi oscillation, with \(P_{11}\) given by [52] \[P_{11}(t)=1-\frac{2\tilde{g}_{11,02}^{2}}{\Omega^{2}}\left[1-\cos(\Omega t) \right], \tag{13}\] where we have defined the qubit-qubit detuning and the Rabi frequency of the \(|11\rangle\leftrightarrow|02\rangle\) transition as \(\tilde{\Delta}=\tilde{\omega}_{q_{1}}-\tilde{\omega}_{q_{2}}\) and \(\Omega=\sqrt{(\tilde{\Delta}-\tilde{\alpha}_{q_{2}})^{2}+4\tilde{g}_{11,02}^{2}}\), respectively. Calculating the conditional phase is less straightforward, since the accumulated phase of the \(|11\rangle\) state needs to have the single-excitation phases subtracted from it. Nevertheless, we can readily obtain \[\phi(t)=\frac{1}{2}\left[(\tilde{\alpha}_{q_{2}}-\tilde{\Delta})t+\pi\left(1 -\text{sign}\{\cos(\Omega t/2)\}\right)\right]+\arctan\left(\frac{\tilde{ \Delta}-\tilde{\alpha}_{q_{2}}}{\Omega}\tan(\Omega t/2)\right). \tag{14}\] Eqs. 13 and 14 can be used to assess the susceptibility of the gate parameters to a small perturbation, such as a parity switch. Due to the larger charge dispersion of the second-excited state, and the fact that the second-excited state of Qubit 2 (\(q_{2}\)) is populated during the gate operation, we can assume that the main contribution of the parity switch is the perturbation of the anharmonicity \(\alpha_{q_{2}}\). By treating the parity-dependent contribution to the anharmonicity \(\delta\alpha\) from Eq.
2 as a small perturbation, we can obtain the parity-dependent expressions for the conditional phase and \(|11\rangle\) population \[\phi(t_{g},P_{q_{2}}) \approx\phi_{0}+\frac{\partial\phi}{\partial\alpha_{q_{2}}}\bigg{|} _{t=t_{g}}\delta\alpha_{q_{2}}(P_{q_{2}},n_{g}), \tag{15}\] \[P_{11}(t_{g},P_{q_{2}}) \approx 1+\frac{1}{2}\frac{\partial^{2}P_{11}}{\partial\alpha_{q_{2}} ^{2}}\bigg{|}_{t=t_{g}}\left[\delta\alpha_{q_{2}}(P_{q_{2}},n_{g})\right]^{2}, \tag{16}\] where the Taylor expansion of the optimal gate parameters for a small perturbation of \(\alpha_{q_{2}}\), evaluated at the mean (parity-averaged) anharmonicity \(\alpha_{q_{2}}\) from Eq. 2, was employed. The Taylor expansion performed here assumes that both parity states suffer from an error of the same magnitude (e.g., the error of the conditional phase for both parities has the same absolute value), and we show in the remainder of the text that this assumption results in higher process fidelities of the gate. At this point, we stress again that \(\delta\alpha(P,n_{g})=P\epsilon_{2}\cos(2\pi n_{g})/2\). While the above expressions are completely general and hold also in the non-perturbative regime, the relations given in Eqs. 13 and 14 can be used to obtain analytical expressions for \(\partial\phi/\partial\alpha_{q_{2}}\) and \(\partial^{2}P_{11}/\partial\alpha_{q_{2}}^{2}\). We realize from Eq. 13 that the implementation of a high-fidelity gate with an arbitrary conditional phase \(\phi_{0}\) requires \(P_{11}(t_{g})=1\), as otherwise some population remains outside of the computational subspace. From this observation we readily arrive at the condition for the gate time \(\Omega t_{g}=2\pi n\), where \(n\in\mathbb{N}\).
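Eqs. 13 and 14 and the gate-time condition \(\Omega t_g=2\pi n\) can be checked directly. A minimal sketch (the detuning, anharmonicity and coupling values are hypothetical), verifying that one full Rabi cycle at resonance returns \(P_{11}=1\) and yields a conditional phase of \(\pi\), i.e. a CZ gate:

```python
import math

def Omega(Delta, alpha2, g):
    """Generalized Rabi frequency of the |11> <-> |02> transition."""
    return math.sqrt((Delta - alpha2) ** 2 + 4 * g ** 2)

def P11(t, Delta, alpha2, g):
    """Population remaining in |11> (Eq. 13)."""
    Om = Omega(Delta, alpha2, g)
    return 1 - 2 * g ** 2 / Om ** 2 * (1 - math.cos(Om * t))

def phi(t, Delta, alpha2, g):
    """Conditional phase of the |11> state (Eq. 14)."""
    Om = Omega(Delta, alpha2, g)
    sgn = 1.0 if math.cos(Om * t / 2) >= 0 else -1.0
    return (0.5 * ((alpha2 - Delta) * t + math.pi * (1 - sgn))
            + math.atan((Delta - alpha2) / Om * math.tan(Om * t / 2)))

# Hypothetical parameters (rad/ns), tuned to resonance: Delta = alpha2
Delta = alpha2 = 2 * math.pi * (-0.2)
g = 2 * math.pi * 0.010
tg = 2 * math.pi / Omega(Delta, alpha2, g)   # one full Rabi cycle, n = 1
print(P11(tg, Delta, alpha2, g))   # full population return, no leakage
print(phi(tg, Delta, alpha2, g))   # conditional phase pi
```

Off resonance, the \(\arctan\) and linear terms in \(\phi(t)\) no longer cancel, which is exactly the sensitivity to \(\alpha_{q_2}\) exploited in the perturbative expansion above.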
This condition enables us to further simplify the relations for the susceptibility of the conditional phase to a parity switch, and we obtain up to the leading order \[\frac{\partial\phi}{\partial\alpha_{q_{2}}}\bigg{|}_{t=t_{g}}\approx\frac{t_{g}}{2\hbar}, \tag{17}\] while \[\frac{\partial^{2}P_{11}}{\partial\alpha_{q_{2}}^{2}}\bigg{|}_{t=t_{g}}\sim \mathcal{O}(g_{q_{1}c}^{2}g_{q_{2}c}^{2})+\mathcal{O}(g_{q_{1}q_{2}}g_{q_{1}c}g_{ q_{2}c}). \tag{18}\] In the derivation of Eq. 17 we have neglected the terms proportional to \(g_{q_{1}c}g_{q_{2}c}/(\Delta_{q_{2}c}+\alpha_{q_{2}})^{3}\) and \(g_{q_{2}c}^{2}/(\Delta_{q_{2}c}+\alpha_{q_{2}})^{3}\) and higher orders. Additionally, Eq. 18 only contains the scaling of the result. To summarize, the derived relations (Eqs. 17 and 18) are therefore valid under the following assumptions:

1. The initial assumptions used to derive the effective Hamiltonian in Eq. 11 are valid, meaning that:
    1. The second-order perturbation theory used for the Schrieffer-Wolff transformation is valid, i.e. \(g_{q_{2}c}^{2}/(\omega_{q_{2}}-\omega_{c})^{2}\ll 1\).
    2. The gate has low leakage outside of the considered subspace.
    3. The rotating wave approximation for the counter-rotating coupling terms is justified.
2. The Rabi frequency associated with the \(|01\rangle\leftrightarrow|10\rangle\) transition is much larger than the Rabi frequency of the \(|11\rangle\leftrightarrow|02\rangle\) transition, so that the strongly suppressed single-excitation oscillation averages out. In the perturbative regime this is fulfilled if \(\sqrt{\tilde{\Delta}^{2}+4\tilde{g}_{01,10}^{2}}\gg\Omega\).
3. The coupler frequency in the interaction regime is relatively constant. If this is not the case, the time dependency of the perturbative parameters must be taken into account.
4. The gate has low leakage outside of the computational subspace.
5. The simplified formula in Eq.
17 additionally neglects the terms proportional to \(g_{q_{1}c}g_{q_{2}c}/(\Delta_{q_{2}c}+\alpha_{q_{2}})^{3}\) and \(g_{q_{2}c}^{2}/(\Delta_{q_{2}c}+\alpha_{q_{2}})^{3}\), and smaller.
6. We have assumed that the main contributor to the perturbation is Qubit 2 (\(q_{2}\)), whose second-excited state is populated during the gate operation. However, if the system is designed in such a way that the charge dispersion of either of the other two transmons is significantly larger, their effects might not be negligible anymore.

Note that both assumptions about the leakage are automatically fulfilled if the gate has a high fidelity.

### Gate Fidelity Limitations

So far we have quantified how a parity switching event can affect the parameters of the gate unitary. In order to describe the gate performance in a quantum circuit, we also need to consider how frequently parity switching events occur.

#### Parity-Switching Lifetimes

Experiments typically find parity switching times in the broad range of \(T_{P}\sim 100\,\mu\mathrm{s}-1\,\mathrm{s}\) [47, 53, 54, 55, 56, 57]. Even though the parity switching lifetime of transmons might increase in the future, e.g., due to better design and improved shielding, it appears that the parity lifetime may be fundamentally upper bounded by the high-energy quasiparticle burst events, which are observed to happen once every \(10-50\,\mathrm{s}\) [58, 59, 60, 61].

#### Kraus Operator Description

Comparing the realistic range of parity lifetimes of superconducting qubits to the duration of a single two-qubit gate \(t_{g}\), which is typically in the range of tens to hundreds of nanoseconds [32, 33, 37, 40, 48], we observe that \(t_{g}\ll T_{P}\). However, any meaningful application of a quantum computer will include re-running an algorithm, composed of a large number of non-parallel gates \(N_{\mathrm{gates}}\), in order to reduce the statistical uncertainty of the observable being evaluated.
Denoting the number of runs by \(N_{\mathrm{shots}}\), the uncertainty of any observable generally scales as \(1/\sqrt{N_{\mathrm{shots}}}\)[52] and therefore \(N_{\mathrm{shots}}\) must be large. All together, the time needed to execute a full algorithm \(T_{\mathrm{alg}}\) roughly scales as \(T_{\mathrm{alg}}\propto N_{\mathrm{shots}}\cdot N_{\mathrm{gates}}\cdot t_{g}\), but it may be realistically even longer, due to the time needed to measure and reinitialize the quantum computer, and any possible pulse schedule compilation of the control electronics [62]. As an example, in Refs. [7, 9, 63] the state of the art devices were run for a total of several minutes in order to obtain meaningful results, which implies the following clear separation of timescales \[t_{g}\ll T_{P}\ll T_{\rm alg}. \tag{19}\] The left side of Eq. 19 indicates that the probability for a parity switch occurring during the operation of a single gate is very low, while the right-hand side suggests that a large number of parity switches can occur during an execution of an algorithm. This means that the effect of the charge-parity switch (CPS) on a diabatic CPHASE gate can be described by the following Kraus operators [64] acting on the two-qubit density matrix \(\hat{\rho}\) \[{\rm CPHASE}_{\rm CPS}[\hat{\rho}]=\hat{U}_{-}\hat{\rho}\hat{U}_{-}^{\dagger}+ \hat{U}_{+}\hat{\rho}\hat{U}_{+}^{\dagger}, \tag{20}\] where \(\hat{U}_{\pm}\) are Kraus operators corresponding to the different parity implementations of the two-qubit gate. Eq. 
20 can be interpreted as a stochastic application of two different gate operators; by assuming that the target conditional phase is \(\phi_{0}\), they can be written as \[\hat{U}_{\pm}(t)=\frac{1}{\sqrt{2}}\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&\sqrt{1-\frac{\delta P_{11}}{4}}\,e^{i\phi_{0}\pm i\frac{\delta\phi}{2} }\end{pmatrix}, \tag{21}\] with \(\delta\phi=\partial\phi/\partial\alpha_{q_{2}}\,\varepsilon_{2}^{q_{2}}\cos (2\pi n_{g}^{q_{2}})\) and \(\delta P_{11}=\partial^{2}P_{11}/\partial\alpha_{q_{2}}^{2}\,\left[\varepsilon _{2}^{q_{2}}\cos(2\pi n_{g}^{q_{2}})\right]^{2}/2\), which is a result of Eqs. 15 and 16, together with \(\delta\alpha(P,n_{g})=P\varepsilon_{2}\cos(2\pi n_{g})/2\). We have additionally assumed that both parities are equally likely; however, this assumption can also be easily relaxed. We have defined the Kraus operators in Eq. 21 so that there are small errors associated with each parity state. We show in the following that this corresponds to higher process fidelities, compared to having one parity state with a perfect fidelity and the second parity state with a larger error. This assumption is, therefore, equivalent to optimally calibrating the gate with respect to the parity switching error.

### Gate Fidelity

Using the Kraus operator description in Eq. 21 allows us to formally analyze the performance of an arbitrary conditional-phase gate in the presence of parity switches and the resulting phase and leakage errors. The process fidelity \(\mathcal{F}\) can be computed by [65, 66] \[\mathcal{F}[{\rm CPHASE}_{\rm CPS}]=\frac{\frac{1}{d}\sum_{i=+,-}\left|{\rm tr }\left\{\hat{U}_{\rm CPHASE}^{\dagger}\hat{U}_{i}\right\}\right|^{2}+1-L}{d+1}, \tag{22}\] with the leakage parameter \(L=1-{\rm tr}\left\{\hat{U}_{\rm CPHASE}^{\dagger}\left[\sum_{i=+,-}\hat{U}_{ i}\hat{U}_{i}^{\dagger}\right]\hat{U}_{\rm CPHASE}\right\}/d\).
Here, \(\hat{U}_{\rm CPHASE}\) is the unitary operator of an ideal CPHASE gate and \(d\) is the dimension of the computational Hilbert space, which in our case is \(d=4\). For small perturbations \(\delta\phi\) and \(\delta P_{11}\), the fidelity of the operation given in Eq. 20 is given by the following expression \[\mathcal{F}\approx 1-\frac{3}{80}\left[\frac{\partial\phi}{\partial\alpha_{q_{ 2}}}\bigg{|}_{t=t_{g}}\varepsilon_{2}^{q_{2}}\cos(2\pi n_{g}^{q_{2}})\right]^ {2}+\frac{1}{32}\frac{\partial^{2}P_{11}}{\partial\alpha_{q_{2}}^{2}}\bigg{|} _{t=t_{g}}\left[\varepsilon_{2}^{q_{2}}\cos(2\pi n_{g}^{q_{2}})\right]^{2}. \tag{23}\] Here, we observe that the effect on the fidelity is of second order in the charge dispersion, due to the coherent nature of a conditional-phase error. However, we have shown in Eq. 40 of the Methods section that the infidelity of a series of \(N\) gates is given by \(1-\mathcal{F}_{N}\approx N(1-\mathcal{F})\). This indicates that the error scales linearly with the number of gates, as is typical of incoherent errors, but quadratically in the error parameter \(\delta\phi\), as is expected from a coherent error [67, 68]. In deriving Eq. 40, one can observe that the numerical prefactors in front of both terms (here \(3/80\) and \(1/32\)) increase, and therefore the process fidelity decreases, if the gate is not calibrated in such a way that the error is equally distributed between both parity states. For this reason, we have chosen the Kraus operators \(\hat{U}_{\pm}\) as given in Eq. 21. We conclude from Eq. 23, together with Eqs. 17 and 18, that the dominant effect of the parity switch event is the shift in the conditional phase, rather than leakage, since \((\partial\phi/\partial\alpha_{q_{2}})^{2}\gg\partial^{2}P_{11}/\partial \alpha_{q_{2}}^{2}\). Moreover, the magnitude of the shift in the conditional phase is given by Eqs. 15 and 17. Further simplifying Eq.
23 therefore results in \[1-\mathcal{F}\approx\frac{3}{320}\left[\varepsilon_{2}^{q_{2}}t_{g}\cos(2\pi n _{g}^{q_{2}})/\hbar\right]^{2}. \tag{24}\]

### Numerical simulations

Here, we compare the above results with the numerically exact treatment of the full Hamiltonian in Eq. 7. The parity switch effect is taken into account in the numerical experiments by considering the transmon Hamiltonian from Eq. 2. In other words, the system is simulated for all \(2^{3}=8\) possible parity states and the results are averaged accordingly. According to Eq. 3, the magnitude of the effect also depends on the offset voltage of Qubit 2, which is typically not known. Thus, we assume for simplicity that \(n_{g}^{q_{2}}=0\). Alternatively, as long as all the effects remain second order in the charge dispersion, one can also define an average charge dispersion as \(\bar{\epsilon}_{2}^{q_{2}}=\epsilon_{2}^{q_{2}}\sqrt{\int_{0}^{1}\mathrm{d}n_{ g}\cos^{2}(2\pi n_{g})}=\epsilon_{2}^{q_{2}}/\sqrt{2}\).

### Parity-Switch Induced Gate Errors

To achieve high-fidelity gate simulations, it is crucial to carefully select the Hamiltonian parameters in Eq. 7. We have described in Table 2 of the Methods section how these parameters were chosen so that high-fidelity gates are possible with arbitrary ratios of \(E_{J}/E_{C}\) of Qubit 2. We also note that, in general, we use parameters that closely resemble those of the implementation presented in Ref. [32]. Here, we examine the effect of a parity switch on the fidelity of a CZ gate and compare the perturbative results to a full simulation of the Hamiltonian in Eq. 7. The full numerical treatment also includes the ramping up and down of the pulse, as described in Eq. 9. The pulse parameters in Eq. 9 are set to \(\tau_{b}=2\sqrt{2}\sigma\) with \(\sigma=5\,\mathrm{ns}\), and the amplitude of the pulse \(A\) and \(\tau_{c}\) are optimized such that the gate fidelity is maximal. More details about the numerical simulations are given in Ref. [36].
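The chain from the Kraus operators (Eq. 21) through the process-fidelity formula with leakage (Eq. 22, with the trace term normalized by \(d\)) to the leading-order estimate can also be cross-checked numerically; a minimal sketch with an illustrative phase difference \(\delta\phi\) between the two parities:

```python
import numpy as np

def cphase_cps_fidelity(phi0, dphi, dP11):
    """Process fidelity of the parity-switch channel (cf. Eqs. 20-22), d = 4."""
    d = 4
    U = np.diag([1, 1, 1, np.exp(1j * phi0)])   # ideal CPHASE
    Us = [np.diag([1, 1, 1,
                   np.sqrt(1 - dP11 / 4) * np.exp(1j * (phi0 + s * dphi / 2))])
          / np.sqrt(2) for s in (+1, -1)]
    tr_term = sum(abs(np.trace(U.conj().T @ K)) ** 2 for K in Us) / d
    L = 1 - np.trace(U.conj().T @ sum(K @ K.conj().T for K in Us) @ U).real / d
    return (tr_term + 1 - L) / (d + 1)

dphi = 0.01   # illustrative conditional-phase difference between parities
F = cphase_cps_fidelity(np.pi, dphi, 0.0)
print(1 - F, 3 / 80 * dphi ** 2)   # infidelity matches the leading-order term
```

With `dphi = 0`, `dP11 = 0` the function returns exactly one, and the quadratic scaling of the infidelity with the phase error reproduces the \(3/80\,\delta\phi^2\) coefficient of the conditional-phase term.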
The parity of the whole system is then switched from one state to the other according to the charge dispersion given by Eqs. 3 and 4. Since in our analytical approach we approximate the realistic pulse shape of Eq. 9 with a square pulse (i.e., we neglect any dynamics during the ramping up and down of the flux pulse), we have to be careful when comparing the results to the numerical data. In particular, the pulse duration \(T\) in the numerics is in general different from the gate duration \(t_{g}\) we have defined in our analytical derivation, since the full duration \(T\) also includes the time needed to ramp the pulse up and down.

Figure 2: Effects of a parity switch on the fidelity of a CZ gate. **a** Analytical predictions (red circles) of the conditional-phase difference of the CZ gate between the two parities. The theory predictions from Eqs. 15 and 17 are compared to the phase extracted from a full numerical simulation of the gate (blue crosses), with the Hamiltonian parameters from Table 2. The \(x\)-axis represents the \(E_{J}/E_{C}\) ratio (bottom) and the second-excited-state charge dispersion (top) of the qubit with the higher frequency (Qubit 2). The inset displays the relative absolute error between the analytical predictions and the numerical results. **b** Gate infidelity after a parity switch on Qubit 2, where the theory predictions (red circles) only take into account the effect on the conditional phase, according to Eq. 24. The green lines are estimates of the upper bound for the infidelity contribution of quasiparticle-induced decoherence on the two-qubit gate system, with the parity lifetimes \(T_{P}\) measured by different references: [1] Risté, D., _et al._ (2013) [54], [2] Diamond, S., _et al._ (2022) [33] and [3] Kurter, C., _et al._ (2022) [56]. More specifically, references [1] and [2] have reported the values \(T_{P}=1.25\) ms and \(T_{P}=2.5\) ms, respectively. In reference [3], parity-switching times between 1 ms and 1.5 s were demonstrated. For this reference, we have used the value of \(T_{P}=20\) ms, which approximately corresponds to the median parity-switching time of all the samples. The grey region is the region in which errors due to unwanted transitions during the gate operation are more prominent and the effect of a parity switch is negligible. It therefore represents the lower bound of the infidelity, which is largely independent of the ratio \(E_{J}/E_{C}\).

When we compare our analytical results with numerical simulations, we first extract the number \(n\) of Rabi oscillations from the simulated data by monitoring the population of the \(|002\rangle\) state during the gate operation. After \(n\) is determined, the analytical gate time is adjusted such that \(t_{g}=n\pi/\Omega\). The effective gate time obtained in this manner does not differ significantly from the duration of the flat part of the pulse, typically by less than 5 ns. The conditional phase of the gate in the simulations is obtained by propagating the state \(|\psi(t=0)\rangle=\frac{1}{2}(1,1,1,1)^{\mathrm{T}}\) and extracting the conditional phase of the \(|11\rangle\) state. Fig. 2a compares the analytical results of Eqs. 15 and 17 to the numerically obtained values and confirms that the two approaches agree to a good accuracy. Since the magnitude of the charge dispersion in the numerical analysis is the same as in the analytical treatment, the error stems completely from the approximations made in evaluating \(\partial\phi/\partial\alpha_{q_{2}}\). Furthermore, Fig. 2a also clearly shows that the parity-switching-induced shift in the conditional phase scales exponentially with the ratio \(E_{J}/E_{C}\). This, in turn, is due to the scaling of the charge dispersion in Eq. 4. In Fig.
2b, we show numerical data of the full gate fidelity after the parity switch and, as a comparison, the corresponding analytical result calculated using Eq. 24. The process fidelity in the numerical example is obtained by propagating a number of input states, reconstructing the effective superoperator of the gate from these simulations and subsequently using Eq. 22 to obtain the process fidelity of the gate. We observe that the infidelity of the numerical simulation flattens for \(E_{J}/E_{C}\gtrsim 75\), which is due to other errors in the gate implementation, such as leakage transitions during the ramping up and down of the pulse. In this region (shown in grey) the effect of a parity switch is not seen, since it is simply too small compared to other errors. On the other hand, at lower values of \(E_{J}/E_{C}\), the numerical results overlap with the parity switching error predicted by the shift in the conditional phase. While clearly demonstrating the magnitude of the error, this result also shows that the leakage error contribution in Eq. 23 is negligible in the perturbative regime. The gate durations in Fig. 2 are typically 45 ns \(\lesssim t_{g}\lesssim 60\) ns, with \(\tau_{c}\sim t_{g}\). Additionally, Fig. 2b compares the magnitude of the error due to quasiparticle-related decoherence to the parity-switch-induced error described in this work. Since the quasiparticle-induced characteristic decay times \(T_{1}\) and \(T_{\phi}\) depend on a large number of parameters and there are two possible quasiparticle generating mechanisms [see Fig. 1b] [16], we only provide an upper bound based on the parity switching times observed in the references cited in the caption of Fig. 2.
This upper bound is determined by noting that in the computational subspace [55] \[\Gamma_{00}^{+-}+\Gamma_{11}^{+-}+\Gamma_{01}^{+-}+\Gamma_{10}^{+-}\approx 2/T _{P}, \tag{25}\] where the rates \(\Gamma_{ij}^{+-}\) represent the transition rates between states \(|i^{\pm}\rangle\rightarrow|j^{\mp}\rangle\) in different parity manifolds. Since amplitude-damping noise has a larger effect on the infidelity compared to pure dephasing (see Table 1), we furthermore assume the worst-case scenario in which each quasiparticle parity switching event produces a \(T_{1}\) decay, so that \(1/T_{1}^{qp}=\Gamma_{01}^{+-}+\Gamma_{10}^{+-}\approx 2/T_{P}\). This expression for \(T_{1}^{qp}\), together with experimentally measured values of \(T_{P}\), provides an approximate upper bound for the magnitude of the effect of the decoherence. Even though each green line in Fig. 2b uses a constant measured \(T_{P}\), independent of \(E_{J}/E_{C}\), the plotted infidelity contribution is not constant, due to the varying gate duration \(T=2\tau_{b}+\tau_{c}\). Interestingly, we observe from Fig. 2b that, depending on the ratio \(E_{J}/E_{C}\) and the parity lifetime \(T_{P}\), the contribution of parity switching to the infidelity of the two-qubit gate system can dominate the contribution from quasiparticle-induced relaxation. Note that the two-qubit gates are also typically the noisiest building blocks of a quantum algorithm [7, 8, 11, 67]. We have also provided additional numerical results in the Supplementary Information, showing that the effect of a parity switch on the leakage becomes the main contribution to the infidelity at shorter gate times. We have additionally included in the Supplementary Information a comparison with the infidelity of a two-qubit gate due to \(1/f\)-type charge noise, in order to compare the two error sources whose contributions scale with the charge dispersion of the transmon.
However, unlike the charge dispersion of the transmon which is maximized when \(\cos(2\pi n_{g})=1\), the low-frequency charge noise decoherence rate is maximal at \(\sin(2\pi n_{g})=1\). This means that these two errors are mutually exclusive, i.e. if we were hypothetically able to tune \(n_{g}\) to the value where the charge dispersion of the transmon is equal to zero, that point corresponds to the maximal decay rate due to low-frequency charge noise[12]. Nonetheless, in the comparison we have assumed in both the charge-parity switching analysis and the low-frequency charge noise analysis that we are at the noise hotspot, i.e. the value of \(n_{g}\) where the effects are maximal, thus slightly overestimating both errors. We observe that at lower \(E_{J}/E_{C}\) the charge-parity switching error is dominant and vice versa. The crossover between the infidelity due to the charge-noise-induced dephasing in the computational subspace and that due to parity switching occurs at \(E_{J}/E_{C}\sim 80\), which corresponds to a gate infidelity of \(1-\mathcal{F}\sim 10^{-8}\), i.e. the charge-parity switching error is dominant for \(E_{J}/E_{C}\lesssim 80\). However, due to the increased charge dispersion in the higher-excited levels of the transmon and the utilization of the second-excited state, we have additionally analyzed the effect of the second-excited-state charge-noise-induced dephasing, which was found to be much larger compared to the effects in the computational subspace. In this case, the crossover between the charge-parity error, which was again dominant at lower \(E_{J}/E_{C}\), was found to occur at \(E_{J}/E_{C}\sim 65\) and infidelities on the order of \(1-\mathcal{F}\sim 10^{-6}\). We note here that our analysis overestimates the effect of the charge noise in a realistic scenario by neglecting the time correlations in the noise and should be treated as an upper bound of the magnitude of the effect. 
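Both of the compared error sources scale with the transmon charge dispersion, which can be estimated from the standard large-\(E_{J}/E_{C}\) asymptotic expression of Koch et al. (2007). The following sketch (not part of the paper's numerics) illustrates the exponential suppression of the dispersion with \(E_{J}/E_{C}\) and the much larger dispersion \(\epsilon_{2}\) of the second excited state:

```python
import math

def charge_dispersion(m, ej_over_ec, ec=1.0):
    """Asymptotic charge dispersion eps_m of transmon level m (Koch et al. 2007),
    returned in units of E_C by default."""
    r = ej_over_ec
    return (ec * (-1) ** m * 2 ** (4 * m + 5) / math.factorial(m)
            * math.sqrt(2.0 / math.pi) * (r / 2.0) ** (m / 2.0 + 0.75)
            * math.exp(-math.sqrt(8.0 * r)))

# eps_2 dominates eps_1, and both are exponentially suppressed with E_J/E_C
eps1_50 = charge_dispersion(1, 50)
eps2_50 = charge_dispersion(2, 50)
eps2_80 = charge_dispersion(2, 80)
```

The ratio \(|\epsilon_{2}/\epsilon_{1}|=8\sqrt{E_{J}/2E_{C}}\) that follows from this expression makes explicit why the gate-error analysis keeps only the \(\epsilon_{2}\) terms, while \(\epsilon_{1}\) is retained for the idling analysis below.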
These results demonstrate that the parity-switching error is the dominant charge-dispersion-related error of the two-qubit gate in the regime of \(E_{J}/E_{C}\lesssim 65\).

### Idling Interaction

One of the main benefits of the tunable coupler architecture is the ability to also suppress any residual interactions between the qubits during idling [34, 35, 43], with unwanted ZZ coupling strengths demonstrated to be below 1 kHz [32]. Here, we show that an uncontrolled parity switching event sets a lower bound on the minimum achievable unwanted ZZ interaction in such systems. While the conditional phase during the gate operation is obtained by transferring the population outside of the computational subspace, the residual interactions during the idling period are a consequence of dispersive shifts of the energy levels of the computational states, as described in Eq. 8. However, in the presence of parity switches in all three transmons of the tunable-coupler setup, the ZZ-coupling rate can be tuned to _exactly_ zero only for one of the eight equally likely parities. By distributing the errors equally across all parities and assuming the charge dispersions are small, the idling coupling strength (depending on the three individual parities) can be written as \[\tilde{\zeta}_{\text{ZZ}}(P_{q_{1}},P_{c},P_{q_{2}})\simeq\sum_{i\in\{q_{1},c,q_{2}\}}\frac{P_{i}}{2}\left(\frac{\partial\zeta_{\text{ZZ}}}{\partial\alpha_{i}}\epsilon_{2}^{i}+\frac{\partial\zeta_{\text{ZZ}}}{\partial\omega_{i}}\epsilon_{1}^{i}\right), \tag{26}\] with \(P_{i}\in\{-1,+1\}\,\forall i\), and the derivatives are evaluated at the idling point. While we have still assumed \(\epsilon_{2}\gg\epsilon_{1}\), we have also acknowledged that the first-excited-state charge dispersion and the derivatives \(\partial\zeta_{\text{ZZ}}/\partial\omega_{i}\) can in some cases be significantly larger compared to \(\partial\zeta_{\text{ZZ}}/\partial\alpha_{i}\). 
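A minimal sketch of evaluating the parity-dependent shift of Eq. 26 (with hypothetical numerical inputs; following the discussion above, the \(\epsilon_{1}\) term is weighted by \(\partial\zeta_{\text{ZZ}}/\partial\omega_{i}\)):

```python
TRANSMONS = ("q1", "c", "q2")

def parity_zz_shift(parity, dz_dalpha, dz_domega, eps2, eps1):
    """Charge-parity-dependent idling ZZ shift of Eq. 26.

    All arguments are dicts keyed by 'q1', 'c', 'q2'; parities are +/-1,
    derivatives and dispersions in matching angular-frequency units."""
    return sum(parity[k] / 2.0 * (dz_dalpha[k] * eps2[k] + dz_domega[k] * eps1[k])
               for k in TRANSMONS)

# Example with made-up numbers: flipping every parity flips the sign of the shift
dza = {"q1": 1.0, "c": 0.5, "q2": 2.0}
dzw = {"q1": 0.1, "c": 0.1, "q2": 0.1}
e2 = {"q1": 1e-4, "c": 1e-6, "q2": 2e-4}
e1 = {"q1": 1e-6, "c": 1e-8, "q2": 2e-6}
plus = parity_zz_shift({k: +1 for k in TRANSMONS}, dza, dzw, e2, e1)
minus = parity_zz_shift({k: -1 for k in TRANSMONS}, dza, dzw, e2, e1)
```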
Unlike the gate fidelity results, the above expression is first order in the charge dispersion. However, in order to evaluate the derivatives in Eq. 26, or the general value of the coupling rate \(\zeta_{\text{ZZ}}\) with fixed Hamiltonian parameters (without including parity effects), it is necessary to go to fourth-order perturbation theory in the coupling strengths, as was done in Refs. [32, 35]. While the complete fourth-order expression is impractical for obtaining any analytical insight, by further assuming the hierarchy of the system parameters \(\Sigma_{ic}\gg\Delta_{ic}\gg g_{ic}\gg g_{12}\), where \(\Sigma_{ij}=\omega_{i}+\omega_{j}\) and \(\Delta_{ij}=\omega_{i}-\omega_{j}\), for \(i,j\in\{q_{1},c,q_{2}\}\), the cumbersome expressions can be significantly simplified into [35] \[\zeta_{\text{ZZ}}\approx\frac{2\left[(\alpha_{q_{1}}+\alpha_{q_{2}})\tilde{g}_{01,10}^{2}-2\nu\tilde{g}_{01,10}(2\alpha_{q_{1}}\alpha_{q_{2}}+(\alpha_{q_{1}}-\alpha_{q_{2}})\Delta_{q_{1}q_{2}})\right]}{(\Delta_{q_{1}q_{2}}+\alpha_{q_{1}})(\Delta_{q_{1}q_{2}}-\alpha_{q_{2}})}+2\nu^{2}\left[4\alpha_{c}+\frac{(\alpha_{q_{1}}+\alpha_{q_{2}})\Delta_{q_{1}q_{2}}^{2}}{(\Delta_{q_{1}q_{2}}+\alpha_{q_{1}})(\Delta_{q_{1}q_{2}}-\alpha_{q_{2}})}\right], \tag{27}\] where \(\nu=g_{q_{1}c}g_{q_{2}c}/(2\Delta_{q_{1}c}\Delta_{q_{2}c})\sim 10^{-3}\). The difference between the derivatives obtained from Eq. 27 and the numerically exact result is shown in Fig. 3a, from which we observe that a qualitatively good agreement can be found in the vicinity of the two idling frequencies of the system (vertical black dashed lines). We therefore evaluate the error between the analytical formulas and the numerically exact expression in more detail below.

Figure 3: Comparison of the analytical formulas with numerical results obtained via exact diagonalization in the idling regime. **a** Largest derivatives in Eq. 26 obtained numerically (solid lines) and from Eq. 27 (filled circles) using the parameters from Table 2 and \(\alpha_{q_{2}}=-270\cdot h\) MHz. The \(x\)-axis represents the coupler frequency, with the two idling frequencies denoted with black dashed vertical lines. The filled circles correspond to the derivatives obtained using Eq. 27. **b** Comparison of numerical and analytical results for the root-mean-squared coupling strength at the idling point (defined in Eq. 28), with the bar graph displaying the numerical values of the coupling for the 8 different parity states for the highlighted data point. The parameters of the simulation are chosen identically as in Fig. 2 and are listed in the Methods section, more specifically in Table 2. The parameters for the bar plot correspond to the parameters in panel a.

Since the system has eight uncontrolled, rapidly (compared to experimental timescales, as in Eq. 19) switching parity states, we define a parity-averaged idling interaction strength by averaging over all parities: \[\langle\tilde{\zeta}_{\text{ZZ}}^{2}\rangle_{\text{CPS}}=\frac{1}{2^{3}}\sum_{P_{q_{1}},P_{c},P_{q_{2}}\in\{-1,+1\}}\left[\tilde{\zeta}_{\text{ZZ}}(P_{q_{1}},P_{c},P_{q_{2}})\right]^{2}. \tag{28}\] This definition is the parity-averaged mean-square of the idling ZZ strength defined in Eq. 26. Parity-averaged coupling strengths are shown in Fig. 3b, where we observe a good agreement between the analytical result and the numerical data. The possible values of the quantity \(\tilde{\zeta}_{\text{ZZ}}(P_{q_{1}},P_{c},P_{q_{2}})\) for the third data point [also shown in panel a] are visualized in the bar graph. The bar graph demonstrates that the residual coupling strength depends mostly on the parity of Qubit 2 (\(q_{2}\)), with small corrections due to the parity of Qubit 1 (\(q_{1}\)). 
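The simplified formula of Eq. 27 and the parity average of Eq. 28 can be sketched as follows (a hedged illustration, not the paper's code; `shifts` holds the per-transmon \(\frac{1}{2}(\partial_{\alpha}\zeta\,\epsilon_{2}+\partial_{\omega}\zeta\,\epsilon_{1})\) terms of Eq. 26, with all quantities in the same angular-frequency units):

```python
from itertools import product

def zeta_zz(a1, a2, ac, d12, nu, g):
    """Simplified fourth-order ZZ rate, Eq. 27 (a_i: anharmonicities,
    d12: qubit-qubit detuning, nu: virtual-coupling ratio, g: g~_{01,10})."""
    den = (d12 + a1) * (d12 - a2)
    direct = 2.0 * ((a1 + a2) * g ** 2
                    - 2.0 * nu * g * (2.0 * a1 * a2 + (a1 - a2) * d12)) / den
    virtual = 2.0 * nu ** 2 * (4.0 * ac + (a1 + a2) * d12 ** 2 / den)
    return direct + virtual

def rms_parity_zz(shifts):
    """Root of the parity-averaged mean-square ZZ strength, Eq. 28."""
    keys = list(shifts)
    squares = [sum(p * shifts[k] for p, k in zip(ps, keys)) ** 2
               for ps in product((-1.0, 1.0), repeat=len(keys))]
    return (sum(squares) / len(squares)) ** 0.5
```

Because the eight parity configurations are equally weighted, the cross terms cancel in the average and Eq. 28 reduces to \(\sqrt{\sum_{i}s_{i}^{2}}\) over the individual shifts, which is why a single dominant transmon (here Qubit 2) essentially sets the residual coupling.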
The parity of the coupler in this set of parameters is largely irrelevant due to the high ratio of \(E_{J_{c}}/E_{C_{c}}\approx 250\), but this is not always the case. For example, considering the gate implementation from Ref. [37], the ratio of \(E_{J_{c}}/E_{C_{c}}\) for the coupler is much smaller and therefore, the coupler parity in this implementation has a much larger effect on the strength of the residual ZZ coupling. The above results therefore present a fundamental limit on the magnitude of unwanted interactions that can be achieved in the tunable coupler architecture. However, the overall effect on an algorithm is more complex as it depends on the duration of the execution (since the interaction is always "on"), which in turn depends on the coherence times. We estimate that this error becomes relevant if the coherence times are of the order of \(1/\sqrt{\langle\tilde{\zeta}_{\text{ZZ}}^{2}\rangle_{\text{CPS}}}\), since a significant _unwanted_ conditional phase is accumulated if the algorithm duration \(\Delta T\) (which is determined by the coherence times) is long enough such that \(\Delta T\cdot\sqrt{\langle\tilde{\zeta}_{\text{ZZ}}^{2}\rangle_{\text{CPS}}}\sim 1\). For \(E_{J}/E_{C}\sim 50\), the coherence time (or algorithm duration) needed to observe the parity switching induced residual coupling strength is on the order of 1 ms. Note that current fabrication processes are indeed already approaching this value [44, 45].

### Optimal Qubit Parameters

Having established the magnitude of the parity-induced error on a two-qubit gate, we have shown that this error can be mitigated by increasing the \(E_{J}/E_{C}\) ratio of the transmon. However, there are other error sources present in such architectures [36], and while increasing the \(E_{J}/E_{C}\) ratio will suppress the parity-switching errors, it may also increase the contribution of other possible error sources. 
Therefore, in order to find better parameters for future transmon-based quantum computers, we must evaluate the contributions of all errors affecting the system. In particular, we estimate optimal regions for the qubit parameters \(E_{J}\) and \(E_{C}\), where the errors contributing to the gate and state preparation infidelities are minimized. We consider a number of different error sources relevant to superconducting qubits:

1. \(T_{1}\) decay due to the coupling to a bath of two-level systems [69, 70, 71, 72, 73, 74, 75].
2. \(T_{\phi}\) pure dephasing due to the coupling to magnetic flux noise [69, 76, 77, 78].
3. Leakage affecting single-qubit gates due to low anharmonicity [79].
4. State preparation errors due to finite-temperature heating effects [13, 14, 15], without the presence of active reset.
5. Errors in the two-qubit gate operation due to parity switch effects that are analyzed in this manuscript.

In Methods, we show how the above error sources scale with the transmon Hamiltonian parameters \(E_{J}\) and \(E_{C}\). We have not included any errors related to the control and calibration of the individual gates, as such errors do not explicitly depend on the qubit parameters and their inclusion, therefore, would not significantly alter the presented results. In other words, we assume that perfect control and calibration are possible. Similarly, measurement errors are present, but have no explicit dependence on \(E_{J}\) and \(E_{C}\). For simplicity, we again consider the same tunable coupler system as in the previous section, but arranged in a square grid, as pictured in Fig. 4a. In order to avoid frequency crowding issues with such a connectivity [80], the qubits in the array are divided into low- and high-frequency transmons. In this case, it is therefore sufficient to consider only a single pair of qubits with different parameters. 
Moreover, we assume that the qubits are always detuned by approximately one anharmonicity and the anharmonicity of the qubits is similar. We have found that these two conditions are sufficient to be able to perform high-fidelity two-qubit gates, as described in Table 2. Since the parameters of the transmons are related by this condition, we can parametrize the whole system in terms of \(E_{J}\) and \(E_{C}\) of the higher-frequency transmon. Some examples of these parameter values that were obtained from experimental demonstrations of the diabatic CZ gate are plotted in Fig. 4b. In order to quantify the performance of an algorithm execution with a specified pair of parameters \(E_{J}\) and \(E_{C}\) in mind, we define a performance metric \(\mathcal{P}\) which we will then maximize. We further define \(\mathcal{P}\) as one minus a weighted sum of the infidelity contributions of all the relevant errors listed in Table 1. This sum can be written as \[1-\mathcal{P}=\sum_{i=T_{1},T_{\phi},\text{parity}}w_{\text{TQG},i}(1-\mathcal{ F}_{\text{TQG},i})+\sum_{i=q_{1},q_{2}}\sum_{j=T_{1},T_{\phi},\text{leak.}}w_{\text{SQ},i,j}(1- \mathcal{F}_{\text{SQG},i,j})+\sum_{i=q_{1},q_{2}}w_{\text{SP},i}(1-\mathcal{ F}_{\text{SP},i}), \tag{29}\] where the summation runs across all infidelity contributions, or more explicitly for the simple algorithm pictured in Fig. 
4c \[1-\mathcal{P} =\frac{2}{5}(\Gamma_{1}^{q_{1}}+\Gamma_{1}^{q_{2}})t_{\text{TQG}} +\frac{1}{5}(\Gamma_{\phi}^{q_{1}}+\Gamma_{\phi}^{q_{2}})t_{\text{TQG}}+\frac{ 3}{80}\left(\frac{t_{\text{TQG}}}{2\hbar}\epsilon_{2}^{q_{2}}\right)^{2}\] \[+2\left[\frac{1}{3}\Gamma_{1}^{q_{1}}t_{\text{SQG}}+\frac{1}{6} \Gamma_{\phi}^{q_{1}}t_{\text{SQG}}\right]+\frac{1}{3}P_{\text{leak.}}^{q_{1} }+2\left[\frac{1}{3}\Gamma_{1}^{q_{2}}t_{\text{SQG}}+\frac{1}{6}\Gamma_{\phi} ^{q_{2}}t_{\text{SQG}}\right]+\frac{1}{3}P_{\text{leak.}}^{q_{2}}\] \[+\left[\frac{10(t_{\text{TQG}}+2t_{\text{SQG}})}{T_{1,0}}\right] \left(P_{[1]}^{q_{1}}+P_{[1]}^{q_{2}}\right). \tag{30}\] The first term in Eq. 29 and first line in Eq. 30 correspond to the errors of the two-qubit gate, the second term and line to single-qubit gate errors and the last to the state preparation error. A similar fidelity approximator was defined in Ref. [7]. We have introduced additional weights in the sum, in order to account for the relative number of single and two-qubit gates, and also to correctly take into account the fact that the error in state preparation occurs only once, while the gate error is significantly amplified after a number of applications of the operation. Since the position of the maximum of \(\mathcal{P}\) within the \((E_{C},E_{J})\) landscape, from which we find the optimal parameter range, depends exclusively on the relative values of the weights \(w_{i,j}\), we proceed by assuming that all error terms associated with the two-qubit gate are assigned weights of \(w_{\text{TQG}}=1\). As pictured in Fig. 4c, we analyze a circuit where we perform four single-qubit gates per each two-qubit gate, with half of those single-qubit gates being \(\pi\) rotations which are more susceptible to leakage. Note that this ratio of single to two-qubit gates arises naturally with the introduction of randomized compiling into the algorithm [67]. 
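The weighted sum of Eq. 30 can be transcribed directly (a sketch; decay rates \(\Gamma\) in 1/s, times in s, and \(\epsilon_{2}^{q_{2}}\) given as an angular frequency, i.e. already divided by \(\hbar\)):

```python
def circuit_infidelity(g1, g2, gphi1, gphi2, t_tqg, t_sqg,
                       eps2_q2, p_leak_q1, p_leak_q2,
                       p1_q1, p1_q2, t1_ref):
    """Evaluate 1 - P from Eq. 30.

    g*, gphi*: T1 and pure-dephasing rates of the two qubits (1/s);
    eps2_q2: second-excited-state charge dispersion as an angular frequency;
    p_leak_*: single-qubit-gate leakage probabilities; p1_*: thermal |1>
    populations; t1_ref: reference coherence time T_{1,0}."""
    two_qubit = ((2 / 5) * (g1 + g2) * t_tqg
                 + (1 / 5) * (gphi1 + gphi2) * t_tqg
                 + (3 / 80) * (t_tqg * eps2_q2 / 2) ** 2)
    single_qubit = (2 * ((1 / 3) * g1 * t_sqg + (1 / 6) * gphi1 * t_sqg)
                    + p_leak_q1 / 3
                    + 2 * ((1 / 3) * g2 * t_sqg + (1 / 6) * gphi2 * t_sqg)
                    + p_leak_q2 / 3)
    state_prep = 10 * (t_tqg + 2 * t_sqg) / t1_ref * (p1_q1 + p1_q2)
    return two_qubit + single_qubit + state_prep
```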
The weights \(w_{\text{SQ},T_{1,\phi}}\) for the decoherence-induced infidelity during a single-qubit gate therefore have a value of \(2\) (since a \(\pi\) and \(\pi/2\) rotation are applied) per qubit, while the leakage error \(w_{\text{SQG},\text{leak.}}=1\) per each qubit. Lastly, the thermal state preparation error is weighted so that \(w_{\text{SP}}=10(t_{\text{TQG}}+2t_{\text{SQG}})/T_{1,0}\), since this quantity is the inverse of the approximate number of single and two-qubit gates that we can perform within a specified coherence time \(T_{1,0}\) at the reference point. Therefore, the performance metric \(\mathcal{P}\) is an approximator for the process fidelity of the simple circuit displayed in Fig. 4c. The values of \(1-\mathcal{P}\) from Eq. 30 for certain parameters are plotted in Fig. 4d. Due to the fact that the maximum of \(\mathcal{P}\) is an optimal solution only for the circuit pictured in Fig. 4c, we have plotted an optimal region of parameters defined by the 10th percentile of the points with the lowest value of \(1-\mathcal{P}\). This optimal region is also contrasted with the optimal region obtained with a density matrix simulation of the same circuit, and the good agreement between the curves shows that \(\mathcal{P}\) is a valid performance metric. The color scale corresponding to \(1-\mathcal{P}\) has four distinct less-favorable regions, corresponding to the error sources in Table 1. For small \(E_{C}\) and, consequently, low anharmonicity \(\alpha_{q_{2}}\), the overall fidelities are low due to relatively large errors caused by leakage during single-qubit gates. For small \(E_{J}\) and large \(E_{C}\), \(E_{J}/E_{C}\) has a low value and therefore the system experiences relatively large errors arising from parity switching events. If both \(E_{J}\) and \(E_{C}\) are large, the coherence times are short and decoherence is, therefore, the dominating source of infidelity. 
Moving perpendicularly to the constant-frequency contours towards lower values of \(E_{C}\) and \(E_{J}\) results in a lowered frequency and thus increased errors caused by thermal excitations. On the other hand, when considering near-future transmons with coherence times close to \(0.5\,\mathrm{ms}\)[44, 45] in Fig. 4d, we observe that the optimal region is shifted towards larger values of \(E_{C}\), and is even limited by the parity-switching effects in the bottom right corner, thus demonstrating the importance of this effect in future QPU design, when two-qubit gate infidelities drop below \(10^{-3}\). By comparing the data in Fig. 4b and d, we observe that the implementation by Xu, H., _et al._ (2021) [33] is close to the parity-switching induced error region. Since the pure dephasing times of the system were not reported, we are not able to assess the relative contribution of the parity switching error to this gate implementation. However, using the results shown in Eqs. 17 and 23, we observe that the parity-switching-error induced infidelity for the parameters is estimated to be \(1-\mathcal{F}\approx 2\cdot 10^{-3}\). As a comparison, the reported \(T_{1}\) decay times (measured in the idling configuration) in the reference were \(T_{1}^{q_{1}}=20.8\,\mathrm{\mu s}\) and \(T_{1}^{q_{2}}=28.8\,\mathrm{\mu s}\)[33]. Together with the reported gate time, these decay times correspond to an infidelity contribution of approximately \(6\cdot 10^{-4}\), as given by the expression in Table 1. These results show that for this specific implementation of the diabatic CZ gate, the described effects of the parity switches are comparable to, and possibly even greater than, the \(T_{1}\) decay induced infidelity. The shifts of the optimal region with increasing coherence times seen in Fig. 4d mean that the qubit parameters must be adapted to the currently achievable coherence times. Fig. 
5a shows how the performance metric \(\mathcal{P}\) increases with the achievable coherence properties of the system, provided that the parameters \(E_{J}\) and \(E_{C}\) are adapted to the coherence times. On the other hand, Fig. 5b displays the value of the function \(\mathcal{P}\) without changing \(E_{J}\) and \(E_{C}\). Although the performance metric \(\mathcal{P}\) on panels a and b is initially largely coherence limited, the assumption of fixed parameters \(E_{J}\) and \(E_{C}\) in the simulations of Fig. 5b shows that simply increasing the coherence times does not necessarily give better fidelities if the effects of other sources of error are not taken into account. In this case, the parity switching error becomes the dominant error source, as seen from the upper panel in Fig. 5b. It is also important to mention what other aspects of successfully operating a transmon tunable coupler based quantum computer were not included in the presented analysis.

* We are neglecting any possible cross-talk effects between next-nearest neighbors.
* Idling errors were discussed in this work, but not included in this analysis, as their contribution depends significantly on the specific algorithm being implemented. Taking also this effect into account, the area with low \(E_{J}/E_{C}\) is even less favorable.
* Other sources of decoherence are expected to have smaller contributions and therefore do not significantly affect the findings presented here.
* The TLS environment is random, meaning that the scaling shown in Table 1 only holds for the average of a large number of qubits. Additionally, the \(T_{1}\) decay rate is also heavily influenced by the design, i.e., the exact geometry of the capacitor pads of the transmon, meaning that the presented results are only valid for a comparison of qubits with similar designs.
* Recently, two-qubit gates with tunable couplers idled below the qubit frequencies have been demonstrated [37, 38]. In such implementations, the presence of thermal coupler excitations limits the gate performance at lower qubit frequencies in a more complex manner. Our analysis in the low frequency regime therefore only holds for the original tunable coupler proposals, where the coupler is idled above the computational transmons.

Figure 4: Determining optimal transmon parameters. **a** Schematic representation of the square grid architecture, with high- (dark blue) and low-frequency (light blue) transmons, connected via tunable couplers (green). **b** Values of \(E_{J}\) and \(E_{C}\) of Qubit 2 (\(q_{2}\)) from different experimental implementations of the tunable coupler transmon architecture. Both parameters are extracted from the reported \(\omega_{q_{2}}\) and \(\alpha_{q_{2}}\), and therefore the points are only an approximation [46]. The annotations refer to the following references: [1] Collodo, M. C., _et al._ (2020) [49], [2] Xu, Y., _et al._ (2020) [41], [3] Sung, Y., _et al._ (2021) [32], [4] Wu, Y., _et al._ (2021) [11], [5] Xu, H., _et al._ (2021) [33] and [6] Google Quantum AI (2022) [8]. The black lines represent constant ratios of \(E_{J}/E_{C}\) and the red lines correspond to contours of constant qubit frequency \(\omega_{q_{2}}\). **c** Schematic representation of the circuit used to infer the weights in Eqs. 29 and 30, with the state preparation pictured on the left (in purple), single-qubit \(\pi\) and \(\pi/2\)-rotations in green and blue respectively, and the two-qubit CZ gate in orange. \(N\) is the number of times the pictured circuit (without the state preparation) is repeated before measurement and therefore an integer determined by the reference coherence time. In our case, we consider \(N=\left\lfloor\frac{T_{1,0}}{10(t_{\text{TQG}}+2t_{\text{SQG}})}\right\rfloor\). **d** The function \(1-\mathcal{P}\) defined in Eq. 30 plotted for different values of the second (higher frequency) transmon's \(E_{J}\) and \(E_{C}\). We consider a single-qubit gate implemented with a Gaussian DRAG pulse with a duration of \(t_{\text{SQG}}=16\,\text{ns}\), a two-qubit gate duration of \(t_{\text{TQG}}=50\,\text{ns}\) and three different reference coherence times indicated on top of each panel. For all three cases, the reference \(T_{\phi,0}=T_{1,0}\) at \(E_{J}=12\cdot h\,\text{GHz}\) and \(E_{C}=0.2\cdot h\,\text{GHz}\) for Qubit 2 (\(q_{2}\)) with the parameters of Qubit 1 (\(q_{1}\)) given in Table 2. All the parameters (transmon parameters and decay times) are scaled according to the different values of \(E_{J}\) and \(E_{C}\), and for each qubit individually, as described in Table 2. The lighter green contour marks the region in the plot with the lowest values of \(1-\mathcal{P}\), defined by the 10th percentile of the plotted values. The darker green contour is obtained with a density matrix simulation of the circuit from panel c with the same errors, but instead of evaluating the function \(1-\mathcal{P}\), it is obtained by minimizing the infidelity of the state before measurement.

## Discussion

We have provided a novel framework for the optimization of circuit parameters that can be used to guide the future design of transmon-based quantum computers. Our findings reveal the presence of a distinct global performance peak within the \(E_{J}\) and \(E_{C}\) parameter space, which has not been identified before. Moreover, our optimization procedure can be straightforwardly extended to more error sources, provided that the scaling of the error as a function of the system parameters is known analytically, or the infidelity contribution can be evaluated numerically. The latter is typically realistic as long as the error is sufficiently local, i.e., it depends only on the parameters of a handful of transmons at most. 
While we have based our analysis on transmon qubits connected via tunable couplers, the same principles, albeit with different error sources, can be applied to different types of qubits [2] or co-design chips [81]. Additionally, more parameters than just \(E_{J}\) and \(E_{C}\) can be optimized, e.g., also the gate durations can be considered as free parameters since they are realistically easy to adjust in experiments. The limiting factor here is the set of errors for which the analytical behavior is unknown and for which numerical interpolation in a large parameter space is too demanding. Additionally, we have established how parity switching affects the commonly implemented tunable-coupler mediated diabatic CPHASE gate in a transmon based quantum computer, both analytically and numerically. We have shown that the parity switching error can be the main quasiparticle-related error source of the two-qubit gate. Moreover, we have demonstrated that the experimental implementation of the gate presented in Ref. [33] may have a comparable, if not larger, contribution of parity-switching errors compared to all \(T_{1}\) decay mechanisms. While the tunable-coupler-based diabatic CPHASE gate is more relevant due to its implementation in leading large-scale experiments [7, 8, 10, 11], we believe that the effects described in this manuscript should be considered in any current or future transmon-based quantum gate which utilizes higher-excited states. One of the primary anticipated advantages of incorporating tunable couplers into the system is the potential for on-demand complete suppression of \(ZZ\)-type interactions among the qubits. However, our research in this context has revealed that the stochastic nature of parity switches imposes constraints on this proposition, practically establishing a lower bound on the achievable minimum \(ZZ\) coupling strength.

Figure 5: The relative contributions to the value of \(1-\mathcal{P}\) (in Eq. 29) from the error sources listed in Table 1 (top) and the value of the performance metric \(\mathcal{P}\) as a function of coherence times, assuming \(T_{\phi,0}=T_{1,0}\) at the reference point (bottom). Here we considered the same reference point as used in Fig. 4d. **a** A well-designed system, where the parameters \(E_{J}\) and \(E_{C}\) are optimally adapted so that \(\mathcal{P}\) is maximized for each value of the coherence time on the \(x\)-axis. **b** \(\mathcal{P}\) at various coherence times while keeping \(E_{J}\) and \(E_{C}\) fixed, corresponding to the values marked as point [5] in Fig. 4b.

The magnitude of this "always on" interaction should be an important consideration when running longer algorithms. More specifically, this effect becomes relevant if the algorithm is long enough to accumulate a considerable conditional phase due to the unwanted coupling strengths shown in Fig. 3b. Since current coherence times are approaching the 1 ms limit [44, 45], the residual idling strength can become relevant if the described effects are not taken into account in the design of the transmon parameters. One way of mitigating the parity-switching effects would be to attempt to tune the offset charge \(n_{g}\) to the point where both parity manifolds are degenerate. However, such a solution is not practical, since the environmental charge noise would result in a drift of the offset charge \(n_{g}\) as was demonstrated in Refs. [82, 83]. We further note that \(\cos(2\pi n_{g})=0\) is the low-frequency charge noise hotspot [12] in which the qubit frequency is maximally sensitive to the fluctuations of the offset charge. Therefore, the qubit is expected to have lower coherence times at this particular value for \(n_{g}\). It is important to acknowledge the potential influence of other noise mechanisms, not explicitly addressed in this study, on the optimal design parameters. 
Consequently, we have chosen to present a range of optimal parameters, rather than prescribing a single optimal value for \(E_{J}\) and \(E_{C}\). Our findings, as depicted in Fig. 4d, reveal that a two-fold variation in the reference coherence time only marginally adjusts the optimal parameter domain. This observation underscores the robustness of our results, suggesting that the presence of additional, potentially sub-leading noise mechanisms omitted from our simulations is unlikely to precipitate a drastic alteration in the presented outcomes.

## Methods

### The Schrieffer-Wolff Transformation

The Schrieffer-Wolff (SW) transformation used in this manuscript was first introduced in Ref. [84] and a similar transformation has been applied to the computational subspace of the two-qubit system in Refs. [34, 50]. The aim of the SW transformation in our case is not to diagonalize the system, but rather to decouple the coupler states from the computational transmons, enabling us to study only the reduced system; that is, we simplify the full Hamiltonian into a more tractable reduced model containing only the relevant states (those with a significant population). The reason for this is that the computational basis of the system is defined by the eigenstates of the system which have the maximum overlap with states of the form \(\ket{i_{q_{1}}0_{c}j_{q_{2}}}\), i.e. with the coupler always in the ground state. Any excitations of the coupler therefore lead to errors, so in order to analyze the ideal gate dynamics, we constrain ourselves only to the energy levels of the computational subspace and the second excited state used in the Rabi oscillation during the CPHASE gate operation. In general, a SW transformation is obtained by noting that any unitary operator can be written as \(\hat{U}=e^{\hat{S}}=\mathds{1}+\hat{S}+\frac{1}{2}\hat{S}^{2}+\ldots\), where \(\hat{S}\) is anti-hermitian, \(\hat{S}=-\hat{S}^{\dagger}\). 
Consequently, a unitary transformation of an arbitrary Hamiltonian \(\hat{H}\) can be expanded in terms of \(\hat{S}\) as \[\hat{U}\hat{H}\hat{U}^{\dagger}=\hat{H}+\left[\hat{S},\hat{H}\right]+\frac{1}{ 2}\left[\hat{S},\left[\hat{S},\hat{H}\right]\right]+\mathcal{O}(\hat{S}^{3}). \tag{31}\] As is typical in perturbation theory, we introduce the parameter \(\alpha\) for bookkeeping purposes, and split the full Hamiltonian into a diagonal part and two off-diagonal perturbations, so that \(\hat{H}=\hat{H}_{0}+\alpha\hat{V}_{1}+\alpha^{2}\hat{V}_{2}\). Additionally, we rewrite the operator \(\hat{S}\) as a first-order operator \(\hat{S}\rightarrow\alpha\hat{S}\), since if the perturbation is small, \(\hat{U}\) should be close to identity. In our case, \(\hat{H}\) is given in Eq. 7 and the first-order perturbation \(\hat{V}_{1}=-\sum_{i=q_{1},q_{2}}\hbar g_{ic}(\hat{a}_{i}^{\dagger}- \hat{a}_{i})(\hat{a}_{c}^{\dagger}-\hat{a}_{c})\) corresponds to the capacitive couplings of the two transmons (Qubits 1 and 2, \(q_{1,2}\)) to the coupler (\(c\)), the direct coupling between the qubits \(\hat{V}_{2}=-\hbar g_{q_{1}q_{2}}(\hat{a}_{q_{1}}^{\dagger}-\hat{a}_{q_{1}})( \hat{a}_{q_{2}}^{\dagger}-\hat{a}_{q_{2}})\) is a second-order perturbation, while \(\hat{H}_{0}\) is a sum of the three independent anharmonic oscillator Hamiltonians. This hierarchy is chosen due to the fact that in all practical scenarios \(g_{q_{1}q_{2}}\ll g_{q_{1}c},g_{q_{2}c}\)[34]. By plugging the ansätze into Eq. 31, and grouping the terms with the same order of \(\alpha\), we obtain \[\hat{U}\hat{H}\hat{U}^{\dagger}=\hat{H}_{0}+\alpha\left(\hat{V}_{1}+\left[ \hat{S},\hat{H}_{0}\right]\right)+\alpha^{2}\left(\hat{V}_{2}+\left[\hat{S}, \hat{V}_{1}\right]+\frac{1}{2}\left[\hat{S},\left[\hat{S},\hat{H}_{0}\right] \right]\right)+\mathcal{O}(\alpha^{3}). 
\tag{32}\] Looking at the first-order term, it is natural to choose \(\hat{S}\) such that \(\left[\hat{S},\hat{H}_{0}\right]=-\hat{V}_{1}\), i.e. so that we cancel any couplings to the coupler states up to lowest order. However, in order to do so and account for the couplings to the higher state correctly, we generalize the transformation from Refs. [34, 50] \[\hat{S}_{i} =\sum_{n_{q_{i}},n_{c}\in\{0,1\}}\sqrt{(n_{q_{i}}+1)(n_{c}+1)}\left[\frac{g_{q_{i}c}}{\Delta_{q_{i}c}+n_{q_{i}}\alpha_{q_{i}}-n_{c}\alpha_{c}}\left(\hat{\pi}^{n_{q_{i}}+1,n_{q_{i}}}_{q_{i}}\hat{\pi}^{n_{c},n_{c}+1}_{c}-\hat{\pi}^{n_{q_{i}},n_{q_{i}}+1}_{q_{i}}\hat{\pi}^{n_{c}+1,n_{c}}_{c}\right)\right.\] \[-\left.\frac{g_{q_{i}c}}{\Sigma_{q_{i}c}+n_{q_{i}}\alpha_{q_{i}}+n_{c}\alpha_{c}}\left(\hat{\pi}^{n_{q_{i}}+1,n_{q_{i}}}_{q_{i}}\hat{\pi}^{n_{c}+1,n_{c}}_{c}-\hat{\pi}^{n_{q_{i}},n_{q_{i}}+1}_{q_{i}}\hat{\pi}^{n_{c},n_{c}+1}_{c}\right)\right], \tag{33}\] \[\hat{S} =\hat{S}_{1}+\hat{S}_{2}. \tag{34}\] We have additionally defined \(\Delta_{q_{i}c}=\omega_{q_{i}}-\omega_{c}\), \(\Sigma_{q_{i}c}=\omega_{q_{i}}+\omega_{c}\) and the operators \(\hat{\pi}^{n,m}_{k}=|n\rangle\langle m|\), acting in the Hilbert space of \(k\in\{q_{1},c,q_{2}\}\). Since we have assumed the coupler remains in the ground state at all times, the effective Hamiltonian is defined on the set of states \(\{|0_{q_{1}}0_{c}0_{q_{2}}\rangle,|0_{q_{1}}0_{c}1_{q_{2}}\rangle,|1_{q_{1}}0_{c}0_{q_{2}}\rangle,|1_{q_{1}}0_{c}1_{q_{2}}\rangle,|0_{q_{1}}0_{c}2_{q_{2}}\rangle\}\). Additionally, we neglect any couplings outside of the subspace of interest; however, the resulting effective Hamiltonian still contains terms coupling the levels \(|000\rangle\leftrightarrow|101\rangle\) and \(|000\rangle\leftrightarrow|002\rangle\). These couplings are neglected in the rotating-wave approximation, as these transitions do not conserve the total occupation number.
By additionally setting the energy of the ground state to zero we arrive at the effective subspace Hamiltonian from Eq. 11. The perturbative parameter values are given by \[\tilde{\omega}_{q_{i}} =\omega_{q_{i}}+\frac{g^{2}_{q_{i}c}}{\Delta_{q_{i}c}}+\frac{2g^{2}_{q_{i}c}}{\Sigma_{q_{i}c}+\alpha_{q_{i}}}+\frac{g^{2}_{q_{i}c}}{\Sigma_{q_{i}c}}, \tag{35}\] \[\tilde{\alpha}_{q_{i}} =\alpha_{q_{i}}-\frac{2g^{2}_{q_{i}c}}{\Delta_{q_{i}c}+\alpha_{q_{i}}}+\frac{2g^{2}_{q_{i}c}}{\Sigma_{q_{i}c}+\alpha_{q_{i}}}+\frac{4g^{2}_{q_{i}c}}{\Sigma_{q_{i}c}+\alpha_{q_{i}}}+\frac{g^{2}_{q_{i}c}}{\Sigma_{q_{i}c}+2\alpha_{q_{i}}},\] (36) \[\tilde{g}_{01,10} =g_{q_{1}q_{2}}+\frac{g_{q_{1}c}g_{q_{2}c}}{2}\left(\frac{1}{\Delta_{q_{1}c}}+\frac{1}{\Delta_{q_{2}c}}-\frac{1}{\Sigma_{q_{1}c}}-\frac{1}{\Sigma_{q_{2}c}}\right),\] (37) \[\tilde{g}_{11,02} =\sqrt{2}\left[g_{q_{1}q_{2}}+\frac{g_{q_{1}c}g_{q_{2}c}}{2}\left(\frac{1}{\Delta_{q_{1}c}}+\frac{1}{\Delta_{q_{2}c}+\alpha_{q_{2}}}-\frac{1}{\Sigma_{q_{1}c}}-\frac{1}{\Sigma_{q_{2}c}+\alpha_{q_{2}}}\right)\right]. \tag{38}\] ### Fidelity Scaling Scaling of the fidelity with the number of gates, i.e. computing the process fidelity of sequential application of \(N\) gates, can be found by using Eq. 20 to first define the map corresponding to a series of gates \[\left(\text{CPHASE}_{\text{CFS}}\right)^{N}[\hat{\rho}]=\frac{1}{2^{N}}\sum_{k=0}^{N}\binom{N}{k}\hat{U}_{-}^{k}\hat{U}_{+}^{(N-k)}\hat{\rho}\,\hat{U}_{-}^{(N-k)}\hat{U}_{+}^{k}, \tag{39}\] where we have used the fact that the operators in Eq. 21 are diagonal and therefore commute with each other and also \(\hat{U}_{+}^{\dagger}=\hat{U}_{-}\). Using the effective Kraus operators from Eq. 39, and only considering the conditional phase error, we combine this with the fidelity definition from Eq.
22, and obtain \[\mathcal{F}[\left(\text{CPHASE}_{\text{CFS}}\right)^{N}] =\frac{d+\frac{1}{2^{N}}\sum_{k=0}^{N}\binom{N}{k}\left|\text{tr}\left\{\left(\hat{U}_{\text{CPHASE}}^{\dagger}\right)^{N}\hat{U}_{-}^{k}\hat{U}_{+}^{(N-k)}\right\}\right|^{2}}{d^{2}+d} \tag{40}\] \[=\frac{4+\frac{1}{2^{N}}\sum_{k=0}^{N}\binom{N}{k}\left[10+6\cos\left(\frac{\delta\phi}{2}(N-2k)\right)\right]}{20}\] (41) \[\approx 1-\frac{3}{80}(\delta\phi)^{2}\frac{1}{2^{N}}\sum_{k=0}^{N}\binom{N}{k}(N-2k)^{2}\] (42) \[=1-\frac{3}{80}N(\delta\phi)^{2}. \tag{43}\] This result indicates that calibrating the gate such that the error is evenly split between the two parities not only increases the single-gate fidelity but also leads to a more generous scaling of the infidelity \(\propto N\), compared to the purely coherent error case for which the error scales as \(\propto N^{2}\). ### Gate parameters for high-fidelity simulations Finding good gate parameters, both for the Hamiltonian as well as for the pulse for high-fidelity simulations, is not a trivial task. Here, we discuss how to find optimal Hamiltonian parameters at different qubit frequencies and anharmonicities. In general, the only prerequisite is that the qubits are detuned by approximately one anharmonicity, which can be seen from Eq. 11. As seen from Table 2, we keep some of the parameters in the simulation fixed, while others depend on \(\alpha_{q_{2}}\), which is varied. These parameters are based on the experimental values from Ref. [32] and the coupling coefficients from the Hamiltonian in Eq. 7 are related to \(\beta_{ij}\) via the following relation \(g_{ij}=\beta_{ij}/\sqrt{\alpha_{i}\omega_{j}}\), as described in the main text. The small perturbation to \(\alpha_{q_{1}}\) is there in order to ensure that the energy levels are significantly non-degenerate for perturbation theory to apply. This is also completely realistic as a typical fabrication procedure results in seemingly random deviations from the designed values.
More specifically, the gates simulated in Fig. 2(a,b) are obtained by varying \(\alpha_{q_{2}}\) with values \(\alpha_{q_{2}}\in[195,230,250,270,300]\,h\,\text{MHz}\). Additionally, in order to certify that the analyzed effect is not limited to the choice of parameters presented above, the second data point of Fig. 2(a,b) at \(E_{J}/E_{C}\approx 50\) was generated in the same way as in Table 2, but with the change to \(\omega_{q_{2}}=5.1\cdot 2\pi\,\text{GHz}\). The coupler frequency in the idling configuration \(\omega_{c}^{\text{idle}}\), i.e. before and after performing a gate, is determined by diagonalizing the Hamiltonian to fulfill the condition in Eq. 8. The pulse parameters from Eq. 9 are obtained by numerically optimizing the fidelity of the gate, with fixed \(\sigma=5\,\text{ns}\) and \(\tau_{b}=2\sqrt{2}\sigma\). Typical values of the amplitude are \(A\sim 1-1.2\cdot h\,\text{GHz}\) and \(\tau_{c}\sim 60\,\text{ns}\).
\tag{44}\] In the above equation, we have used the asymptotic expression for the number operator \(\hat{n}\), derived already in Ref. [12]. The operator \(\delta\hat{n}_{g}\) is defined in the Hilbert space of the TLS, and is related to the parameters of the TLS - more specifically, its electrical dipole [73]. More importantly, \(\delta\hat{n}_{g}\) does not explicitly depend on \(E_{J}\) and \(E_{C}\). Since the majority of the \(T_{1}\) experiments on transmons display exponential decays [86], the dynamics can be captured by the Lindblad equation. As a second-order approximation is assumed in the derivation of the Lindblad equation, the resulting decay rates are found to be proportional to the square of the coupling coefficient of the transmon to the TLS environment [64]. In our case this translates to \[\Gamma_{1}\propto E_{C}^{3/2}E_{J}^{1/2}. \tag{45}\] Here we have omitted the noise spectrum of the environment, since current models assume a flat noise spectrum without any dependence on the qubit frequency [73]. ### Flux noise \(T_{\phi}\) While it is not strictly necessary for the computational transmons to be flux-tunable, it is often desired as flux-tunability additionally enables the implementation of an iSWAP gate with the same architecture [32, 34]. Having slow flux tunability is also desirable in order to avoid resonances with TLSs in the environment which can severely limit the \(T_{1}\) decay time [74, 75], as well as helping with the issue of frequency crowding [80]. However, unlike the TLS environment producing \(T_{1}\) dynamics, the noise spectrum of magnetic-flux noise is typically observed to have a \(1/f^{\alpha}\) frequency dependence [69, 76, 77, 78], with \(\alpha\sim 1\). The large noise-spectrum amplitude at lower frequencies means that the long-time correlation results in non-Markovian dynamics [87]. 
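The scaling in Eq. (45) is simply the square of the coupling prefactor appearing in Eq. (44), with the TLS operator and the (flat) noise spectrum treated as parameter-independent, as stated above. A quick symbolic check:

```python
import sympy as sp

EJ, EC = sp.symbols('E_J E_C', positive=True)

# Coupling prefactor of the transmon charge operator to the TLS, from Eq. (44).
g = 4 * sp.sqrt(2) * EC * (EJ / (8 * EC)) ** sp.Rational(1, 4)

# Golden-rule-type rate: Gamma_1 proportional to g^2.
ratio = sp.simplify(g**2 / (EC**sp.Rational(3, 2) * EJ**sp.Rational(1, 2)))
# ratio reduces to a pure number, i.e. Gamma_1 ∝ E_C^(3/2) E_J^(1/2)
```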
Assuming that the noise is slow enough, in order for the adiabatic approximation to hold, the interaction Hamiltonian due to a slowly fluctuating magnetic environment \(\delta\hat{\Phi}\) is given by [12] \[\hat{H}_{\text{q-flux}}=\frac{\partial\hat{H}}{\partial\Phi}\otimes\delta\hat{\Phi}=h\frac{\partial\omega}{\partial\Phi}\hat{a}^{\dagger}\hat{a}\otimes\delta\hat{\Phi}. \tag{46}\] The form of the flux dispersion \(\partial\omega/\partial\Phi\) of a split-junction transmon, with Josephson energies \(E_{J_{1}}\) and \(E_{J_{2}}\), is determined by the relations [12] \[E_{J}(\Phi) =E_{J\Sigma}\cos(\pi\Phi)\sqrt{1+d^{2}\tan^{2}(\pi\Phi)}, \tag{47}\] \[\hbar\omega(\Phi) =\sqrt{8E_{C}E_{J}(\Phi)}-E_{C}, \tag{48}\] with \(E_{J\Sigma}=E_{J_{1}}+E_{J_{2}}\) and \(d=|E_{J_{1}}-E_{J_{2}}|/(E_{J_{1}}+E_{J_{2}})\). As in the \(T_{1}\)-decay example, we assume that the environment operator \(\delta\hat{\Phi}\) does not depend on the transmon parameters. Realistically, the magnitude of this operator depends on the dot product between the magnetic-dipole operator of the spins and the surface vector of the SQUID loop. Even though the Markovian approximation does not hold anymore, the decay rate due to this noise still scales with the square of the coupling coefficient [88, 89], which brings us to \[\Gamma_{\phi}\propto\left(\frac{\partial\omega}{\partial\Phi}\right)^{2}\propto E_{C}E_{J\Sigma}. \tag{49}\] While we have explicitly acknowledged only the first-order flux dispersion \(\partial\omega/\partial\Phi\), the above scaling remains valid even if \(\partial\omega/\partial\Phi=0\), so that the second- or higher-order dispersion \(\partial^{n}\omega/\partial\Phi^{n}\) must be taken into account instead. Additionally, in the main text we assume that \(E_{J}\approx E_{J\Sigma}\), which is a realistic assumption precisely since large deviations from this condition result in an increased sensitivity to flux noise.
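For the symmetric case \(d=0\) (with \(\Phi\) measured in units of the flux quantum and \(\hbar=1\)), the scaling in Eq. (49) follows by differentiating Eqs. (47)-(48); a short symbolic check:

```python
import sympy as sp

EC, EJS, Phi = sp.symbols('E_C E_JSigma Phi', positive=True)

# Eqs. (47)-(48) with d = 0; Phi in units of the flux quantum, hbar = 1.
EJ_of_flux = EJS * sp.cos(sp.pi * Phi)
omega = sp.sqrt(8 * EC * EJ_of_flux) - EC

dispersion = sp.diff(omega, Phi)
rate = sp.simplify(dispersion**2)      # Gamma_phi ∝ (dω/dΦ)^2

# The dependence on the energies factors out as E_C * E_JSigma at any
# fixed working point Phi:
ratio = sp.simplify(rate / (EC * EJS))
```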
#### Single-Qubit Gate Leakage The low anharmonicity of the transmon is a limiting factor in the operation of single-qubit gates [79, 36], as fast operations drive a part of the population from the computational subspace into the second excited state. A straightforward and effective scheme for mitigating this effect during single-qubit gates, known as Derivative Removal by Adiabatic Gate (DRAG), was presented in Ref. [79]. Since the pulse amplitudes used to perform single-qubit operations are typically much lower than the qubit frequency, and the drive is resonant with the qubit, we assume that the rotating-wave approximation is accurate. In the frame rotating with the qubit frequency, the effective Hamiltonian depends on the pulse parameters, anharmonicity, and the detuning between the drive and qubit frequencies. This indicates that the amount of leakage does not explicitly depend on the qubit frequency. Thus, the process fidelity, similar to the one defined in Eq. 22, but in this case for a single-qubit rotation around the \(x\)- or \(y\)-axis, depends only on the qubit anharmonicity (or charging energy), since \(\alpha\simeq-E_{C}\) in the transmon limit. This means that even though analytical results are not available, the relationship can be determined numerically and interpolated. This relationship, albeit with different parameters, has already been plotted in Ref. [79], and generally follows a dependence of \(P_{\text{leak}}\propto E_{C}^{-\gamma}\), \(5\lesssim\gamma\lesssim 6\), with higher exponents observed at lower \(E_{C}\). The independence of the single-qubit gate infidelity from the qubit frequency (within the transmon regime) was also verified numerically. The single-qubit gate parameters assumed in Fig. 4d and Fig. 5 were a DRAG Gaussian pulse, with a \(\sigma=4\,\text{ns}\) and a total duration \(t_{\text{SQG}}=4\sigma\).
The amplitudes of both DRAG components are numerically optimized before interpolating the dependence of \(P_{\text{leak}}\) on \(\alpha\), which was used to generate Fig. 4d. The pulse drive frequency is assumed to be resonant with the qubit. More details are available in Ref. [36]. We note here that most of the SQG infidelity is due to leakage, rather than phase errors. #### Thermal Excitation Error While other error sources affect the gate performance, the thermal-excitation error considered here only affects the state preparation. The process fidelity, which is defined as the fidelity averaged over Haar random distributed input states, is therefore not applicable, since this error only affects one input state. We therefore replace the process infidelity [65] with a state infidelity. This can also be qualitatively thought of as replacing the Haar random distribution with a delta-like distribution with a peak at the \(|0\rangle\) state. By modeling the thermal excitation as a bit-flip channel \(\mathcal{E}[\hat{\rho}]=(1-P_{|1\rangle})\hat{\rho}+P_{|1\rangle}\hat{\sigma}_{x}\hat{\rho}\hat{\sigma}_{x}\) it is straightforward to see that the state fidelity [52] is \[\mathcal{F}=\langle 0|\mathcal{E}[|0\rangle\langle 0|]|0\rangle=1-P_{|1\rangle }=\frac{1}{1+e^{-\beta\omega}}, \tag{50}\] where \(\beta\) denotes the inverse temperature (\(\hbar=1\)) and we have additionally assumed that the temperature is low enough such that the population of the higher-excited states is negligible.
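Eq. (50) translates directly into numbers. A minimal sketch; the 4 GHz qubit frequency and 50 mK effective temperature are illustrative values rather than parameters taken from the simulations above, and the two-level truncation stated above is assumed:

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K

def ground_state_fidelity(f_qubit_hz, temp_k):
    """State fidelity of |0> preparation under thermal excitation, Eq. (50)."""
    x = HBAR * 2 * math.pi * f_qubit_hz / (KB * temp_k)   # beta * hbar * omega
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative working point: a 4 GHz qubit at 50 mK effective temperature.
F = ground_state_fidelity(4e9, 0.050)   # roughly 0.98
```

The fidelity improves monotonically as the effective temperature is lowered, as expected from the Boltzmann factor.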
2302.14713
Security in Distributed Systems by Verifiable Location-Based Identities
Proof-of-Location (PoL) is a lightweight security concept for Internet-of-Things (IoT) networks, focusing on the sensor nodes as the least performant and most vulnerable parts of IoT networks. PoL builds on the identification of network participants based on their physical location. It introduces a secondary message type to exchange location information. Via these messages, the nodes can verify the integrity of other network participants and reach a consensus to identify potential attackers and prevent malicious information from spreading. The paper presents the concretization of the concept to allow implementation on real hardware. The evaluation based on this implementation demonstrates the feasibility of PoL and enables identifying further steps to develop a deployable protocol.
Simon Tschirner, Katharina Zeuch, Sascha Kaven, Lorenz Bornholdt, Volker Skwarek
2023-02-28T16:23:39Z
http://arxiv.org/abs/2302.14713v1
# Security in Distributed Systems by Verifiable Location-Based Identities ###### Abstract Proof-of-Location (PoL) is a lightweight security concept for Internet-of-Things (IoT) networks, focusing on the sensor nodes as the least performant and most vulnerable parts of IoT networks. PoL builds on the identification of network participants based on their physical location. It introduces a secondary message type to exchange location information. Via these messages, the nodes can verify the integrity of other network participants and reach a consensus to identify potential attackers and prevent malicious information from spreading. The paper presents the concretization of the concept to allow implementation on real hardware. The evaluation based on this implementation demonstrates the feasibility of PoL and enables identifying further steps to develop a deployable protocol. Wireless sensor networks, Internet-of-Things, Consensus, Security, Trust. ## I Introduction The Internet-of-Things (IoT) has been among the fastest-developing fields of the past decade. With its growth, it is safe to assume that there are (or soon will be) tens of billions of IoT-devices [1]. Their applications have spread to almost every field of life, covering everything from industrial production plants to children's toys, from large scale sensor applications for environmental monitoring to health care applications, from automotive devices to home appliances. Among these applications are many safety-critical tasks, where a malfunction or malicious tampering with devices can impact human health, have a large financial impact, or compromise privacy. Their increasing spread and importance also increase the interest attackers are developing for IoT-devices. At the same time, IoT-networks are prone to attacks due to their typical properties. Alaba et al. [2] provide a comprehensive review on IoT-security.
IoT-devices are naturally characterised by inter-connectedness combined with the little resources at their disposal. Requirements on costs, size and energy consumption are strong, which means that their design provides computation and memory capacity close to the bare minimum required to fulfil their tasks. Traditional security mechanisms, e.g. encryption of memory and data transmission, in turn require strong computational power, contradicting the common light-weight nature of IoT-devices [3]. An aspect of IoT security is the ability to trust in data received from a device that is part of the network. This use case is typical for wireless sensor networks (WSN), which are, when connected via a gateway to the internet, also a part of the IoT. In large WSNs, sensor nodes (SN) cover an area exceeding the transmission range of a single SN. These scenarios use multi-hop communication, where multiple SNs forward data from its source to the sink (gateway). The WSN has to ensure that sensed data will not be altered (in an undesired way) on its path from the original SN to the sink. Thus, IoT requires security mechanisms despite containing SNs as the least performant parts. Recent surveys identified many security issues in IoT [3, 4], while approaches to secure IoT have been proposed in the literature as well, e.g. [5]. One lightweight security approach focused on WSNs is provided by [6], the so-called _Proof-of-Location_ (PoL). The authors propose to secure a WSN by using a trust mechanism consisting of a combination of redundant localisation of sensor nodes and consensus generation within a WSN. The main purpose of this paper is to address the challenge arising from the combination of the light-weight nature of IoT devices with the computational complexity of effective security mechanisms. Thereby, the following research questions will be addressed: 1. How can PoL become an implementable, usable protocol? 2. Can PoL serve as a lightweight security mechanism for short-range communication?
This paper further defines the PoL protocol, bringing it to an implementable state. Detailed extensions of the PoL concept include its implementation and evaluation in the field of short-range communication. The presented extensions incorporate the understanding of an IoT device's identity proposed by Wohnert et al. [7]. The rest of the paper is structured as follows: First, the background focuses on the usage of identity to create trust in data from IoT devices; identity-based attacks that can be avoided by the presented concept are also described. Section III describes how the presented concept has been developed to create a solution to the identified identity-based attacks. The description of the concept itself follows in Section IV. Implementation details are shown in Section V and experimental results from a concrete implementation are presented in Section VI. Finally, conclusions and future work are summarised. ## II Trust and Identities in the Internet of Things Securing a WSN involves two aspects:
2309.09356
Hysteresis resulting from Lennard-Jones interactions
The fundamental mechanism of hysteresis in the quasistatic limit of multi-stable systems is associated with transitions of the system from one local minimum of the potential energy to another. In this scenario, as system parameters are (quasistatically) varied, the transition is prompted when a saddle-node bifurcation eliminates the minimum where the system resides in. The objective of the present work is to specify this generic mechanism for systems of interacting particles assuming a natural single-well (Lennard-Jones) interaction potential for each pair of particles. We show multi-stability and present details of hysteresis scenarios with the associated bifurcations and transitions in a case study of constrained four-degrees-of-freedom four particle systems on the plane.
Dmitrii Rachinskii, Andrei Zagvozdkin, Oleg Gendelman
2023-09-17T19:43:10Z
http://arxiv.org/abs/2309.09356v1
# Hysteresis resulting from Lennard-Jones interactions ###### Abstract The fundamental mechanism of hysteresis in the quasistatic limit of multi-stable systems is associated with transitions of the system from one local minimum of the potential energy to another. In this scenario, as system parameters are (quasistatically) varied, the transition is prompted when a saddle-node bifurcation eliminates the minimum where the system resides in. The objective of the present work is to specify this generic mechanism for systems of interacting particles assuming a natural single-well (Lennard-Jones) interaction potential for each pair of particles. We show multi-stability and present details of hysteresis scenarios with the associated bifurcations and transitions in a case study of constrained four-degrees-of-freedom four particle systems on the plane. **Keywords: Hysteresis, multi-stability, bifurcation, gradient flow, energy dissipation, quasistatic limit** **MSC Subject Classification: 34C55, 70F40** ## 1 Introduction In this work, we revisit fundamentals of hysteresis modeling. Phenomenological models of hysteresis, which describe experimentally observed complex constitutive relations of materials and media, are ubiquitous in engineering and quite diverse. Examples include models of a stress-strain constitutive relation in elastoplastic materials (e.g. 
Prandtl's elastic-ideally plastic element [22]; Prandtl-Ishlinskii hysteresis model and its generalizations [6]; Moreau's sweeping process [10]; rate-independent yield criteria [7, 21]; Armstrong-Frederick [2], Chaboche [8], Mroz nonlinear hardening rules [11]); related models of dry friction and creep-fatigue damage counting (Maxwell-slip friction model [1]; rainflow-counting algorithm of calculating fatigue [25]); magnetizing field-magnetization constitutive laws of magnetic materials (Preisach independent domain model [23]; Bouc-Wen, Jiles-Atherton, Stoner-Wohlfarth models [4, 26, 28]; Krasnosel'skii-Pokrovskii and Mayergoyz-Friedman models [5, 9]); pressure-saturation constitutive equations for flows in porous media (Parlange and Mualem hysteresis models [12, 13]); coupling of mechanical, magnetoelectric and temperature variables in smart materials such as piezoelectric and magnetostrictive materials, shape-memory alloys and shape-memory polymers, to mention a few. The aforementioned models are intrinsically meso- or macroscopic and usually are loosely related to the microstructure. On the one hand, this can be considered an advantage--media and systems with a broad variety of microstructures can exhibit similar hysteresis behavior and can be described by similar models. On the other hand, one always encounters the problem of adequately attributing specific parameters to the effective models. Arguably, the best way is to evaluate parameters from the "first principles", i.e. starting from the potentials of interatomic interactions, or the potential energy landscape. Unfortunately, such relationships are usually far beyond reach. The dynamic hysteretic behavior is governed by an interplay of structural modifications and dynamic dissipation. Neither of these two intrinsic components is understood in microscopic models at the level of quantitative predictions, except possibly for a few very simple models.
The goal of this work is to explore the minimal requirements on the interaction potential that warrant hysteretic behavior. The dissipation is assumed to be overwhelming. As such, the system is considered in a quasistatic response regime. This simplification leaves aside a number of important dynamic features of the process [14, 15, 16, 17, 18, 19, 20], but still offers hope of achieving a useful approximation to realistic dynamics. The fundamental mechanism of hysteresis associated with multi-stability and bifurcation is revealed by the following classical example (see e.g. [27]). Let us consider a one-degree-of-freedom particle in the potential well \[V(x;h)=\frac{x^{4}}{4}-\frac{x^{2}}{2}-hx, \tag{1}\] where \(h\) is the external field, which is varied quasistatically (i.e. the inertia is ignored). Assume that initially \(h\) is large, so that \(V\) has a unique minimum point \(x=x_{*}(h)\) located on the positive semi-axis \(x>0\), and the particle sits at this minimum, see Figure 1. Suppose that \(h\) decreases. At the critical value \(h_{*}=2/(3\sqrt{3})\), the potential acquires the double well shape by developing the second minimum on the negative semi-axis \(x<0\) through the saddle-node bifurcation, see Figure 2. At the critical value \(h=-h_{*}\), the positive minimum is eliminated through the other saddle-node bifurcation, and the particle transitions to the remaining negative minimum \(x=-x_{*}(-h)\). Next, assuming that from this point \(h\) increases, the particle will be located at the negative minimum of \(V\) until this minimum is eliminated through the saddle-node bifurcation at \(h=h_{*}\), at which point the particle will transition back to the positive minimum \(x=x_{*}(h)\), closing the hysteresis loop. This simple system displays important features of hysteresis.
Figure 1: Particle in the double-well potential (1).

First, within the bi-stability range, \(-h_{*}<h<h_{*}\), the state of the system (the position of the particle at the positive or negative minimum of the potential) is determined both by the concurrent and past values of the input \(h\), hence one talks about history-dependence. Second, the history-dependence with the associated hysteresis loop manifests itself in the quasistatic limit of slow variations of \(h\) (this fact is referred to as rate-independence of hysteresis [3]). Third, each transition of the particle from one minimum of \(V\) to another is associated with an irrecoverable energy loss. Generalizing the above example to multi-particle systems, the energy potential of a system with many degrees of freedom can have a large number of minimum points (metastable states). Further, as input variations cause the energy landscape to change, the same bifurcation mechanism (demonstrated by the double well potential) leads to a complex pattern of transitions between the states, creating a structure of hysteresis loops of the material constitutive law at a macrolevel. As one example, the Preisach model of magnetic hysteresis considers \(N\) non-interacting particles, each in a double well potential (1), i.e. the energy potential of the system is \[V(x_{1},\ldots,x_{N};h)=\sum_{i=1}^{N}\left(\frac{x_{i}^{4}}{4}-\frac{x_{i}^{2}}{2}-(ha_{i}+b_{i})x_{i}\right),\] where \(h\) is the input; \(a_{i},b_{i}\) are parameters. This potential has up to \(2^{N}\) minima for a particular value of \(h\), and produces a specific structure of nested hysteresis loops (known as return-point memory), which are characterized by the so-called wiping-out and congruency properties [9], see Figure 3.
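The double-well transition scenario of Figure 2 is easy to reproduce numerically: relax the particle to a local minimum of \(V(x;h)=x^{4}/4-x^{2}/2-hx\) by gradient descent while sweeping \(h\) quasistatically. A sketch (the step size and sweep grid are illustrative choices):

```python
import math

def relax(x, h, step=0.01, iters=10000):
    """Gradient descent to a local minimum of V(x;h) = x^4/4 - x^2/2 - h*x."""
    for _ in range(iters):
        x -= step * (x**3 - x - h)   # dx/dt = -dV/dx
    return x

h_star = 2 / (3 * math.sqrt(3))      # saddle-node bifurcation value

hs = [0.5 - 0.01 * k for k in range(101)]    # h swept from 0.5 down to -0.5
x = relax(1.0, hs[0])                        # start in the positive well
down = []
for h in hs:                                 # decreasing branch
    x = relax(x, h)
    down.append(x)
up = []
for h in reversed(hs):                       # increasing branch, back to 0.5
    x = relax(x, h)
    up.append(x)

# History dependence: at h ~ 0 the two branches occupy different wells,
# and the downward jump occurs just past h = -h_star.
x_down_mid, x_up_mid = down[50], up[50]
h_jump = next(h for h, xv in zip(hs, down) if xv < 0)
```

The computed jump value lands within one grid step of \(-h_{*}=-2/(3\sqrt{3})\approx-0.385\), and the loop closes with a symmetric jump on the way back up.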
A hysteresis loop is evidence that the system goes through one sequence of states as \(h\) increases and then through a different sequence of states as \(h\) decreases; or, that the system goes through the same sequence of states (in the reversed order) as \(h\) decreases, but the transitions from one state to another and the reversed transitions occur at different values of \(h\) (as in Figure 2 in the case of the double well potential). As we see, in the Preisach model (and other phenomenological models of hysteresis phenomena), hysteresis of an individual particle is postulated. In this paper, we ask the following question: _can hysteresis emerge in a system of particles interacting via naturally non-hysteretic potentials?_ To be more specific, we limit our discussion to systems of identical particles and the classical Lennard-Jones interaction. As a starting point, we make an observation that a chain of particles with the nearest neighbor interaction does not display hysteresis if the particles are elongated along a straight line (see the next section).

Figure 2: Transitions of the particle from one to another (local) minimum of the potential energy shown in Figure 1 as the exogenous field parameter \(h\) (input) changes quasistatically. At the points \(h=\pm h_{*}\), one minimum collides with the local maximum and disappears in a saddle-node bifurcation causing a transition to the other minimum.

Figure 3: (a) A sample input \(h=h(t)\) of the Preisach model. (b) Input-output diagram of the Preisach model depicting input \(h\) (magnetizing field) vs output \(m\) (magnetization) for the input shown on panel (a). The output is given by \(\text{m}=\sum_{i=1}^{N}c_{i}\,\text{sign}(x_{i})\), where \(c_{i}\) are parameters (cf. (1.1)). The state \((x_{1},\ldots,x_{N})\) and output value \(\text{m}\) at a given time \(t\) depend both on the concurrent value of \(h\) and a sequence of past extremum values of \(h\), which are known as running main extremum values.
Therefore, we look at systems of particles on a plane. As the main result, we answer the above question affirmatively by presenting examples of simple 4-particle (constrained) planar configurations, which exhibit hysteresis. We provide a detailed analysis of the associated bifurcation scenarios (Section 3). The paper is concluded with a discussion of these results. ## 2 Preliminaries We consider a collection of \(N\) particles in the potential field with the potential \[V({\bf r};h)=V({\bf r}_{1},...,{\bf r}_{N};h)=\sum_{1\leq i<j\leq N}\Phi_{ij}(r_{ij})+h\sum_{i}{\bf a}_{i}\cdot{\bf r}_{i},\qquad{\bf r}=({\bf r}_{1},\dots,{\bf r}_{N}), \tag{1}\] where \({\bf r}_{i}\) is the position of the \(i\)-th particle; \(h\) is a scalar input variable (such as the amplitude of external forcing, load, external field etc.); \(r_{ij}=|{\bf r}_{i}-{\bf r}_{j}|\) is the Euclidean distance between the \(i\)-th and \(j\)-th particles; \(\Phi_{ij}\) is the interaction potential of the pair of particles; \({\bf a}_{i}\) are vector-valued parameters; and the dot stands for the dot product. It is assumed that the two-particle interaction potential is the Lennard-Jones potential \[\Phi_{ij}(r)=4\varepsilon_{ij}\Phi_{1}(r),\qquad\Phi_{1}(r)=\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}, \tag{2}\] see Figure 4. We consider the quasistatic evolution of the system in response to quasistatic variation of the parameter \(h\). This evolution has intervals of (relatively) slow and fast dynamics. During the slow evolution, the system sits in a local minimum of the potential \(V\), say \({\bf r}_{*}^{-}={\bf r}_{*}^{-}(h)\), until this minimum point is eliminated via a saddle-node bifurcation as \(h\) is varied. The saddle-node bifurcation is the only generic mechanism creating/eliminating minimum points of \(V\).
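For reference, the pair potential (2) is a single-well potential with its minimum at \(r=2^{1/6}\sigma\) and depth \(\Phi_{ij}(2^{1/6}\sigma)=-\varepsilon_{ij}\); a quick numerical check:

```python
import math

def phi(r, sigma=1.0, eps=1.0):
    """Lennard-Jones pair potential of Eq. (2): 4*eps*((s/r)^12 - (s/r)^6)."""
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_min = 2 ** (1 / 6)                        # well location for sigma = 1
dphi = lambda r, h=1e-7: (phi(r + h) - phi(r - h)) / (2 * h)

# Depth -eps at the minimum; repulsive below r_min, attractive above it.
well_depth = phi(r_min)
```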
At the bifurcation point \(h=h_{b}\), the system transits to another minimum following the fast antigradient dynamics \[\dot{\bf r}=-\nabla_{\bf r}V({\bf r};h_{b}),\qquad{\bf r}=({\bf r}_{1},\dots, {\bf r}_{N}),\] i.e. the system follows the one-dimensional unstable manifold of the saddle-node equilibrium point \({\bf r}_{*}^{-}(h_{b})\) of the gradient field to another local minimum point \({\bf r}_{*}^{+}(h_{b})\) of the potential. This transition is (infinitely) fast compared to the slow dynamics following a minimum of \(V\). The antigradient transition dynamics ensures that \[\dot{V}=-|\nabla_{\mathbf{r}}V(\mathbf{r};h_{b})|^{2},\] hence \(V\) decreases along the transition trajectory, and \[V(\mathbf{r}_{*}^{-};h_{b})-V(\mathbf{r}_{*}^{+};h_{b})>0.\] This positive quantity represents an irreversible energy loss (dissipation) associated with the transition. If, after (several) transitions, \(h\) returns to its initial value and the system returns to the same minimum of \(V\) where it started from, then hysteresis is observed.

Figure 4: The Lennard-Jones interaction potential of a pair of particles for \(\sigma=1\).

We start by showing that a one-dimensional chain of \(N\) particles with nearest neighbor interactions and an external forcing applied at the ends of the chain does not exhibit hysteresis. Namely, let us consider the potential \[V(x_{1},...,x_{N};h)=h(x_{1}-x_{N})+\sum_{i=1}^{N-1}\Phi_{1}(x_{i+1}-x_{i}),\] where \(x_{1}<\cdots<x_{N}\) are positions of the particles on a straight line, \(\Phi_{1}\) is the Lennard-Jones two-particle interaction potential (cf. (2)), and the opposite forces \(-h\) and \(h\) are applied to the two particles at the ends of the chain, see Figure 5.
Using the variables \(q_{i}=x_{i+1}-x_{i}\), the potential reads \[V(q_{1},\ldots,q_{N-1};h)=\sum_{i=1}^{N-1}\bigl{(}\Phi_{1}(q_{i})-hq_{i}\bigr{)}.\] This potential does not have critical points for \(h>h_{*}=\frac{126}{169}\left(\frac{7}{26}\right)^{1/6}\approx 0.599\) (in this case, the external force expanding the chain exceeds the maximal attraction force between the particles, and the chain breaks). For \(0<h<h_{*}\), the potential has one local minimum and one local maximum point. For \(h<0\), the minimum is global and is a unique critical point of \(V\) (the maximum disappears at infinity as \(h\) becomes negative: \(h>0\) corresponds to the expansion and \(h<0\) to the contraction of the chain by the external forces). Since \(V\) has at most one minimum, the system does not exhibit hysteresis.

Figure 5: System of \(N=4\) particles with nearest neighbor interactions on a line. Each force (shown by an arrow) has amplitude \(|h|\).

## 3 Case study of two-dimensional structures

As a prototypical example of hysteresis in a system of particles interacting via the Lennard-Jones potential, we consider the system of four identical particles shown in Figure 6. The particles are placed on the \((x,y)\)-plane; the coordinates of the \(i\)-th particle are denoted by \((x_{i},y_{i})\). Particles 1 and 3 are constrained to the vertical lines \(x=1\) and \(x=-1\), respectively, while particles 2 and 4 are constrained to the horizontal lines \(y=1\) and \(y=-1\), i.e. each particle has one degree of freedom. We use the notation \(q_{1}=y_{1}\), \(q_{2}=x_{2}\), \(q_{3}=y_{3}\), \(q_{4}=x_{4}\) for the system coordinates. Assuming the Lennard-Jones pairwise interaction between the particles, the system potential is \[V_{0}^{\sigma}(q_{1},q_{2},q_{3},q_{4})=\sum_{1\leq i<j\leq 4}\Phi_{1}(r_{ij})=\sum_ {1\leq i<j\leq 4}\left(\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{ \sigma}{r_{ij}}\right)^{6}\right), \tag{3.1}\] where \(r_{ij}\) is the Euclidean distance between particles \(i\) and \(j\).
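Since the coordinates \(q_{i}\) decouple, the critical-point count for the chain reduces to counting solutions of \(\Phi_{1}'(q)=h\). A numerical sketch (\(\sigma=1\); grid bounds are ad hoc):

```python
# Critical points of Phi_1(q) - h*q solve Phi_1'(q) = h, with
# Phi_1'(q) = -12 q^-13 + 6 q^-7 rising from -inf to its maximum h_* (the
# maximal attraction force) and then decaying to 0. Hence: 2 solutions for
# 0 < h < h_*, none for h > h_* (broken chain), exactly 1 for h < 0.
def dphi1(q):
    return -12.0 * q ** -13 + 6.0 * q ** -7

qs = [0.9 + 1e-3 * k for k in range(4101)]      # q in [0.9, 5.0]
h_star = max(dphi1(q) for q in qs)              # numerical breaking threshold

def count_critical_points(h):
    vals = [dphi1(q) - h for q in qs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
```

Counting sign changes on the grid confirms the one-minimum structure: one extra critical point (the local maximum) for small positive \(h\), and none beyond the breaking threshold.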
We show hysteresis in this system under external forcing. Two types of forcing will be considered. As the first example, we will assume that particles 1-4 are acted upon by the constant forces \(h\), \(-h\), \(-h\), \(h\), respectively, along the lines of their motion as shown in Figure 6. In this case, the full potential of the forced system is \[V_{h}^{\sigma}(q_{1},q_{2},q_{3},q_{4})=h(q_{1}-q_{2}-q_{3}+q_{4})+V_{0}^{ \sigma}(q_{1},q_{2},q_{3},q_{4}). \tag{3.2}\] This system is discussed in Section 3.2. Another example, in which the constant external forces acting on particles 1-4 are \(h\), \(h\), \(-h\), \(-h\), respectively, and the corresponding potential is \[\hat{V}_{h}^{\sigma}(q_{1},q_{2},q_{3},q_{4})=-h(q_{1}+q_{2}-q_{3}-q_{4})+V_{0 }^{\sigma}(q_{1},q_{2},q_{3},q_{4}), \tag{3.3}\] is considered in Section 3.3, see Figure 6.

Figure 6: Four constrained particles with pairwise Lennard-Jones interaction under (a) rotational external forcing; (b) expansion.

### Unforced system

Let us first discuss the unforced system with potential (3.1). We show that, for certain ranges of the parameter \(\sigma\), this potential has multiple minimum points. In other words, for such \(\sigma\), the system with potential (3.2) (resp. (3.3)) is multi-stable when \(h=0\), i.e. the external forcing is zero. Potential (3.1) is invariant with respect to the action of the dihedral group \(\mathbb{D}_{4}\) of symmetries of the square. A generating set of this group, consisting of the rotation \(\rho\) by \(\pi/2\) around the origin and the reflection \(\kappa\) over the line \(x=y\), acts on the configuration space of the system by mapping a point \({\bf q}=(q_{1},q_{2},q_{3},q_{4})\) to the points \[\rho(q_{1},q_{2},q_{3},q_{4})=(q_{4},-q_{1},q_{2},-q_{3}),\qquad\kappa(q_{1},q_{2 },q_{3},q_{4})=(q_{2},q_{1},q_{4},q_{3}),\] respectively. We will use the subgroups \(\mathbb{Z}_{4}=\{e,\rho,\rho^{2},\rho^{3}\}\), \(\mathbb{Z}_{2}=\{e,\kappa\}\) of \(\mathbb{D}_{4}\).
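The coordinate form of the generators can be checked against the particle geometry. A minimal sketch (the random test points and \(\sigma=1\) are arbitrary):

```python
import random

def positions(q):
    """Particle positions under the stated constraints."""
    q1, q2, q3, q4 = q
    return [(1.0, q1), (q2, 1.0), (-1.0, q3), (q4, -1.0)]

def V0(q, sigma=1.0):
    """Unforced potential (3.1): sum of Phi_1 over the six particle pairs."""
    pts = positions(q)
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

def rho(q):    # rotation by pi/2 about the origin
    q1, q2, q3, q4 = q
    return (q4, -q1, q2, -q3)

def kappa(q):  # reflection over the line x = y
    q1, q2, q3, q4 = q
    return (q2, q1, q4, q3)

random.seed(0)
q = tuple(random.uniform(-0.5, 0.5) for _ in range(4))
# V0 is invariant under both generators, hence under the whole group D_4:
checks = (abs(V0(q) - V0(rho(q))) < 1e-12, abs(V0(q) - V0(kappa(q))) < 1e-12)
```

At the fully symmetric point \({\bf q}=0\) the six pair distances are \(\sqrt{2}\) (four pairs) and \(2\) (two pairs), so \(V_{0}^{1}(0)=4(1/64-1/8)+2(1/4096-1/64)\), which the code reproduces exactly.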
Let us consider the fully symmetric zero critical point \({\bf q}=0\) of the potential \(V_{0}^{\sigma}\). A direct calculation shows that the eigenvalues of the Hessian of the potential at zero are \[\lambda_{1}=\lambda_{2}=\frac{9\sigma^{6}(\sigma^{6}-2)}{8},\quad\lambda_{3}= \frac{3\sigma^{6}(1663\sigma^{6}-3552)}{2048},\quad\lambda_{4}=\frac{3\sigma^{ 6}(544-129\sigma^{6})}{2048}, \tag{3.4}\] see Figure 7. Hence, \({\bf q}=0\) is a (local) minimum point of the potential for the values of the parameter \(\sigma\) from the interval \[(\sigma_{*},\sigma^{*})=\left(\left(\frac{3552}{1663}\right)^{1/6},\left(\frac {544}{129}\right)^{1/6}\right)=(1.13483,1.27107). \tag{3.5}\] At each end of the stability interval (3.5), the anti-gradient field \(-\nabla V_{0}^{\sigma}\) undergoes a supercritical symmetry breaking bifurcation at \({\bf q}=0\).

**Symmetry breaking pitchfork bifurcation at \(\sigma=\sigma^{*}\).** As \(\sigma\) increases across the critical value \(\sigma^{*}=1.27107\) where \(\lambda_{4}(\sigma^{*})=0\), the anti-gradient field undergoes a supercritical pitchfork bifurcation producing a pair of minimum points \[{\bf q}^{*}=(q^{*},-q^{*},-q^{*},q^{*}),\qquad\kappa{\bf q}^{*}=-{\bf q}^{*}=(- q^{*},q^{*},q^{*},-q^{*}) \tag{3.6}\] of the potential, which bifurcate from the critical point \({\bf q}=0\) as it changes stability and becomes a saddle. The pair of critical points (3.6) exists for \(\sigma>\sigma^{*}\), they form a \(\mathbb{Z}_{2}\) orbit, and each of them is \(\mathbb{Z}_{4}\)-symmetric because \(\rho{\bf q}^{*}={\bf q}^{*}\). In other words, the pitchfork bifurcation at \(\sigma=\sigma^{*}\) breaks the \(\mathbb{Z}_{2}\)-symmetry of the zero critical point but preserves the \(\mathbb{Z}_{4}\)-symmetry.
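Formulas (3.4) can be cross-checked with second directional differences of \(V_{0}^{\sigma}\) at \({\bf q}=0\) along the symmetry-adapted directions \((0,1,0,1)\), \((-1,-1,1,1)\), \((1,-1,-1,1)\). A sketch (the step size and test value \(\sigma=1.2\) are ad hoc):

```python
def V0(q, sigma):
    pts = [(1.0, q[0]), (q[1], 1.0), (-1.0, q[2]), (q[3], -1.0)]
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

def rayleigh(v, sigma, t=1e-3):
    """Second directional difference of V0 at q = 0: approximates v^T H v / |v|^2."""
    plus = V0([t * x for x in v], sigma)
    minus = V0([-t * x for x in v], sigma)
    mid = V0([0.0, 0.0, 0.0, 0.0], sigma)
    return (plus - 2.0 * mid + minus) / (t * t * sum(x * x for x in v))

sigma = 1.2                                  # inside the interval (3.5)
s6 = sigma ** 6
lam1 = 9 * s6 * (s6 - 2) / 8                 # direction (0, 1, 0, 1)
lam3 = 3 * s6 * (1663 * s6 - 3552) / 2048    # direction (-1, -1, 1, 1)
lam4 = 3 * s6 * (544 - 129 * s6) / 2048      # direction (1, -1, -1, 1)
num1 = rayleigh((0, 1, 0, 1), sigma)
num3 = rayleigh((-1, -1, 1, 1), sigma)
num4 = rayleigh((1, -1, -1, 1), sigma)
```

Since the listed directions are eigenvectors of the Hessian at zero by symmetry, each quotient reproduces the corresponding eigenvalue, and all three are positive for \(\sigma=1.2\in(\sigma_{*},\sigma^{*})\).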
It is important to observe that the one-dimensional subspace \[L=\{{\bf q}=(q,-q,-q,q),q\in\mathbb{R}\} \tag{3.7}\] of points fixed by the symmetry group \(\mathbb{Z}_{4}\) in the configuration space of the system is invariant for the anti-gradient flow, i.e. \(-\nabla V_{0}^{\sigma}(\mathbf{q})\in L\) for \(\mathbf{q}\in L\). Each point of \(L\) in the configuration space corresponds to positioning of the particles in the corners of a square. In particular, the two squares corresponding to the minimum points \(\pm\mathbf{q}^{*}\) of the potential are symmetric to each other with respect to the bisector line \(x=y\), see Figure 8. Since \(L\) contains the critical points \(\pm\mathbf{q}^{*}\) given by (3.6), these points can be found as minimum points of the restriction of \(V_{0}^{\sigma}\) to \(L\), which is given by \[v_{0}^{\sigma}(q)=V_{0}^{\sigma}(q,-q,-q,q)=\frac{129\sigma^{12}-1088\sigma^{6 }(1+q^{2})^{3}}{2048(1+q^{2})^{6}},\] see Figure 9. In this way, one obtains \[q^{*}=\sqrt{\left(\frac{\sigma}{\sigma^{*}}\right)^{2}-1}\] for the minimum points \(\pm q^{*}\) of the function \(v_{0}^{\sigma}\) and for the components of minimum points (3.6) of the potential \(V_{0}^{\sigma}\). Further, by direct calculation, the eigenvectors of the Hessian at any point of \(L\) are \[(1,1,1,1),\quad(1,-1,1,-1),\quad(-1,-1,1,1),\quad(1,-1,-1,1). \tag{3.8}\] Moreover, the corresponding eigenvalues at the critical points \(\pm\mathbf{q}^{*}\in L\) of the potential equal \[\mu_{1}=\mu_{2}=\frac{1287(\sigma^{*})^{14}}{2176\,\sigma^{2}},\quad\mu_{3}= \frac{3(\sigma^{*})^{14}(7373(\sigma^{*})^{2}-397\sigma^{2})}{17408\,\sigma^{ 4}},\] \[\mu_{4}=\frac{1161(\sigma^{*})^{14}(\sigma^{2}-(\sigma^{*})^{2})}{1024\,\sigma ^{4}}.\]

Figure 8: Square shaped formations of the particles on the \((x,y)\)-plane corresponding to minima (3.6) of potential (3.1) (red and blue) and the square formation corresponding to the zero critical point (magenta).
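The closed form for \(q^{*}\) can be confirmed by brute-force minimization of the restriction to \(L\). A sketch (the test value \(\sigma=1.4>\sigma^{*}\) and the grid are ad hoc):

```python
import math

def V0(q, sigma):
    pts = [(1.0, q[0]), (q[1], 1.0), (-1.0, q[2]), (q[3], -1.0)]
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

sigma = 1.4
sigma_star = (544.0 / 129.0) ** (1.0 / 6.0)            # upper end of (3.5)
q_closed = math.sqrt((sigma / sigma_star) ** 2 - 1.0)  # closed form for q^*

# restriction of V0 to the invariant line L: v0(q) = V0(q, -q, -q, q)
v0 = lambda q: V0([q, -q, -q, q], sigma)
grid = [1e-4 * k for k in range(0, 20001)]             # q in [0, 2]
q_num = min(grid, key=v0)                              # numerical minimizer
```

The grid minimizer matches the closed form, and \(v_{0}^{\sigma}(q^{*})<v_{0}^{\sigma}(0)\), consistent with \({\bf q}=0\) having become a saddle for \(\sigma>\sigma^{*}\).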
Hence (3.6) are minimum points of the potential for \[\sigma^{*}<\sigma<\sigma^{**}=\sqrt{7373/397}\sigma^{*}=5.47766. \tag{3.9}\] At the point \(\sigma=\sigma^{**}\), the minima \(\pm{\bf q}^{*}\) destabilize in the direction \((-1,-1,1,1)\), which is perpendicular to \(L\), and become saddles. In Section 3.2, we will consider the system with rotational forcing (potential (3.2)) for \(\sigma>\sigma^{*}\) and show hysteresis between the square shaped configurations of particles as the external forcing parameter \(h\) is varied. **Symmetry breaking pitchfork bifurcation at \(\sigma=\sigma_{*}\).** Now, let us consider the other bifurcation point, \(\sigma=\sigma_{*}=1.13483\), where the zero \({\bf q}=0\) of the anti-gradient field loses stability. At this supercritical pitchfork bifurcation point, the additional (non-zero) critical points of \(V_{0}^{\sigma}\) appear in a different anti-gradient flow invariant one-dimensional subspace, namely \[M=\{{\bf q}=(q,q,-q,-q),q\in\mathbb{R}\}\] (cf. (3.7)). More precisely, when the eigenvalue \(\lambda_{3}(\sigma)\) (cf. (3.4)) crosses zero at \(\sigma=\sigma_{*}\) as \(\sigma\) decreases, see Figure 7 (the orange line), the point \({\bf q}=0\) becomes a saddle, and a pair of minimum points of \(V_{0}^{\sigma}\) forming a \(\mathbb{Z}_{2}\)-orbit is created in \(M\). These points \[{\bf q}_{*}=(q_{*},q_{*},-q_{*},-q_{*}),\qquad\rho{\bf q}_{*}=-{\bf q}_{*}=(-q_ {*},-q_{*},q_{*},q_{*}) \tag{3.10}\] are \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)-symmetric because the points of \(M\) are fixed by the subgroup \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}=\{e,\rho^{2},\kappa,\kappa\rho^{2}\}\) of \(\mathbb{D}_{4}\), i.e. the pitchfork bifurcation at \(\sigma=\sigma_{*}\) preserves the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)-symmetry of critical points. Each point of \(M\) in the configuration space corresponds to positioning of the particles in the corners of a rectangle. 
The two rectangles corresponding to critical points \(\pm{\bf q}_{*}\) of the potential are mapped to each other by the rotation by \(\pi/2\), see Figure 10. The components of critical points (3.10) can be obtained by finding minimum points \(\pm q_{*}\) of the restriction of \(V_{0}^{\sigma}\) to \(M\): \[\hat{v}_{0}^{\sigma}(q) = V_{0}^{\sigma}(q,q,-q,-q)\ =\ \frac{\sigma^{6}}{64}\left(-\frac{16}{(q-1)^{6}}-\frac{16}{(1+q)^{6}}- \frac{2}{(1+q^{2})^{3}}\right.\] \[+ \left.\sigma^{6}\left(\frac{2}{(q-1)^{12}}+\frac{2}{(1+q)^{12}}+ \frac{1}{32(1+q^{2})^{6}}\right)\right),\] see Figure 11. Figure 11 shows the dependence of \(q_{*}\) on \(\sigma\) obtained by numerical minimization (in Wolfram Mathematica). The eigenvectors of the Hessian on the subspace \(M\) are the same as on \(L\) and are given by (3.8). The corresponding eigenvalues of the Hessian on \(M\) can be obtained explicitly; they are given by rational expressions in \(q\) and \(\sigma\). Figure 12 presents the eigenvalues of the Hessian evaluated at the critical points \(\pm{\bf q}_{*}(\sigma)\). At the bifurcation value of the parameter, \(\sigma=\sigma_{*}\), they merge with eigenvalues (3.4) evaluated at the critical point \({\bf q}=0\). One can see that on the interval \((\sigma_{\star},\sigma_{*})=(1.13431,1.13483)\), all the eigenvalues at the critical points \(\pm{\bf q}_{*}(\sigma)\) are positive, hence \(\pm{\bf q}_{*}(\sigma)\) are minima of the potential \(V_{0}^{\sigma}\).

Figure 9: The restriction \(v_{0}^{\sigma}(q)\) of the potential \(V_{0}^{\sigma}=V_{0}^{\sigma}({\bf q})\) to the one-dimensional subspace \(L=\{{\bf q}=(q,-q,-q,q),q\in\mathbb{R}\}\) of the configuration space for \(\sigma=1.4>\sigma^{*}\). The subspace \(L\) is invariant for the anti-gradient field and the action of the symmetry group \(\mathbb{Z}_{4}\).
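The minimization on \(M\) is easy to reproduce; a sketch for \(\sigma=1\), as in Figure 11 (grid and tolerances are ad hoc). Since \(M\) is a fixed-point subspace of the symmetry group, a minimizer of the restriction is automatically a critical point of the full potential:

```python
def V0(q, sigma=1.0):
    pts = [(1.0, q[0]), (q[1], 1.0), (-1.0, q[2]), (q[3], -1.0)]
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

vhat = lambda q: V0([q, q, -q, -q])          # restriction to M
grid = [1e-5 * k for k in range(0, 90001)]   # q in [0, 0.9], below the singularity at q = 1
q_star = min(grid, key=vhat)                 # numerical minimizer of the restriction

def grad(q, t=1e-6):
    """Central-difference gradient of the full potential V0."""
    g = []
    for i in range(4):
        qp = list(q); qp[i] += t
        qm = list(q); qm[i] -= t
        g.append((V0(qp) - V0(qm)) / (2.0 * t))
    return g

g = grad([q_star, q_star, -q_star, -q_star])  # should be ~0 in all four components
```

The full gradient indeed (nearly) vanishes at the rectangular configuration found on \(M\), and \(\hat{v}_{0}^{1}(q_{*})<\hat{v}_{0}^{1}(0)\).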
However, at the point \(\sigma_{\star}\), one eigenvalue crosses zero, see Figure 12, and these minima destabilize, one in the direction of the eigenvector \((1,-1,1,-1)\), the other in the direction of the eigenvector \((1,1,1,1)\) (both directions perpendicular to \(M\)), hence the critical points \(\pm{\bf q}_{*}(\sigma)\) become saddles for \(\sigma<\sigma_{\star}\). The corresponding two simultaneous pitchfork bifurcations at the points \(\pm{\bf q}_{*}(\sigma_{\star})\) give rise to a \(\mathbb{Z}_{4}\)-orbit of critical points of the potential, \[{\bf q}_{*},\qquad{\bf q}^{*}=\rho{\bf q}_{*},\qquad-{\bf q}_{*}=\rho^{2}{\bf q }_{*},\qquad-{\bf q}^{*}=\rho^{3}{\bf q}_{*}, \tag{3.11}\] of which \(\pm{\bf q}_{*}\) belong to the two-dimensional subspace of points fixed by the group \(\mathbb{Z}_{2}=\{e,\kappa\}\), \[N_{0}=\{{\bf q}=(q,q,p,p),\ (q,p)\in\mathbb{R}^{2}\}, \tag{3.12}\] and \(\pm{\bf q}^{*}\) belong to the two-dimensional subspace of points fixed by the group \(\mathbb{Z}_{2}=\{e,\rho^{2}\kappa\}\), \[N_{1}=\{{\bf q}=(-q,p,-p,q),\ (q,p)\in\mathbb{R}^{2}\}. \tag{3.13}\] A point of \(N_{0}\) corresponds to a configuration of particles forming an isosceles trapezoid, which is symmetric with respect to the line \(x=y\); a point of \(N_{1}\) corresponds to the particles forming a trapezoid, which is symmetric with respect to the line \(x=-y\); and, \(\mathbb{Z}_{4}\)-orbit (3.11) corresponds to rotations of an isosceles trapezoid by multiples of \(\pi/2\), see Figure 10. Both \(N_{0}\) and \(N_{1}\) are anti-gradient flow invariant.

Figure 10: (a) Rectangular formations of the particles on the \((x,y)\)-plane corresponding to minima (3.10) of the potential \(V_{0}^{\sigma}\). (b) Trapezoid shaped configurations of particles corresponding to minima (3.11).

Figure 11: (a) The restriction of potential (3.1) to the subspace \(M\) for \(\sigma=1\). (b) Minimum point \(q_{*}\) of the potential shown in panel (a) as a function of \(\sigma\).
Minimizing \(V_{0}^{\sigma}\) on \(N_{0}\) provides the branch of critical points \({\bf q}_{*}(\sigma)\). We restrict our attention to the segment of this branch shown in Figure 13a with \(\sigma\) ranging over the interval \((0.95,1.145)\), which contains the bifurcation point \(\sigma_{*}\). The eigenvalues of the Hessian are positive on this segment (see Figure 14), hence \({\bf q}_{*}(\sigma)\) is a minimum point of the potential, and so are all four points of \(\mathbb{Z}_{4}\)-orbit (3.11). The branch containing this segment connects to the branch of rectangular configurations (3.10) via the fold bifurcation at \(\sigma=1.148\), see Figure 15(b), and the subcritical bifurcation at the point \(\sigma_{\star}\), see Figure 15(a). It is worth noting that on the parameter interval \(1.1348<\sigma<1.145\) the minimum at zero co-exists with four minimum points (3.11) creating multi-stability for \(h=0\). Branches of minima (3.11) shown in Figure 13a correspond to isosceles trapezoidal configurations of particles located within the square \(-1\leq x,y\leq 1\) (equivalently, \(-1\leq q,p\leq 1\)). In addition, the potential \(V_{0}^{\sigma}\) has a \(\mathbb{Z}_{4}\)-orbit of critical points which also belong to the subspaces \(N_{0}\), \(N_{1}\) but correspond to isosceles trapezoidal configurations with two particles located outside the square \(-1\leq q,p\leq 1\), see Figure 13. These are saddle points with one unstable direction which is perpendicular to the subspace \(N_{0}\) (resp., \(N_{1}\)) where the critical point is located, see Figure 16. There is an interval of the parameter \(\sigma\) within which these critical points co-exist with the minimum points shown in Figure 13a.

Figure 12: Eigenvalues of the Hessian of potential (3.1) at its critical points (3.10) as functions of \(\sigma<\sigma_{*}\) (black lines). They merge with the eigenvalues at zero (colored lines) at the bifurcation point \(\sigma=\sigma_{*}\). The colors match those in Figure 7.
We notice that the restriction of the potential to the subspace \(N_{0}\), \[V_{0}^{\sigma}(q,q,p,p) = \tfrac{\sigma^{6}}{64}\left(-\tfrac{8}{(q-1)^{6}}-\tfrac{128}{(4+( q-p)^{2})^{3}}-\tfrac{128}{((1+q)^{2}+(p-1)^{2})^{3}}-\tfrac{8}{(1+p)^{6}}\right.\] \[+ \left.\sigma^{6}\left(\tfrac{1}{(q-1)^{12}}+\tfrac{128}{(4+(q-p)^ {2})^{6}}+\tfrac{128}{((1+q)^{2}+(p-1)^{2})^{6}}+\tfrac{1}{(1+p)^{12}}\right) \right),\] has a singularity on the lines \(q=1\), \(p=-1\) and at the point \((q,p)=(-1,1)\). The restriction of this potential to the subspace \(N_{1}\) has singularities at the same locations. In Section 3.3, we will consider the system under expansion (potential (3.3)) for \(\sigma<\sigma_{*}\) and show hysteresis between isosceles trapezoidal configurations of particles as the external forcing parameter \(h\) is varied.

### System under rotational forcing

Let us consider potential (3.2) with rotational external forcing for \(\sigma^{*}<\sigma<\sigma^{**}\) (cf. (3.9)). As shown in the previous subsection, when the forcing is zero (\(h=0\)), the potential has two minimum points (3.6) corresponding to square-shaped configurations of particles shown in Figure 8. Potential (3.2) is invariant with respect to the action of the group \(\mathbb{Z}_{4}\) but not invariant with respect to the action of the group \(\mathbb{Z}_{2}\). However, we observe that \[V_{h}^{\sigma}(\mathbf{q})=V_{-h}^{\sigma}(\kappa\mathbf{q}).\] In particular, if \(\mathbf{q}\) is a local minimum point of the potential for some \(h\), then \(\kappa\mathbf{q}\) is a local minimum point for the value \(-h\) of the forcing parameter. As in the case without forcing, subspace (3.7) of \(\mathbb{Z}_{4}\)-symmetric points is invariant for the anti-gradient flow of potential (3.2).

Figure 14: (a) Positive eigenvalues of the Hessian for the branch of minimum points shown in Figure 13a. (b) Zoom into the two smaller eigenvalues from panel (a).
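The identity relating the forcing values \(h\) and \(-h\) can be verified at random configurations; a sketch (\(\sigma=1.33\) is an arbitrary test value):

```python
import random

def V0(q, sigma):
    pts = [(1.0, q[0]), (q[1], 1.0), (-1.0, q[2]), (q[3], -1.0)]
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

def Vh(q, h, sigma=1.33):
    """Potential (3.2) with rotational forcing."""
    return h * (q[0] - q[1] - q[2] + q[3]) + V0(q, sigma)

def kappa(q):
    return [q[1], q[0], q[3], q[2]]

random.seed(1)
ok = True
for _ in range(100):
    q = [random.uniform(-0.5, 0.5) for _ in range(4)]
    h = random.uniform(-0.5, 0.5)
    # V_h(q) = V_{-h}(kappa q): kappa flips the sign of the forcing term
    # and leaves V0 unchanged.
    ok = ok and abs(Vh(q, h) - Vh(kappa(q), -h)) < 1e-12
```

The identity holds to machine precision, which is the algebraic reason a minimum at \(h\) has a mirror minimum at \(-h\).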
Figure 15: The branch of trapezoidal configurations shown in Figure 13a folds at \(\sigma=1.148\) (panel (b)) and connects to the branch of rectangular configurations shown in Figure 11 at \(\sigma=\sigma_{\star}=1.1343\) (panel (a)). Solid and dashed segments of the branches correspond to a minimum and a saddle of the potential, respectively.

Figure 16: (a) Eigenvalues of the Hessian for the branch of critical points shown in Figure 13. Panel (b) zooms into the two smaller eigenvalues from panel (a). One eigenvalue is negative, i.e. the critical points are saddles.

Restricting the potential to \(L\), we obtain the scalar function \[v_{h}^{\sigma}(q)=4hq+v_{0}^{\sigma}(q)=4hq+\frac{129\sigma^{12}-1088\sigma^{ 6}(1+q^{2})^{3}}{2048(1+q^{2})^{6}},\] whose critical points \(q^{*}\) define \(\mathbb{Z}_{4}\)-symmetric critical points \(\mathbf{q}^{*}=(q^{*},-q^{*},-q^{*},q^{*})\) of the potential \(V_{h}^{\sigma}\). Hence, we consider zeros of the derivative \[-\frac{(v_{h}^{\sigma})^{\prime}(q)}{4}=-\frac{(v_{0}^{\sigma})^{\prime}(q)}{4} -h=\frac{3\sigma^{6}q(129\sigma^{6}-544(1+q^{2})^{3})}{2048(1+q^{2})^{7}}-h.\] The graph of the odd function \(-(v_{0}^{\sigma})^{\prime}/4\) has four extremum points for \(\sigma>\sigma^{*}\), see Figure 17. In particular, on the positive semi-axis the local maximum and minimum points satisfy \[0<q_{max}<q_{min},\quad h_{max}:=-\frac{(v_{0}^{\sigma})^{\prime}(q_{max})}{4} >0>-\frac{(v_{0}^{\sigma})^{\prime}(q_{min})}{4}=:h_{min}. \tag{3.14}\] Therefore, hysteresis occurs if the following conditions are satisfied:

* The local extremum values of the function \(-(v_{0}^{\sigma})^{\prime}/4\) satisfy \(h_{max}<-h_{min}\), see the blue plot in Figure 17 (the orange plot violates this condition). From this condition, it follows that there is a unique point \(q_{0}\) satisfying \[q_{max}<q_{0}<q_{min},\qquad-\frac{(v_{0}^{\sigma})^{\prime}(q_{0})}{4}=- \frac{(v_{0}^{\sigma})^{\prime}(-q_{max})}{4}=-h_{max}, \tag{3.15}\] see Figure 18.
* Assuming that the external forcing parameter \(h\) oscillates between \(-h_{0}\) and \(h_{0}\), the amplitude \(h_{0}\) satisfies \(h_{max}<h_{0}<-h_{min}\).

* The segment of the straight line (3.7) between the points \(\pm(q_{0},-q_{0},-q_{0},q_{0})\) is transversally stable. In other words, the eigenvalues \[\lambda_{1}=\lambda_{2}=\frac{9\sigma^{6}\big{(}\sigma^{6}-2(1+q^{2})^{3} \big{)}}{8(1+q^{2})^{7}}, \tag{3.16}\] \[\lambda_{3}=-\frac{3\sigma^{6}\big{(}-96(1+q^{2})^{3}(-37+3q^{2})+\sigma^{6}( -1663+115q^{2})\big{)}}{2048(1+q^{2})^{8}} \tag{3.17}\] of the Hessian, which correspond to the eigenvectors \((0,1,0,1)\), \((1,0,1,0)\), \((-1,-1,1,1)\) orthogonal to \(L\) (see (3.8)), are positive on the segment \(-q_{0}\leq q\leq q_{0}\). We note that the eigenvalue \[\lambda_{4}=\frac{3\sigma^{6}\big{(}-544(1+q^{2})^{3}(-1+7q^{2})+129\sigma^{6} (-1+13q^{2})\big{)}}{2048(1+q^{2})^{8}}=\frac{(v_{h}^{\sigma})^{\prime\prime} (q)}{4}\] corresponding to the eigenvector \((1,-1,-1,1)\) in the direction of \(L\) is negative on the interval \((-q_{max},q_{max})\) and positive on each of the intervals \((-q_{min},-q_{max})\) and \((q_{max},q_{min})\) which include the points \(-q_{0}\) and \(q_{0}\), respectively.

Under these conditions, the system with potential (3.2) exhibits hysteresis as shown in Figure 18. The first of the above three conditions is satisfied for the values of \(\sigma\) from the interval \((\sigma^{*},\sigma^{\star})=(1.27107,1.375)\), see Figure 19 which shows the dependence of \(h_{max}\) and \(-h_{min}\) on \(\sigma\). The second condition is satisfied for every pair \((\sigma,h_{0})\) in the region bounded above by the graph of \(-h_{min}(\sigma)\) (red line) and below by the graph of \(h_{max}(\sigma)\) (blue line) on the same figure. The third condition involves the interval \((-q_{0},q_{0})\) where \(q_{0}\) is defined non-locally by equation (3.15) (see Figure 18).
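The first of the three conditions can be checked numerically for the two values of \(\sigma\) plotted in Figure 17; a sketch (grid bounds are ad hoc):

```python
def f(q, sigma):
    """-(v0^sigma)'(q)/4 = 3 sigma^6 q (129 sigma^6 - 544 (1+q^2)^3) / (2048 (1+q^2)^7)."""
    s6 = sigma ** 6
    u = 1.0 + q * q
    return 3.0 * s6 * q * (129.0 * s6 - 544.0 * u ** 3) / (2048.0 * u ** 7)

def extrema(sigma):
    """Local extremum values h_max > 0 > h_min of f on the positive semi-axis."""
    qs = [1e-4 * k for k in range(1, 30001)]   # q in (0, 3]
    vals = [f(q, sigma) for q in qs]
    return max(vals), min(vals)

hmax_blue, hmin_blue = extrema(1.33)      # blue curve in Figure 17
hmax_orange, hmin_orange = extrema(1.4)   # orange curve in Figure 17
# hysteresis condition h_max < -h_min holds at sigma = 1.33 but fails at 1.4
```

At \(\sigma=1.33\) the local maximum of \(-(v_{0}^{\sigma})'/4\) is well below the depth of its local minimum, while at \(\sigma=1.4\) the inequality reverses, consistent with the interval \((\sigma^{*},\sigma^{\star})\).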
Figure 19 shows the dependence of \(q_{0}\) on \(\sigma\) on the interval of interest, \((\sigma^{*},\sigma^{\star})\). As confirmed by Figure 20, the eigenvalues \(\lambda_{1}=\lambda_{2}\), \(\lambda_{3}\) (see (3.16), (3.17)) evaluated at \(q=q_{0}(\sigma)\) are positive for the values of \(\sigma\) from this interval. These eigenvalues are even functions of \(q\), the eigenvalues \(\lambda_{1}=\lambda_{2}\) decrease with \(q\) for \(q\geq 0\), and the eigenvalue \(\lambda_{3}\) also decreases with \(q\) in the domain of interest, i.e. in \[\big{\{}(\sigma,q):\sigma^{*}<\sigma<\sigma^{\star},\ 0\leq q\leq q_{0}( \sigma)\big{\}}. \tag{3.18}\] Hence, Figure 20 ensures that all the transversal eigenvalues are positive on the segment \(-q_{0}(\sigma)\leq q\leq q_{0}(\sigma)\) for each \(\sigma\) from the interval \((\sigma^{*},\sigma^{\star})\), i.e. the third condition is also satisfied on this interval. We conclude that the system with potential (3.2) exhibits hysteresis if the parameter \(\sigma\) of the potential satisfies \(\sigma^{*}<\sigma<\sigma^{\star}\). This is the same type of hysteresis associated with bi-stability as shown in Figure 2. Clearly, the symmetric range of \(h\) can be replaced by any asymmetric range \(h_{1}\leq h\leq h_{2}\) provided that \(h_{max}<-h_{1},h_{2}<-h_{min}\), i.e. both \(-h_{1}\) and \(h_{2}\) lie in the interval \((h_{max},-h_{min})\).

### System under expansion

In this section, we consider potential (3.3) for the fixed \(\sigma=1.12<\sigma_{*}\) and vary the force parameter \(h\). This potential is invariant with respect to the action of the subgroup \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}=\{e,\rho^{2},\kappa,\kappa\rho^{2}\}\) of \(\mathbb{D}_{4}\) and satisfies \[\hat{V}_{h}^{\sigma}(\mathbf{q})=\hat{V}_{-h}^{\sigma}(\rho\mathbf{q}).\] Therefore, the planes \(N_{0},N_{1}\) defined by (3.12), (3.13) (which correspond to isosceles trapezoidal formations of particles, see Figure 10) are invariant for the gradient field in the configuration space.
Figure 21 presents two branches of critical points located in the plane \(N_{0}\). Eigenvalues along the yellow branch are shown in Figure 22; the corresponding critical point is a minimum for \(h=0\). As \(h\) decreases, the smallest eigenvalue becomes negative at \(h=h_{*}=-0.122\). The corresponding saddle-node critical point is \[(q_{1},q_{2},q_{3},q_{4})=(-0.430867,-0.430867,-0.110452,-0.110452).\] Figure 23 shows the transition from the above critical point to the minimum point \[(q_{1},q_{2},q_{3},q_{4})=(-1.69683,-1.69683,-0.0720405,-0.0720405)\] on the blue branch resulting from a small perturbation in a direction perpendicular to the subspace \(N_{0}\) of isosceles trapezoidal configurations. Eigenvalues along the blue branch are shown in Figure 24. As \(h\) increases from the value \(h_{*}\), the smallest eigenvalue becomes negative at \(h=h^{*}=-0.0135\). The corresponding critical point is \[(q_{1},q_{2},q_{3},q_{4})=(-1.58907,-1.58907,-0.110771,-0.110771).\] Figure 23 shows the backward transition from this saddle-node critical point to the minimum point \[(q_{1},q_{2},q_{3},q_{4})=(-0.355008,-0.355008,-0.0969899,-0.0969899)\] on the yellow branch resulting from a small perturbation in a direction perpendicular to the subspace of isosceles trapezoidal configurations \(N_{0}\). Thus, varying \(h\) over an interval \([h_{0},h^{0}]\) which satisfies \([h_{*},h^{*}]\subset[h_{0},h^{0}]\subset[-0.15,0]\) results in a hysteresis loop.

Figure 17: Plot of the function \(-(v_{0}^{\sigma})^{\prime}(q)/4\) for \(\sigma=1.33\) (blue) and \(\sigma=1.4\) (orange). Intersections of the graph with a horizontal line \(y=h\) define critical points \(q^{*}(h)\) of the function \(v_{h}^{\sigma}(q)\).

Figure 18: Hysteresis loop for potential (3.2) with external forcing \(h\). The system moves along the straight line \(L\) of \(\mathbb{Z}_{4}\)-symmetric states, hence the position in the configuration space is described by one scalar parameter \(q\). The blue curve is the graph of the function \(-(v_{0}^{\sigma})^{\prime}(q)/4\), see the blue curve in Figure 17. Solid parts of the curve correspond to minimum points of the potential. As \(h\) increases from the minimal value \(-h_{0}\), the point \((q^{*}(h),h)\) follows the solid part on the right branch of the curve, moving upwards and to the left in the direction of the local maximum of the curve. In the configuration space, the system sits at the (moving) local minimum point \(\mathbf{q}^{*}(h)=(q^{*}(h),-q^{*}(h),-q^{*}(h),q^{*}(h))\) of the potential. Once the point \((q^{*}(h),h)\) reaches the maximum point of the curve at \(h=h_{max}\), it transits horizontally along the dashed arrow to the left branch of the curve, which corresponds to the local minimum point \(-\mathbf{q}^{*}(-h)=\kappa\mathbf{q}^{*}(-h)\) of the potential. In the configuration space, this event corresponds to the local minimum \(\mathbf{q}^{*}(h)\) disappearing in the saddle-node bifurcation at \(h=h_{max}\), and the system transitioning to the remaining minimum point \(-\mathbf{q}^{*}(-h)\) along the line \(L\). Now, as \(h\) increases further, the point \((q^{*}(h),h)\) follows the solid segment on the left branch of the curve until it reaches the highest point at \(h=h_{0}\). Similarly, as \(h\) decreases from the maximum value \(h_{0}\), the point \((q^{*}(h),h)\) follows the left branch of the curve downwards and to the right, transits along the horizontal dashed arrow to the right branch of the curve at the point \(h=-h_{max}\), and continues along the right branch until it reaches the rightmost lowest point at \(h=-h_{0}\). In the configuration space, the system sits in the local minimum point \(-\mathbf{q}^{*}(-h)\) until this minimum disappears in the saddle-node bifurcation at \(h=-h_{max}\), at which point the system transitions to the local minimum \(\mathbf{q}^{*}(h)\) along the line \(L\), and then remains at \(\mathbf{q}^{*}(h)\) until \(h\) reaches the value \(-h_{0}\).
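The bi-stability underlying this loop can be probed by gradient descent from two starting guesses placed near the configurations quoted above; a sketch (the descent parameters, the test value \(h=-0.07\) and the guesses are ad hoc):

```python
def V0(q, sigma):
    pts = [(1.0, q[0]), (q[1], 1.0), (-1.0, q[2]), (q[3], -1.0)]
    total = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            r2 = (pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
            s6 = sigma ** 6 / r2 ** 3
            total += s6 * s6 - s6
    return total

def Vexp(q, h, sigma=1.12):
    """Potential (3.3): expansion forcing added to the unforced potential."""
    return -h * (q[0] + q[1] - q[2] - q[3]) + V0(q, sigma)

def grad(q, h, t=1e-6):
    g = []
    for i in range(4):
        qp = list(q); qp[i] += t
        qm = list(q); qm[i] -= t
        g.append((Vexp(qp, h) - Vexp(qm, h)) / (2.0 * t))
    return g

def descend(q, h, steps=20000, dt=4e-3):
    """Crude explicit-Euler anti-gradient relaxation to a nearby local minimum."""
    q = list(q)
    for _ in range(steps):
        g = grad(q, h)
        q = [x - dt * gi for x, gi in zip(q, g)]
    return q

h = -0.07  # strictly between the bifurcation values h_* = -0.122 and h^* = -0.0135
yellow = descend([-0.39, -0.39, -0.10, -0.10], h)  # guess near the yellow branch
blue = descend([-1.64, -1.64, -0.09, -0.09], h)    # guess near the blue branch
```

Both descents settle at distinct critical points with nearly vanishing gradients: two coexisting minima at the same value of \(h\), which is the bi-stability required for the loop.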
Bifurcations at the points \(h=h_{*},h^{*}\) are subcritical pitchfork bifurcations associated with \(\mathbb{Z}_{2}\)-symmetry breaking of the isosceles trapezoidal solutions, see Figure 25.

Figure 20: Transversal eigenvalues \(\lambda_{1}=\lambda_{2}\) (red) and \(\lambda_{3}\) (blue) evaluated at the point \(q_{0}(\sigma)\) as functions of \(\sigma\) on the interval \((\sigma^{*},\sigma^{\star})\).

## 4 Conclusions

A particle in a quasistatically varied double-well potential is a canonical example of hysteresis associated with bi-stability and elimination of a minimum of the potential energy via a saddle-node bifurcation. We explored similar scenarios in systems of particles assuming a natural single-well (Lennard-Jones) interaction potential for each pair of particles. In this setting, if \(N\) identical particles are constrained to a straight line, each particle interacts with its nearest neighbors, and a quasistatically varied external forcing is applied at the ends of the chain, then the potential energy has at most one minimum, hence the system does not exhibit hysteresis. Therefore, we considered particles on the plane. Two hysteresis scenarios were shown in a simple (constrained) four-particle system with four degrees of freedom. The first scenario is equivalent to a one-degree-of-freedom particle in a double-well potential because the evolution in the configuration space is restricted to a one-dimensional invariant attracting manifold (straight line) of symmetric square-shaped configurations. In the second scenario, critical points of the potential which are restricted to an invariant plane of isosceles trapezoidal configurations are destabilized by a symmetry breaking bifurcation in a transversal direction, hence the ensuing transient dynamics towards a minimum occurs outside the plane where the minima are located.
Important phenomenological models of hysteresis (such as models of constitutive relations of materials and media) combine, or admit decomposition into, many bi-stable elements. As such, they exhibit specific types of hysteresis, which can be identified by properties of hysteresis loops. For example, hysteresis loops of the Ising, Preisach and Prandtl-Ishlinskii models close after one period (the so-called return point memory property); additionally, all hysteresis loops of the Preisach model corresponding to the same periodic input are congruent to each other; all the loops of the Prandtl-Ishlinskii model are centrally symmetric. It would be interesting to characterize hysteresis of multi-particle systems, in which particles interact via the Lennard-Jones potential (as in (1)-(2)), and compare it to the types of hysteresis exhibited by standard phenomenological models. One particular example of such multi-particle systems is amorphous media, specifically low-molecular and polymer glasses. Plastic phenomena in these systems are closely related to the succession of bifurcations of their complicated multi-dimensional potential landscape [29, 30]. Numeric simulations, both in the athermal quasistatic regime and with molecular dynamics, demonstrate clear hysteretic behavior, in complete agreement with the intuitive physical understanding of plasticity. Still, a direct relationship between this hysteresis and the particularities of interatomic interactions remains mysterious.

Figure 21: Components \(q,p\) for two branches of isosceles trapezoidal critical points \((q,q,p,p)\in N_{0}\). For the yellow branch, the formation of particles belongs to the square \(-1\leq x,y\leq 1\); for the blue branch, two particles are located outside this square; \(-0.15<h<0\); \(\sigma=1.12\).

Figure 22: (a) Eigenvalues for the yellow branch of critical points shown in Figure 21. (b) The smallest eigenvalue corresponding to a direction perpendicular to \(N_{0}\).
However, these questions are beyond the scope of this work. It would be also interesting to replace transitions along the anti-gradient field with inertial transition dynamics \(m\ddot{\mathbf{q}}+\gamma\dot{\mathbf{q}}+\nabla V(\mathbf{q};h)=0\). The anti-gradient transitions correspond to the limit of large friction forces. In the opposite frictionless limit (i.e., \(m\ddot{\mathbf{q}}+\nabla V(\mathbf{q};h)=0\)), transitions are initiated by saddle-center bifurcations and end at oscillating regimes. It is worth noting that any type of hysteresis is possible in a two-degrees-of-freedom system if the class of potentials is not restricted. To make this statement precise, an edge-labeled directed graph \(\Gamma\) was associated in [24] with any \(N\)-degree-of-freedom potential energy \(V_{h}(\mathbf{q})\) as follows. With each energy minimum (state) that exists on an input interval \[h_{j}^{-}<h<h_{j}^{+} \tag{10}\] (where \(h_{j}^{\pm}\) are saddle-node bifurcation points), one associates a graph vertex \(v_{j}\). Every vertex has two outgoing directed edges. One edge, labeled \(h_{j}^{-}\), corresponds to the transition from the state labeled \(v_{j}\) to another state as a decreasing input \(h\) reaches the bifurcation value \(h_{j}^{-}\); the other edge, labeled \(h_{j}^{+}\), corresponds to the transition, which occurs when an increasing input reaches the bifurcation value \(h_{j}^{+}\). Figure 23: (a) Transition from the yellow branch to the blue branch at the bifurcation point \(h=-0.122\). (b) The backward transition at the bifurcation point \(h=-0.0135\). Each panel shows the time plots of the coordinates \(q_{i}\) of \(\mathbf{q}\) during the corresponding transition, which follows the anti-gradient dynamics \(\dot{\mathbf{q}}=-\nabla\tilde{V}_{h}^{\sigma}(\mathbf{q})\). Figure 24: (a) Eigenvalues for the blue branch of critical points shown in Figure 21. (b) The smallest eigenvalue corresponding to a direction perpendicular to \(N_{0}\).
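The graph construction of [24] can be encoded directly. In the illustrative sketch below (the class names and the two-state example are ours, not from [24]), each state stores its existence interval \((h_j^-, h_j^+)\) and its two outgoing transitions, and a quasistatic input variation is simulated by following edges whenever the input leaves the current state's interval:

```python
# Hysteresis map: each state (graph vertex) exists for h in (h_minus, h_plus);
# when the input leaves this interval, the system follows the outgoing edge
# labeled by the crossed bifurcation value.
class State:
    def __init__(self, name, h_minus, h_plus):
        self.name, self.h_minus, self.h_plus = name, h_minus, h_plus
        self.on_decrease = None  # target when h drops to h_minus
        self.on_increase = None  # target when h rises to h_plus

def step(state, h):
    # follow edges until h lies inside the current state's existence interval
    while not (state.h_minus < h < state.h_plus):
        state = state.on_decrease if h <= state.h_minus else state.on_increase
    return state

def run(state, inputs):
    names = [state.name]
    for h in inputs:
        state = step(state, h)
        names.append(state.name)
    return state, names

# a minimal two-state map: A exists for h in (-2, 0.5), B for h in (-0.5, 2);
# the edge labels 0.5 and -0.5 satisfy condition (10) for their target states
A = State("A", -2.0, 0.5)
B = State("B", -0.5, 2.0)
A.on_increase, B.on_decrease = B, A
# the outer edges (labels -2.0 and 2.0) are never taken for the inputs below

final, path = run(A, [0.0, 1.0, 0.0, -0.6, 0.0])
print(path)  # the state at h = 0 depends on the input history: hysteresis
```

Sweeping the input up past 0.5 and back down past -0.5 visits A, B, and A again, so the state at intermediate inputs is history-dependent, which is exactly what the hysteresis map encodes.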
Since the graph \(\Gamma\) encodes all the transitions between states in response to quasistatic variations of the input, it is called a hysteresis map for \(V_{h}\). By design, for any vertex \(v_{j}\), the labels \(h\) of all the incoming edges satisfy (10). As shown in [24], _any_ edge-labeled directed graph \(\Gamma\) which, at each vertex, has exactly two outgoing edges, with the incoming edge labels \(h\) and outgoing edge labels \(h_{j}^{\pm}\) satisfying (10), is a hysteresis map for some two-degrees-of-freedom potential \(V_{h}(q_{1},q_{2})\). It would be interesting to determine what hysteresis maps correspond to multi-particle potentials (1) with Lennard-Jones interactions. ## Acknowledgments This work was supported by Lady Davis Visiting Professorship at Technion--Israel Institute of Technology.
2309.06065
Electronic structure and optoelectronic properties of halide double perovskites: Fundamental insights and design of a theoretical workflow
Like single perovskites, halide double perovskites (HDP) have truly emerged as efficient optoelectronic materials since they display superior stability and are free of toxicity. However, challenges still exist due to either wide and indirect bandgaps or parity-forbidden transitions in many of them. The lack of understanding in chemical bonding and the formation of parity-driven valence and conduction band edge states have hindered the design of optoelectronically efficient HDPs. In this study, we have developed a theoretical workflow using a multi-integrated approach involving ab-initio density functional theory (DFT) calculations, model Hamiltonian studies, and molecular orbital picture leading to momentum matrix element (MME) estimation. This workflow gives us detailed insight into chemical bonding and parity-driven optical transition between edge states. In the process, we have developed a band-projected molecular orbital picture (B-MOP) connecting free atomic orbital states obtained at the Hartree-Fock level and orbital-resolved DFT bands. From the B-MOP, we show that the nearest neighbor cation-anion interaction determines the position of atom-resolved band states, while the second neighbor cation-cation interactions determine the shape and width of band dispersion and, thereby, MME. The latter is critical to quantify the optical absorption coefficient. Considering both B-MOP and MME, we demonstrate a mechanism of tailoring bandgap and optical absorptions through chemical doping at the cation sites. Furthermore, the cause of bandgap bowing, a common occurrence in doped HDPs, is explained by ascribing it to chemical effect and structural distortion.
Mayank Gupta, Susmita Jana, B. R. K. Nanda
2023-09-12T09:05:14Z
http://arxiv.org/abs/2309.06065v1
Electronic structure and optoelectronic properties of halide double perovskites: Fundamental insights and design of a theoretical workflow ###### Abstract Like single perovskites, halide double perovskites (HDP) have truly emerged as efficient optoelectronic materials since they display superior stability and are free of toxicity. However, challenges still exist due to either wide and indirect bandgaps or parity-forbidden transitions in many of them. The lack of understanding in chemical bonding and the formation of parity-driven valence and conduction band edge states have hindered the design of optoelectronically efficient HDPs. In this study, we have developed a theoretical workflow using a multi-integrated approach involving ab-initio density functional theory (DFT) calculations, model Hamiltonian studies, and molecular orbital picture leading to momentum matrix element (MME) estimation. This workflow gives us detailed insight into chemical bonding and parity-driven optical transition between edge states. In the process, we have developed a band-projected molecular orbital picture (B-MOP) connecting free atomic orbital states obtained at the Hartree-Fock level and orbital-resolved DFT bands. From the B-MOP, we show that the nearest neighbor cation-anion interaction determines the position of atom-resolved band states, while the second neighbor cation-cation interactions determine the shape and width of band dispersion and, thereby, MME. The latter is critical to quantify the optical absorption coefficient. Considering both B-MOP and MME, we demonstrate a mechanism of tailoring bandgap and optical absorptions through chemical doping at the cation sites. Furthermore, the cause of bandgap bowing, a common occurrence in doped HDPs, is explained by ascribing it to chemical effect and structural distortion. ## I Introduction: In the last couple of decades, organic and inorganic halide single perovskites (HSPs) of the formula ABX\({}_{3}\) (e.g. 
CsPbI\({}_{3}\)) have gained enormous research attention as they demonstrate promising optoelectronic properties [1; 2], solar cell applications [3], and non-trivial topological quantum phases which bring in the dimensions of orbitronics [4; 5] and topotronics [6; 7; 8]. At the same time, there are a number of disadvantages associated with this class of compounds. The most significant one is the lack of stability on prolonged exposure to light and heat. As most of the promising HSPs are lead (Pb) based, toxicity remains another concern. The halide double perovskites (HDPs) are emerging as an alternate class of compounds which, to some extent, overcome the aforementioned disadvantages. HDPs have the general formula A\({}_{2}\)BB\({}^{\prime}\)X\({}_{6}\), where A is a monovalent cation of Group-I, B and B\({}^{\prime}\) are metals with +1 (K, Na, Ag, Au, Cu, In, Tl) and +3 (Bi, Sb, In, Tl) oxidation states, and X is a halide. Most commonly, in HDPs, the A-site is Cs, and Cl and Br occupy the halogen sites. Compared to the HSPs, HDPs are in general more stable [9] and environmentally friendly. They create a large chemical configurational space, and therefore this family is capable of exhibiting diverse electronic structures and, in turn, is suitable for a wide range of applications. These include photovoltaic solar cells [10], photodetectors [11], photocatalysis [12; 13], CO\({}_{2}\) reduction [14], spintronics [15], X-ray detectors [16; 17], water splitting [18], etc. The HDPs are also being actively examined as solar cell absorbers. However, the issues of indirect bandgap in some of them and parity-forbidden transitions in others [19; 20; 21; 22; 23] are the bottlenecks which need to be addressed. For example, Cs\({}_{2}\)AgBiBr\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\) possess indirect bandgaps.
On the other hand, though Cs\({}_{2}\)AgInCl\({}_{6}\) exhibits a direct band gap of 3.23 eV, the parity-forbidden transition at \(\Gamma\) leads to very weak optical absorption near the band gap. The next optical transition in this system happens at \(\sim\) 4.46 eV, which is much too high for an ideal solar cell material [24]. In a recent study, it has been revealed that, for B = In\({}^{+}\), Tl\({}^{+}\) and B\({}^{\prime}\) = Sb\({}^{3+}\), Bi\({}^{3+}\), HDPs show favorable optical absorption suitable for thin-film solar cell applications [25]. Unfortunately, Tl\({}^{+}\) is toxic, and In\({}^{+}\) tends to be unstable against oxidation and form mixed-valence compounds with distorted and complex crystal structures [26]. Despite a few disadvantages, HDPs have attracted considerable attention due to their simple, robust, and easy synthesis process. Since the valence band and conduction band edges are formed out of the covalent hybridization among the orbitals of metal cations (B, B\({}^{\prime}\)) and halide anions (X), cationic and anionic mixing naturally becomes an effective strategy to manipulate the electronic properties and optical behavior. Taking into account these advantages, many design principles have been proposed experimentally [27] and theoretically [28; 29; 30] to modify the electronic structure so as to achieve better optoelectronic performances. Recent studies [31, 32] have shown that Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and Cs\({}_{2}\)AgBi\({}_{x}\)Sb\({}_{1-x}\)Cl\({}_{6}\) produce high photoluminescence for the range \(0.8\leq\) x \(\leq 0.9\). The reasons are attributed to the manipulation of bandgap and parity. In another study, Athrey and Sudakar [33, 34] have experimentally demonstrated distortion-driven nonlinear bandgap variation and self-trapped exciton (STE) emission in the cation-intermixed systems Cs\({}_{2}\)(Ag, Na)BiCl\({}_{6}\).
Interestingly, the anionic intermixing (Cs\({}_{2}\)AgBiBr\({}_{6-x}\)Cl\({}_{x}\)) results in linear bandgap variation [35]. In a combined theoretical and experimental study, Slavney et al. [36] reported a change in the bandgap from 1.95 to 1.4 eV in MA\({}_{2}\)AgBiBr\({}_{6}\) by Tl doping at the Ag and Bi sites, which is close to the ideal bandgap for photovoltaic applications. Many other studies have been carried out to demonstrate the tuning of optical properties in HDPs by a similar cation intermixing approach [37, 38, 39, 10]. However, these isolated investigations with limited scopes do not reveal the universal mechanism that alters the electronic structure in the vicinity of the Fermi level. Hence, there is a lack of guiding principles which can be utilized to design HDPs for electronic applications through controlled cationic and anionic intermixing. Developing a materials design workflow has become necessary as HDPs are now being intensely investigated in search of stable and highly efficient solar cell materials. In this study, by considering a set of prototype compounds Cs\({}_{2}\)BB\({}^{\prime}\)Cl\({}_{6}\) (B = Ag, Na, In, Tl and B\({}^{\prime}\) = In, Tl, Bi, Sb), we develop a theoretical workflow to establish the relationship between cationic and anionic intermixing and the electronic structure, as well as the optical absorption, in the HDPs. The theoretical workflow, schematically illustrated in Fig. 1, is based on density functional theory (DFT) calculations, an elegant Slater-Koster formalized tight-binding (SK-TB) model, and a band-projected molecular orbital picture (B-MOP). The optical absorption study is carried out by calculating the momentum matrix elements (MME), which are the outcomes of the solution of the model Hamiltonian. Through the workflow, we understand the chemical bonding and parity-driven optical transition between the edge states.
With the aid of the B-MOP, we infer that the nearest neighbor cation-anion interaction determines the position of atom-resolved band states. On the other hand, second neighbor cation-cation interactions determine the shape and width of the band dispersion and, hence, the MMEs. The imaginary part of the dielectric constant and, in turn, the optical absorptions are calculated using the MMEs. With the aid of both B-MOPs and MMEs, we demonstrate how chemical doping at the cation site can tailor the bandgap and optical absorption. As a byproduct, we demonstrate how the chemical effect and structural distortion together cause bandgap bowing, a common occurrence in doped HDPs. ## II Designing approach and computational details We will first briefly discuss the crystal structure of HDPs. As shown in Fig. 2 (a), it has a rhombohedral primitive unit cell (one formula unit, Fm3m) with two organic or inorganic monovalent A cations, one monovalent B cation, one trivalent B\({}^{\prime}\) cation, and six halogen anions. The conventional crystal structure of HDP is a cubic unit cell containing four formula units. The salient feature of the crystal structure is the presence of BX\({}_{6}\) and B\({}^{\prime}\)X\({}_{6}\) octahedra which are alternately arranged and connected with corner-sharing X-anions in all three directions. The A\({}^{+}\) cations occupy the cuboctahedral cavity positions. The approach to design the theoretical workflow is summarized in the flowchart shown in Fig. 1. Hartree-Fock calculations on the free atoms A, B, B\({}^{\prime}\), and X provide the free atomic orbital energy levels. This, in combination with the DFT calculated band structure, establishes the B-MOP describing the possible chemical bondings of the prototype compounds. The B-MOPs enable us to design a parametric tight-binding model Hamiltonian and to construct the chemical configuration space.
Varying the parameters and the configuration aids the search for the desired electronic structure and for the momentum matrix elements that determine the optical absorption, so as to maximize optoelectronic efficiencies. Figure 1: A schematic summarizing the design principle to calculate and predict the efficient optoelectronic properties of HDPs. Each component of the flowchart is further described in detail in the remaining part of the paper. The DFT electronic structure calculations are performed on a set of HDPs (see Table 1) using the full-potential linearized augmented plane-wave (FP-LAPW) method as implemented in the WIEN2k simulation tool [40]. For structural relaxation, we have used the pseudopotential-based Vienna ab-initio Simulation Package (VASP) [41] within the framework of the projector-augmented wave (PAW) method. Relaxations are performed via the conjugate gradient algorithm until the residual force on each atom is \(<\) 0.01 eV/Å. A \(k\)-mesh of size 6 \(\times\) 6 \(\times\) 6 is used for the Brillouin zone (BZ) sampling, the PBE generalized gradient approximation (GGA) [42; 43] is employed for the exchange-correlation functional with an energy cutoff of 400 eV, and the convergence criterion for the total energy is set to 10\({}^{-6}\) eV. The lattice constants of HDPs after relaxation are provided in Table 1. The GGA-PBE functional underestimates the bandgap as compared to the experimental bandgap, and hence GGA-PBE along with the modified Becke-Johnson (mBJ) [44] potential is used to calculate the electronic band structure of HDPs. The results are in good agreement with the experimental bandgaps (see Table 1). The number of plane waves is determined by setting \(R_{MT}K_{MAX}\) to 7.0 for all the compounds. The BZ integration is carried out with a Monkhorst-Pack grid with a k-mesh size of 8 \(\times\) 8 \(\times\) 8 (yielding 35 irreducible points). The calculations include the spin-orbit coupling (SOC) effect.
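For reference, the stated relaxation settings map onto VASP input files roughly as follows. This is our own sketch assembled from the parameters quoted above (400 eV cutoff, 10\({}^{-6}\) eV electronic convergence, forces below 0.01 eV/Å, conjugate-gradient relaxation, 6 \(\times\) 6 \(\times\) 6 mesh), not input files released by the authors; NSW is an illustrative choice:

```
# INCAR (structural relaxation, PAW-PBE)
ENCUT  = 400        # plane-wave cutoff (eV), as quoted
EDIFF  = 1E-6       # electronic convergence criterion (eV), as quoted
EDIFFG = -0.01      # stop when residual forces < 0.01 eV/Angstrom, as quoted
IBRION = 2          # conjugate-gradient ionic relaxation, as quoted
ISIF   = 3          # relax ions + cell shape + volume (full relaxation)
NSW    = 100        # maximum number of ionic steps (illustrative)

# KPOINTS (Brillouin-zone sampling)
Automatic mesh
0
Monkhorst-Pack
6 6 6
0 0 0
```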
For the model Hamiltonian studies, we have developed a few codes using MATLAB [45], and the package is made available online [46]. As the next step, we perform first-principles calculations to estimate the electronic properties of a set of HDPs. For all HDPs discussed here, we have used A = Cs and X = Cl as an example, and we believe that the results and description will not change by replacing the A and X site atoms with their equivalents unless the crystal symmetry is destroyed. Since not all HDPs are experimentally synthesized in their pristine phase, we have conceived the hypothetical cubic Fm3m structure and performed the full relaxation (both atomic positions and crystal lattice parameters) for all of them. The relaxed lattice constant, in comparison to the available experimental lattice constant, and the bandgap calculated with the GGA+mBJ+SOC functional are listed in Table 1. The obtained band structures (calculated along the path shown in Fig. 2 (b)) are further used to map the B-MOP as shown in Figs. 3 and 4. Our aim in studying the pristine phases of HDPs is to understand the properties of the end-member crystals, which helps to predict the properties of the cation-mixed phases. ## III Construction of band projected molecular orbital picture (B-MOP) A MOP examines the possible chemical bondings and provides us with a broader picture of the electronic structure of a material and its universality in a given family. Therefore, without carrying out comprehensive electronic structure calculations, it is possible through a MOP to develop insight into how the electronic structure is modulated across a family by chemical substitution and doping. Here, we construct the MOP in three steps: First, the free atomic energy levels of the valence orbitals are estimated using the Hartree-Fock theory, and their relative positions in the energy scale are obtained. In the second step, orbital-projected band structures are calculated using DFT.
In the third step, the band centers of the projected bands are linked to the free atomic orbital energy levels so as to obtain the probable chemical bondings and their strengths and finally draw the MOP. Such an attempt of linking the schematic MOP with eigenstates and eigenvalues in momentum space has never been made before. Therefore, to distinguish it from the conventional not-to-scale MOPs, we name it the band-projected MOP (B-MOP). Figure 2: (a) The rhombohedral (one formula unit) unit cell (shown in shaded gray color) inside a conventional (four formula unit) face-centered cubic cell of the halide double perovskite A\({}_{2}\)BB\({}^{\prime}\)X\({}_{6}\). The 2\({}^{nd}\) nearest-neighbor (NN) electron hopping interactions take place between B and B\({}^{\prime}\) cations and are mediated by halogen X anions. The 4\({}^{th}\)-NN interactions occur between the same metal cations, \(i.e.\), B to B and B\({}^{\prime}\) to B\({}^{\prime}\). (b and c) The first Brillouin zone for the rhombohedral and conventional unit cells, respectively. The \(k\)-path used for band structure calculations is shown in green color. Figure 3 shows B-MOPs for Ag-based HDPs. It infers the formation of bonding and antibonding spectra arising from {Ag-(\(s\), \(d\)); Bi/Sb/In/Tl-(\(s\), \(p\))} - X-\(p\) covalent hybridizations. The bonding spectrum consists of \(e_{g}-p\), \(t_{2g}-p\), \(\sigma_{s-p}\), \(\sigma_{p-p}\), \(\pi_{p-p}\) interactions, and the antibonding spectrum consists of \(e_{g}-p^{*}\), \(t_{2g}-p^{*}\), \(\sigma_{s-p}^{*}\), \(\sigma_{p-p}^{*}\), \(\pi_{p-p}^{*}\) interactions. The conservation of basis leaves behind eight non-bonding states combinedly formed by the X-\(p\) orbitals. The strength of a covalent hybridization is measured by the energy difference between the corresponding bonding and antibonding pair. The B-MOPs suggest the strengths of hybridization in the decreasing order \(s-p>p-p>d-p\). Now the valence and
conduction band edge states (VBE and CBE) will be determined from the valence electron count (VEC), which is defined as the sum of the valence electrons in the constituent member elements. In the case of Cs\({}_{2}\)AgBiCl\({}_{6}\) (Cs\({}_{2}\)AgSbCl\({}_{6}\)), the VEC is found to be 48 (see Table 1 for other HDPs), and therefore the electron filling implies that the VBE is formed by Ag-\(e_{g}\) - X-\(p\) hybridized orbitals and the CBE is formed by Bi-\(p\) - X-\(p\) (Sb-\(p\) - X-\(p\)) hybridized orbitals. Similarly, for Cs\({}_{2}\)AgInCl\({}_{6}\) (Cs\({}_{2}\)AgTlCl\({}_{6}\)), with VEC 46, the VBE is formed by Ag-\(e_{g}\) - X-\(p\) and the CBE is formed by In-\(s\) - X-\(p\) (Tl-\(s\) - X-\(p\)) hybridized orbitals, respectively. From the B-MOPs (Fig. 3 (a, b)), we describe some of the main features that play an important role in establishing the electronic structure in HDPs. For the Ag-based compounds, these are: (I) The \(t_{2g}\)- and \(e_{g}\)-dominated bands are narrow, indicating weaker interactions with the Cl-\(p\) states, whereas the B/B\({}^{\prime}\)-\(s\) dominated bands are wider due to their stronger interactions with the Cl-\(p\) states. (II) Depending on the VEC, Ag-HDPs demonstrate two kinds of bandgaps. For Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\), the anti-bonding spectrum \(\sigma^{*}_{s-p}\) is occupied to form the VBE with the VBM at X, while \(\sigma^{*}_{p-p}\) constitutes the CBE with the CBM at \(\Gamma\). For Cs\({}_{2}\)AgInCl\({}_{6}\) and Cs\({}_{2}\)AgTlCl\({}_{6}\), which have a VEC smaller by two compared to Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\), the \(\sigma^{*}_{s-p}\) is unoccupied to form the CBE with the CBM at \(\Gamma\), and \(e^{*}_{g-p}\) is occupied to form the VBE, with the VBM lying on the flat band along \(\Gamma-X\).
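The electron filling argument can be checked with a short script. The bookkeeping below is ours, inferred from the VEC values in Table 1: the two Cs atoms contribute one electron each, each of the six halogens contributes its five \(p\) electrons (only the halogen \(p\) shell enters the MOP), and B/B\({}^{\prime}\) contribute their outermost \(s\)/\(p\)/\(d\) electrons. With these assumptions, the table's VEC values are reproduced:

```python
# valence electrons contributed by each ion, consistent with Table 1:
# Cs: 6s^1 -> 1; Cl/Br: np^5 -> 5; B/B': outermost s/p/d electrons
contrib = {"Cs": 1, "Cl": 5, "Br": 5,
           "Ag": 11, "Cu": 11,          # d^10 s^1
           "Na": 1, "K": 1,             # s^1
           "In": 3, "Tl": 3, "Ga": 3,   # s^2 p^1
           "Bi": 5, "Sb": 5, "As": 5}   # s^2 p^3

def vec(B, Bp, X):
    """Valence electron count of Cs2 B B' X6."""
    return 2 * contrib["Cs"] + contrib[B] + contrib[Bp] + 6 * contrib[X]

print(vec("Ag", "Bi", "Cl"))  # 48, as in Table 1
print(vec("Ag", "In", "Cl"))  # 46
print(vec("In", "Bi", "Cl"))  # 40
print(vec("Na", "In", "Cl"))  # 36
```

The two-electron difference between the Bi/Sb and In/Tl families is exactly what moves the Fermi level from the \(\sigma^{*}_{p-p}\)-below-\(\sigma^{*}_{s-p}\) filling to the case where \(\sigma^{*}_{s-p}\) is emptied.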
(III) Bandgap variation: The free atomic energies of the B and B\({}^{\prime}\) atoms turn out to be the deterministic factor for the magnitude of the bandgap as they influence the position of the bands. For example, the bandgap difference of \(\sim\)2.64 eV between Cs\({}_{2}\)AgInCl\({}_{6}\) and Cs\({}_{2}\)AgTlCl\({}_{6}\) can be attributed to the free atomic energy difference between the In-\(s\) and Tl-\(s\) orbitals (see Fig. 3 (c, d)). Since the VBM is formed by the \(e^{*}_{g-p}\) states in both cases, the bandgap is determined by the position of the \(\sigma^{*}_{s-p}\)-led CBM. With the In-\(s\) energy 3 eV higher than that of Tl-\(s\), the latter forms a narrow bandgap of 0.64 eV, and the former forms a wide bandgap of 3.3 eV. In the case of Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\), where Bi/Sb-Cl \(\sigma^{*}_{s-p}\) and \(\sigma^{*}_{p-p}\) form the VBE and CBE respectively, the bandgaps are of similar magnitude. This is due to the fact that both the Bi-\(s\) and Bi-\(p\) energies are lower by a similar magnitude with respect to the Sb-\(s\) and Sb-\(p\) energies. Hence, the relative positioning of VBM and CBM is similar for both compounds. Similarly, the salient features that we obtain from the B-MOP for the HDPs without Ag (Cs\({}_{2}\)InBiCl\({}_{6}\), Cs\({}_{2}\)InSbCl\({}_{6}\), Cs\({}_{2}\)TlBiBr\({}_{6}\), Cs\({}_{2}\)TlSbBr\({}_{6}\)) are as follows (Fig. 3 (c-h)). (I) All are direct bandgap systems with B-Cl-\(\sigma^{*}_{s-p}\) and B\({}^{\prime}\)-Cl-\(\sigma^{*}_{p-p}\) forming the VBE and CBE, respectively. (II) When B is In, the system exhibits a narrower bandgap; when it is Tl, it exhibits a wider bandgap. This is largely attributed to the fact that the In-\(s\) free atomic energy level is higher than that of Tl-\(s\) by 3.0 eV.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Crystal formula & Lattice Constant(Å) & Lattice Constant(Å) & Bandgap & Bandgap & Nature of & VBM & CBM & VEC \\
 & (GGA relaxed) & (Experimental) & GGA-mBJ(eV) & exp (eV) & bandgap & & & \\
\hline
Cs\({}_{2}\)AgBiBr\({}_{6}\) & 11.465 & 11.271 [10; 47] & 1.90 & 2.19 [10] & Indirect & Ag-\(e_{g}\)+Bi-\(s\) & Bi-\(p\) & 48 \\
Cs\({}_{2}\)AgBiCl\({}_{6}\) & 10.936 & 10.760 [33] & 2.98 & 2.65 [33] & Indirect & Ag-\(e_{g}\)+Bi-\(s\) & Bi-\(p\) & 48 \\
Cs\({}_{2}\)AgSbCl\({}_{6}\) & 10.809 & 10.699 [38] & 2.41 & 2.57 [38] & Indirect & Ag-\(e_{g}\)+Sb-\(s\) & Sb-\(p\) & 48 \\
Cs\({}_{2}\)AgInCl\({}_{6}\) & 10.560 & 10.480 [10] & 3.28 & 3.57 [48] & Direct & Ag-\(e_{g}\) & In-\(s\) & 46 \\
Cs\({}_{2}\)AgTlCl\({}_{6}\) & 10.784 & 10.559 [25] & 0.64 & 1.96 [25] & Direct & Ag-\(e_{g}\) & Tl-\(s\) & 46 \\
Cs\({}_{2}\)AgAsCl\({}_{6}\) & 9.14 & - & 2.30 & - & Indirect & Ag-\(e_{g}\)+As-\(s\) & As-\(p\) & 48 \\
Cs\({}_{2}\)AgGaCl\({}_{6}\) & 9.00 & - & 2.89 & - & Direct & Ag-\(e_{g}\) & Ga-\(s\) & 46 \\
Cs\({}_{2}\)CuBiCl\({}_{6}\) & 10.60 & - & 0.82 & 1.57 [49] & Indirect & Cu-\(e_{g}\)+Bi-\(s\) & Bi-\(p\) & 48 \\
Cs\({}_{2}\)CuInCl\({}_{6}\) & 10.34 & - & 0.25 & - & Direct & Cu-\(e_{g}\) & In-\(s\) & 46 \\
Cs\({}_{2}\)InBiCl\({}_{6}\) & 11.345 & - & 0.53 & - & Direct & In-\(s\) & Bi-\(p\) & 40 \\
Cs\({}_{2}\)InSbCl\({}_{6}\) & 11.212 & - & 0.64 & - & Direct & In-\(s\) & Sb-\(p\) & 40 \\
Cs\({}_{2}\)TlBiCl\({}_{6}\) & 11.547 & - & 2.09 & - & Direct & Tl-\(s\) & Bi-\(p\) & 40 \\
Cs\({}_{2}\)TlSbCl\({}_{6}\) & 11.420 & - & 2.01 & - & Direct & Tl-\(s\) & Sb-\(p\) & 40 \\
Cs\({}_{2}\)TlBiBr\({}_{6}\) & 12.069 & - & 1.34 & - & Direct & Tl-\(s\) & Bi-\(p\) & 40 \\
Cs\({}_{2}\)TlSbBr\({}_{6}\) & 11.944 & - & 1.27 & - & Direct & Tl-\(s\) & Sb-\(p\) & 40 \\
Cs\({}_{2}\)NaBiCl\({}_{6}\) & 11.026 & 10.833 [34] & 4.15 & 3.0 [33] & Indirect & Bi-\(s\) & Bi-\(p\) & 38 \\
Cs\({}_{2}\)NaSbCl\({}_{6}\) & 10.930 & - & 3.99 & - & Indirect & Sb-\(s\) & Sb-\(p\) & 38 \\
Cs\({}_{2}\)NaInCl\({}_{6}\) & 10.730 & 10.533 [50] & 5.37 & 4.15 [51] & Direct & Cl-\(p\) & In-\(s\) & 36 \\
Cs\({}_{2}\)KInCl\({}_{6}\) & 11.156 & 10.770 [52] & 5.89 & - & Direct & Cl-\(p\) & In-\(s\) & 36 \\
Cs\({}_{2}\)KBiCl\({}_{6}\) & 11.498 & - & 4.32 & - & Direct & Bi-\(s\) & Bi-\(p\) & 38 \\
Cs\({}_{2}\)NaBiBr\({}_{6}\) & 11.615 & 11.357 [53] & 3.29 & 3.10 [53] & Indirect & Bi-\(s\) & Bi-\(p\) & 38 \\
\hline
\end{tabular}
\end{table}
Table 1: DFT obtained GGA+mBJ bandgaps for various halide double perovskites along with the nature of the bandgap, orbital compositions of valence and conduction bands, and valence electron counts (VEC).
Hence, in the case of the latter, the VBE goes down to increase the separation between the CBE and the VBE. Furthermore, if we compare the case of B\({}^{\prime}\) as Bi and Sb, the former shows a smaller bandgap, as the Bi-\(p_{1/2}\) free atomic energy level is lower than that of Sb-\(p_{1/2}\), which lowers the position of the CBE. When the group-1A atoms (Na, K, etc.) occupy the B-site of HDPs, the B-MOP for such compounds is illustrated in Fig. 4. The B site acts as an electron donor and does not participate in the band formation. The interaction of the four B\({}^{\prime}\)-{\(s\), \(p\)} orbitals with the 18 Cl-\(p\) orbitals gives rise to four bonding states (\(\sigma_{s-p}\), \(\sigma_{p-p}\), and \(\pi_{p-p}\)), four corresponding anti-bonding states (\(\sigma_{s-p}^{*}\), \(\sigma_{p-p}^{*}\), and \(\pi_{p-p}^{*}\)) and fourteen flat bands (shown in yellow). When B\({}^{\prime}\) is Bi, the VEC is 38, and therefore the anti-bonding bands \(\sigma_{s-p}^{*}\) and \(\sigma_{p-p}^{*}\) form the VBE and CBE, respectively. This results in a wide and indirect bandgap system (4.0 - 4.5 eV). There is a narrow separation between \(\sigma_{s-p}^{*}\) and the flat bands. When B\({}^{\prime}\) is In, the VEC is reduced by two, and the \(\sigma_{s-p}^{*}\) forms the CBE while the Cl-\(p\) flat bands form the VBE.
These systems exhibit a wide bandgap, approximately in the range 5.5 - 6.0 eV. B-MOPs for four more compounds, namely Cs\({}_{2}\)AgAsCl\({}_{6}\), Cs\({}_{2}\)CuBiCl\({}_{6}\), Cs\({}_{2}\)AgGaCl\({}_{6}\), and Cs\({}_{2}\)CuInCl\({}_{6}\), have been studied. The first two compounds belong to the group of Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\), and the remaining two belong to the group of Cs\({}_{2}\)AgInCl\({}_{6}\) and Cs\({}_{2}\)AgTlCl\({}_{6}\). The detailed analysis of these four compounds is provided in Section XV of the SI. Figure 3: The band-projected molecular orbital picture (B-MOP) of halide double perovskites as envisaged from the following molecular hybridizations: the B(Ag)-{\(s\), \(d\)}–Cl-\(p\), B(In, Tl)-{\(s\), \(p\)}–Cl-\(p\) and B\({}^{\prime}\)(Bi, Sb, In, Tl)-{\(s\), \(p\)}–Cl-\(p\) atomic orbitals produce the bonding and antibonding orbitals along with the nonbonding Cl-\(p\) orbitals. The free atomic energy levels are estimated from Hartree-Fock theory. The interactions among the B and B\({}^{\prime}\) states are not represented explicitly in these MOPs. The replacement of Cl by Br nearly replicates the B-MOP [35]. However, since the Br-\(p\) free atomic energy level is \(\sim\) 0.5 eV higher than that of Cl-\(p\), we notice a couple of changes in the electronic structure, the most important of which is the reduction of the bandgap. The cause can be explained by taking the example of Cs\({}_{2}\)AgBiBr\({}_{6}\) and Cs\({}_{2}\)AgBiCl\({}_{6}\). Here, the prominent interactions that define the VBE and CBE are Ag-\(d\) - Br/Cl-\(p\) and Bi-\(p\) - Br/Cl-\(p\), respectively. We find that the Br-\(p\) energy levels are comparable to the Ag-\(d\) energy levels, while the Cl-\(p\) energy levels lie \(\sim\) 0.5 eV below. Therefore, the Ag-\(d\) – Br-\(p\) interaction is stronger, pushing the antibonding \(e_{g}^{*}\) band higher for the Br compound than for the Cl compound.
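This level-repulsion argument can be made quantitative with a two-level model: for on-site energies \(\epsilon_{1},\epsilon_{2}\) and hopping \(t\), the antibonding level lies at \(E_{+}=\tfrac{\epsilon_{1}+\epsilon_{2}}{2}+\sqrt{\left(\tfrac{\epsilon_{1}-\epsilon_{2}}{2}\right)^{2}+t^{2}}\). The numbers below are purely illustrative (only the \(\sim\)0.5 eV Cl-\(p\) \(\rightarrow\) Br-\(p\) upshift is taken from the text); they show that raising the anion level pushes the nearly degenerate Ag-\(d\) antibonding state (VBE) up by a large fraction of the shift, while the far-detuned Bi-\(p\) antibonding state (CBE) moves much less, narrowing the gap:

```python
import math

def antibonding(e1, e2, t):
    # upper eigenvalue of the two-level Hamiltonian [[e1, t], [t, e2]]
    return 0.5 * (e1 + e2) + math.sqrt((0.5 * (e1 - e2))**2 + t**2)

# illustrative energies (eV); only the 0.5 eV Cl-p -> Br-p shift is from the text
e_Ag_d, e_Bi_p, t = -5.0, 0.0, 1.0
gaps = {}
for X, e_X in [("Cl", -5.0), ("Br", -4.5)]:
    vbe = antibonding(e_Ag_d, e_X, t)  # Ag-d / X-p antibonding -> VBE
    cbe = antibonding(e_Bi_p, e_X, t)  # Bi-p / X-p antibonding -> CBE
    gaps[X] = cbe - vbe
    print(X, round(vbe, 3), round(cbe, 3), round(gaps[X], 3))
```

With these inputs the VBE rises by roughly 0.28 eV while the CBE rises by only about 0.02 eV, so the Br gap comes out approximately 0.26 eV smaller, in line with the qualitative argument above.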
Since there is a large onsite energy difference between the Bi-\(p\) and Br/Cl-\(p\) orbitals (\(\epsilon_{(Bi-p)}-\epsilon_{(Br/Cl-p)}\)), the positioning of the CBE (\(\sigma_{(p-p)}^{*}\)) is less affected by \(\epsilon_{(Br/Cl-p)}\). Therefore, the CBE does not shift as much as the VBE does, and hence, the Br-based HDPs generally see a lower bandgap. Further discussions are made in the supplementary information (SI) (Section XIX). ## IV Band Gap Improvement using HSE06@G\({}_{0}\)W\({}_{0}\) Analysis DFT is considered a reliable tool for calculating the fundamental properties of materials in their ground states. However, the common exchange-correlation approximations tend to underestimate quasiparticle (QP) bandgaps. To obtain accurate QP energies, Green's function-based ab-initio many-body perturbation theory (MBPT) can be employed. In the framework of MBPT, the QP eigensystem is determined through the solution of one-particle equations, incorporating a non-local, energy-dependent self-energy operator referred to as \(\Sigma\). In practice, \(\Sigma\) is frequently approximated as iGW, where W denotes the screened Coulomb interaction and G represents the one-particle Green's function. Instead of iterative evaluations of \(\Sigma\) at each step, G\({}_{0}\)W\({}_{0}\), a computationally more efficient one-shot approach, is frequently employed. In the context of modeling halide perovskites, it has become evident that achieving an accurate representation of the electronic structure hinges on the calculation of many-body QP energies. This approach effectively corrects localized electronic states, reducing the mixing of orbitals between B-\(e_{g}\) or B\({}^{\prime}\)-\(s\) and X-\(p\) near the valence band maximum (VBM) and conduction band minimum (CBM). Consequently, the method enhances accuracy by precisely pinpointing the positions of the VBM and CBM, ultimately leading to more reliable bandgap values.
_Methodology:_ In the case of HDPs, G\({}_{0}\)W\({}_{0}\) calculations have been performed on top of the orbitals obtained from the hybrid exchange functional HSE06 with 0.25 mixing of Hartree-Fock exchange (denoted HSE06@G\({}_{0}\)W\({}_{0}\)), as implemented in VASP [41]. In addition to the SOC parameter, we have taken the number of virtual bands to be almost three times the number of occupied bands. The HSE06@G\({}_{0}\)W\({}_{0}\) band spectra are obtained using VASP interfaced with the Wannier90 software [54]. _QP band gap correction:_ While it is computationally expensive to carry out HSE06@G\({}_{0}\)W\({}_{0}\) calculations for all the compounds, for demonstration, we have chosen four compounds, namely Cs\({}_{2}\)AgBiCl\({}_{6}\), Cs\({}_{2}\)AgInCl\({}_{6}\), Cs\({}_{2}\)InBiCl\({}_{6}\), and Cs\({}_{2}\)NaInCl\({}_{6}\), one each from the categories listed in Table S10. The resulting band gap values from the HSE06@G\({}_{0}\)W\({}_{0}\) method, considering the SOC effect, are listed in Table 2, and they exhibit a strong agreement with the experimental values. The HSE06@G\({}_{0}\)W\({}_{0}\) obtained band structures are shown in Fig. 5. For materials with indirect semiconducting characteristics like Cs\({}_{2}\)AgBiCl\({}_{6}\), HSE06@G\({}_{0}\)W\({}_{0}\) yields an exact band gap of 2.65 eV. For the direct bandgap semiconductors Cs\({}_{2}\)AgInCl\({}_{6}\), Cs\({}_{2}\)NaInCl\({}_{6}\), and Cs\({}_{2}\)InBiCl\({}_{6}\), the QP direct gap improves, with values of 2.86 eV, 4.31 eV and 0.72 eV, respectively. Despite this shift in the band gap value, the overall character of the band spectra closely resembles that generated by the GGA-mBJ approach, as shown in Fig. 5. In Figs. 3 and 4, we have constructed the B-MOPs by considering the band structures obtained from DFT-GGA+mBJ calculations. Similar B-MOPs can be constructed for the DFT-HSE06@G\({}_{0}\)W\({}_{0}\) calculations by adding a constant energy shift to the conduction band spectrum.
This is due to the fact that the band dispersion does not change with HSE06@G\({}_{0}\)W\({}_{0}\).

Figure 4: The B-MOP of HDPs as envisaged from the molecular hybridization of (Na, K)-\(\{s,\)\(p\}\)–X-\(p\) and (In, Bi)-\(\{s,\)\(p\}\)–X-\(p\) atomic orbitals produces the bonding and antibonding orbitals along with the nonbonding X-\(p\) orbitals.

## V Construction of Tight-Binding Model Hamiltonian

While the B/B\({}^{\prime}\)-X interactions primarily build the MOP and hence provide a broad overview of electronic structure, the next-neighbor B\({}^{\prime}\)-B interactions influence the band dispersion considerably. As we will see later, the optical properties are driven by these second-neighbor interactions. To provide a deeper insight into the band dispersion, in the following section we develop a tight-binding (TB) model Hamiltonian involving B-B\({}^{\prime}\), B\({}^{\prime}\)-B\({}^{\prime}\), and B-B interactions. The model is constructed based on a linear combination of atomic orbitals (LCAO) and within the framework of the Slater-Koster (SK) formalism. In the second quantization notation, the spin-orbit coupled SK-TB Hamiltonian is given as: \[H=\sum_{i,\alpha}\epsilon_{i\alpha}c_{i\alpha}^{\dagger}c_{i\alpha}+\sum_{\langle ij\rangle;\alpha,\beta}t_{i\alpha j\beta}(c_{i\alpha}^{\dagger}c_{j\beta}+h.c.)+\lambda\vec{L}\cdot\vec{S}, \tag{1}\] where \(i\), \(j\) and \(\alpha\), \(\beta\) are indices for the sites and orbitals, respectively. The first term represents the on-site energy (\(\epsilon\)), while the second term is for the hopping integrals, with \(t\) being the hopping strength. The effective tight-binding Hamiltonian matrix includes second nearest-neighbor metal-metal (B-B\({}^{\prime}\)) interactions, as well as fourth-neighbor B-B and B\({}^{\prime}\)-B\({}^{\prime}\) interactions. The third term in the Hamiltonian represents the atomic spin-orbit coupling (SOC) effect, with \(\lambda\) being the SOC strength.
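Both terms of Eq. (1) can be made concrete in a small numerical sketch. The Python toy below is illustrative only (not the paper's fitted model; the hopping values and lattice constant are placeholders): it builds the 2nd-neighbor p–p hopping block from the standard Slater-Koster direction-cosine expressions, Bloch-sums it over the six B–B\({}^{\prime}\) vectors of the rock-salt cation sublattice, and adds the atomic \(\lambda\vec{L}\cdot\vec{S}\) term for a p manifold, which reproduces the textbook j = 3/2 / j = 1/2 splitting (+\(\lambda\)/2 fourfold, \(-\lambda\) twofold).

```python
import numpy as np

def sk_pp(bond, v_sigma, v_pi):
    """3x3 {px,py,pz} hopping block from Slater-Koster two-center integrals:
    E_ab = l_a*l_b*(Vsig - Vpi) + delta_ab*Vpi, with (l,m,n) the direction cosines."""
    d = np.asarray(bond, dtype=float)
    d /= np.linalg.norm(d)
    return np.outer(d, d) * (v_sigma - v_pi) + np.eye(3) * v_pi

def bloch_pp(k, a, v_sigma, v_pi):
    """Bloch sum of the hopping block over the six 2nd-neighbor B-B' vectors
    +-(a/2) x, +-(a/2) y, +-(a/2) z of the rock-salt cation sublattice."""
    h = np.zeros((3, 3), dtype=complex)
    for axis in range(3):
        for sign in (1, -1):
            R = np.zeros(3)
            R[axis] = sign * a / 2
            h += sk_pp(R, v_sigma, v_pi) * np.exp(1j * np.dot(k, R))
    return h

def soc_p(lam):
    """lambda * L.S for l = 1 in the {px,py,pz} x {up,dn} basis;
    (L_k)_{ij} = -i*eps_{kij} in the Cartesian p basis (hbar = 1)."""
    Lx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
    Ly = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
    Lz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2
    return lam * (np.kron(Lx, sx) + np.kron(Ly, sy) + np.kron(Lz, sz))

# spinful single p block: spin-diagonal kinetic part (Eq. 5) plus SOC (Eq. 6)
k = np.array([0.1, 0.2, -0.3])
h_kin = bloch_pp(k, a=10.56, v_sigma=0.6, v_pi=-0.15)  # placeholder hoppings (eV)
h = np.kron(h_kin, np.eye(2)) + soc_p(0.5)             # lambda ~ Bi/Tl scale (eV)

# pure SOC term splits into the j = 1/2 doublet (-lambda) and j = 3/2 quadruplet (+lambda/2)
soc_levels = np.sort(np.linalg.eigvalsh(soc_p(0.5)))
```

At \(\Gamma\) the Bloch sum collapses to \(2(V_{\sigma}+2V_{\pi})\) times the identity, a quick consistency check on the SK block.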
A full Hamiltonian involves the interactions among both the B and B\({}^{\prime}\) valence orbitals as well as the X-\(p\) orbitals. The present model Hamiltonian with a minimal basis is constructed by excluding the X-X interactions, while the B-X and B\({}^{\prime}\)-X interactions are mapped into effective B-B\({}^{\prime}\) interactions [55]. Such a mapping can be validated through the Löwdin downfolding technique [56; 4]. The choice of the basis set is crucial and depends on the atomic orbitals contributing to the bands near the Fermi level. For example, in Ag-based double perovskites, the bands near the Fermi level are contributed from B(Ag)-{\(s\), \(d\)} and B\({}^{\prime}\)-{\(s\), \(p\)} orbitals, while in Cs\({}_{2}\)(In, Tl)(Bi, Sb)Cl\({}_{6}\), B-{\(s\), \(p\)} and B\({}^{\prime}\)-{\(s\), \(p\)} orbitals contribute to the bands near the Fermi level. The size of the Hamiltonian matrix depends on the chosen basis set. The Hamiltonian matrix can be expressed as \[H=\left[\begin{array}{cc}H_{BB}&H_{BB^{\prime}}\\ H_{BB^{\prime}}^{\dagger}&H_{B^{\prime}B^{\prime}}\end{array}\right] \tag{2}\] Here, \(H_{BB}\) and \(H_{B^{\prime}B^{\prime}}\) are the interaction submatrices between the same cations, \(i.e.\), B-B and B\({}^{\prime}\)-B\({}^{\prime}\), which correspond to the 4\({}^{th}\) neighbor interactions, and \(H_{BB^{\prime}}/H_{B^{\prime}B}\) are the interaction matrices between two different cations, B-B\({}^{\prime}\) and B\({}^{\prime}\)-B, which correspond to the 2\({}^{nd}\) neighbor interactions (among \(d\) and \(p\) orbitals for Ag-based HDPs and among \(p\) orbitals for Cs\({}_{2}\)(In/Tl)(Bi/Sb)Cl\({}_{6}\)), as shown in Fig. 2 (a) and Fig. S1 in SI.
By considering the SOC effect, the basis size is doubled, and the sub-matrices take the form: \[H_{BB/B^{\prime}B^{\prime}}=\left[\begin{array}{cc}H^{\uparrow\uparrow}&H^{\uparrow\downarrow}\\ H^{\downarrow\uparrow}&H^{\downarrow\downarrow}\end{array}\right],\quad H_{BB^{\prime}/B^{\prime}B}=\left[\begin{array}{cc}H^{\uparrow\uparrow}&0\\ 0&H^{\downarrow\downarrow}\end{array}\right]. \tag{3}\] Here, \(H^{\uparrow\uparrow}_{BB/B^{\prime}B^{\prime}}\) and \(H^{\downarrow\downarrow}_{BB/B^{\prime}B^{\prime}}\) are the Hamiltonian submatrices corresponding to the up and down spin components and are connected through time-reversal symmetry. The non-vanishing \(H^{\uparrow\downarrow}\) and \(H^{\downarrow\uparrow}\) elements of \(H_{BB/B^{\prime}B^{\prime}}\) are due to the SOC effect.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Structure & Exp. & GGA-mBJ & HSE06@G\({}_{0}\)W\({}_{0}\) \\ & (eV) & (eV) & (with SOC) (eV) \\ \hline \hline Cs\({}_{2}\)AgBiCl\({}_{6}\) & 2.65 [33] & 2.98 & 2.65 \\ Cs\({}_{2}\)AgInCl\({}_{6}\) & 3.02 [48] & 3.28 & 2.86 \\ Cs\({}_{2}\)InBiCl\({}_{6}\) & - & 0.53 & 0.72 \\ Cs\({}_{2}\)NaInCl\({}_{6}\) & 4.15 [51] & 5.37 & 4.31 \\ \hline \end{tabular} \end{table} Table 2: Band gap comparison in HDPs calculated using different exchange-correlation functionals.

The components of the Hamiltonian matrix, describing the interaction between any two atomic orbitals, say \(\alpha\) at site position \(\vec{R}_{i}\) and \(\beta\) at site position \(\vec{R}_{j}\), are calculated
using SK integrals (\(f_{\alpha\beta}\)) [57]: \[h_{\alpha\beta}^{ij}(k) = f_{\alpha\beta}(t;l,m,n)e^{i\vec{k}\cdot(\vec{R}_{j}-\vec{R}_{i})}, \tag{4}\] \[h_{\alpha\beta\sigma\sigma^{\prime}}(k) = \sum_{\langle j\rangle}h_{\alpha\beta}^{ij}(k)\delta_{\sigma\sigma^{\prime}}, \tag{5}\] \[H_{\alpha\beta\sigma\sigma^{\prime}} = h_{\alpha\beta\sigma\sigma^{\prime}}+h_{\alpha\beta\sigma\sigma^{\prime}}^{SOC} \tag{6}\] The \(f_{\alpha\beta}(t;l,m,n)\) depend on the tight-binding hopping parameters \(t\) and the direction cosines (\(l\), \(m\), \(n\)) connecting site \(j\) to site \(i\). The \(h_{\alpha\beta\sigma\sigma^{\prime}}^{SOC}\) term is driven by the atomistic SOC \(\lambda\mathbf{L}\mathbf{\cdot}\mathbf{S}\). The required \(h_{\alpha\beta\sigma\sigma^{\prime}}\) as well as \(h_{\alpha\beta\sigma\sigma^{\prime}}^{SOC}\) depend on the chosen basis, which varies from compound to compound; see Section XIII of SI for details. As shown in Fig. 6, the bands obtained from the model Hamiltonian are fitted to those of the DFT, and the resulting parameters are listed in Tables S10, S11, and S12 of Section XIV of SI. Some of the critical inferences obtained by analyzing the TB parameters are as follows: (I) The \(4^{th}\) nearest-neighbor (B-B/B\({}^{\prime}\)-B\({}^{\prime}\)) hopping interactions are very weak (0 - 50 meV), suggesting that the dispersion is mainly driven through the \(2^{nd}\) nearest-neighbor B-B\({}^{\prime}\) interactions, which are one to two orders of magnitude stronger. (II) Only when Ag and Bi/Sb occupy the B and B\({}^{\prime}\) sites do the B-\(s\)–B\({}^{\prime}\)-\(s\) interactions become negligible, due to the large onsite energy differences. In the rest of the members, this interaction is significant enough to influence the dispersion of the VBE and CBE (see the B-MOP in Fig. 3). (III) For HDPs where the ionic Na\({}^{+}\) and K\({}^{+}\) occupy the B-site, the dispersions are due to B\({}^{\prime}\)-B\({}^{\prime}\) and B-X interactions.
(IV) The SOC strength is estimated for both B and B\({}^{\prime}\) for each of the compounds considered (see Tables S10, S11, and S12 in SI). For Bi and Tl, it is \(\sim\)0.5 eV, and for the other B and B\({}^{\prime}\) elements, it is \(\sim\)0.2 eV. Interestingly, these numbers are comparable to the hopping interactions \(t_{\alpha\beta}\). Tight-binding bands for Cs\({}_{2}\)AgAsCl\({}_{6}\), Cs\({}_{2}\)CuBiCl\({}_{6}\), Cs\({}_{2}\)AgGaCl\({}_{6}\), and Cs\({}_{2}\)CuInCl\({}_{6}\) are provided in the SI, Section XV. Furthermore, our findings indicate that while the SOC of a compound has a deterministic effect on its bandgap, it does not influence the parity eigenvalues of its VBE and CBE.

Figure 6: The DFT (red) and tight-binding bands (blue) for the HDPs. The squared momentum matrix elements (P\({}^{2}\)) corresponding to the valence band edge (VBE) to conduction band edge (CBE) transition are also shown for each of the compounds. The parities of the VBE and CBE are mentioned using Koster notations.

## VI Optical properties calculation

The optical absorption coefficient \(\alpha(\omega)\) of a material is determined by its frequency (\(\omega\)) dependent dielectric constant (\(\epsilon(\omega)=\epsilon_{1}(\omega)+i\epsilon_{2}(\omega)\)): \[\alpha(\omega)=\omega\sqrt{\frac{-\epsilon_{1}(\omega)+\sqrt{\epsilon_{1}^{2}(\omega)+\epsilon_{2}^{2}(\omega)}}{2}}, \tag{7}\] \[\epsilon_{1}(\omega)=1+\frac{2}{\pi}C\int_{0}^{\infty}\frac{\omega^{\prime}\epsilon_{2}(\omega^{\prime})}{\omega^{\prime 2}-\omega^{2}}d\omega^{\prime},\] \[\epsilon_{2}(\omega)=\frac{e^{2}\hbar^{2}}{\pi m_{e}^{2}\omega^{2}}\sum_{v,c}\int_{BZ}d^{3}k|P_{v,c}|^{2}\times\delta(E_{c}(\vec{k})-E_{v}(\vec{k})-\hbar\omega).\] Here, C denotes the Cauchy principal value of the integral; \(e\) and \(m_{e}\), respectively, are the charge and mass of an electron.
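Given a tabulated \(\epsilon_{2}(\omega)\), Eq. (7) and the Kramers-Kronig relation for \(\epsilon_{1}\) can be evaluated numerically. The sketch below is illustrative only: it uses a toy Gaussian \(\epsilon_{2}\) peak (not a computed spectrum), obtains \(\epsilon_{1}\) by quadrature on a staggered frequency grid (which sidesteps the principal-value singularity), and then forms \(\alpha(\omega)\).

```python
import numpy as np

def eps1_from_eps2(w, eps2, wp):
    """Kramers-Kronig: eps1(w) = 1 + (2/pi) P int wp*eps2(wp)/(wp^2 - w^2) dwp.

    wp/eps2 tabulate the spectrum; the output frequencies w are chosen off the
    wp grid, so the principal value is handled by plain rectangle quadrature."""
    dwp = wp[1] - wp[0]
    w = np.atleast_1d(w)
    integrand = wp * eps2 / (wp[None, :] ** 2 - w[:, None] ** 2)
    return 1.0 + (2.0 / np.pi) * integrand.sum(axis=1) * dwp

def absorption(w, eps1, eps2):
    """Eq. (7); the radicand is clipped at 0 to guard against round-off."""
    radicand = np.maximum((-eps1 + np.sqrt(eps1 ** 2 + eps2 ** 2)) / 2.0, 0.0)
    return w * np.sqrt(radicand)

# toy eps2: a single absorption peak above a ~2.65 eV gap (cf. Cs2AgBiCl6)
wp = np.linspace(0.01, 20.0, 1500)
eps2 = np.exp(-((wp - 3.2) ** 2) / 0.1)
w = wp + 0.5 * (wp[1] - wp[0])            # staggered grid avoids the pole
alpha = absorption(w, eps1_from_eps2(w, eps2, wp), np.interp(w, wp, eps2))
```

Below the gap \(\epsilon_{2}\simeq 0\) and \(\epsilon_{1}>0\), so \(\alpha\) is essentially zero there and rises only in the absorbing window, as expected from Eq. (7).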
P\({}_{v,c}\) in the expression of \(\epsilon_{2}(\omega)\) is the MME corresponding to a transition from the valence band at energy \(E_{v}\) to the conduction band at energy \(E_{c}\). The Dirac-delta function switches on the MME contribution when a transition occurs from one state to another. The MME for periodic Bloch functions \(u_{\vec{k}\beta}e^{i\vec{k}\cdot\vec{R}}\) is obtained as follows [58]: \[\langle\vec{k},\beta|\,\vec{P}\,|\vec{k},\beta^{\prime}\rangle=\frac{m_{e}}{\hbar}\,\langle u_{\vec{k}\beta}|\,\frac{\partial H(\vec{k})}{\partial\vec{k}}\,|u_{\vec{k}\beta^{\prime}}\rangle\,. \tag{8}\] In the case of optical transitions, the transitions from the top valence band to the bottom conduction band are generally considered. Therefore, the components of the relevant MME can be expressed as follows: \[(P_{v,c})_{x,y,z}=\frac{m_{e}}{\hbar}\sum_{\beta,\beta^{\prime}}u_{\vec{k}\beta,c}\frac{\partial H_{\beta\beta^{\prime}}}{\partial k_{x,y,z}}u_{\vec{k}\beta^{\prime},v}. \tag{9}\] Here, \(u_{\vec{k}\beta,c}\) and \(u_{\vec{k}\beta^{\prime},v}\) respectively represent the eigenvectors associated with the energy eigenvalues \(E_{c}\) and \(E_{v}\). We have calculated the squared MME along the same high-symmetry \(k\)-path as that of the band structure. A detailed derivation of the calculation of the optical properties of materials using the SK-TB model is given in Section XVI of the SI. In Fig. 6, we have shown the band structures for a series of HDPs, and for each of them, the parity eigenvalues (indicated through Koster notations) of the VBE and CBE are estimated at the high symmetry points, as shown in the figure. Corresponding to each of the band structures, the P\({}^{2}\) (VBE \(\rightarrow\) CBE) is also plotted. According to Laporte's rule [59], which applies only to centrosymmetric systems, as is the case here, the optical transition between the VBE and CBE at any given k-point is allowed only when they have opposite parities (odd-even).
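Eq. (9) is straightforward to evaluate once the TB Hamiltonian is differentiable in \(k\). The hedged toy below (a generic 1D two-band model with \(\hbar=m_{e}=1\), not one of the fitted HDP Hamiltonians) computes P\({}^{2}\) between the valence and conduction eigenvectors and checks the analytic \(\partial H/\partial k\) against a central finite difference.

```python
import numpy as np

def h2(k, delta=1.0, t=0.4):
    """Toy 1D two-band Bloch Hamiltonian (hbar = m_e = lattice const = 1)."""
    return np.array([[delta, t * np.exp(1j * k)],
                     [t * np.exp(-1j * k), -delta]])

def dh_dk(k, delta=1.0, t=0.4):
    """Analytic dH/dk of the toy Hamiltonian above."""
    return np.array([[0, 1j * t * np.exp(1j * k)],
                     [-1j * t * np.exp(-1j * k), 0]])

def p2_vc(k):
    """|<u_c| dH/dk |u_v>|^2 between the valence (lower) and conduction band."""
    e, u = np.linalg.eigh(h2(k))      # eigh sorts ascending: column 0 = valence
    p = u[:, 1].conj() @ dh_dk(k) @ u[:, 0]
    return np.abs(p) ** 2

# sanity check of the analytic derivative by central finite differences
k0, dk = 0.7, 1e-6
fd = (h2(k0 + dk) - h2(k0 - dk)) / (2 * dk)
```

Because P\({}^{2}\) involves only moduli of matrix elements, it is insensitive to the arbitrary phases of the eigenvectors returned by the diagonalizer.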
The validation of it comes through the P\({}^{2}\) plot. Going beyond the simplistic parity analysis, a thorough group-theoretical symmetry analysis is also carried out in Sections XI and XII of the SI to elaborate the selection rules governing the optical transitions. For example, in the case of Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\), the transition between the VBE and CBE at the high symmetry points W, \(\Gamma\), and X is allowed due to opposite parities, while it is forbidden at L due to the same parity (odd-odd). In agreement with this, the P\({}^{2}\) value vanishes at L and is finite elsewhere. In the case of Cs\({}_{2}\)AgInCl\({}_{6}\) and Cs\({}_{2}\)AgTlCl\({}_{6}\), the P\({}^{2}\) is zero along the path \(\Gamma\)-X as both the CBE and VBE have even parity along this path. Therefore, even though these two systems have a direct bandgap, the lowest-energy transitions are not allowed, and hence the optical bandgap (defined through the transition at L) differs from the electronic bandgap. The compounds Cs\({}_{2}\)(In/Tl)(Bi/Sb)Cl\({}_{6}\) are direct bandgap systems with the VBM and CBM lying at \(\Gamma\). The finite value of P\({}^{2}\) implies that the lowest-energy transition is allowed, which is further verified by the parity analysis. Hence, these compounds are much more promising for optoelectronic applications. In the case of Cs\({}_{2}\)(Na/K)BiCl\({}_{6}\), where the VBE and CBE are formed by Bi-\(s\) and Bi-\(p\) respectively (see Fig. 4), while the opposite parity allows the optical transition between the VBE and CBE, the P\({}^{2}\) is found to be negligible (of the order \(\sim 10^{-2}\)). On the other hand, the same parities of the VBE and CBE of Cs\({}_{2}\)(Na/K)InCl\({}_{6}\) forbid the optical transition, and expectedly, P\({}^{2}\) is found to be zero. We may note that, since these systems are ionic with large charge transfer from Na/K to Cl, the transition dipole moment for this VBE to CBE transition is naturally weak [60].
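The parity selection rule can be reproduced in a minimal inversion-symmetric model: orbitals of opposite parity couple through a term odd in \(k\), whose derivative survives at the high-symmetry point, while same-parity orbitals couple through an even term whose derivative vanishes there. A hedged 1D sketch with toy parameters (not fitted to any HDP):

```python
import numpy as np

def p2_at(hk, dhk, k):
    """|<u_c| dH/dk |u_v>|^2 for a 2x2 Bloch Hamiltonian (hbar = m_e = 1)."""
    _, u = np.linalg.eigh(hk(k))
    return np.abs(u[:, 1].conj() @ dhk(k) @ u[:, 0]) ** 2

# opposite-parity orbitals (s-like / p-like): odd sin(k) inter-band coupling
h_odd = lambda k: np.array([[1 + 0.3 * np.cos(k), 0.4j * np.sin(k)],
                            [-0.4j * np.sin(k), -1 - 0.2 * np.cos(k)]])
d_odd = lambda k: np.array([[-0.3 * np.sin(k), 0.4j * np.cos(k)],
                            [-0.4j * np.cos(k), 0.2 * np.sin(k)]])

# same-parity orbitals: even cos(k) coupling, derivative vanishes at k = 0, pi
h_even = lambda k: np.array([[1 + 0.3 * np.cos(k), 0.4 * np.cos(k)],
                             [0.4 * np.cos(k), -1 - 0.2 * np.cos(k)]])
d_even = lambda k: np.array([[-0.3 * np.sin(k), -0.4 * np.sin(k)],
                             [-0.4 * np.sin(k), 0.2 * np.sin(k)]])

p_allowed = p2_at(h_odd, d_odd, 0.0)      # finite: transition allowed at k = 0
p_forbidden = p2_at(h_even, d_even, 0.0)  # exactly zero: parity-forbidden
```

This mirrors the behavior seen at L for Cs\({}_{2}\)AgBiCl\({}_{6}\)/Cs\({}_{2}\)AgSbCl\({}_{6}\) and along \(\Gamma\)-X for Cs\({}_{2}\)AgInCl\({}_{6}\)/Cs\({}_{2}\)AgTlCl\({}_{6}\): same-parity edges give vanishing P\({}^{2}\) at the high-symmetry point.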
The optical transition analysis through Fig. 6 is based on a given \(k\)-path (W-L-\(\Gamma\)-X-L), which does not necessarily provide the complete picture of the transitions arising from the whole BZ. Therefore, we have calculated the joint density of states (JDOS). The JDOS provides a measure of the number of all possible optical transitions between the occupied valence band and the unoccupied conduction band separated by the photon energy \(\hbar\omega\): \[JDOS=\frac{e^{2}\hbar^{2}}{\pi m_{e}^{2}\omega^{2}}\sum_{v,c}\int_{BZ}d^{3}k\;\delta(E_{c}(\vec{k})-E_{v}(\vec{k})-\hbar\omega). \tag{10}\] The JDOS, obtained for the lowest electronic transition, is plotted in the upper panel of Fig. 7. Also, \(\epsilon_{2}(\omega)\), which can be best described as the MME-modulated JDOS, is shown in the lower panel of Fig. 7. As discussed in the above paragraph, the optical transition in the Na/K based compounds is negligible (see Fig. 7 (f)) even though the JDOS (Fig. 7 (c)) shows some transition probabilities between 4 and 6.5 eV. This is due to the fact that in these ionic systems, the dipole-dipole transition is very weak. The \(\epsilon_{2}(\omega)\) gives a measure of the optical bandgap and how it differs from the electronic bandgap (as inferred from the JDOS alone). For the case of Cs\({}_{2}\)InBiCl\({}_{6}\), Cs\({}_{2}\)InSbCl\({}_{6}\), Cs\({}_{2}\)TlBiCl\({}_{6}\), and Cs\({}_{2}\)TlSbCl\({}_{6}\), the peaks of the JDOS and \(\epsilon_{2}\) suggest direct and strong optical transitions (see Fig. 7 (b and e)). In Cs\({}_{2}\)AgTlCl\({}_{6}\), the first small peak in the JDOS below 1 eV is suppressed in \(\epsilon_{2}(\omega)\), implying that the optical bandgap is \(\sim 1\) eV (see Fig. 7 (a, d)). For a similar reason, in the case of Cs\({}_{2}\)AgInCl\({}_{6}\) the optical bandgap is estimated to be \(\sim 3.6\) eV. Cs\({}_{2}\)AgBiCl\({}_{6}\) and Cs\({}_{2}\)AgSbCl\({}_{6}\) have large JDOS values; however, their \(\epsilon_{2}\) curves show rather different features.
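Eq. (10) can be discretized by sampling the BZ on a mesh and Gaussian-broadening the delta function. The sketch below is a hedged toy (a 1D two-band dispersion with the constant prefactor dropped, not one of the HDP models); it shows the expected behavior of the JDOS: zero below the direct gap, with an onset right at it.

```python
import numpy as np

def jdos(omega, gap=2.0, t=0.5, nk=2001, eta=0.05):
    """Gaussian-broadened version of Eq. (10) for a toy 1D two-band model,
    with the e^2*hbar^2/(pi*m_e^2*w^2) prefactor dropped for clarity.

    E_c(k) - E_v(k) = 2*sqrt((gap/2)^2 + (t*sin k)^2): direct gap at k = 0."""
    k = np.linspace(-np.pi, np.pi, nk)
    de = 2.0 * np.sqrt((gap / 2) ** 2 + (t * np.sin(k)) ** 2)
    omega = np.atleast_1d(omega)
    delta = np.exp(-((omega[:, None] - de[None, :]) ** 2) / (2 * eta ** 2))
    delta /= eta * np.sqrt(2 * np.pi)
    return delta.mean(axis=1)    # BZ average of delta(E_c - E_v - hbar*w)

w = np.linspace(0.0, 4.0, 401)
j = jdos(w)    # vanishes below the 2 eV gap, finite just above it
```

Since each \(k\)-point contributes one normalized Gaussian, the broadened JDOS integrates to the number of transitions per \(k\)-point (here 1), a useful numerical check.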
The optical transition for Cs\({}_{2}\)AgSbCl\({}_{6}\) is quite weak compared to Cs\({}_{2}\)AgBiCl\({}_{6}\).

## VII Tailoring of the optoelectronic properties: doping on cationic sites

With a good understanding of the electronic structure and optical behavior of the pristine HDPs, in this section we will show how the optoelectronic properties can be tailored by cation intermixing. For demonstration, we have considered two cases: (a) Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and (b) Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\). A detailed description of the TB model for cation-intermixed HDP supercells and the corresponding TB-fitted DFT bands are provided in Sections XVII and XVIII, and Fig. S4 of the SI, respectively. In the first case, the end member Cs\({}_{2}\)AgBiCl\({}_{6}\) has an indirect bandgap (unfavorable for optical transitions), and Cs\({}_{2}\)AgInCl\({}_{6}\) has a direct bandgap, however, with parity-forbidden transitions. Furthermore, the CBE and VBE of the former are made up of Bi-\(p\) and Ag-\(e_{g}\)/Bi-\(s\) orbitals, respectively, while for the latter, these are made up of In-\(s\) and Ag-\(e_{g}\) orbitals, respectively. A careful look at the B-MOP suggests that with Bi and In intermixing, the CBE will be dominated by Bi-\(p\) while the VBE will be dominated by Ag-\(e_{g}\) states as well as dopant Bi-\(s\) states. Upon dilution, the Bi states are expected to be localized and alter the shape of the corresponding band. Thereby, the optical absorption spectra are also expected to change. To verify this, in Fig. 8, we have plotted the DFT band structure of Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) (\(x\) = 0.25, 0.5, and 0.75) and P\({}^{2}\), which are estimated from the model Hamiltonian. The salient features of the band structures are as follows: (I) With increasing In, the Bi-\(s\) state dominates the VBE while the Ag-\(e_{g}\) dominated bands are pushed below. (II) The CBE is always formed by the Bi-\(p\) characters.
However, its shape changes with doping concentration, which implies new interactions between the Bi-\(p\) and In-\(s\) states. (III) For the case of \(x\) = 0.75, both VBE and CBE are narrower across the BZ path and also have a direct bandgap with the VBM and CBM lying at Z. Interestingly, at every high symmetry point (except at R), VBE and CBE have opposite parities to allow optical transitions across the path. The P\({}^{2}\) plot further substantiates it. Similarly, in the second case, we are doping Ag at the Na site of Cs\({}_{2}\)NaInCl\({}_{6}\). As has already been discussed, Cs\({}_{2}\)NaInCl\({}_{6}\) does not exhibit any optical transition as the P\({}^{2}\) vanishes, thanks to the identical parity of the CBE and VBE (see Fig. 6). Fig. 9 shows the orbital projected band structure of Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\) (\(x\) = 0.25, 0.5, and 0.75) and the corresponding P\({}^{2}\) plot. The salient features of the band structures are as follows: (I) The Ag-\(d\) characters tend to dominate the VBE, and for diluted Ag concentration, the VBE becomes flatter like the impurity bands. (II) The shape of the CBE remains largely unchanged, though it becomes wider with increasing Ag concentration. (III) The parities of the CBE and VBE are altered at certain \(k\) points, and therefore, P\({}^{2}\) is no longer vanishing.

Figure 7: (a-c) Model Hamiltonian obtained joint density of states (JDOS) of HDPs. The JDOS is calculated for the transition between VBE and CBE. (d-f) Calculated \(\epsilon_{2}(\omega)\) showing the effect of parity forbidden transition and effective optical bandgap of these compounds.

Figure 8: (Upper panel) DFT obtained orbital projected band structures of Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\). The calculations are performed with a structurally relaxed four-formula-unit supercell. (Lower panel) The P\({}^{2}\) plot, which is obtained from the TB model Hamiltonian designed for the supercell (see SI).
Recent studies by Luo et al. [61] report the existence of self-trapped excitons (STEs) and, thereby, broadband white-light emission in Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\). The present study suggests that the STEs are formed by the flat bands, and the localized carrier at the Ag site becomes the source of the white-light emission. The above analysis of cation-intermixed HDPs is made by examining the eigenspectrum along certain k-paths. To substantiate the conclusions made, we now consider the full BZ and estimate the JDOS and \(\epsilon_{2}\) for Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\) in Fig. 10. The red curve represents the JDOS for the transition from the VBE to the CBE, and the blue curve represents \(\epsilon_{2}\). In the former case, we indeed observe increasing JDOS and \(\epsilon_{2}\) with increasing In concentration, as shown in Fig. 10 (a-c). The direct and strong optical transition makes the system highly photoluminescent, which agrees with the recent experimental studies by Appadurai et al. [31]. In the case of Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\), any amount of Ag doping induces optical transitions.

## VIII Bandgap bowing effect

The band positioning due to cation doping, as discussed in the previous section, can be schematically summarized through Figs. 11 (a, b). It shows that with doping, not only is there a reconstitution of the VBE and CBE, but there is also a shift of these edge bands. Either the VBE goes up, or the CBE comes down, or both happen simultaneously to reduce the bandgap with respect to the parent pristine compounds. This effect, in general, is called bandgap bowing, which occurs less often than the linear change in the bandgap as defined by Vegard's law. To quantify the bandgap bowing, in Fig.
11 (c), we have estimated the bandgap as a function of doping concentration in Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\). The cause of bandgap bowing in the HDPs is a matter of debate in experimental and density functional studies [32; 33; 64; 62; 63; 34]. These studies collectively proposed three probable factors for bandgap bowing: (I) change in lattice constant, (II) octahedral distortion, and (III) chemical effect. Through Figs. 8, 9, and 11 (a, b) and the related discussion, we have already discussed how the chemical effect plays a role in band repositioning. Our B-MOP indeed has shown that the free atomic orbital energies play a major role in determining the position of the VBE and CBE. A similar observation was made by D. Han et al. [63]. To understand the role of the lattice in the bandgap bowing, we have carried out three hypothetical experiments on Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\).

Figure 10: JDOS (shown in red) and the imaginary part of the dielectric constant (shown in blue) for cation intermixed Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) (upper panel) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\) (lower panel).

Figure 11: (a, b) Schematic illustration of the orbital resolved band structure in the vicinity of the Fermi level and demonstration of bandgap bowing in Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\), respectively. (c) The bandgap (E\({}_{g}\)) as a function of doping concentration \(x\) in Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) (left) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\) (right). The DFT obtained bandgap values are shown in black circular dots, and the polynomial fitted curves are shown in blue. The variation of the lattice constant (\(a\)) as a function of \(x\) is also shown.
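Bandgap bowing is commonly quantified through the form E\(_{g}(x)=x\,\)E\(_{g,B}+(1-x)\,\)E\(_{g,A}-b\,x(1-x)\), with \(b\) the bowing parameter; the polynomial fits of Fig. 11 correspond to such a quadratic in \(x\). A hedged sketch of the fit on synthetic data follows (the end-member gaps are the GGA-mBJ values of Table 2; the bowing parameter \(b=1.8\) eV is purely illustrative, not a fitted value from this work).

```python
import numpy as np

def bowing_fit(x, eg):
    """Fit E_g(x) = x*E_B + (1 - x)*E_A - b*x*(1 - x); return (E_A, E_B, b).

    Expanding gives the quadratic E_g = b*x^2 + (E_B - E_A - b)*x + E_A,
    so a degree-2 polynomial fit recovers all three parameters."""
    c2, c1, c0 = np.polyfit(x, eg, 2)
    e_a = c0
    b = c2
    e_b = c1 + e_a + b
    return e_a, e_b, b

# synthetic E_g(x): end members ~Cs2AgBiCl6 (2.98 eV) and ~Cs2AgInCl6 (3.28 eV),
# with an assumed bowing parameter b = 1.8 eV (illustrative only)
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
eg = x * 3.28 + (1 - x) * 2.98 - 1.8 * x * (1 - x)
e_a, e_b, b = bowing_fit(x, eg)   # recovers (2.98, 3.28, 1.8)
```

A positive fitted \(b\) means the gap dips below the Vegard (linear) interpolation at intermediate \(x\), which is the bowing behavior discussed above.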
(I) Across the concentration range, the lattice parameter of the doped system is taken as that of Cs\({}_{2}\)AgInCl\({}_{6}\) (10.560 Å), and the band structure is calculated without relaxing the system. The resulting bandgap as a function of concentration \(x\) is shown in Fig. 12 (a) (cyan solid line). This is a case of increasing compression with decreasing \(x\). We find that, in this case, there is minimal variation in the bandgap. (II) The bandgap as a function of \(x\) is now calculated by taking the lattice parameter as that of Cs\({}_{2}\)AgBiCl\({}_{6}\) (10.936 Å). This is a case of increasing expansion with increasing \(x\). The bandgap falls sharply as \(x\) reaches 0.5 and then remains almost unchanged. Together, these two experiments imply that (see Fig. 11 (a)) (i) the presence of the In-\(s\) orbital pushes down the Bi-\(p\) states in the Bi-rich system, and (ii) in the In-rich system, the expansion significantly reduces the bandgap. (III) In the third experiment, we carried out volume optimization and calculated the bandgap with and without structural relaxation. The relaxation primarily distorts the octahedra in this system. The results are plotted in Fig. 12 (b). We find that the bowing is larger when the octahedral symmetry is maintained, while it is reduced when the octahedra are distorted. To further analyze the role of octahedral distortion on the bandgap, we have calculated the orbital resolved DFT band structures of Cs\({}_{2}\)AgIn\({}_{0.5}\)Bi\({}_{0.5}\)Cl\({}_{6}\) with and without octahedral distortion. The distortion of the octahedra includes the compression of InCl\({}_{6}\) and the expansion of AgCl\({}_{6}\), as shown in Fig. 13. The compression strengthens the hybridization between the In-\(\{s,\,p\}\) and Cl-\(p\) orbitals, and therefore the corresponding antibonding states (\(\sigma_{s-p}^{*}\), \(\sigma_{p-p}^{*}\), and \(\pi_{p-p}^{*}\)) go higher in energy.
Since the BiCl\({}_{6}\) octahedra remain largely unaffected, the position of the Bi-\(p\) dominated antibonding states (blue curves) is less perturbed. This leads to a swap in the CBE character and an increase in the bandgap from 1.67 eV to 2.32 eV with distortion (as can be seen in Fig. 13), and as a consequence, the bandgap bowing is weakened, in agreement with Fig. 12 (b). Our overall analysis implies that the chemical effect and lattice expansion tend to increase the bandgap bowing, while octahedral distortion and lattice compression reduce it. In the case of Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\), the bandgap bowing is primarily driven by the chemical effect, as the lattice parameter variation is minimal. Here, the reduction of the bandgap is driven by the appearance of the Ag-\(d\) states much above the Cl-\(p\) states, as shown schematically in Fig. 11 (b).

## IX Conclusion and outlook

In conclusion, we carried out a comprehensive electronic structure study by employing density functional calculations and model Hamiltonian formulation, and an optoelectronic study by estimating the momentum matrix elements (MME). From our results and analysis, we developed a theoretical workflow to study the electronic and optoelectronic properties of halide double perovskites (pristine and doped) for photovoltaic applications. In this work, we devise the band projected molecular orbital picture (B-MOP) as an efficient tool to analyze the electronic structure of covalently hybridized systems in general and halide double perovskites (HDPs) in particular. Based on our understanding of the electronic structure, we could successfully categorize the HDPs into five categories based on the valence electron configuration of B and B\({}^{\prime}\), the characters of the valence and conduction band edge states, and the bandgap as well as the optical transition behavior. The list is summarized in Table S10.
Figure 12: (a) E\({}_{g}\) for Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) as a function of \(x\), estimated by using the lattice parameter of Cs\({}_{2}\)AgInCl\({}_{6}\) (10.56 Å; solid cyan line) and Cs\({}_{2}\)AgBiCl\({}_{6}\) (10.936 Å; solid magenta line) and keeping the structures unrelaxed. (b) E\({}_{g}\) as a function of \(x\) for the volume-optimized Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\), with and without atomic position relaxation. The relaxation distorts the octahedra.

Figure 13: Volume optimized Cs\({}_{2}\)AgIn\({}_{0.5}\)Bi\({}_{0.5}\)Cl\({}_{6}\) crystal structure (a) without and (b) with the relaxation of the atomic positions. The relaxation distorts the octahedra. The orbital resolved band structures of (a) and (b) are shown in (c) and (d), respectively.

The B-MOP obtained from nearest-neighbor cation-anion interactions determines the position and orbital character of the bands. The tight-binding model, which is based on second-neighbor cation-cation interactions, provides insight into the shape and width of the band dispersion. Our study suggests that the second-neighbor cation-cation interactions turn out to be the deterministic factors for the MME and, thereby, the optoelectronic properties of HDPs. Through our design principle, we show the possibilities of tuning the bandgap and optical absorption by doping. To demonstrate, we took two prototype examples of doping at the cationic site: Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\) for \(x=0.25\), \(0.5\), and \(0.75\). We obtain the maximum transition dipole moment at \(x=0.75\) for Cs\({}_{2}\)AgIn\({}_{x}\)Bi\({}_{1-x}\)Cl\({}_{6}\) and at \(x=0.5\) for Cs\({}_{2}\)Na\({}_{1-x}\)Ag\({}_{x}\)InCl\({}_{6}\), which are found to be in good agreement with previous experimental findings.
Our analysis also provides an interesting insight into the bandgap bowing, which seems to be a common occurrence in HDPs. We show that the chemical effect enhances bandgap bowing, while the octahedral distortion tends to minimize it. The findings of the present study provide guiding principles to design efficient optoelectronic materials out of HDPs. We hope that these findings will stimulate theoretical and experimental research on the alloying of HDPs to realize their applications in the areas of photovoltaics and optoelectronics. The tight-binding model developed in this work (for pristine and cation-intermixed compounds) is generic and can be employed to study the electronic structure and optical behavior of 3D HDPs and their 2D and 1D counterparts [65], including vacancy-induced perovskites.

## X Acknowledgment

This work is funded by the Department of Science and Technology (DST), India, through the DST Mission Innovation grant number DST/TMD/IC-MAP/2K20/03 (C). We acknowledge HPCE IIT Madras for providing the computational facility.
2309.12010
Convolution and Attention Mixer for Synthetic Aperture Radar Image Change Detection
Synthetic aperture radar (SAR) image change detection is a critical task and has received increasing attentions in the remote sensing community. However, existing SAR change detection methods are mainly based on convolutional neural networks (CNNs), with limited consideration of global attention mechanism. In this letter, we explore Transformer-like architecture for SAR change detection to incorporate global attention. To this end, we propose a convolution and attention mixer (CAMixer). First, to compensate the inductive bias for Transformer, we combine self-attention with shift convolution in a parallel way. The parallel design effectively captures the global semantic information via the self-attention and performs local feature extraction through shift convolution simultaneously. Second, we adopt a gating mechanism in the feed-forward network to enhance the non-linear feature transformation. The gating mechanism is formulated as the element-wise multiplication of two parallel linear layers. Important features can be highlighted, leading to high-quality representations against speckle noise. Extensive experiments conducted on three SAR datasets verify the superior performance of the proposed CAMixer. The source codes will be publicly available at https://github.com/summitgao/CAMixer .
Haopeng Zhang, Zijing Lin, Feng Gao, Junyu Dong, Qian Du, Heng-Chao Li
2023-09-21T12:28:23Z
http://arxiv.org/abs/2309.12010v1
# Convolution and Attention Mixer for Synthetic Aperture Radar Image Change Detection

###### Abstract

Synthetic aperture radar (SAR) image change detection is a critical task and has received increasing attention in the remote sensing community. However, existing SAR change detection methods are mainly based on convolutional neural networks (CNNs), with limited consideration of the global attention mechanism. In this letter, we explore a Transformer-like architecture for SAR change detection to incorporate global attention. To this end, we propose a convolution and attention mixer (CAMixer). First, to compensate for the inductive bias of the Transformer, we combine self-attention with shift convolution in a parallel way. The parallel design effectively captures the global semantic information via self-attention and performs local feature extraction through shift convolution simultaneously. Second, we adopt a gating mechanism in the feed-forward network to enhance the non-linear feature transformation. The gating mechanism is formulated as the element-wise multiplication of two parallel linear layers. Important features can be highlighted, leading to high-quality representations against speckle noise. Extensive experiments conducted on three SAR datasets verify the superior performance of the proposed CAMixer. The source codes will be publicly available at [https://github.com/summitgao/CAMixer](https://github.com/summitgao/CAMixer).

Change detection; Synthetic aperture radar; Shift convolution; Gating mechanism.

## I Introduction

Synthetic aperture radar (SAR) image change detection is widely acknowledged as a fundamental task in interpreting and understanding remote sensing data. It has significant implications for various applications, including land cover monitoring [1][2] and disaster monitoring [3][4][5].
With the increasing availability of multitemporal SAR images, the development of reliable change detection methods applicable to real-world scenarios has become crucial [6]. While many supervised and unsupervised methods have been proposed for SAR change detection, supervised methods often require prior knowledge and high-quality labeled samples, which are inconvenient or even difficult to collect in real applications. Furthermore, existing unsupervised methods are commonly based on convolutional neural networks (CNNs), and have limitations in long-range feature modeling. Therefore, in this letter, we primarily focus on developing robust unsupervised SAR change detection methods. Recently, Liu et al. [7] introduced a spatial constraint on CNN. This spatial constraint restricts the convolution operations to local regions, thereby improving change detection performance. Saha et al. [8] proposed a Siamese convolutional network. This network employs a shared set of weights to handle multi-temporal SAR images. Wang et al. [9] employed a dual-path denoising network for SAR change detection. The network refines noisy labels in training samples. Hafner et al. [10] employed a dual-stream U-Net and performed data fusion of Sentinel-1 and Sentinel-2 images. The fusion of multi-source data, along with the dual-stream architecture, enables accurate urban change detection. Liu et al. [11] proposed a change detection approach based on image translation. By transforming images of different types, it effectively detects changes from multi-source data, providing a versatile solution for unsupervised change detection. Due to the inherent inductive bias in CNNs, existing methods possess the capability to discern subtle changes, such as edges and corners. Hence, the aforementioned CNN-based methods have demonstrated remarkable performance. 
However, with the emergence of Vision Transformer (ViT) [12], Transformer-based models have achieved significant success in various computer vision and image understanding tasks. These models utilize a global attention mechanism to capture long-range dependencies and compute informative features. Swin Transformer [13] achieves excellent performance in many vision tasks via shifted window self-attention computation. Despite their success, Transformers are rarely applied to multi-temporal SAR image analysis. Therefore, in this letter, we aim to investigate the potential of the attention mechanism for the SAR change detection task. It is commonly non-trivial to design a robust Transformer-like framework for SAR change detection, since it poses the following challenges: 1) Transformers lack the inherent inductive bias of CNNs, making them less effective when training data is limited. 2) The non-linear transformation of the feed-forward network (FFN) has limitations in robust feature representation and is vulnerable to speckle noise. To address these challenges, we present a Convolution and Attention **Mixer** for SAR change detection, **CAMixer** for short. First, to compensate for the inductive bias of the Transformer, we combine self-attention with shift convolution in a parallel way. The parallel design enriches feature representations by modeling convolution and attention simultaneously. Additionally, we adopt a gating mechanism in the FFN to enhance the non-linear feature representations. The gating mechanism is formulated as element-wise multiplication of two parallel linear layers. Important features can be highlighted, leading to high-quality representations against the speckle noise. In a nutshell, our contributions are threefold: * We present a convolution and self-attention mixed network for SAR change detection. To the best of our knowledge, we are the first to explore the Transformer-like network for multi-temporal SAR data interpretation. 
* We propose a gated feed-forward network (GFFN) for non-linear feature transformation. The gating mechanism is formulated as the element-wise product of two parallel paths of linear transformation layers, one of which is activated with the GELU activation. Hence, the GFFN selectively emphasizes important features, thereby mitigating the interference caused by speckle noise. * Extensive experiments conducted on three SAR datasets demonstrate the effectiveness of the proposed CAMixer. In order to benefit other researchers, we have made our code publicly available. ## II Methodology ### _Framework of the Proposed CAMixer_ SAR change detection aims to identify the changes that occur in the same area at different times (\(t_{1}\) and \(t_{2}\)). The overview of CAMixer is shown in Fig. 1. Preclassification is performed to generate training samples for CAMixer. Specifically, we first compute the difference image by the log-ratio operator. Then, hierarchical fuzzy \(c\)-means clustering [14] is used to classify the difference image into changed, unchanged, and intermediate classes. The pixels from the changed and unchanged classes are selected as training samples. In the proposed CAMixer, several mixing blocks are employed for local and global feature extraction. Finally, the extracted features are reshaped for classification. We now describe the key components of the mixing block: 1) Parallel Convolution and Attention Module (PCAM) and 2) Gated Feed-Forward Network (GFFN). ### _Parallel Convolution and Attention Module (PCAM)_ As shown in Fig. 1, our PCAM is composed of shift convolution and self-attention. **Shift convolution.** Inspired by Wang's work [15], we incorporate shift convolution for local feature extraction. It consists of a series of shift operations and a \(1\times 1\) convolution. The input features are evenly divided into five groups. The first four groups are shifted in different directions (left, right, top, bottom), while the last group remains unchanged. 
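The grouped shift just described can be sketched as follows. The channels-first layout, zero filling of vacated positions, and the exact group ordering are our assumptions for illustration, not details taken from the letter:

```python
import numpy as np

def shift_features(x):
    """Split channels into five groups; shift the first four spatially
    (left, right, up, down) and keep the fifth unchanged.
    x: array of shape (C, H, W) with C divisible by 5 (assumed layout).
    Vacated positions are filled with zeros."""
    c = x.shape[0] // 5
    out = np.zeros_like(x)
    out[0*c:1*c, :, :-1] = x[0*c:1*c, :, 1:]   # content shifted left
    out[1*c:2*c, :, 1:]  = x[1*c:2*c, :, :-1]  # content shifted right
    out[2*c:3*c, :-1, :] = x[2*c:3*c, 1:, :]   # content shifted up
    out[3*c:4*c, 1:, :]  = x[3*c:4*c, :-1, :]  # content shifted down
    out[4*c:] = x[4*c:]                        # identity group
    return out
```

A \(1\times 1\) convolution applied after this operation then mixes the displaced channels, which is what gives the shift convolution its spatial receptive field at negligible cost.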
In our implementation, we initially expand the number of channels of the input data \(X\) to \(\beta C\) using a \(1\times 1\) convolution, where \(\beta\) is the expansion ratio and \(C\) is the number of channels. Following the shift operation, we reduce the feature dimension back to the original size through another \(1\times 1\) convolution. This ensures consistency between the input and output feature sizes. Consequently, the shift convolution can be formulated as: \[\hat{X}=W^{2}_{1\times 1}(\text{shift}(W^{1}_{1\times 1}(X))), \tag{1}\] where \(W^{1}_{1\times 1}\) is the first \(1\times 1\) convolution, and \(W^{2}_{1\times 1}\) is the second \(1\times 1\) convolution. Through the shift operation, channels of the input data are shifted, enabling cross-channel information fusion through channel mixing. The second \(1\times 1\) convolution leverages information from neighboring pixels, while the shift convolution facilitates the incorporation of large receptive fields at a low computational burden. **Self-Attention Computation.** Inspired by ViT [12], we first divide the image into non-overlapping patches (\(3\times 3\) pixels), and encode each patch into a token embedding. Next, we compute query (\(Q\)), key (\(K\)), and value (\(V\)) via linear transformation of the token embedding. The output of self-attention is calculated by: \[\text{Attention}(Q,K,V)=\text{Softmax}(QK^{T}/\sqrt{d})V, \tag{2}\] where \(\sqrt{d}\) is a scaling factor.

Fig. 1: Illustration of the proposed Convolution and Attention Mixer (CAMixer) for SAR image change detection. The overall pipeline of CAMixer consists of three 3x3 convolutions and three mixing blocks. Each mixing block comprises a parallel Convolution and Attention Module (PCAM) and a Gated Feed-Forward Network (GFFN).

Fig. 2: Details of the PCAM. It consists of shift convolution and self-attention. The output of shift convolution and self-attention are fused through element-wise summation.
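Eq. (2) is the standard scaled dot-product attention. A minimal single-head NumPy sketch (the dimensions and projection setup are illustrative assumptions, not the letter's exact configuration):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (n_tokens, d) patch embeddings; w_q, w_k, w_v: (d, d) projections.
    Returns Softmax(Q K^T / sqrt(d)) V as in Eq. (2)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))  # (n_tokens, n_tokens)
    return weights @ v
```

Because every token attends to every other token, this path supplies the global context that the shift-convolution branch alone cannot provide.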
Finally, the output of shift convolution and self-attention are fused via element-wise summation. The obtained features are then normalized and fed into the GFFN to generate the input of the next mixing block. ### _Gated Feed-Forward Network_ To enhance non-linear feature transformation, an FFN is commonly used to process the output from the attention layer, enabling a better fit for the input of the subsequent attention layer. As illustrated in Fig. 1, we introduce the GFFN to further enhance representation learning. We make two modifications to the FFN: 1) multi-scale convolution and 2) gating mechanism. Firstly, we employ \(3\times 3\) and \(5\times 5\) depth-wise convolutions to enhance the extraction of multi-scale information. Additionally, we utilize the gating mechanism to emphasize the important components of the multi-scale convolutions. The proposed GFFN is formulated as: \[\hat{X}=W^{0}_{1\times 1}\text{Gating}(X)+X, \tag{3}\] \[\text{Gating}(X)=\sigma(W^{1}_{1\times 1}(X))\odot\phi(X), \tag{4}\] \[\phi(X)=W_{3\times 3}(W^{2}_{1\times 1}(X))+W_{5\times 5}(W^{2}_{1\times 1}(X)), \tag{5}\] where \(W^{0}_{1\times 1},W^{1}_{1\times 1}\), and \(W^{2}_{1\times 1}\) are \(1\times 1\) convolutions. \(W_{3\times 3}\) denotes \(3\times 3\) depth-wise convolution, and \(W_{5\times 5}\) denotes \(5\times 5\) depth-wise convolution. Here, \(\odot\) is element-wise multiplication, and \(\sigma\) is the GELU activation. To improve computational efficiency, we reduce the expansion ratio to 2 with marginal performance loss. ## III Experimental Results and Analysis ### _Datasets and Evaluation Metrics_ We conducted experiments on three datasets, namely the Yellow River, Chao Lake I, and Chao Lake II datasets, to validate the effectiveness of the proposed CAMixer. The Yellow River dataset covers the Yellow River Estuary region in China, with images captured in June 2008 and June 2009 using the Radarsat-2 SAR sensor. 
The Chao Lake I and II datasets cover a region of Chao Lake in China, with images captured in May 2020 and July 2020, respectively, using the Sentinel-1 sensor. During this period, Chao Lake experienced its highest recorded water level. The ground truth change maps for all three datasets were meticulously annotated by experts with prior knowledge. To evaluate the performance of change detection, we employ five evaluation metrics: false positives (FP), false negatives (FN), overall error (OE), percentage of correct classification (PCC), and Kappa coefficient (KC). ### _Analysis of the Parallel Block Number_ There are \(N\) PCAMs in the proposed CAMixer, and \(N\) is a critical parameter that may affect the change detection performance. To investigate the relationship between \(N\) and change detection accuracy, we set \(N\) from 0 to 8. Fig. 3 shows that when the number of PCAMs increases, the value of PCC first increases and then becomes stable. However, more PCAMs would increase the computational burden. Therefore, we set \(N=3\) for the Chao Lake II dataset, and \(N=5\) for the Yellow River and Chao Lake I datasets. Fig. 3: Relationship between the number of the parallel blocks and the PCC value. Fig. 4: Visualization of the feature representations on the Chao Lake I dataset. (a) Features before the PCAM. (b) Features after the PCAM. ### _Ablation Study_ We conduct ablation experiments to verify the effectiveness of the PCAM and GFFN for the change detection task. We design the following four variants: 1) _Basic Network_ represents the backbone without PCAM and GFFN, 2) _w/o PCAM_ denotes the proposed method without PCAM, 3) _w/o GFFN_ denotes the proposed method without GFFN, and 4) _w/o H-Clustering_ denotes the proposed method with fuzzy \(c\)-means for preclassification instead of hierarchical clustering [14]. 
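For reference, the PCC and KC used throughout the evaluation follow directly from the confusion counts of the binary change map. A small sketch using the standard definitions (the function name is ours):

```python
def change_detection_metrics(tp, tn, fp, fn):
    """OE, PCC, and Kappa coefficient from confusion counts
    (tp/tn = true positives/negatives, fp/fn = false positives/negatives)."""
    n = tp + tn + fp + fn
    oe = fp + fn                 # overall error
    pcc = (tp + tn) / n          # percentage of correct classification
    # expected chance agreement used by the Kappa coefficient
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kc = (pcc - pe) / (1 - pe)
    return oe, pcc, kc
```

Unlike PCC, the Kappa coefficient discounts chance agreement, which matters on SAR scenes where unchanged pixels dominate.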
The results in Table I demonstrate that compared to our full model, both _w/o PCAM_ and _w/o GFFN_ consistently exhibited lower performance on all datasets. This indicates that the PCAM significantly enhances the change detection performance, while the GFFN marginally improves it. It shows that the GFFN enhances the non-linear feature transformation. Furthermore, the proposed method using hierarchical clustering demonstrates superior performance compared to _w/o H-Clustering_. It is apparent that hierarchical clustering generates more reliable training samples for the proposed CAMixer, consequently enhancing the change detection performance. To further verify the validity of our proposed PCAM, we used the t-SNE [21] tool to visualize the features before and after the module. As shown in Fig. 4, the feature representations after PCAM are noticeably more discriminative. ### _Experimental Results and Comparison_ We compare the proposed CAMixer with five baselines, including PCA-KM [16], NR-CR [17], NR-ELM [18], DDNet [19] and MSAPNet [20]. Fig. 5 illustrates the visual comparison of the change maps generated by different methods on three datasets. The corresponding quantitative evaluation metrics are illustrated in Table II. Fig. 5: Visualized results of different change detection methods on the three datasets: (a) Image captured at \(t_{1}\). (b) Image captured at \(t_{2}\). (c) Ground truth image. (d) Result by PCA-KM [16]. (e) Result by NR-CR [17]. (f) Result by NR-ELM [18]. (g) Result by DDNet [19]. (h) Result by MSAPNet [20]. (i) Result by the proposed CAMixer. _Results on the Yellow River dataset:_ The Yellow River dataset is severely degraded by speckle noise. As a result, it is difficult to obtain satisfactory results by conventional methods. The qualitative results of this dataset are shown in the first row of Fig. 5. 
It can be observed that the proposed CAMixer suppresses the false alarms effectively, and the change map of CAMixer is the most similar to the ground truth. Furthermore, the CAMixer reports the best PCC value, gaining 0.63% and 0.86% improvement in PCC over DDNet and MSAPNet, respectively. DDNet and MSAPNet are CNN-based methods, and it is evident that CAMixer improves the change detection performance by introducing the Transformer-like architecture. _Results on the Chao Lake I and II datasets:_ The qualitative results of Chao Lake I and II datasets are shown in the second and third rows of Fig. 5, respectively. The proposed CAMixer greatly reduces the false alarms, and obtains the best PCC and KC values on both datasets. It is evident that the proposed CAMixer improves the feature representations via parallel convolution and self-attention computation. The parallel design of shift convolution and self-attention extracts local and global features simultaneously, leading to high-quality representations against the speckle noise. Additionally, the GFFN selectively emphasizes critical features, which further mitigates the interference caused by speckle noise. From the above experiments on three SAR datasets, it can be seen that the proposed CAMixer has better performance than several traditional methods and CNN-based methods. Furthermore, CAMixer reports the best KC values, gaining 1.90%, 12.24%, and 1.29% improvement over the second-best one on three datasets, respectively. It should be noted that the KC value is one of the most convincing evaluation metrics for SAR change detection. Moreover, the CAMixer obtains balanced FP and FN values on three datasets. It demonstrates that the PCAM captures abundant convolution and self-attention feature interactions, and contributes to better change detection results. ## IV Conclusions and Future Work In this letter, we propose CAMixer, a novel SAR change detection network that produces reliable change detection results. 
To address the inductive bias limitation of Transformer-like networks, we combine self-attention with shift convolution in a parallel manner. Moreover, we propose a gated feed-forward network to enhance non-linear feature transformation, formulated as the element-wise multiplication of two parallel linear layers. Extensive experiments on three SAR change detection datasets demonstrate the superiority of CAMixer and validate the effectiveness of its two critical components. In the future, we plan to investigate the fusion of multi-source remote sensing data to improve the change detection performance.
2309.14461
Non-radiant multiphoton states in quantum ring oligomers
Arrays of coupled dipole emitters support collective single- and multiphoton states that can preserve quantum excitations. One of the crucial characteristics of these states is the lifetime, which is fundamentally limited due to spontaneous emission. Here, we present a mechanism of external coupling of two states via the radiation continuum, which allows for an increase in the lifetime of both single and double excitations. As an illustrative example, we consider a ring-like ensemble of quantum emitters, demonstrating that upon slight optimization of the structure geometry, one can increase the lifetime of singly and doubly excited states with non-zero orbital momentum by several orders of magnitude.
Nikita Ustimenko, Danil Kornovan, Ilya Volkov, Alexandra Sheremet, Roman Savelev, Mihail Petrov
2023-09-25T18:51:31Z
http://arxiv.org/abs/2309.14461v2
# Non-radiant multiphoton states in quantum ring oligomers ###### Abstract Arrays of coupled dipole emitters support collective single- and multiphoton states that can preserve quantum excitations. One of the crucial characteristics of these states is the lifetime, which is fundamentally limited due to spontaneous emission. Here, we present a mechanism of external coupling of two states via the radiation continuum, which allows for an increase in the lifetime of both single and double excitations. As an illustrative example, we consider a ring-like ensemble of quantum emitters, demonstrating that upon slight optimization of the structure geometry, one can increase the lifetime of singly and doubly excited states with non-zero orbital momentum by several orders of magnitude. Assembling quantum emitters in ordered systems allows for enhancing light-matter interaction [1; 2], which is crucial for quantum information [3; 4], quantum sensing [5; 6] and optomechanical [7; 8; 9; 10] applications. Due to advanced trapping techniques, emitters can be assembled into structured arrays in free space [11; 12; 13; 14] or in the vicinity of nanophotonic structures [15; 16; 17; 18], which induces collective effects in single- and multiphoton regimes [19; 20; 21; 22; 23; 24; 25; 26]. However, reliable manipulation of quantum states requires their stability, which can be easily destroyed by the spontaneous emission inevitably present in open quantum systems. In this context, controlling the lifetime of quantum states remains one of the key problems in modern quantum optics. Fortunately, this problem can be resolved by generating subradiant states characterized by suppressed spontaneous decay. Since the pioneering work of R. Dicke [27], the emergence of these states has been actively studied both theoretically [28; 29; 30; 31; 32; 33] and experimentally [34; 35; 36; 37; 38]. 
Besides, spatial ordering of quantum dipole emitters can provide additional control over the lifetime of subradiant states for one-dimensional arrays in free space [39; 40; 41; 26; 42; 43; 44; 27; 45; 46], near a waveguide [42; 43; 44; 45; 46], or for two-dimensional arrays [47; 48; 49]. It has been shown that the radiative decay of large systems can be strongly suppressed, following either a polynomial \(\propto N^{-\alpha}\)[40; 44] or exponential \(\propto e^{-\beta N}\)[39] dependence on the number of quantum emitters in the array \(N\). However, the suppression of radiative decay in smaller structures consisting of as few as several to tens of emitters requires different approaches. In this work, we demonstrate the feasibility of forming singly and doubly excited subradiant eigenstates in finite dipole ensembles based on the mechanism of external coupling, which was initially proposed by H. Friedrich and D. Wintgen [50] for open quantum systems. It has recently garnered significant attention in the field of nanophotonics for highly efficient nonlinear generation [51], lasing in single semiconductor nanostructures [52], achieving a strong nonlinear response [53], and engineering bound states in the continuum of extended periodic structures such as metasurfaces [54; 55]. For the first time, we demonstrate that this mechanism can also be extended to form doubly excited subradiant states. Our focus is on concentric rings of two-level dipole emitters, which have already attracted significant attention in quantum optics [56; 57; 58; 59; 60; 61], owing to their high symmetry and relevance to various natural quantum systems such as organic molecules. The observation of long-lived doubly excited states with non-zero orbital momentum may find utility in quantum information protocols that involve beams with high angular momentum [62; 63; 64; 65]. 
The proposed mechanism can be straightforwardly applied to ordered arrays with different symmetries and geometries to extend the lifetime of quantum excitations. _General formalism._--Let us consider a double-ring ensemble of \(N\) two-level dipole emitters shown in Fig. 1. All dipoles are located in the \(z=0\) plane and can be characterized by decay rate \(\gamma_{0}=k_{0}^{3}|\mathbf{d}|^{2}/(3\pi\hbar\epsilon_{0})\), where \(k_{0}=\omega_{0}/c=2\pi/\lambda_{0}\) is the wavenumber in vacuum, \(\omega_{0}\) is the transition frequency, and \(\epsilon_{0}\) is the vacuum permittivity. The transition dipole moment of emitters is fixed to be oriented along the \(z\)-axis, \(\mathbf{d}=|\mathbf{d}|\mathbf{e}_{z}\), which can be achieved by applying an external magnetic field isolating this transition from the two in-plane ones. The emitters are coupled via free-space electromagnetic modes, and their quantum states are governed by the effective non-Hermitian Hamiltonian (further, the Planck constant is set to be \(\hbar=1\)) [58]: \[\widehat{H}_{\text{eff}}=-i\frac{\gamma_{0}}{2}\sum\limits_{k=1}^{N}\hat{\sigma}_{k}^{\dagger}\hat{\sigma}_{k}+\sum\limits_{k=1}^{N}\sum\limits_{\begin{subarray}{c}l=1,\\ l\neq k\end{subarray}}^{N}g(|\mathbf{r}_{kl}|,\omega_{0})\hat{\sigma}_{k}^{\dagger}\hat{\sigma}_{l},\] where \(\hat{\sigma}_{k}^{\dagger}\) (\(\hat{\sigma}_{k}\)) is the creation (annihilation) operator for excitation on emitter \(k\), \(|\mathbf{r}_{kl}|=|\mathbf{r}_{k}-\mathbf{r}_{l}|\) is the relative distance between emitters \(k\) and \(l\), and the energy of the non-interacting system \(\omega_{0}\sum\limits_{k=1}^{N}\hat{\sigma}_{k}^{\dagger}\hat{\sigma}_{k}\) is subtracted.

Figure 1: An open system representing a double-ring oligomer of two-level dipole emitters. The doubly excited eigenstates can have non-zero orbital quasi-momentum with radiative losses that can be strongly suppressed via the mechanism of external coupling.
The coupling rate between two emitters is defined via the free-space electromagnetic Green's tensor [66]: \(g(|\mathbf{r}|,\omega_{0})=\left(-3\gamma_{0}\pi/k_{0}\right)\mathbf{e}_{z}^{T}\cdot\mathbf{G}_{0}(\mathbf{r},\omega_{0})\cdot\mathbf{e}_{z}\). The Born-Markov approximation \(g(|\mathbf{r}|,\omega)\approx g(|\mathbf{r}|,\omega_{0})\) allows one to neglect the dispersion of the coupling rate, since \(\gamma_{0}\ll\omega_{0}\). _Ring oligomer._--First, we consider an ensemble of \(N_{d}\) emitters arranged in a ring of radius \(R\) with the corresponding separation distance between neighboring emitters \(a=2R\sin(\pi/N_{d})\). The eigenvalues of the system can be found by substituting the effective Hamiltonian into the Schrödinger equation \(\widehat{H}_{\text{eff}}\left|\psi\right\rangle=\varepsilon\left|\psi\right\rangle\) (Sec. S1, [67]). Each singly excited eigenstate of the ring can be associated with the orbital quasi-momentum \(m\) due to the discrete rotational symmetry of the system, \(\left|\psi\right\rangle=\left|\psi_{\text{ring}}^{(m)}\right\rangle\). As an illustrative example, we consider \(N_{d}=6\) dipoles, for which \(m\) can be equal to \(0,\pm 1,\pm 2,3\) (Sec. S2, [67]). These eigenstates have different parity under the symmetry operations from point group \(C_{6v}\) and transform according to the irreducible representations \(A_{1}\), \(E_{1}\), \(E_{2}\), and \(B_{2}\), respectively (Sec. S4, [67]). As a result, the eigenstates with \(\pm m\) are degenerate, i.e., \(\varepsilon^{(m)}=\varepsilon^{(-m)}\). In the basis of coupled emitters, the eigenstate with a quasi-momentum \(m\) reads as \(\left|\psi_{\text{ring}}^{(m)}\right\rangle=\sum\limits_{k=1}^{N_{d}}c_{k}^{(m)}\hat{\sigma}_{k}^{\dagger}\left|g\right\rangle^{\otimes N_{d}}\), where \(c_{k}^{(m)}=e^{im\varphi_{k}}/\sqrt{N_{d}}\) is the excitation probability amplitude for emitter \(k\) and \(\varphi_{k}=2\pi\left(k-1\right)/N_{d}\), see Refs. [56; 58] and also Sec. S2 in [67]. 
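The single-ring spectrum can be reproduced numerically by diagonalizing \(\widehat{H}_{\text{eff}}\). Below is a sketch in units \(\gamma_{0}=k_{0}=1\) that uses the standard \(zz\)-component of the free-space Green's tensor for in-plane separations; the function names and unit conventions are ours, not the paper's:

```python
import numpy as np

def g_zz(r):
    """Coupling rate g(r) for z-oriented dipoles with in-plane separation r,
    in units gamma0 = k0 = 1 (zz-component of the free-space Green's tensor)."""
    return -0.75 * np.exp(1j * r) * (1 / r + 1j / r**2 - 1 / r**3)

def ring_eigenenergies(n_d, radius):
    """Eigenenergies of H_eff for n_d emitters on a ring of a given radius.
    Collective decay rates are gamma = -2 Im(eps)."""
    phi = 2 * np.pi * np.arange(n_d) / n_d
    pos = radius * np.column_stack([np.cos(phi), np.sin(phi)])
    h = -0.5j * np.eye(n_d, dtype=complex)   # -i gamma0/2 on the diagonal
    for k in range(n_d):
        for l in range(n_d):
            if k != l:
                h[k, l] = g_zz(np.linalg.norm(pos[k] - pos[l]))
    return np.linalg.eigvals(h)
```

The trace of \(\widehat{H}_{\text{eff}}\) fixes the sum of the collective decay rates to \(N_{d}\gamma_{0}\), so any subradiant state is necessarily accompanied by a superradiant one.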
The radiative losses of a single ring do not exhibit any peculiarities [57; 58; 39] and the collective decay rate of subradiant eigenstates monotonically decreases with the ring size (Sec. S5, [67]). However, the situation can be drastically changed by adding a single emitter at the ring center, forming the _oligomer_ ensemble, as shown in Fig 2(a). The external coupling between the ring state with \(m=0\) and the central dipole via the radiation continuum leads to the formation of a long-living state. This can be understood by constructing the wave function for the oligomer's eigenstates from two contributions: \(\left|\psi\right\rangle=c_{a}\left|g_{\text{ring}}\right\rangle\otimes\left|e _{0}\right\rangle+c_{b}\left|\psi_{\text{ring}}^{(0)}\right\rangle\otimes \left|g_{0}\right\rangle\), where \(\left|g_{0}\right\rangle\) and \(\left|e_{0}\right\rangle\) are the wave functions of the central dipole emitter in the ground and excited states, respectively. The central emitter can couple only to the ring state with \(m=0\) due to symmetry considerations. The Hamiltonian in such a basis can be represented as a sum of the unperturbed \(\widehat{H}_{0}\) and interaction \(\widehat{V}\) parts [59]: \[\widehat{H}=\widehat{H}_{0}+\widehat{V}=\begin{pmatrix}\varepsilon_{0}&0\\ 0&\varepsilon_{\text{ring}}^{(m=0)}\end{pmatrix}+\begin{pmatrix}0&\varkappa\\ \varkappa&0\end{pmatrix}, \tag{1}\] where \(\varepsilon_{0}=-i\gamma_{0}/2\) and \(\varepsilon_{\text{ring}}^{(m=0)}=-i\gamma_{0}/2+\sum\limits_{k=2}^{N_{d}}g(| \mathbf{r}_{1k}|,\omega_{0})\) are the energies of the excited central emitter \(\left|e_{0}\right\rangle\) and the excited ring eigenstate \(\left|\psi_{\text{ring}}^{(0)}\right\rangle\), respectively, while the coupling rate is given by \(\varkappa=\sqrt{N_{d}}g\left(R,\omega_{0}\right)\). As depicted in Fig. 
2(a), the interaction between subsystems leads to the appearance of symmetric \(\left|\psi_{+}\right\rangle\) and antisymmetric \(\left|\psi_{-}\right\rangle\) hybridized eigenstates with \(m=0\). Their decay rates are defined by the imaginary part of the eigenenergies, \(\gamma_{\pm}=-2\operatorname{Im}\left(\varepsilon_{\pm}\right)\), where \(\varepsilon_{\pm}\) are given in Sec. S6, [67]. The calculations reveal that the antisymmetric eigenstate possesses resonantly increased lifetime by a factor of \(\tau_{-}/\tau_{0}=\gamma_{0}/\gamma_{-}\approx 230\) for the particular separation between emitters \(a/\lambda_{0}\approx 0.16\), see Fig. 2(b). At this point, the phase shift between the probability amplitudes for the central dipole and the ring reaches \(\pi\) exactly, while the excitation is primarily located at the central emitter, see inset in Fig. 2(b) and Sec. S6 in [67]. Such suppression of radiation, caused by external coupling between the ring and the central emitter states, resembles the destructive interference between the radial modes in dielectric cylindrical cavities [68; 69].

Figure 2: (a) Schematic representation of the hybridization of ring states with a single emitter’s state. (b) Lifetime enhancement for the states in panel (a). Inset: The excitation probability is shown in red for the oligomer eigenstates. (c) Scattering cross section (color-coded) for the ring oligomer excited with the Bessel beam (inset). The red and green lines represent the eigenfrequencies of the oligomer eigenstates. (d) Lifetime enhancement for antisymmetric eigenstates with different \(m\) forming in two rings with \(b/a=2\) (inset).

The long-living oligomer state can be excited with tightly focused Bessel beams possessing a longitudinal component of the field [70]. In Fig. 
2(c), one can see the scattering cross section (SCS) \(\sigma\) for the oligomer ensembles of different sizes illuminated by a Bessel beam with orbital angular momentum \(\ell=1\) and spin \(s=-1\) adding up to the total angular momentum \(m=0\). The SCS of the oligomer is normalized by \(N\sigma_{0}\) where \(N=N_{d}+1=7\) is the total number of emitters in the system, and \(\sigma_{0}\) is the SCS for the central emitter (Sec. S7, [67]). One can observe a resonant enhancement of the SCS for the symmetric (superradiant) eigenstate, \(\ket{\psi_{+}}\), and a drastic narrowing of the spectral line corresponding to the antisymmetric (subradiant) eigenstate, \(\ket{\psi_{-}}\), when approaching the optimal size condition. The proposed external coupling mechanism can also be applied to control the lifetime of eigenstates with \(m\neq 0\). For instance, in the oligomer ensemble consisting of two concentric rings shown in Fig. 1, the symmetry of singly excited states in the isolated inner and outer rings, or \(\ket{\psi_{\text{in-ring}}^{(m)}}\) and \(\ket{\psi_{\text{out-ring}}^{(m)}}\), is the same. Therefore, the coupling between rings can lead to radiation suppression for arbitrary \(m\). The mechanism of such interaction can also be described within the framework of two coupled states with the wave function of the coupled system given as \(\ket{\psi_{\text{two-ring}}^{(m)}}=c_{a}^{(m)}\ket{\psi_{\text{in-ring}}^{(m)}}\otimes\ket{g_{\text{out-ring}}}+c_{b}^{(m)}\ket{g_{\text{in-ring}}}\otimes\ket{\psi_{\text{out-ring}}^{(m)}}\). By scaling the oligomer size with the fixed ratio \(b/a=2\), one can reach the regime of resonantly enhanced lifetime for antisymmetric eigenstates with different \(m\) as shown in Fig. 2(d). Additional details on the superradiant symmetric states are provided in Sec. S6, [67]. 
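The two-level model of Eq. (1) is easy to evaluate numerically. The sketch below scans the ring spacing for the single-ring-plus-center oligomer (\(N_{d}=6\)) and exposes the lifetime maximum of the antisymmetric state; the units \(\gamma_{0}=k_{0}=1\) and the \(zz\) Green's-function coupling are our conventions, so the numbers are illustrative rather than a reproduction of Fig. 2(b):

```python
import numpy as np

def g_zz(r):
    """zz Green's-function coupling for in-plane separation r (gamma0 = k0 = 1)."""
    return -0.75 * np.exp(1j * r) * (1 / r + 1j / r**2 - 1 / r**3)

def hybridized_energies(a_over_lam, n_d=6):
    """Eigenenergies eps_± of the 2x2 model of Eq. (1) for a ring of n_d
    emitters (nearest-neighbor spacing a) plus one emitter at the center."""
    a = 2 * np.pi * a_over_lam               # k0 = 1, so lambda0 = 2*pi
    radius = a / (2 * np.sin(np.pi / n_d))
    phi = 2 * np.pi * np.arange(n_d) / n_d
    pos = radius * np.column_stack([np.cos(phi), np.sin(phi)])
    eps_0 = -0.5j                            # excited central emitter
    eps_ring = -0.5j + sum(g_zz(np.linalg.norm(pos[0] - pos[k]))
                           for k in range(1, n_d))
    kappa = np.sqrt(n_d) * g_zz(radius)      # ring-center coupling
    return np.linalg.eigvals(np.array([[eps_0, kappa], [kappa, eps_ring]]))
```

Scanning \(a/\lambda_{0}\) around \(0.16\) in this toy model shows \(\gamma_{-}=-2\operatorname{Im}\varepsilon_{-}\) dropping by orders of magnitude, consistent with the external-coupling picture described above.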
_Doubly excited states._--The mechanism of external coupling can also be extended to doubly excited states, which, to the best of our knowledge, has not been proposed before. Doubly excited quantum states form a manifold in the Hilbert space with a dimension of \(N(N-1)/2\). Due to the symmetry of the wave function and the Pauli principle, they can be expanded over the wave function basis as \(\ket{\Psi}=\sum\limits_{k=1}^{N}\sum\limits_{l=k+1}^{N}c_{kl}\hat{\sigma}_{k} ^{\dagger}\hat{\sigma}_{l}^{\dagger}\ket{g}^{\otimes N}\). Importantly, one can characterize doubly excited eigenstates of ring oligomers with orbital quasi-momentum \(m\) in a similar manner to the singly excited eigenstates. Moreover, a doubly excited eigenstate can be expanded over the products of singly excited eigenstates, with \(v_{m_{1},m_{2}}\neq 0\) if \(m_{1}+m_{2}=m\) (mod \(N_{d}\)). This condition for the quasi-momentum immediately follows from representation theory. For example, a direct product of two wave functions of singly excited states entering the \(E_{1}\) (\(m_{1}=\pm 1\)) and \(E_{2}\) (\(m_{2}=\pm 2\)) irreducible representations results in the wave function of a doubly excited state that enters one of the three irreducible representations \(E_{1}\otimes E_{2}=B_{1}+B_{2}+E_{1}\), with a total \(m=3\) (\(B_{1}\), \(B_{2}\)) or \(m=\pm 1\) (\(E_{1}\)). Two rings of \(N_{d}=6\) emitters support 61 doubly excited eigenstates, including ten with the largest orbital quasi-momentum \(m=3\) (Sec. S3, [67]). From now on, we will pay special attention to the eigenstates \(\ket{\Psi^{(3)}}\) with \(m=3\), which also enter the \(B_{1}\) irreducible representation. The indistinguishability of excitations and the Pauli principle imply that the double-ring oligomer supports only four possible doubly excited eigenstates of this type: \[\left|\Psi_{s_{1},s_{2}}^{(3)}\right\rangle =\frac{i}{2}\left(\left|\psi_{s_{1}}^{(+1)}\right\rangle\left|\psi_ {s_{2}}^{(+2)}\right\rangle+\left|\psi_{s_{2}}^{(+2)}\right\rangle\left|\psi_{s _{1}}^{(+1)}\right\rangle\right.\] \[-\left.\left|\psi_{s_{1}}^{(-1)}\right\rangle\left|\psi_{s_{2}}^{( -2)}\right\rangle-\left|\psi_{s_{2}}^{(-2)}\right\rangle\left|\psi_{s_{1}}^{(- 1)}\right\rangle\right), \tag{2}\] where \(s_{1},s_{2}=\pm\) correspond to symmetric (antisymmetric) singly excited eigenstates of the double-ring oligomer.

Figure 3: (a) Eigenfrequency curves for eigenstates (2), where the color corresponds to the lifetime enhancement. The inset schematically shows the largest amplitudes \(c_{kl}\) for the non-radiant eigenstate. (b) The lifetime enhancement for singly excited eigenstates with \(m=1\) (blue dash-dotted) and \(m=2\) (green dashed) that form doubly excited eigenstates (2) with \(m=3\) (orange solid).

We note that the other six doubly excited eigenstates with \(m=3\) have lower lifetimes and correspond to the \(B_{2}\) symmetry, since they contain a direct product of the singly excited eigenstates with \(m_{1}=0\) (\(A_{1}\) representation) and \(m_{2}=3\) (\(B_{2}\) representation). Doubly excited eigenstates inherit their properties from at least two singly excited eigenstates, and therefore their lifetime can also be controlled with the external coupling mechanism. Indeed, the energy \(\mathcal{E}^{(m)}=\omega^{(m)}-i\Gamma^{(m)}/2\) of a doubly excited state \(\left|\Psi^{(m)}\right\rangle\) can be written as (Sec. S8, [67]) \[\mathcal{E}^{(m)}=\sum\limits_{m_{1},m_{2}}\left|v_{m_{1},m_{2}}\right|^{2} \left[\varepsilon^{(m_{1})}+\varepsilon^{(m_{2})}\right], \tag{3}\] where the amplitudes \(v_{m_{1},m_{2}}\) can be found from the direct diagonalization of the effective Hamiltonian (Sec. S1, [67]). Hence, the energies of states (2) are \(\mathcal{E}_{s_{1},s_{2}}^{(3)}=\varepsilon_{s_{1}}^{(+1)}+\varepsilon_{s_{2 }}^{(+2)}\), where \(\varepsilon_{\pm}^{(m)}\) are given in Sec. S6, [67].
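The selection rule \(m_{1}+m_{2}=m\) (mod \(N_{d}\)) behind the states above can be enumerated in a few lines; a sketch for \(N_{d}=6\) and \(m=3\):

```python
from itertools import product

N_d = 6
target_m = 3
# singly excited quasi-momenta of one ring of N_d = 6 emitters: m = -2 .. 3
single_m = range(-N_d // 2 + 1, N_d // 2 + 1)

# unordered pairs (m1, m2) that fold to the target quasi-momentum mod N_d
pairs = sorted({tuple(sorted((m1, m2)))
                for m1, m2 in product(single_m, repeat=2)
                if (m1 + m2) % N_d == target_m % N_d})
print(pairs)  # -> [(-2, -1), (0, 3), (1, 2)]
```

The pairs \((\pm 1,\pm 2)\) are exactly those entering Eq. (2), while \((0,3)\) corresponds to the \(A_{1}\otimes B_{2}\) combinations with lower lifetimes discussed above.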
Consequently, the radiative decay of states (2) can also be decomposed into the sum \(\Gamma_{s_{1},s_{2}}^{(3)}=\gamma_{s_{1}}^{(+1)}+\gamma_{s_{2}}^{(+2)}\). Hence, the suppression of radiative losses for eigenstates (2) can be achieved for a particular oligomer geometry when both singly excited eigenstates with \(m_{1}=1\) and \(m_{2}=2\) have the lowest radiative losses. By varying the inner (\(a/\lambda_{0}\)) and outer ring (\(b/a\) ratio) sizes independently, we find the optimal parameters to be \(b/a=2.2\) and \(a/\lambda_{0}\approx 0.16\) (Sec. S9, [67]). Indeed, for these parameters, the fully antisymmetric eigenstate \(\left|\Psi_{--}^{(3)}\right\rangle\) has a lifetime that is two orders of magnitude larger than that of a single emitter, see Fig. 3(a). Moreover, this point is characterized by the maximal lifetime of both antisymmetric singly excited eigenstates with \(m=1\) and \(m=2\), see Fig. 3(b). The radiative losses for the antisymmetric state with \(m=2\) are much smaller than those for \(m=1\), i.e., \(\gamma_{-}^{(2)}\ll\gamma_{-}^{(1)}\); therefore, the overall radiative losses for the non-radiant doubly excited eigenstate \(\left|\Psi_{--}^{(3)}\right\rangle\) are \(\Gamma_{--}^{(3)}\approx\gamma_{-}^{(1)}\), see Fig. 3(b). We emphasize that the form of the non-radiant eigenstate \(\left|\Psi_{--}^{(3)}\right\rangle\), given by Eq. (2), implies that both excitations within this state are predominantly localized on the inner ring, see inset in Fig. 3(a), inheriting the properties of the \(\left|\psi_{-}^{(\pm 1)}\right\rangle\) and \(\left|\psi_{-}^{(\pm 2)}\right\rangle\) non-radiant eigenstates. _Photon emission and spatial correlations._--Finally, the radiative decay of doubly excited states can be characterized by the second-order correlation function, which may be necessary for the future design of potential detection schemes.
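The decomposition \(\Gamma_{s_{1},s_{2}}^{(3)}=\gamma_{s_{1}}^{(+1)}+\gamma_{s_{2}}^{(+2)}\) makes the loss budget of the doubly excited state a one-line estimate; the rates below are hypothetical placeholders illustrating the regime \(\gamma_{-}^{(2)}\ll\gamma_{-}^{(1)}\), not the optimized values of the paper:

```python
# Hypothetical single-excitation decay rates (in units of the
# single-emitter rate gamma_0), with gamma2 << gamma1.
gamma0 = 1.0
gamma1 = 1e-2 * gamma0   # antisymmetric m = 1 eigenstate
gamma2 = 1e-4 * gamma0   # antisymmetric m = 2 eigenstate

Gamma_doubly = gamma1 + gamma2            # total decay of the doubly excited state
enhancement = 2 * gamma0 / Gamma_doubly   # lifetime vs. two independent emitters

print(Gamma_doubly, enhancement)
# the sum is dominated by the lossier constituent: Gamma ~ gamma1
```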
This function allows for describing spatial correlations between photons emitted by a state \(\left|\Psi\right\rangle\) when detectors \(D_{1}\) and \(D_{2}\) are positioned at coordinates \(\mathbf{r}_{1}^{D}\) and \(\mathbf{r}_{2}^{D}\), respectively, and reads as [71; 59]: \[g^{(2)}(\mathbf{r}_{1}^{D},\mathbf{r}_{2}^{D})=\frac{\sum\limits_{\alpha, \beta}\langle\Psi|\hat{E}_{\beta,2}\hat{E}_{\alpha,1}\hat{E}_{\alpha,1}^{ \dagger}\hat{E}_{\beta,2}^{\dagger}|\Psi\rangle}{\sum\limits_{\alpha,\beta} \langle\Psi|\hat{E}_{\beta,1}\hat{E}_{\beta,1}^{\dagger}|\Psi\rangle\langle \Psi|\hat{E}_{\alpha,2}\hat{E}_{\alpha,2}^{\dagger}|\Psi\rangle}. \tag{4}\] Here, \(\alpha,\beta=x,y,z\) denote the components in the Cartesian coordinate system, and \(\hat{E}_{\alpha,1(2)}\equiv\hat{E}_{\alpha}\left(\mathbf{r}_{1(2)}^{D}\right)\) is the electric field operator that creates a photon at the detector position \(\mathbf{r}_{1(2)}^{D}\) with the polarization along the \(\alpha\)-axis. The corresponding electric field operator can be expressed via the free-space Green's tensor [39]: \(\hat{\mathbf{E}}^{\dagger}(\mathbf{r})=k_{0}^{2}/\epsilon_{0}\sum\limits_{k =1}^{N}\mathbf{G}_{0}(\mathbf{r}-\mathbf{r}_{k},\omega_{0})\cdot\mathbf{d} \hat{\sigma}_{k}\). Singly excited states can be characterized by the far-field radiation pattern of a single photon \(p(\theta,\varphi)\) (Sec. S7, [67]). In Fig. 4(a), two possible configurations of a two-photon detection scheme are presented: one in which the first detector is at the polar position (\(\theta_{1}^{D}=0\)) while the second one scans over the sphere (Configuration 1), and another in which both detectors are placed at the same point (\(\mathbf{r}_{1}^{D}=\mathbf{r}_{2}^{D}\)) (Configuration 2). The second-order correlation function (4) of the non-radiant eigenstate \(\left|\Psi_{--}^{(3)}\right\rangle\) (see Eq. (2) and Fig. 3) is shown in Fig. 4(b).
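The field sum above relies on the free-space dyadic Green's tensor \(\mathbf{G}_{0}\); a minimal implementation of its standard form (prefactors such as \(k_{0}^{2}/\epsilon_{0}\) and the dipole moment are omitted, and the evaluation point is an arbitrary placeholder):

```python
import numpy as np

def green_tensor_free(r_vec, k):
    """Free-space dyadic Green's tensor G0(r), with r_vec = r_obs - r_src."""
    r = np.linalg.norm(r_vec)
    n = r_vec / r                      # unit vector r-hat
    kr = k * r
    outer = np.outer(n, n)
    G = (np.exp(1j * kr) / (4 * np.pi * r)) * (
        (1 + 1j / kr - 1 / kr**2) * np.eye(3)
        + (-1 - 3j / kr + 3 / kr**2) * outer)
    return G

k = 2 * np.pi                              # wavenumber for wavelength 1
r_far = np.array([500.0, 300.0, 200.0])    # far zone: kr >> 1
G = green_tensor_free(r_far, k)
n = r_far / np.linalg.norm(r_far)
# in the far zone the radiated field is transverse: n . G -> 0
print(np.abs(n @ G).max())
```

In the far zone the tensor reduces to \(e^{ikr}(\mathbf{I}-\hat{\mathbf{n}}\hat{\mathbf{n}})/(4\pi r)\), so the projection of the emitted field along \(\hat{\mathbf{n}}\) vanishes as \(1/(kr)\).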
It exhibits typical hexagonal features with a maximal correlation function at a nodal line around \(\theta\approx 60-70^{\circ}\) for both configurations. This behavior can also be explained by radiation patterns for the singly excited eigenstates \(\left|\psi_{-}^{(+1)}\right\rangle\) and \(\left|\psi_{-}^{(+2)}\right\rangle\) with \(m=+1\) and \(m=+2\), respectively, shown in Fig. 4(c). While maximal emission of the \(m=1\) eigenstate is observed around \(\theta=35^{\circ}\), the emission of the \(m=2\) eigenstate is mainly concentrated around the \(\theta=90^{\circ}\) plane, which results in a maximal correlation function at an intermediate polar angle as shown in Fig. 4(b). _Conclusion.--_In this work, we have exploited the Friedrich-Wintgen mechanism to demonstrate the formation of subradiant singly and doubly excited eigenstates with given orbital momentum in ring oligomers. We have shown that the oligomers can be viewed as two subsystems of emitters supporting states that interact if they possess the same symmetry. The proposed mechanism relies on the destructive interference between the subsystems resulting in the formation of antisymmetric states characterized by suppressed radiative losses for pre-optimized oligomer geometry. The suggested approach is not limited to the systems considered in this work and can be applied to control radiative losses of multiphoton states in various open quantum systems. The authors acknowledge Kristina Frizyuk, Andrey Bogdanov, Ivan Iorsh, and Yuri Kivshar for fruitful discussions. The work was supported by the Russian Academic Leadership Program Priority 2030.
2309.04114
New Consistency Relations between Averages and Variances of Weakly Lensed Signals of Gravitational Waves
The lensing of gravitational waves (GWs) occurs when GWs experience local gravitational potential. In the weak lensing regime, it has been reported that a simple consistency relation holds between the variances of the magnification and phase modulation. In this paper, we present two additional consistency relations between the averages and variances of the weakly lensed GW signals in wave optics. We demonstrate that these consistency relations are derived as the weak lensing limit of the full-order relations for the averages of the amplification factor and its absolute square. These full-order relations appear to originate from energy conservation and the Shapiro time delay, and they are demonstrated to hold irrespective of the matter distribution.
Morifumi Mizuno, Teruaki Suyama, Ryuichi Takahashi
2023-09-08T04:27:10Z
http://arxiv.org/abs/2309.04114v2
# New Consistency Relations between Averages and Variances of Weakly Lensed Signals of Gravitational Waves

###### Abstract

The lensing of gravitational waves (GWs) occurs when GWs experience local gravitational potential. In the weak lensing regime, it has been reported that a simple consistency relation holds between the variances of the magnification and phase modulation. In this paper, we present two additional consistency relations between the averages and variances of the weakly lensed GW signals in wave optics. We demonstrate that these consistency relations are derived as the weak lensing limit of the full-order relations for the averages of the amplification factor and its absolute square. These full-order relations appear to originate from energy conservation and the Shapiro time delay, and they are demonstrated to hold irrespective of the matter distribution.

## I Introduction

The direct detection of gravitational waves (GWs) from binary black holes [1] and the detection of background GWs [2] have marked the onset of the GW astronomy era. With ongoing observations and the expectation of future discoveries in the coming decade, our understanding of the Universe is set to reach new depths [3]. Gravitational lensing, which has been extensively studied in the context of light [4; 5], also occurs for GWs [6; 7]. Although the detection of lensed GW signals has not been reported to date, experimental efforts are underway to search for its evidence [8]. On the theoretical front, the lensing of GWs has been an active research subject. For example, gravitational lensing of GWs can enhance the amplitude of GWs, thereby causing the high tail of the redshifted mass distribution of black hole binaries [9; 10]. Note that there are distinct differences between the lensing of light and that of GWs, which is primarily due to the much longer wavelength of GWs.
These differences give rise to the wave optics effect, primarily interference and diffraction, which can be used to extract complementary information about the lensing objects [11; 12; 13; 14]. Specifically, lensing in wave optics is frequency dependent and involves a complex-valued quantity, i.e., the amplification factor, while in geometric optics, lensing effects arise simply due to light following the null geodesics in the curved space-time. Thus, measuring the amplification factor across a wide range of frequencies enables us to study the additional properties of lensing objects that cannot be captured in geometric optics. In the weak lensing regime, the lensing of GWs is insensitive to structures smaller than the Fresnel scale [15; 16]. This feature can be exploited to probe the small-scale matter density fluctuations corresponding to the Fresnel scale of detectable GWs [17; 18; 16]. If the observed GWs are enhanced due to strong lensing, the weak lensing signals superimposed on them would also be enhanced and more easily discerned [19]. Weak lensing analysis is based on the Born approximation, and its precision has been investigated by including the post-Born corrections [20]. There, it is shown that the averages of the magnification and phase modulation become biased once the post-Born corrections are included. In these weak lensing studies of GWs, it has been demonstrated that the variances of the magnification and phase modulation satisfy a universal and very simple relation [21]. While its physical meaning was not identified at the time, this relation provides a nontrivial connection between the real and imaginary parts of the amplification factor (thus, the consistency relation) and holds irrespective of the shape of the matter power spectrum. In addition, another consistency relation for the real and imaginary parts of the amplification factor, i.e., the GW version of the Kramers-Kronig relation, has been reported [22].
In this paper, we demonstrate the existence of two additional consistency relations for the averages and variances of the magnification and phase modulation. In doing so, we review the weak lensing of GWs in wave optics and show that the averages of the magnification and phase modulation are nonzero at the level of the post-Born approximation. Then, we explain how the additional consistency relations hold and argue that these relations, as well as the relation derived by [21], can be understood as the weak lensing limit of more comprehensive relations that hold to infinite order in the gravitational potential. Importantly, one relation emerges as a consequence of the energy conservation law of GWs, and the second additional relation and a previously reported relation (Eq. (22)) are attributed to the Shapiro time delay. Interpreting lensing as a consequence of the Shapiro time delay appears to provide a physical explanation for the question raised by [21]. The rest of the paper is organized as follows. In section II, the weak lensing of GWs is reviewed and the key quantities (i.e., the averages and variances of the magnification and phase modulation) are derived. In section III, the existence of two additional consistency relations is demonstrated, and their physical meaning (energy conservation and the Shapiro time delay) as well as their significance in observations is discussed. Section IV concludes the paper. Throughout this paper, we take \(c=1\) and \(\hbar=1\).

## II Weak lensing of gravitational waves

In most astronomical situations, perturbations to the relevant metric due to the presence of matter clumps are small, and the space-time metric is given as follows: \[ds^{2}=-\left(1+2\Phi\right)dt^{2}+\left(1-2\Phi\right)d\mathbf{x}^{2}, \tag{1}\] where \(\Phi\) is the Newtonian gravitational potential.
In this case, the wave equation for the amplitude of GWs \(\phi\) can be expressed as follows: \[\nabla^{2}\phi-\left(1-4\Phi\right)\frac{\partial^{2}\phi}{\partial t^{2}}=0, \tag{2}\] where we assume that \(\Phi\) varies very slowly with time and ignore its time derivative. The derivation of this equation is predicated on certain assumptions, including the consideration of a small gravitational potential \(|\Phi|\ll 1\) and omission of polarization effects, as well as the assumption that the typical curvature radius induced by \(\Phi\) is much larger than the wavelength of GWs. While the details are beyond the scope of this paper, a rigorous derivation of the wave equation can be found in the literature [13; 23; 24]. Note that the expansion of the Universe is ignored in both Eqs. (1) and (2). However, the inclusion of the expansion does not change these equations once \(t\) and \(\mathbf{x}\) are replaced with the conformal time and comoving distance, with an associated redefinition of GWs due to attenuation of their amplitude as \(\phi\rightarrow\phi/a\) [25]. The lensing effect is commonly described in terms of the amplification factor, which is defined as the ratio of the lensed waveform to the unlensed waveform in the frequency domain, i.e., \(F(\omega)\equiv\tilde{\phi}(\omega)/\tilde{\phi}_{0}(\omega)\). Under the assumption that the typical wavelength of GWs is much smaller than the spatial variation of \(F(\omega)\), Eq. (2) is rewritten as follows: \[i\frac{\partial F}{\partial\chi}+\frac{1}{2\omega\chi^{2}}\nabla_{\theta}^{2} F=2\omega\Phi F. \tag{3}\] In this expression, the coordinates \((\chi,\mathbf{\theta})\) are chosen such that the source of the GWs is located at the origin, where \(\chi\) and \(\mathbf{\theta}\) represent the distance from the source and the angular coordinate, respectively. Note that the solution to this equation is generally nonlinear in \(\Phi\) even though \(|\Phi|\ll 1\) is assumed.
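Equation (3) is formally a paraxial Schrödinger-type equation in \(\chi\), which suggests a simple numerical treatment. The sketch below integrates it with a split-step Fourier method on a small periodic angular patch; this solver, the toy Gaussian potential, and all grid parameters are illustrative assumptions, not part of the paper's analytic treatment:

```python
import numpy as np

def propagate(Phi, omega, chi_grid, L):
    """Split-step integration of i dF/dchi + lap_theta F / (2 omega chi^2)
    = 2 omega Phi F on a periodic n x n angular patch of size L (radians)."""
    n = Phi.shape[0]
    F = np.ones((n, n), dtype=complex)          # unlensed initial condition
    l = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    l2 = l[:, None]**2 + l[None, :]**2
    for i in range(len(chi_grid) - 1):
        chi = 0.5 * (chi_grid[i] + chi_grid[i + 1])
        dchi = chi_grid[i + 1] - chi_grid[i]
        F *= np.exp(-1j * omega * Phi * dchi)    # half potential kick
        F = np.fft.ifft2(np.exp(-1j * l2 * dchi / (2 * omega * chi**2))
                         * np.fft.fft2(F))       # kinetic step in Fourier space
        F *= np.exp(-1j * omega * Phi * dchi)    # half potential kick
    return F

n, L, omega = 64, 0.1, 50.0
theta = np.linspace(0, L, n, endpoint=False)
Phi = 1e-3 * np.exp(-((theta[:, None] - L / 2)**2
                      + (theta[None, :] - L / 2)**2) / (L / 8)**2)
chi_grid = np.linspace(1.0, 2.0, 201)
F = propagate(Phi, omega, chi_grid, L)
print(np.abs(F).mean())   # lensed interference pattern across the patch
```

Since both sub-steps are pure phase factors, the scheme conserves \(\int|F|^{2}d^{2}\theta\) for real \(\Phi\), and \(F\equiv 1\) is recovered exactly when \(\Phi=0\).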
This is because the effect of the higher-order terms in \(\Phi\) in Eq. (3) appears equivalently as higher-order terms in \(\mathcal{O}(\Phi\omega\chi_{s})\), where \(\chi_{s}\) is the distance from the source to the observer, and this is not necessarily small even if \(\Phi\ll 1\). Physically, this implies that the phase change of GWs during propagation from the source to the observer becomes significant and leads to complex nonlinear interference effects. For this reason, it is necessary to solve this equation to full order in \(\Phi\) to obtain the comprehensive lensing effects. On the other hand, in the context of weak lensing, it is assumed that \(\Phi\) is sufficiently small that the expansion of \(F\) in \(\Phi\) up to first order provides a reasonable estimate of the true value of the amplification factor. This approximation (i.e., the Born approximation) is primarily used to probe the small-scale power spectrum [16; 17]. In the Born approximation, the real and imaginary parts of the amplification factor are defined as the magnification \(K\) and phase modulation \(S\), which are functions of the GW frequency \(\omega\), the line of sight comoving distance \(\chi_{s}\) to the source, and the angular coordinate \(\mathbf{\theta}\) perpendicular to the line of sight. In this definition, \(K\) is related to the absolute value of \(F\), and \(S\) is interpreted as the argument of \(F\). Following the Born approximation, a systematic scheme to handle post-Born corrections was formulated by [20], which introduced a new definition of \(S\) and \(K\) as \(F(\omega)\equiv e^{K(\omega)}e^{iS(\omega)+i\omega\Delta_{s}}\). Here, \(\omega\Delta_{s}\) is a shift of the phase due to the Shapiro time delay and is separated from \(S(\omega)\) as the Shapiro time delay is not directly observable.
In the post-Born approximation, \(K\) and \(S\) are computed to second order in \(\Phi\) as follows: \[S^{(1)}= -2\omega\int_{0}^{\chi_{s}}d\chi\left[\cos\left[\frac{W(\chi,\chi_{s})\nabla_{\theta}^{2}}{2\omega}\right]-1\right]\Phi, \tag{4}\] \[S^{(2)}= -2\omega\int_{0}^{\chi_{s}}\frac{d\chi}{\chi^{2}}\int_{0}^{\chi}d \chi_{1}\int_{0}^{\chi}d\chi_{2}\left[\cos\left[\frac{(W\nabla)^{(2)}}{2 \omega}\right]-1\right](\nabla_{\theta 1}\Phi_{1}\cdot\nabla_{\theta 2}\Phi_{2}), \tag{5}\] \[K^{(1)}= 2\omega\int_{0}^{\chi_{s}}d\chi\sin\left[\frac{W(\chi,\chi_{s})\nabla_{\theta}^{2}}{2\omega}\right]\Phi, \tag{6}\] \[K^{(2)}= 2\omega\int_{0}^{\chi_{s}}\frac{d\chi}{\chi^{2}}\int_{0}^{\chi} d\chi_{1}\int_{0}^{\chi}d\chi_{2}\sin\left[\frac{(W\nabla)^{(2)}}{2\omega} \right](\nabla_{\theta 1}\Phi_{1}\cdot\nabla_{\theta 2}\Phi_{2}), \tag{7}\] where \(\Phi_{1(2)}=\Phi(\chi_{1(2)},\mathbf{\theta})\), \(W(\chi,\chi_{s})=1/\chi-1/\chi_{s}\), and \[(W\nabla)^{(2)}= W(\chi,\chi_{s})\nabla_{\theta 12}^{2}+W(\chi_{1},\chi)\nabla_{ \theta 1}^{2}+W(\chi_{2},\chi)\nabla_{\theta 2}^{2}. \tag{8}\] In addition, the Shapiro time delay is given in the same manner up to second order as follows: \[\Delta_{s}^{(1)}= -2\int_{0}^{\chi_{s}}\Phi d\chi, \tag{9}\] \[\Delta_{s}^{(2)}= -2\int_{0}^{\chi_{s}}\frac{d\chi}{\chi^{2}}\int_{0}^{\chi}d\chi_{1 }\int_{0}^{\chi}d\chi_{2}\nabla_{\theta 1}\Phi_{1}\cdot\nabla_{\theta 2}\Phi_{2}. \tag{10}\] Note that the derivatives are taken with respect to \(\mathbf{\theta}\), with the operator \(\nabla_{\theta 12}^{2}\) acting on both \(\Phi_{1}\) and \(\Phi_{2}\), while \(\nabla_{\theta 1(2)}\) only acts on \(\Phi_{1(2)}\). In addition, the integral is taken along the straight line connecting the source and observer. In these expressions, the first-order terms \(S^{(1)}\) and \(K^{(1)}\) constitute the Born approximation, where \(K^{(1)}\) reduces to the linear-order convergence \(\kappa\) in geometric optics in the high-frequency limit.
As is common in the context of weak lensing in geometric optics, the lensing signals are treated as random variables and the averages \(\langle\cdots\rangle\) of these quantities are considered. Using the power spectrum of the gravitational potential \(\Phi\) combined with the Limber approximation, it can be shown that the following holds for arbitrary functions \(F(\mathbf{y})\) and \(G(\mathbf{y})\) of the two-dimensional vector \(\mathbf{y}\): \[\left\langle\left(F(\nabla_{\theta 1})\Phi_{1}\right)\left(G(\nabla_{\theta 2})\Phi_{2}\right)\right\rangle=\delta^{D}( \chi_{1}-\chi_{2})\int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}F(i\chi_{1}\mathbf{k}_{ \perp})G(-i\chi_{1}\mathbf{k}_{\perp})P_{\Phi}(k_{\perp},\chi_{1}). \tag{11}\] With this relation and Eqs. (4)-(7), we obtain \[\langle S\rangle= 2\omega\int_{0}^{\chi_{s}}\frac{d\chi}{\chi^{2}}\int_{0}^{\chi}d \chi_{1}\,\chi_{1}^{2}\int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}k_{\perp}^{2}\left(1-\cos \left[\frac{(\chi-\chi_{1})\chi_{1}}{\chi\omega}k_{\perp}^{2}\right]\right)P_{ \Phi}(k_{\perp},\chi_{1}), \tag{12}\] \[\langle K\rangle= -2\omega\int_{0}^{\chi_{s}}\frac{d\chi}{\chi^{2}}\int_{0}^{\chi} d\chi_{1}\,\chi_{1}^{2}\int\frac{d^{2}\mathbf{k}_{\perp}}{(2\pi)^{2}}k_{\perp}^{2}\sin \left[\frac{(\chi-\chi_{1})\chi_{1}}{\chi\omega}k_{\perp}^{2}\right]P_{\Phi}( k_{\perp},\chi_{1}), \tag{13}\] for the averages, and \[\langle S^{2}\rangle= 4\omega^{2}\int_{0}^{\chi_{s}}d\chi\int\frac{d^{2}\mathbf{k}_{\perp} }{(2\pi)^{2}}\left[1-\cos\left(\frac{(\chi_{s}-\chi)\chi}{2\chi_{s}\omega} k_{\perp}^{2}\right)\right]^{2}P_{\Phi}(k_{\perp},\chi), \tag{14}\] \[\langle K^{2}\rangle= 4\omega^{2}\int_{0}^{\chi_{s}}d\chi\int\frac{d^{2}\mathbf{k}_{\perp} }{(2\pi)^{2}}\sin^{2}\left[\frac{(\chi_{s}-\chi)\chi}{2\chi_{s}\omega}k_{ \perp}^{2}\right]P_{\Phi}(k_{\perp},\chi), \tag{15}\] \[\langle SK\rangle= -4\omega^{2}\int_{0}^{\chi_{s}}d\chi\int\frac{d^{2}\mathbf{k}_{\perp} }{(2\pi)^{2}}\sin\left[\frac{(\chi_{s}-\chi)\chi}{2\chi_{s}\omega}k_{\perp}^{2}\right]\left(1-\cos\left[\frac{(\chi_{s}-\chi)\chi}{2\chi_{s}\omega }k_{\perp}^{2}\right]\right)P_{\Phi}(k_{\perp},\chi), \tag{16}\] for the variances and the correlation between \(S\) and \(K\). In these expressions, the scale at which the argument of the trigonometric functions becomes order unity provides a rough scale to which GWs are particularly sensitive. This particular scale is referred to as the Fresnel scale \(r_{F}=\sqrt{\chi(\chi_{s}-\chi)/\chi_{s}\omega}\). In the context of lensing of GWs, the Fresnel scale is expressed as follows [15; 16]: \[r_{F}\sim 120\,\text{pc}\left(\frac{f}{\text{mHz}}\right)^{-1/2}\left[\frac{ \chi(\chi_{s}-\chi)/\chi_{s}}{10\,\text{Gpc}}\right]^{1/2}, \tag{17}\] where \(f=\omega/2\pi\). The Fresnel scale varies with the GW frequency \(\omega\); thus, measuring the frequency dependence of \(\langle S^{2}\rangle\), \(\langle K^{2}\rangle\), \(\langle SK\rangle\), \(\langle S\rangle\), and \(\langle K\rangle\) is expected to be a unique probe of density fluctuations at scales as small as \(k\simeq 10^{6}-10^{8}\,\text{Mpc}^{-1}\) for \(f=10-1000\) Hz [17; 16; 20]. Since the frequency dependence becomes relevant in the following discussion, the notations \(S_{\omega}\) and \(K_{\omega}\) are used to indicate the frequency dependence of each lensing signal (e.g., \(\langle S_{\omega}\rangle=\langle S(\omega)\rangle\)).

## III Consistency relations

The expressions for the averages (Eqs. (12) and (13)) can be simplified by exchanging the order of integration, \(\int_{0}^{\chi_{s}}d\chi\int_{0}^{\chi}d\chi_{1}=\int_{0}^{\chi_{s}}d\chi_{1}\int_{\chi_{1}}^{\chi_{s}}d\chi\).
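As a quick numerical cross-check of Eq. (17), restoring the factor of \(c\) that is set to unity in the text (the conversion constants below are standard values):

```python
import numpy as np

PC_M = 3.0857e16          # parsec in meters
C = 2.998e8               # speed of light, m/s

def fresnel_scale_pc(f_hz, chi_eff_gpc):
    """r_F = sqrt(c * [chi (chi_s - chi) / chi_s] / omega), in parsecs."""
    chi_eff = chi_eff_gpc * 1e9 * PC_M   # effective distance in meters
    omega = 2 * np.pi * f_hz
    return np.sqrt(C * chi_eff / omega) / PC_M

print(fresnel_scale_pc(1e-3, 10.0))   # ~120 pc for f = 1 mHz, 10 Gpc
print(fresnel_scale_pc(100.0, 10.0))  # sub-pc scale in the ground-based band
```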
Then it is straightforward to obtain the following: \[\langle S_{\omega}\rangle= 2\omega^{2}\int_{0}^{\chi_{s}}d\chi_{1}\int\frac{d^{2}\mathbf{k}_{ \perp}}{(2\pi)^{2}}\left(\frac{(\chi_{s}-\chi_{1})\chi_{1}}{\chi_{s} \omega}k_{\perp}^{2}-\sin\left(\frac{(\chi_{s}-\chi_{1})\chi_{1}}{\chi_{s }\omega}k_{\perp}^{2}\right)\right)P_{\Phi}(k_{\perp},\chi_{1}), \tag{18}\] \[\langle K_{\omega}\rangle= -4\omega^{2}\int_{0}^{\chi_{s}}d\chi_{1}\int\frac{d^{2}\mathbf{k}_{ \perp}}{(2\pi)^{2}}\sin^{2}\left[\frac{(\chi_{s}-\chi_{1})\chi_{1}}{2\chi_{s} \omega}k_{\perp}^{2}\right]P_{\Phi}(k_{\perp},\chi_{1}). \tag{19}\] By comparing these expressions with Eqs. (14)-(16), we readily find the following consistency relations, which are accurate up to second order in \(\Phi\): \[\langle K_{\omega}^{2}\rangle+\langle K_{\omega}\rangle= 0, \tag{20}\] \[\langle S_{\omega}\rangle-\frac{1}{2}\left\langle S_{2\omega} \right\rangle= -\langle S_{\omega}K_{\omega}\rangle\,. \tag{21}\] Note that, to the best of our knowledge, these consistency relations have not been previously reported. These relations involve the averages of \(S\) and \(K\), which vanish in the Born approximation and only appear at the level of the post-Born approximation. The discovery of these relations was made possible by considering the post-Born approximation within the wave optics framework. In addition, the consistency relations derived here can provide new insight into an existing consistency relation derived by [21], which is explicitly expressed as follows: \[\langle S_{\omega}^{2}\rangle+\langle K_{\omega}^{2}\rangle=\langle K_{2\omega }^{2}\rangle\,. \tag{22}\] We observe that this consistency relation can be merged with Eq. (21) as a single consistency relation for a complex-valued quantity using Eq. (20). By combining Eqs.
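These relations can be verified directly at the level of the integrands: writing \(x=(\chi_{s}-\chi)\chi k_{\perp}^{2}/(\chi_{s}\omega)\) and setting \(\omega=1\), \(P_{\Phi}=1\) (the common factors drop out of Eqs. (20)-(22)), each relation reduces to a trigonometric identity in \(x\). A short numerical sanity check:

```python
import numpy as np

# dimensionless argument x = (chi_s - chi) chi k_perp^2 / (chi_s omega)
x = np.linspace(0.01, 20.0, 2000)

# integrands of Eqs. (14)-(16), (18), (19) at omega = 1, P_Phi = 1:
S_avg = 2 * (x - np.sin(x))                       # <S_omega>,    Eq. (18)
K_avg = -4 * np.sin(x / 2)**2                     # <K_omega>,    Eq. (19)
S2    = 4 * (1 - np.cos(x / 2))**2                # <S_omega^2>,  Eq. (14)
K2    = 4 * np.sin(x / 2)**2                      # <K_omega^2>,  Eq. (15)
SK    = -4 * np.sin(x / 2) * (1 - np.cos(x / 2))  # <S K>,        Eq. (16)

# at frequency 2*omega the argument halves and the prefactor quadruples:
S_avg_2w = 8 * (x / 2 - np.sin(x / 2))            # <S_{2omega}>
K2_2w    = 16 * np.sin(x / 4)**2                  # <K_{2omega}^2>

print(np.max(np.abs(K2 + K_avg)))                 # Eq. (20): 0
print(np.max(np.abs(S_avg - S_avg_2w / 2 + SK)))  # Eq. (21): 0
print(np.max(np.abs(S2 + K2 - K2_2w)))            # Eq. (22): 0
```

All three residuals vanish for every \(x\), illustrating that the relations hold irrespective of the shape of \(P_{\Phi}(k_{\perp},\chi)\).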
(21) and (22), we obtain the following equivalent consistency relation: \[\langle K_{\omega}+iS_{\omega}\rangle-\frac{1}{2}\left\langle K_{2\omega}+iS_{2 \omega}\right\rangle=-\frac{1}{2}\left\langle\left(K_{\omega}+iS_{\omega}\right)^{ 2}\right\rangle. \tag{23}\] In the following, we demonstrate that these relations can be derived as the weak lensing limit of more general relations that are accurate to full order in \(\Phi\). In particular, we demonstrate that the consistency relation (20) arises from the energy conservation of GWs. A similar relation for the convergence \(\kappa\) (\(\langle\kappa^{2}\rangle=-2\left\langle\kappa\right\rangle\)) [26; 27] is derived under the photon number conservation in geometric optics [28]; however, the discussion based on energy conservation is more general because it includes both geometric and wave optics. On the other hand, the consistency relations (21) and (22) appear to be attributed to the Shapiro time delay, which is discussed in subsection III.3.

### Ensemble average

The main results presented above, i.e., Eqs. (20) and (21), are based on the computation of the average \(\left\langle\cdots\right\rangle\) without paying particular attention to its meaning. However, it is important to revisit the meaning of the average to ensure a precise understanding of its implications, particularly in relation to the energy conservation law. In addition, it is also essential for determining how the average should be practically taken in future experimental settings. The average considered to this point in this paper is referred to as the ensemble average [29; 30], which hypothetically assumes the existence of multiple universes, each with different matter density configurations.
In this scenario, we can compute the lensing signal \(X(\chi_{s},\mathbf{\theta})\) (e.g., \(S,K\) in wave optics and \(\kappa,\gamma\) in geometric optics) by considering the GW (or light) signals from the same source at a fixed distance \(\chi_{s}\) in each realization. Note that since \(X\) describes the lensing effect, it does not depend on the physical properties of the source. The ensemble average is then obtained by taking the average value of \(X\) over the ensemble of universes. This is the original meaning of the ensemble average that we implicitly assumed in the previous discussion. In cosmology, it is presumed that the universe is statistically homogeneous and isotropic, meaning that the average of all realizations of universes is homogeneous and isotropic, even if each individual realization is not necessarily so. This implies that the spatial derivative of \(X\) with respect to the true location of the source always vanishes; thus, we obtain the following: \[\nabla_{\mathbf{\theta}}\left\langle X(\mathbf{\theta})\right\rangle=0. \tag{3.7}\] However, in reality, we only have access to a single realization of the universe, thereby making the true ensemble average unattainable. Therefore, it becomes necessary to replace the ensemble average with a statistically computable averaging process. In a statistically homogeneous and isotropic universe, one can find that the ensemble average is approximated by the average over the observers, which represents the mean value of \(X\) measured by a number of observers uniformly populated on the surface of a sphere with radius \(\chi_{s}\) surrounding a single source. This allows us to rewrite \(\left\langle X(\mathbf{\theta})\right\rangle\) as follows: \[\left\langle X\right\rangle=\frac{1}{4\pi}\int X(\mathbf{\theta})d\Omega, \tag{3.8}\] where \(\mathbf{\theta}\) is the location of the observers on the surface of a sphere with radius \(\chi_{s}\) surrounding the source.
However, we can only observe the source from the Earth; thus, it remains unfeasible to directly compute the average over the observers. In practice, \(\left\langle X\right\rangle\) is taken as the average over the sources, which represents the mean value of \(X\) computed from various sources located at the same fixed distance \(\chi_{s}\). It is obtained by simply summing all lensing signals \(X\) from the sources at \(\chi_{s}\) and dividing the sum by the number of sources. As long as each individual source is fully resolved, the average over the sources can be identified as the ensemble average. In our context, we focus on a GW signal from binary systems where each individual source can be identified; thus, the ensemble averages of the lensing signals derived in the previous section (\(\left\langle S^{2}\right\rangle,\left\langle K^{2}\right\rangle\), etc.) should be taken as the average over the sources. It is important to emphasize that \(\left\langle X\right\rangle\), as discussed above, should not be confused with the average over the apparent directions of the sources within the framework of geometric optics. The average over the directions is another approach commonly used in cosmology to compute the average of \(X\) [31; 32] and is computed in a practical manner by dividing the celestial sphere into small patches with equal area and averaging \(X\) over these patches. The difference between the average over the sources and the average over the directions may seem subtle and indeed can be disregarded within the Born approximation (i.e., the first-order approximation of \(X\)). However, when the higher-order terms are taken into account, making the distinction between these two becomes crucial, and failure to do so results in erroneous outcomes [33; 27].

### Energy conservation

Before delving into the main discussion, it is important to consider the meaning of the energy of GWs.
Although defining the energy of GWs is not as simple as in the case of electromagnetic waves, it is still possible to assign energy to GWs as a conserved quantity when there is a clear separation of scales [34]. In the context of gravitational lensing, there are two types of metric perturbations: the gravitational potential \(\Phi\) due to the presence of matter inhomogeneity and the metric perturbation caused by the GWs themselves. Here we assume that the wavelength of GWs is much shorter than the typical curvature radius of the gravitational potential; thus, the metric perturbation associated with GWs can be separated from the background metric. As a result, we can treat GWs as a classical field just like any other field living in an inhomogeneous universe described by Eq. (1). This approach enables us to identify a conserved quantity corresponding to the energy of GWs [25]. With this in mind, we can observe that Eq. (2) is essentially a wave equation with the lensing effect included as an interaction between GWs and the gravitational potential \(\Phi\). Thus, it can be rewritten as follows: \[\frac{\partial}{\partial t}\left(\frac{1}{2}(\nabla\phi)^{2}+\frac{1}{2}\dot{ \phi}^{2}-2\Phi\dot{\phi}^{2}\right)=-\nabla\cdot(-\dot{\phi}\nabla\phi). \tag{3.9}\] Now, let us consider the volume integral over the region \(V\) whose surface is denoted as \(S\). Here, since the energy of GWs in a certain region is given by Eq. (A.7), we can connect Eq. (3.9) with the energy conservation law by taking the time average of Eq. (3.9) in addition to the spatial integral. Then, Eq. (3.9) can be rewritten as follows: \[\frac{dE}{dt}=-\frac{1}{16\pi G}\int_{t}^{t+T}\frac{dt^{\prime}}{T}\int_{S}dS\, \mathbf{n}\cdot(-\dot{\phi}\nabla\phi), \tag{3.10}\] where \(\mathbf{n}\) is a unit normal vector at each point on \(S\) and \(T\) is a range of time average, which is taken to be much longer than the period of GWs.
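The local conservation law (3.9) is pure algebra once the wave equation (2) is used; it can be verified symbolically, here reduced to one spatial dimension for brevity (a sketch; the three-dimensional case works identically):

```python
import sympy as sp

x, t = sp.symbols('x t')
phi = sp.Function('phi')(x, t)
Phi = sp.Function('Phi')(x)          # static gravitational potential

# energy density and flux divergence appearing in Eq. (3.9), in 1D:
energy = (sp.Rational(1, 2) * sp.diff(phi, x)**2
          + sp.Rational(1, 2) * sp.diff(phi, t)**2
          - 2 * Phi * sp.diff(phi, t)**2)
flux_div = sp.diff(-sp.diff(phi, t) * sp.diff(phi, x), x)

# 1D analogue of the wave equation (2): phi_xx - (1 - 4 Phi) phi_tt = 0
wave_op = sp.diff(phi, x, 2) - (1 - 4 * Phi) * sp.diff(phi, t, 2)

# d(energy)/dt + div(flux) = -phi_t * (wave operator), so it vanishes on shell
residual = sp.expand(sp.diff(energy, t) + flux_div
                     + sp.diff(phi, t) * wave_op)
print(residual)   # 0
```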
From this expression, it is clear that the left-hand side represents the average rate at which the total energy in the region \(V\) varies, and the right-hand side represents the average energy flow going into \(V\). Thus, when the sign of the right-hand side is flipped, it is interpreted as the energy going out from \(V\). Suppose the GW source is at the origin of the coordinates and \(\phi\) is the superposition of the different frequency modes: \[\phi(\mathbf{x},t)=\int\frac{d\omega}{2\pi}\frac{e^{i\omega t-i\omega\chi}}{\chi}h( \omega)F(\omega,\mathbf{x}), \tag{3.11}\] where \(h(\omega)\) is the Fourier transform of the original waveform. Next, we consider a sphere with the radius \(\chi\). By taking the surface integral over this sphere and the time average, we obtain the following: \[\int_{t}^{t+T}\frac{dt^{\prime}}{T}\int_{S}dS\,\mathbf{n}\cdot(-\dot{\phi}\nabla\phi)= \frac{1}{T}\int\frac{d\omega}{2\pi}\,\omega^{2}|h(\omega)|^{2}\int d\Omega\,F (\omega)F^{*}(\omega), \tag{3.12}\] where \(\Omega\) is a solid angle. Therefore, we obtain the following: \[\frac{dE}{dt}= -\frac{1}{16\pi GT}\int\frac{d\omega}{2\pi}\,\omega^{2}|h(\omega) |^{2}\int d\Omega\,F(\omega)F^{*}(\omega). \tag{3.13}\] When the GW source is completely confined in the region \(V\) and there are no objects in \(V\) that absorb or produce GWs, then the right-hand side, especially \(\int d\Omega F(\omega)F^{*}(\omega)\), becomes independent of the radius of a sphere \(\chi\) surrounding the source. In addition, the left-hand side is independent of the matter distribution in the region \(V\) assuming that the gravitational potential does not significantly change over time; thus, the right-hand side is also not subject to this dependence. Given that \(F=1\) when there are no lensing effects, \(\int d\Omega FF^{*}\) needs to be normalized as follows: \[1=\frac{1}{4\pi}\int d\Omega FF^{*}.
\tag{3.14}\] The right-hand side is the average of \(FF^{*}\) over the observers, and it is identical to both the ensemble average and the average over the sources; thus, we obtain the following relation for the average of the absolute square of \(F\): \[\langle FF^{*}\rangle=1. \tag{3.15}\] In our notation, the magnification \(K\) and the phase modulation \(S\) are defined through \(F=e^{K+iS+i\omega\Delta_{s}}\), which allows us to rewrite the energy conservation condition as \(\left\langle e^{2K}\right\rangle=1\). In a weak lensing regime, \(K\) is sufficiently smaller than unity and the Taylor expansion of \(e^{2K}\) up to second order in \(K\) provides \(e^{2K}=1+2K+2K^{2}+\mathcal{O}(K^{3})\). From this expression, it is clear that, up to second order in \(\Phi\), \(\langle K\rangle+\left\langle K^{2}\right\rangle=0\) needs to hold. One noteworthy aspect of the relation \(\langle FF^{*}\rangle=1\) is its generality. It is a full-order result and does not assume any specific distribution of matter.

### Average of amplification factor

In the following, we explain a more general way to derive the consistency relations (3.4) and (3.5). The physical interpretation of these relations may not be as clear as the consistency relation associated with energy conservation; however, they can still be derived from a more general, full-order condition, similar to how \(\langle K\rangle=-\langle K^{2}\rangle\) is directly derived from \(\langle FF^{*}\rangle=1\). By observing Eq. (2.3), it is clear that the expression takes the same form as the Schrödinger equation with time-varying mass.
Therefore, it is possible to obtain the formal solution to this equation using the path integral method, as presented by [23]: \[F(\omega,\chi_{s},\mathbf{\theta}_{s})=\int\mathcal{D}[\theta(\chi)]\exp\left[i \int_{0}^{\chi_{s}}\left(\frac{1}{2}\omega\chi^{2}\left|\frac{d\theta(\chi)} {d\chi}\right|^{2}-2\omega\Phi(\chi,\theta(\chi))\right)d\chi\right], \tag{3.16}\] where the normalization factor is absorbed in \(\mathcal{D}[\theta(\chi)]\) and is determined to satisfy \(F=1\) when \(\Phi=0\). Now, we consider taking the ensemble average of this expression. When the ensemble average is taken, the only random variable that appears in this expression is \(\Phi\). Thus, \(\langle F\rangle\) is given as follows: \[\langle F\rangle=\int\mathcal{D}[\theta(\chi)]\exp\left[i\frac{1}{2}\omega \int_{0}^{\chi_{s}}\chi^{2}\left|\frac{d\theta(\chi)}{d\chi}\right|^{2}d\chi \right]\left\langle e^{-2i\omega\int_{0}^{\chi_{s}}\Phi(\chi,\mathbf{\theta}(\chi))d\chi}\right\rangle. \tag{3.17}\] Here, the computation of the \(n\)-point correlation function \(\langle\Phi(\chi_{1},\mathbf{\theta}(\chi_{1}))\cdots\Phi(\chi_{n},\mathbf{\theta}(\chi_{n }))\rangle\) is required to obtain \(\left\langle\exp\left[-2i\omega\int_{0}^{\chi_{s}}\Phi(\chi,\mathbf{\theta}(\chi))d\chi \right]\right\rangle\). By considering the spatial homogeneity and the assumption that the potential \(\Phi\) evaluated at different \(\chi\) is uncorrelated (the Limber approximation), we obtain \(\left\langle\exp\left[-2i\omega\int_{0}^{\chi_{s}}\Phi(\chi,\mathbf{\theta}(\chi))d \chi\right]\right\rangle=\left\langle\exp\left[-2i\omega\int_{0}^{\chi_{s}} \Phi(\chi,\mathbf{\theta}_{s})d\chi\right]\right\rangle\)\({}^{3}\). Then, \(\langle F\rangle\) is further simplified as follows: \[\langle F(\chi_{s},\mathbf{\theta}_{s})\rangle=\left\langle\exp\left(-2i\omega \int_{0}^{\chi_{s}}\Phi(\chi,\mathbf{\theta}_{s})d\chi\right)\right\rangle=\left \langle e^{i\omega\Delta_{s}^{(1)}}\right\rangle.
\tag{3.18}\] This is a surprisingly simple relation that is accurate to full order. Here, \(F\) is written as \(F=e^{K(\omega)+iS(\omega)+i\omega\Delta_{s}}\); thus, this expression can be formally expanded in \(\Phi\) as follows: \[1+\langle K+iS+i\omega\Delta_{s}\rangle+\frac{1}{2}\left\langle(K+iS+i\omega \Delta_{s})^{2}\right\rangle+\mathcal{O}(\Phi^{3})= 1-\frac{\omega^{2}}{2}\left\langle(\Delta_{s}^{(1)})^{2}\right\rangle+ \mathcal{O}(\Phi^{3}). \tag{3.19}\] From this relation and Eqs. (2.4)-(2.10), we obtain the following expression up to second order in \(\Phi\): \[\langle K_{\omega}+iS_{\omega}\rangle-\frac{1}{2}\left\langle K_{2\omega}+iS_ {2\omega}\right\rangle=-\frac{1}{2}\left\langle(K_{\omega}+iS_{\omega})^{2} \right\rangle. \tag{3.20}\] This is nothing other than Eqs. (3.4) and (3.5). In addition, the expressions of the consistency relations (3.4) and (3.5) are based partly on the Limber approximation, which was not assumed in the derivation of the consistency relation associated with energy conservation. A notable difference between the consistency relations (3.4) and (3.5) and the one related to energy conservation (3.3) is that Eqs. (3.4) and (3.5) establish a nontrivial connection between the real and imaginary parts of the amplification factor (i.e., the magnification \(K\) and the phase modulation \(S\) in weak lensing). Here, we propose that this nontrivial relation arises from the Shapiro time delay. As observed in Eq. (3.16), the amplification factor \(F\) is obtained by the superposition of all waves traveling along various possible paths. Since the presence of the gravitational potential in a particular region only induces a phase shift to the GWs passing through that area, the resulting \(F\) undergoes changes in both the magnification and the phase modulation. However, these changes are only due to constructive and destructive interference.
Thus, it is expected that there is a nontrivial connection between the magnification and the phase modulation, and it appears that this connection becomes apparent in the form of the consistency relations when the average is taken.\({}^{4}\) Footnote 3: Because \(\langle\Phi(\chi_{1},\mathbf{\theta}(\chi_{1}))\cdots\Phi(\chi_{n},\mathbf{\theta}( \chi_{n}))\rangle\propto\delta^{D}(\chi_{1}-\chi_{2})\cdots\delta^{D}(\chi_{n-1}-\chi_{n })\left\langle\Phi(\chi_{1},\mathbf{\theta}(\chi_{1}))\cdots\Phi(\chi_{1},\mathbf{\theta}( \chi_{1}))\right\rangle_{2}=\delta^{D}(\chi_{1}-\chi_{2})\cdots\delta^{D}(\chi_{n-1}-\chi_{n})\left\langle\Phi(\chi_{1},\mathbf{\theta}_{s})\cdots\Phi(\chi_{1}, \mathbf{\theta}_{s})\right\rangle_{2}\), where \(\langle\cdots\rangle_{2}\) indicates the ensemble average on the plane perpendicular to the line of sight. To obtain a more intuitive understanding of this nontrivial connection between the real and imaginary parts of \(F\), we provide a simple toy model that demonstrates this effect. Suppose two GWs with the same amplitude travel along different paths of equal length and arrive at the location of an observer. Without any lensing objects, the amplification factor is \(F=1\). However, if one of the GWs passes through a region with nonzero gravitational potential \(\Phi\) that extends over a length \(\Delta_{X}\), the resulting amplification factor can be written as follows: \[F(\omega)=e^{K_{\omega}+iS_{\omega}}=\frac{1}{2}\left(1+e^{-2i\omega\Delta_{X }\Phi}\right). \tag{3.21}\] From this amplification factor, we obtain the expressions for the magnification \(K_{\omega}\) and phase modulation \(S_{\omega}\): \[K_{\omega}= \frac{1}{2}\ln\left(\frac{1+\cos\left(2\omega\Delta_{X}\Phi\right) }{2}\right), \tag{3.22}\] \[S_{\omega}= -\tan^{-1}\left(\frac{\sin\left(2\omega\Delta_{X}\Phi\right)}{1+ \cos\left(2\omega\Delta_{X}\Phi\right)}\right).
\tag{3.23}\] By expanding these expressions up to second order in \(\Phi\), we can verify the following: \[K_{\omega}+iS_{\omega}-\frac{1}{2}(K_{2\omega}+iS_{2\omega})=-\frac{1}{2}(K_{ \omega}+iS_{\omega})^{2}+\mathcal{O}(\Phi^{3}). \tag{3.24}\] This relation is identical to Eq. (3.20) with the only difference being the absence of the averaging process. Therefore, it is reasonable to conclude that the Shapiro time delay is responsible for the origin of the consistency relation Eq. (3.20). Before concluding this section, let us comment on the modification of the consistency relations in a particular example: a massive graviton. The derivation of the consistency relations Eq. (3.20) is based on the assumption that gravitons are massless; thus, the inclusion of the mass term in the wave equation slightly modifies the consistency relations. When the mass of a graviton \(m\) is considered, the wave equation for the amplification factor \(F\) is rewritten as follows: \[i\frac{\partial F}{\partial\chi}+\frac{1}{2\omega\chi^{2}}\nabla_{\theta}^{2}F =2\omega\Phi F+\frac{m^{2}}{2\omega}F. \tag{3.25}\] This expression indicates that a newly defined function \(F^{\prime}=Fe^{i\frac{m^{2}\chi_{s}}{2\omega}}=e^{K+i(S+\frac{m^{2}\chi_{s}}{2 \omega})+i\omega\Delta_{s}}\) satisfies the equation for a massless graviton (2.3). As we have shown above, the magnification and phase modulation for a massless graviton satisfy Eq.
(3.20), and in this case, the corresponding magnification and phase modulation are \(K_{\omega}\) and \(S_{\omega}+\frac{m^{2}\chi_{s}}{2\omega}\); thus, the modified version of the consistency relation when the mass of a graviton is included is obtained by simply replacing \(S_{\omega}\) with \(S_{\omega}+\frac{m^{2}\chi_{s}}{2\omega}\) as follows: \[\left\langle K_{\omega}+i\left(S_{\omega}+\frac{m^{2}\chi_{s}}{2\omega}\right) \right\rangle-\frac{1}{2}\left\langle K_{2\omega}+i\left(S_{2\omega}+\frac{m^{2}\chi_{ s}}{4\omega}\right)\right\rangle=-\frac{1}{2}\left\langle\left(K_{\omega}+i\left(S_{\omega}+\frac{m^{2} \chi_{s}}{2\omega}\right)\right)^{2}\right\rangle. \tag{3.26}\] This modified version of the consistency relation implies that the deviation from Eq. (3.4) is of the order \(\frac{m^{2}\chi_{s}}{\omega}\) while the deviation from Eq. (3.5) is of the order \((\frac{m^{2}\chi_{s}}{\omega})^{2}\) when the mass of a graviton is considered. Note that the consistency relation originating from the energy conservation (i.e., \(\langle K\rangle=-\langle K^{2}\rangle\) or \(\langle FF^{*}\rangle=\langle e^{2K}\rangle=1\)) is unchanged even when the mass of a graviton is considered. This is because \(\langle FF^{*}\rangle=\langle e^{2K}\rangle=1\) is unaffected even if we replace \(F\) with \(Fe^{i\frac{m^{2}\chi_{s}}{2\omega}}\).

### Application

Generally, consistency relations have the potential to serve as a means to verify the reliability of the lensing signal obtained from observational data [21; 22]. By confirming the satisfaction of the consistency relations, we can independently confirm the correctness of the observed lensing signals, enabling us to use them as probes for small-scale matter density fluctuations. In addition, satisfaction of the consistency relations will confirm the validity of the general relativistic formulation of the lensing signals.
Conversely, any deviation from the consistency relations serves as a warning sign that the estimation of \(S\) and \(K\) may not have been performed correctly, which prevents incorrect results from being inferred from unreliable data. While the primary objective of this paper is to present the new consistency relations and discuss their physical implications, it is worth providing a rough estimate of how well the presented consistency relations are satisfied under more realistic scenarios. Therefore, we consider the feasibility of confirming the consistency relations following a similar method presented in [20; 21]. In practical situations, the average \(\langle\cdots\rangle\) is taken as the average over the sources, which requires a large number of GW signals from various sources, e.g., binary black holes located at a fixed redshift. However, in principle, it is impossible to collect a sufficient number of lensing signals from sources with exactly the same redshift \(z_{s}\); thus, it is necessary to redefine the average by allowing the inclusion of signals whose redshift falls within a range \(z_{s}-\Delta z<z<z_{s}+\Delta z\). The redshift dependence of the lensing signal \(X(=S,K)\) suggests that the observed variance at \(z_{s}+\Delta z\) is roughly given by \(\langle X(z_{s}+\Delta z)^{2}\rangle=\langle X(z_{s})^{2}\rangle(1+O(\Delta z))\) [16; 17; 20]. With this in mind, we define the estimators \(\mathcal{E}_{A}\) and \(\mathcal{E}_{B}\) as \[\mathcal{E}_{A}(\omega)= \frac{1}{N}\sum_{i}(K_{i}^{2}(\omega,z_{i})+K_{i}(\omega,z_{i})), \tag{3.27}\] \[\mathcal{E}_{B}(\omega)= \frac{1}{N}\sum_{i}\left(S_{i}(\omega,z_{i})-\frac{1}{2}S_{i}\left( 2\omega,z_{i}\right)+S_{i}(\omega,z_{i})K_{i}(\omega,z_{i})\right), \tag{3.28}\] where \(K_{i}\) and \(S_{i}\) are assumed to contain independent Gaussian noise \(n_{i}\) with zero mean and variance \(1/\text{SNR}^{2}\), where \(\text{SNR}\) is the signal-to-noise ratio of the detectors for a particular frequency of GWs.
In addition, the products of the signals, e.g., \(K_{i}^{2}(\omega,z_{i})\) and \(S_{i}(\omega,z_{i})K_{i}(\omega,z_{i})\), are assumed to be computed using the two values obtained from different detectors with independent noise. Under this assumption, we can immediately obtain \(\langle\mathcal{E}_{A}\rangle=\langle\mathcal{E}_{B}\rangle=0\). Furthermore, under the assumptions of weak lensing, small \(\Delta z\) (\(\langle X(z_{s}+\Delta z)^{2}\rangle\sim\langle X(z_{s})^{2}\rangle\)) and \(|K|,|S|<1/\text{SNR}\), we obtain \(\langle\mathcal{E}_{A}^{2}\rangle^{1/2}\sim\langle\mathcal{E}_{B}^{2}\rangle^ {1/2}\sim\frac{1}{\text{SNR}}\frac{1}{\sqrt{N}}\), which provides the estimated fluctuations in \(\mathcal{E}_{A}\) and \(\mathcal{E}_{B}\). The number of GW events expected to be observed per year within a redshift range \(2.9<z_{s}<3\) can be estimated as \(N\sim 10^{3}\) under the assumption that the merger rate at \(z_{s}=3\) is \(R=20\)\(\text{Gpc}^{-3}\text{yr}^{-1}\)[35]. In the \(\text{SNR}=50\) case, \(\frac{1}{\text{SNR}\sqrt{N}}\sim 6\times 10^{-4}\). Since \(\sqrt{\langle K^{2}\rangle}\sim\mathcal{O}(10^{-2})\) and \(\sqrt{\langle S^{2}\rangle}\sim\mathcal{O}(10^{-3})\) at \(z_{s}\sim 3\) and \(f\sim 1\) Hz, in this scenario, the consistency relation (3.3) can be confirmed with an accuracy of approximately \(\mathcal{O}(1)\%\) of \(\langle K\rangle\) and \(\langle K^{2}\rangle\), and the consistency relation (3.4) can be confirmed with an accuracy of up to \(\mathcal{O}(10)\%\) of \(\langle S\rangle\) and \(\langle SK\rangle\). Note that the value of the merger rate \(R\) used here is an estimated value at a fiducial redshift \(z=0.2\) (rather than \(z=3\)). Since \(R\) is expected to take a larger value at higher redshift, the number of GW events we estimated might be moderately underestimated. Thus, in reality, the consistency relation can be even more tightly confirmed.
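Before moving on, the two-path toy model of Eq. (3.21) offers a quick numerical sanity check of the second-order relation (3.24). The following sketch (a minimal illustration, with parameter values chosen only for demonstration) computes \(K_{\omega}\), \(S_{\omega}\), \(K_{2\omega}\), and \(S_{2\omega}\) directly from the toy amplification factor and verifies that the two sides of Eq. (3.24) agree up to a residual of order \(\Phi^{3}\):

```python
import numpy as np

# Two-path toy model, Eq. (3.21): F(omega) = (1 + exp(-2i*omega*Delta_x*Phi)) / 2.
# K and S are the real and imaginary parts of log F (magnification, phase modulation).
def K_S(omega, delta_x, Phi):
    F = 0.5 * (1.0 + np.exp(-2j * omega * delta_x * Phi))
    logF = np.log(F)
    return logF.real, logF.imag

# Illustrative values in the weak-lensing regime (2*omega*Delta_x*Phi << 1).
omega, delta_x, Phi = 1.0, 1.0, 1e-3

K1, S1 = K_S(omega, delta_x, Phi)        # K_omega, S_omega
K2, S2 = K_S(2.0 * omega, delta_x, Phi)  # K_{2 omega}, S_{2 omega}

lhs = (K1 + 1j * S1) - 0.5 * (K2 + 1j * S2)
rhs = -0.5 * (K1 + 1j * S1) ** 2

# Eq. (3.24): lhs = rhs + O(Phi^3); the residual is third order in Phi
# and hence much smaller than either side, which is O(Phi^2).
print(abs(lhs - rhs), abs(lhs))
```

Halving \(\Phi\) shrinks the residual by roughly a factor of eight, confirming the \(\mathcal{O}(\Phi^{3})\) scaling of the error term.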
## IV Conclusion

In this paper, we investigated the lensing of GWs with a particular focus on consistency relations. In addition to the previously reported consistency relation [21], we have identified two additional consistency relations (3.3) and (3.4) that are accurate in the weak lensing regime by directly computing the magnification \(K\) and phase modulation \(S\). We have demonstrated that Eq. (3.3) arises from the conservation of energy in GWs by showing that it is derived as the weak lensing limit of \(\langle FF^{*}\rangle=1\). In fact, \(\langle FF^{*}\rangle=1\) holds to full order in \(\Phi\) regardless of the shape or the correlation of the matter clumps. In addition, we have shown that the other consistency relations (3.4) and (3.5) can also be derived as the weak lensing limit of the average of the amplification factor \(\langle F\rangle=\left\langle e^{-2i\omega\int_{0}^{\chi_{s}}\Phi(\chi,\mathbf{\theta}_{s})d\chi}\right\rangle\), which is also accurate to full order in \(\Phi\). The analysis presented in this paper indicates that the consistency relations (3.4) and (3.5) appear to arise from the Shapiro time delay, which locally alters the phase of GWs. This leads to interference effects and gives rise to the nontrivial connection between \(K\) and \(S\), which becomes evident when the average is taken. Finally, we have demonstrated that these consistency relations can be confirmed observationally given that sufficient \(\text{SNR}\sim 50\) is achieved. Thus, we expect that they will provide independent verification of the correct observed lensing signals and enable us to properly probe matter density fluctuations at very small scales.

## Acknowledgements

This work was supported by the JSPS KAKENHI Grant Numbers JP19K03864 (TS), JP23K03411 (TS), JP22H00130 (RT) and JP20H05855 (RT).

## Appendix A Energy density of gravitational waves

Here, we provide a brief derivation of the energy density of GWs propagating in curved spacetime characterized by Eq. (2.1).
When there is a clear separation between the metric components due to the background \(\overline{g}_{\mu\nu}\) (typical variation scale \(L\)) and highly oscillatory perturbations \(h_{\mu\nu}\) (typical wavelength \(\lambda\)), the total metric \(g_{\mu\nu}\) is separated into two parts [36]: \[g_{\mu\nu}=\overline{g}_{\mu\nu}+h_{\mu\nu}, \tag{A.1}\] where \(\overline{g}_{\mu\nu}\) is given by Eq. (2.1). The Einstein equations \(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi GT_{\mu\nu}\) are rewritten by expanding the Ricci tensor as \(R_{\mu\nu}=\overline{R}_{\mu\nu}+R^{(1)}_{\mu\nu}+R^{(2)}_{\mu\nu}+\cdots\), where \(\overline{R}_{\mu\nu}\) is the Ricci tensor computed using \(\overline{g}_{\mu\nu}\) alone, and \(R^{(n)}_{\mu\nu}\) are the correction terms to \(R_{\mu\nu}\) of \(n\)-th order in \(h_{\mu\nu}\). Then, \(R^{(1)}_{\mu\nu}\) and \(R^{(2)}_{\mu\nu}\) are explicitly given as follows: \[R^{(1)}_{\mu\nu}= \frac{1}{2}\left[\overline{\nabla}^{\alpha}\overline{\nabla}_{ \mu}h_{\nu\alpha}+\overline{\nabla}^{\alpha}\overline{\nabla}_{\nu}h_{\mu \alpha}-\overline{\nabla}^{\alpha}\overline{\nabla}_{\alpha}h_{\mu\nu}- \overline{\nabla}_{\mu}\overline{\nabla}_{\nu}h\right], \tag{A.2}\] \[R^{(2)}_{\mu\nu}= \frac{1}{2}\overline{g}^{\rho\sigma}\overline{g}^{\alpha\beta} \left[\frac{1}{2}\overline{\nabla}_{\mu}h_{\rho\alpha}\overline{\nabla}_{\nu}h_{ \sigma\beta}+(\overline{\nabla}_{\rho}h_{\nu\alpha})(\overline{\nabla}_{\sigma}h _{\mu\beta}-\overline{\nabla}_{\beta}h_{\mu\sigma})\right.\] \[+\left.h_{\rho\alpha}(\overline{\nabla}_{\nu}\overline{\nabla}_{ \mu}h_{\sigma\beta}+\overline{\nabla}_{\beta}\overline{\nabla}_{\sigma}h_{\mu \nu}-\overline{\nabla}_{\beta}\overline{\nabla}_{\nu}h_{\mu\sigma}-\overline{ \nabla}_{\beta}\overline{\nabla}_{\mu}h_{\nu\sigma})\right.\] \[\left.+\left(\frac{1}{2}\overline{\nabla}_{\alpha}h_{\rho\sigma}- \overline{\nabla}_{\rho}h_{\alpha\sigma}\right)(\overline{\nabla}_{\nu}h_{\mu\beta}+ \overline{\nabla}_{\mu}h_{\nu\beta}-\overline{\nabla}_{\beta}h_{\mu\nu}) \right],
\tag{A.3}\] where \(\overline{\nabla}_{\mu}\) is a covariant derivative with respect to the background metric \(\overline{g}_{\mu\nu}\)[25]. Up to quadratic order in \(h_{\mu\nu}\), we have the Einstein equations for \(\overline{R}_{\mu\nu}\): \[\overline{R}_{\mu\nu}-\frac{1}{2}\overline{g}_{\mu\nu}\overline{R}= 8\pi G\left(\overline{T}_{\mu\nu}+t_{\mu\nu}\right), \tag{A.4}\] where \(\overline{T}_{\mu\nu}\) is the energy-momentum tensor contributed by matter components, and it varies slowly with time and space, and \(t_{\mu\nu}\) is an effective energy-momentum tensor of GWs. In our case, the derivative of the background gravitational potential is small compared to the derivative of GWs due to \(L\gg\lambda\). Under this assumption and by ignoring the derivative of the background potential, the explicit expression of \(t_{\mu\nu}\) up to relevant order is given as [6; 34; 37]: \[t_{\mu\nu}=-\frac{1}{8\pi G}\left\langle R^{(2)}_{\mu\nu}-\frac{1}{2} \overline{g}_{\mu\nu}R^{(2)}\right\rangle_{t,x}\] \[= \frac{1}{32\pi G}\left\langle\overline{g}^{\alpha\rho}\overline{g}^{ \beta\sigma}\partial_{\mu}h_{\alpha\beta}\partial_{\nu}h_{\rho\sigma}-\frac{1} {2}\overline{g}_{\mu\nu}\overline{g}^{\lambda\kappa}\overline{g}^{\alpha\rho} \overline{g}^{\beta\sigma}\partial_{\lambda}h_{\alpha\beta}\partial_{\kappa}h_{\rho \sigma}\right\rangle_{t,x}. \tag{A.5}\] Note that \(\langle\cdots\rangle_{t,x}\) is a space-time average whose integral region is greater than the typical wavelength of GWs and much smaller than the typical scale over which the background metric varies. With this definition, it is possible to assign a gauge invariant local energy of GWs.
Now, we introduce the polarization tensor \(e_{\mu\nu}\) such that \(h_{\mu\nu}=\phi e_{\mu\nu}\)\(\left(e_{\mu\nu}e^{\mu\nu}=2,\,e^{\mu}_{\ \mu}=0\right)\), and by setting \(e_{\mu\nu}\) to a constant [38; 6], we obtain the following: \[t_{\mu\nu}=\frac{1}{16\pi G}\left\langle\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1 }{2}\overline{g}_{\mu\nu}\partial_{\lambda}\phi\partial^{\lambda}\phi\right\rangle_ {t,x}. \tag{A.6}\] Using this notation, the total energy of GWs in volume \(V\) averaged over a certain period of time \(T\), denoted as \(\langle\cdots\rangle_{t}=(1/T)\int_{t}^{t+T}dt^{\prime}(\cdots)\), is given by \[E= \int\left\langle t^{00}\right\rangle_{t}dV\] \[= \frac{1}{16\pi G}\int_{t}^{t+T}\frac{dt^{\prime}}{T}\int dV \left(\frac{1}{2}(\nabla\phi)^{2}+\frac{1}{2}\dot{\phi}^{2}-2\Phi\dot{\phi}^{2}\right). \tag{A.7}\] By combining this with the conservation of energy \(\partial_{\mu}t^{\mu\nu}=0\), we obtain the following: \[\partial_{0}E= -\int\partial_{i}\left\langle t^{0i}\right\rangle_{t}dV\] \[= -\frac{1}{16\pi G}\int_{t}^{t+T}\frac{dt^{\prime}}{T}\int_{S}dS \,n_{i}\left(-\dot{\phi}\partial_{i}\phi\right). \tag{A.8}\] Note that the space-time average \(\langle\cdots\rangle_{t,x}\) is removed when \((1/T)\int_{t}^{t+T}dt^{\prime}\int dV\) is taken. This expression is the same as the one derived in Section III using the wave equation (2.2). Thus, the conserved quantity associated with Eq. (2.2) is properly considered as the energy of GWs. Note that only one degree of freedom associated with the polarization of GWs is considered in this discussion. When accounting for two polarization components (\(h_{\mu\nu}=\phi_{\times}e^{\times}_{\mu\nu}+\phi_{+}e^{+}_{\mu\nu}\)) and assuming that the polarization tensors \(e^{\times}_{\mu\nu}\) and \(e^{+}_{\mu\nu}\) are independent, the total energy of GWs is simply given by the sum of the energy of the \(\times\) mode \(E^{\times}\) and the \(+\) mode \(E^{+}\), i.e., \(E=E^{\times}+E^{+}\).
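The continuity equation (3.9), on which the flux formula above rests, can also be verified symbolically. The sketch below (a minimal check in Python with SymPy, reduced to one spatial dimension for brevity) assumes the lensed wave equation takes the form \((1-4\Phi)\ddot{\phi}=\nabla^{2}\phi\) for a static potential, which is the form implied by Eq. (3.9), and confirms that the time derivative of the energy density exactly cancels the divergence of the flux:

```python
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)   # scalar GW amplitude
Phi = sp.Function('Phi')(x)      # static lensing potential (1D for brevity)

# Energy density and flux appearing in Eq. (3.9), reduced to one dimension
rho = sp.Rational(1, 2) * sp.diff(phi, x)**2 + sp.Rational(1, 2) * sp.diff(phi, t)**2 \
    - 2 * Phi * sp.diff(phi, t)**2
flux = -sp.diff(phi, t) * sp.diff(phi, x)

# Continuity residual d(rho)/dt + d(flux)/dx ...
residual = sp.diff(rho, t) + sp.diff(flux, x)

# ... vanishes once the lensed wave equation (1 - 4*Phi) * phi_tt = phi_xx is imposed
residual = residual.subs(sp.diff(phi, t, 2), sp.diff(phi, x, 2) / (1 - 4 * Phi))
print(sp.simplify(residual))  # -> 0
```

The same cancellation goes through in three spatial dimensions; the 1D reduction only keeps the algebra short.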
2309.13468
Robert Millikan, Japanese Internment, and Eugenics
Robert A. Millikan (1868-1953) was the second American to win the Nobel Prize in physics. At the peak of his influence, no scientist save Einstein was more admired by the American public. Millikan, the head of the California Institute of Technology (Caltech) during its first 24 years, oversaw its rapid growth into one of the leading scientific institutions of the world. In response to demands for social justice, Caltech reached a decision to strip Millikan of honors (such as the library named after him), following accusations against him. This article analyzes a specific accusation against Millikan that was published in Nature: that he collaborated to deprive Japanese Americans of their rights during their forced relocation to internment camps during the Second World War. An examination of original historical sources will show that this accusation is false. On the contrary, Millikan actively campaigned during the war to promote the rights of Japanese Americans. The article also treats Caltech's central accusation against Millikan: he lent his name to a morally reprehensible eugenics movement that had been scientifically discredited in his time. In a reversal of Caltech's claims, this article shows that all three of Caltech's scientific witnesses against eugenics were actually pro-eugenic to varying degrees. Millikan's beliefs fell within acceptable scientific norms of his day.
Thomas Hales
2023-09-23T19:47:27Z
http://arxiv.org/abs/2309.13468v2
# Robert Millikan, Japanese Internment, and Eugenics

###### Abstract

Robert A. Millikan (1868-1953) was the second American to win the Nobel Prize in physics. At the peak of his influence, no scientist save Einstein was more admired by the American public. Millikan, the head of the California Institute of Technology (Caltech) during its first 24 years, oversaw its rapid growth into one of the leading scientific institutions of the world. In response to demands for social justice following the murder of George Floyd, Caltech launched an investigation into Millikan. Caltech reached a decision to strip Millikan of honors (such as the library named after him), following accusations from various sources that he was a sexist, racist, xenophobic, antisemitic, pro-eugenic Nazi sympathizer. In short, Caltech threw the book at him. This article analyzes a specific accusation against Millikan that was published in _Nature_: that he collaborated to deprive Japanese Americans of their rights during their forced relocation to internment camps during the Second World War. An examination of original historical sources will show that this accusation is false. On the contrary, Millikan actively campaigned during the war to promote the rights of Japanese Americans. This article traces the stages of misrepresentation that led to current false beliefs about Millikan. In view of Millikan's extraordinary position in American science, this misrepresentation is a cautionary tale. The article also treats Caltech's central accusation against Millikan: he lent his name to "a morally reprehensible eugenics movement" that had been scientifically discredited in his time. The article considers the statements purporting to show that the eugenics movement had been denounced by the scientific community by 1938. In a reversal of Caltech's claims, all three of Caltech's scientific witnesses against eugenics - including two Nobel laureates - were actually pro-eugenic to varying degrees.
This article concludes that Millikan's beliefs fell within acceptable scientific norms of his day.

## 1 Robert Millikan

This first section describes some of Millikan's contributions to science and society.

### 1.1 Scientific contributions

Millikan's life spanned one of the most fertile periods in the history of physics, covering the birth of quantum theory, special and general relativity, subatomic physics, and nuclear energy. He was born in a small town in Illinois in 1868 not long after James Maxwell published his famous equations on electromagnetism and died in 1953 shortly before the birth of Yang-Mills theory. Millikan was one of the greatest experimental physicists in the world during the first half of the 20th century. He built his reputation on high-precision instruments that he designed for experimental use. During his lifetime, the United States moved from the margins of physics to a powerful central position. In 1951, in a letter to Werner Heisenberg, Millikan made a self-assessment of what he thought to be his most important scientific contributions [60, 43:729]. First on his list was the isolation of the electron and the measurement of its charge. Millikan was awarded a Nobel Prize in 1923 for the famous oil-drop experiment, which measured the electron charge. These measurements also led to reliable estimates of Avogadro's number [55]. Second was his experimental verification of Albert Einstein's photoelectric equation, which was also recognized by the Nobel Prize. Einstein's and Millikan's Nobel Prizes are closely linked: Einstein's for the theoretical derivation of the photoelectric equation and Millikan's for the experimental verification. (Contrary to what some might expect, Einstein's Nobel Prize was not for the theory of relativity.) As a bonus, Millikan obtained what was then the best experimental value of Planck's constant. The photoelectric effect gave one of the first strong indications of wave-particle duality.
Even today, wave-particle duality poses puzzling philosophical questions [56]. Einstein hypothesized the existence of what is now called a photon - a hypothesis that flew in the face of the settled science of wave optics. This unsettling concept of light motivated Millikan in his experiments, and he eventually confirmed Einstein's equation. Millikan's research on the photoelectric effect and the electron helped to propel physics from the atomic scale to subatomic particles. Third was the closing of the gap between light spectra and X-ray spectra, using the "hot spark source of light." Robert Kargon described the research findings as follows: In a series of papers with R.A. Sawyer and Ira Bowen, Millikan was able to make a considerable extension of the map of the ultraviolet spectrum; they had been able to photograph, measure the wavelength, and analyze the atoms of light elements and multiply ionized atoms of the heavier atoms. They found about 1,000 new [atomic spectral] lines, and showed that their wavelengths were consistent with the Bohr theory -Kargon [44, p 125]. More accurately, Millikan and his collaborators compared their new lines against the predictions of an enhancement of the Bohr model: the relativistic Bohr-Sommerfeld model of the atom [62, pp 209-231]. They found some problems with the theory that were later reconciled by the introduction of electron spin to the model [94].1 Footnote 1: The Uhlenbeck-Goudsmit article of December 1925, which introduced the concept of electron spin, stated, “The assumption of the spinning electron leads to a new insight into the remarkable analogy between the multiplet structure of the optical spectra and the structure of X-ray spectra, which was emphasized especially by Landé and Millikan.... [This analogy] obtains an immediate explanation on the hypothesis of the spin electron” [94]. The fourth on the list is "the law governing the pulling of electrons out of metals by intense electrical fields." 
See [44, p 126] and [62, p 261]. The experimental results of C. F. Eyring, Millikan, and Charles Lauritsen were eventually explained by J. Robert Oppenheimer quantum-mechanically as electrons tunneling through a potential barrier. Fifth was the extraterrestrial origin of cosmic rays. The term 'cosmic ray' was coined by Millikan, but he was not the first to provide evidence of the extraterrestrial origin of cosmic rays. In 1912, several years before Millikan became involved in cosmic ray research, Victor Hess, in Nobel Prize winning work, made measurements in balloons and "concluded that the upper atmosphere is ionized by radiation from space" [71]. Millikan and G. Harvey Cameron submerged electroscopes into mountain lakes and found that their readings at different water depths confirmed the extraterrestrial origin of cosmic rays [21]. In the early days of cosmic ray research there was much to investigate: how cosmic ray intensity varies according to altitude, latitude, and longitude; how deeply they penetrate; whether the rays carry positive, negative, or no charge; and their energy distribution. Millikan was among the most active researchers in this field. Millikan was sometimes wrong. If cosmic rays carry charge, then they are deflected by the earth's magnetic field, producing a more abundant shower of cosmic rays near the earth's poles than the equator. Millikan and Cameron made a scientific expedition to Peru in 1926 to answer this question experimentally, but because of their lack of thoroughness, limitations of their electroscopes, and partial equipment failure, they failed to detect any significant dependence of cosmic ray intensity on latitude. Millikan fatefully took non-detection as evidence that cosmic rays carry no charge. In opposition to Millikan's conclusion, the next year, Jacob Clay presented experimental evidence that cosmic rays do indeed vary in intensity according to latitude, and hence that cosmic rays do carry charge. 
Arthur Compton backed up Clay's results. Millikan eventually accepted the Clay-Compton conclusion, but only slowly and only after acrimonious exchanges with Compton, which were dragged into the public arena by _New York Times_ reporting [44]. Sixth was the design of a specialized cloud-chamber for the detection of cosmic particles. Carl Anderson was a PhD student under Millikan at Caltech. After completing his PhD, Anderson continued at Caltech as a Research Fellow (1930-1933) but shifted to cosmic ray research, still supervised by Millikan, who ran three different cosmic ray research groups, each using a different kind of detector. Anderson later recalled, "Professor R. A. Millikan and the writer in the spring of 1930 planned a cloud-chamber apparatus suitable for cosmic-ray studies,...[2]"2 Anderson built the cloud chamber over a period of months. In August 1932, when Anderson was only 26 years old, his cloud chamber detected a positron, the first form of anti-matter ever detected. For the discovery of the positron, Anderson was awarded a Nobel Prize in physics. It was Millikan who nominated him. Four years later, Anderson and his first graduate student, Seth Neddermeyer, discovered yet another elementary particle, the muon, which carries the same electric charge as the electron, but possesses greater mass. Footnote 2: Millikan wrote that he and Anderson designed the cloud chamber in the summer of 1929, but Anderson’s date seems more plausible, because of the period of Anderson’s fellowship [62, p 322].

### 1.2 Secondary influences

Millikan attracted and was attracted to truly extraordinary scientific talent. He went to Germany for three semesters after finishing his PhD at Columbia in 1895 and was present in Berlin for Wilhelm Rontgen's first large public exhibition of the X-ray. In 1896, Millikan attended Max Planck's lectures.
According to Millikan, it was during those lectures that Planck conceived the idea of the quantum or discontinuous change (but did not publish on the topic until a few years later). Elsewhere in Europe were Henri Becquerel's discovery of radioactivity in 1896 and J. J. Thomson's research on the electron in 1897. It was a new era in physics. Millikan wrote in his autobiography that these "discoveries actually determined the direction of my own study and research for the next fifty years" [63, p 270]. The first American Nobel Prize winner in physics was Albert A. Michelson, who won his prize for the famous Michelson-Morley experiment, which failed to detect motion of the earth relative to the luminiferous ether, a fact later explained by Einstein's special theory of relativity. It was Michelson who recruited Millikan to the University of Chicago. For decades in Chicago, until Millikan left for Caltech, the two were close colleagues and friends, playing tennis together regularly [63, p 87]. Michelson regarded Millikan as his successor at the University of Chicago, but history had other plans. The second American to win a Nobel Prize in physics was Millikan. The third was Arthur Compton, Millikan's nemesis in the cosmic ray debate. (Arthur's brother Karl will be relevant later in this article.) The fourth was Millikan's student Carl Anderson, mentioned earlier. The fifth was Clinton Davisson, who as a student at University of Chicago was inspired by Millikan to go into physics. Davisson faced financial hardship as a student, and it was through Millikan's recommendation that Davisson obtained employment as a physics teacher, while he continued to work his way through school [26][46]. Davisson won his Nobel Prize for observing diffraction patterns of electrons, confirming Louis de Broglie's prediction that wave-particle duality is not limited to photons. Millikan's influence extended beyond pure science into industrial research that shaped the modern world. 
The physicist Frank Jewett was the first president of the legendary AT&T Bell Telephone Labs starting in 1925 and later became the chairman of its board of directors. Jewett and Millikan were the closest of friends. Millikan helped bring Jewett together with his future wife, who had been a Millikan student; and Jewett was the best man at Millikan's wedding [63, p 53]. In 1909, chief engineers and executives at AT&T started discussing the feasibility of establishing transcontinental phone service. The core technological problem was that as the telephone signal travels down the copper wire it grows weaker, so that a signal originating in New York would be entirely lost before reaching San Francisco. What was needed was an efficient _amplifier_ (or _repeater_, to use the term originating in the telegraph industry) that boosted the signal and kept it from attenuating in its transcontinental journey [27, p 21]. Already in 1909, Jewett was a senior manager at AT&T. The task of amplification became Jewett's responsibility. He consulted his friend Millikan about the problem in 1910. Jewett asked Millikan to let him "have one or two, or even three of the best young men who are taking their doctorates with you and are intimately familiar with your field. Let us take them into our laboratory in New York and assign to them the sole task of developing a telephone repeater" [27, p 22]. Millikan sent Jewett his recent PhD student, Harold Arnold (who would later become Bell Labs' first director of research), to work on the task. What happened next is mythical. An amplifier, called an _audion_, had already been recently invented by Lee De Forest. (In 1946, Millikan called De Forest's audion the single most important advance in electronics of all time [60, 43:63].) Arnold and his team went to work improving the performance of the audion for AT&T, after the organization bought the patent rights to De Forest's invention.
From time to time, Millikan consulted on the project, but he was mostly a bystander. The research and development resulted in a practical vacuum tube amplifier, which was used in the transcontinental telephone line. At the Panama-Pacific International Exposition in 1915, Alexander Graham Bell and Thomas Watson, positioned in New York and San Francisco, gave the first transcontinental public demonstration: "Mr. Watson, come here, I want you," echoing their immortal words from decades earlier [27, pp 23-24]. The vacuum tube amplifier revolutionized electronics throughout multiple industries: communications, radio, film, television, computer, and consumer electronics. The friendship between Millikan and Jewett led to a procession of physicists who were trained by Millikan and employed by Bell Labs. Mervin Kelly was another physicist who obtained his PhD under Millikan [27, p 16].3 Kelly became director of research at Bell Labs starting in 1936 and later became president of Bell Labs. As research director, Kelly redirected research focus from vacuum tube technology to solid-state physics and recruited William Shockley to lead the solid-state research group. Kelly once stopped by Shockley's office (which was shared with Davisson) and lectured on the coming day when something electronic would replace telephone relays. "For the rest of his life Shockley considered Kelly's lecture as the moment when a particular idea freed his ambition, and in many respects all modern technology from its moorings" [27, p 23]. The research group - Shockley, John Bardeen, and Walter Brattain - was awarded the Nobel Prize in physics for the invention of the transistor. Footnote 3: How many PhD students did Millikan supervise? A very rough estimate suggests he supervised more than 80 PhDs in physics, including more than 30 from the University of Chicago. At Caltech alone, by 1940, Millikan supervised about one-third of the more than 135 PhDs [29, p 107].
Relationships were not merely scientific. Chien-Shiung Wu is known for the discovery of parity violation, for which she was awarded the Wolf Prize in physics in 1978. When Chien-Shiung Wu married Luke Chia-Liu Yuan in 1942, none of the parents were able to attend, because of the war. The Millikans hosted the wedding, which was held at their home. Millikan was Yuan's doctoral advisor. Yuan's grandfather was Yuan Shikai, the first president of the Republic of China.

### 1.3 Education

In education, Millikan and Gale's _First Course in Physics_ is perhaps the best-selling English-language physics textbook of all time. Including all editions and title variations, Millikan and Gale sold 1,610,637 copies between 1906 and 1952 [60, 43:520]. It is particularly remarkable that these numbers were achieved in the first half of the twentieth century, when the physics textbook market was much smaller than it is now. In the international textbook market, a Caltech webpage states that "_The Feynman Lectures on Physics_ is perhaps the most popular physics book ever written. More than 1.5 million English-language copies have been sold; probably even more copies have been sold in a dozen foreign-language editions (the number of copies in Russian alone, for example, is estimated to be over 1 million)" [32]. By way of comparison, popular physics books have sold far more copies. Stephen Hawking's _A Brief History of Time_ has sold more than 25 million copies [57].

### 1.4 Caltech's executive

Kargon wrote that the physicist Millikan, astrophysicist George Hale, and chemist Arthur Noyes formed a "triumvirate" that "was responsible for the rapid rise to prominence of... the California Institute of Technology" [44]. The idea of a founding triumvirate of Caltech has been repeated almost to the point of cliché, but the three scientists contributed in entirely different ways to the rise of the institute. George Hale was the dreamer. He called himself a schemer.
In 1903, Hale secured funding from the Carnegie Institute to construct a solar observatory on Mount Wilson near Pasadena. Each of his dreams led to another. Hale envisioned a chemical laboratory to handle problems that arose at Mount Wilson, then, more ambitiously, an outstanding technical institute to house the chemistry lab. In 1907, Hale became a trustee of Throop Polytechnic Institute in Pasadena, which was an unremarkable local manual training school. Hale channeled his hope of creating an outstanding institute into Throop. He spent years recruiting Noyes to head the division of chemistry, and Millikan to serve as president. Jewett called Hale the "gifted strategist" and Millikan "the field general" [44, p 92]. What Hale schemed, Millikan brought to life. Millikan was appointed the chief executive of Caltech in 1921, the year after Throop was renamed the California Institute of Technology; he held the position for 24 years. He was offered the title of president but instead chose an organizational structure that made him the chairman of an executive council, consisting of four trustees and four distinguished members of the faculty. Noyes was one of them. Millikan was also the director of Caltech's physics division - the Norman Bridge Laboratory of Physics. "Millikan was everywhere planning, pushing, deciding, admonishing.... His day at the Institute, starting at eight o'clock in the morning, frequently did not terminate until long after midnight" [21]. Noyes had been acting president of MIT for two years before moving to Caltech and had considerable administrative know-how. Linus Pauling wrote that "Millikan became a great public figure, who in the minds of the people of the country represented the California Institute of Technology; but Noyes was often the one who was responsible for the policies that were announced by Millikan" [74].
When Millikan first arrived at Caltech, DuMond recalled, "the faculty and graduate students were still a small enough group so that at the first faculty dinner we could all sit around a single long table in the basement of the Crown Hotel" [21]. Under Millikan, Caltech swiftly grew. During its first decade, Caltech made several stellar faculty appointments:

* Theodor von Karman, who became the director of Caltech's Aeronautical Laboratory and later a founder of the Jet Propulsion Laboratory;
* Carl Anderson, the Nobel Prize winning discoverer of the positron, discussed earlier;
* Fritz Zwicky, an astronomer of neutron stars, dark matter, and gravitational lenses;
* Robert Oppenheimer, who was head of the Los Alamos Lab during the Manhattan Project to develop nuclear weapons;
* Richard Tolman, who served as scientific advisor to General Leslie Groves, the director of the Manhattan Project;
* Thomas Hunt Morgan, who won the Nobel Prize in 1933 for establishing that chromosomes carry the genetic material;
* Alfred Sturtevant, who made the first genetic map of a chromosome;
* Theodosius Dobzhansky, who was one of the architects of the modern synthesis, combining Darwinian evolution, population genetics, and Mendelian genetics; and
* Linus Pauling, one of the founders of quantum chemistry and the recipient of two Nobel Prizes (in Chemistry and Peace).

Other renowned scientists were closely affiliated with Caltech. Charles Richter, who developed the Richter scale, was part of the Caltech Seismological Lab, which at the time was a cooperative venture between the Carnegie Institute and Caltech. Edwin Hubble, who gave evidence for the expansion of the universe, was nearby at the Mount Wilson Observatory. The growth of Caltech can be measured in many ways. In scientific productivity, "by 1930, Caltech was ranked as the leading producer of important physics papers in the country" [29, p 108].
Its endowment, which was almost nonexistent in 1920, had grown to $25 million by 1947. Caltech had completed construction of only two permanent buildings in 1920 and thirty-six by 1947 [63, p 249]. Caltech initiated a broad array of scientific projects, each of which was an undertaking of industrial scale, requiring major funding (from sources such as the Rockefeller Foundation, the Guggenheim Fund, the Carnegie Institution, or industrial partners); scientists, engineers, and students; and buildings, labs, and equipment. In cooperation with the Southern California Electric Company, a high-voltage lab was built. The Guggenheim aeronautics lab tested Douglas airplanes in its wind tunnel. "Caltech's Daniel Guggenheim Graduate School of Aeronautics played a major role in turning southern California into the aircraft capital of the world" [29, p 177]. There were Millikan's cosmic ray project, a seismology project, the study and development of jet propulsion, and the study of oil under high pressure sponsored by the American Petroleum Institute. Starting in the 1920s, Hale had a vision of building the largest telescope in the world. Caltech's Palomar Observatory, including its 200-inch Hale Telescope, became operational in 1948 [63, pp 238-250].

## 2 Japanese Internment and the Fair Play Committee

World-altering political events shaped Millikan's final years at the head of Caltech. After the attack on Pearl Harbor, the United States declared war on Japan. Japanese Americans living on the West Coast were forced to relocate to internment camps. An organization named the _Fair Play Committee_ was formed in Berkeley to defend the rights of Japanese Americans. Millikan became a vice president in this organization. After the war ended, scientific missions were organized to Japan to evaluate Japan's scientific capabilities. Millikan assisted in the recruitment of scientists and engineers for these missions.

### 2.1 Pearl Harbor

Japan attacked Pearl Harbor on December 7, 1941.
The next day, the United States declared war on Japan. In January 1942, anti-Japanese sentiments intensified, and widespread hostile feelings were directed against Japanese Americans. Anti-Japanese rhetoric was especially fierce in California, which was home to the majority of all continental Japanese Americans. In California, the Hearst newspapers, the McClatchy newspapers, the _Los Angeles Times_, as well as hundreds of civic organizations were united against Japanese Americans [77, p 37][49]. On January 29, 1942, a Hearst newspaper columnist wrote of the Japanese on the West Coast, "Herd 'em up, pack 'em off and give 'em the inside room in the badlands. Let 'em be pinched, hurt, hungry and dead up against it.... Personally, I hate the Japanese. And that goes for all of them" [58]. A barber shop offered "free shaves for Japs" but was "not responsible for accidents" [93, p 18]. Walter Lippmann's column went to Secretary of War Henry Stimson and to the second in command, John J. McCloy.
According to McCloy's biographer Kai Bird, "More than any other individual, McCloy was responsible for the decision [of Roosevelt], since the president had delegated the matter to him through Stimson" [11]. On February 19, 1942, a little more than two months after the attack on Pearl Harbor, President Roosevelt signed Executive Order 9066. The order authorized the establishment of military zones, and all Japanese Americans living within these zones were soon to be forcibly expelled and relocated to internment camps. The military zone included all of California, coastal portions of Oregon and Washington, and southern parts of Arizona. Starting in 1942, all (about 112,000) Japanese Americans were forcibly relocated from the West Coast into ten internment camps that were built in scattered locations throughout western states and Arkansas. For example, the Topaz internment camp in central Utah housed over 8000 Japanese Americans in military-style barracks, each small family in a one-room apartment lit by a single lightbulb and heated by a potbelly stove. Life was austere, yet the camp offered a range of basic services: a hospital, library, schools, Buddhist and Christian churches, general stores, co-ops, banks, repair shops, and police and fire departments run by the internees. The camp was one square mile, surrounded by barbed wire and guard stations. A civilian government agency, the War Relocation Authority (WRA), ran the internment camps. Dillon S. Myer was the WRA director for almost the entire duration of the agency's existence, 1942-1946. Although he directed the internment program, Myer became opposed to internment.6 The historian Alonzo Hamby, in a book review of Greg Robinson's book on internment [78], wrote that Myer and a few others "struggled futilely for an early end to the program against a flood tide of hysteria and expediency" [36]. Over time, legal challenges against internment made their way to the Supreme Court.
Finally, on December 17, 1944, the "Roosevelt administration issued Public Proclamation No. 21... declaring that Japanese Americans could return to the West Coast the next month." This proclamation marked the end of Japanese internment. The proclamation pre-empted, by one day, the Supreme Court, which issued two decisions (Korematsu v. United States and Ex Parte Endo). The justices ruled that the WRA had no authority to detain loyal citizens.

### 2.2 Japanese-American petitions to Millikan

When Japan attacked Pearl Harbor on December 7, 1941, Robert Millikan and his wife Greta were traveling in Mexico City. As Greta tells it, "a Mexican gentleman, recognizing us as Americans, I suppose, told [our guide] E. that he had just heard on the radio of Japan's attack on Manila and Hawaii. It seems impossible that it has really come - we were sobered and stunned,..." [60, 80:89]. As soon as Greta was back in Pasadena in January 1942, she hurried over to the home of her Japanese gardener Harris Ozawa and his family, recording in her diary, "Although warned to stay at home, they had gone quietly about their jobs, having nothing to conceal or be ashamed of" [60, 80:108]. Later, when about to be forced out of Pasadena, in a sentimental scene, Ozawa replanted the azaleas from his own home into the Millikan garden as a parting gift, but Greta insisted that the flowers were only on loan until his return. Greta saw the Ozawas off at the train station, as they left for the Tulare detention camp: "With duffle bags and big suit cases, bedding rolls and baskets, families assembled at the starting point where a sympathetic social service worker helped them with details; a group of church women served hot coffee and rolls." Concerned about the living conditions at the temporary detention camps, Greta visited the Santa Anita camp. She made contacts with organizations to aid Japanese-Americans. Harris's wife Elizabeth wrote frequent letters to Greta that chronicled life under internment.
After the war, the Ozawa family was prevented from retaking their house in Pasadena, and Greta assisted with the "time-consuming and not a little exasperating" legal process to regain possession [60, 80:109,132,135,284,381]. The war brought Robert Millikan into renewed contact with Japanese acquaintances. A friend, T. Hori, sent a plea to Millikan [60, 43:1016]. Yes, the United States is now at war. It is my sincere hope that this country will win and forever banish the demons of war from this world.... Last Monday, a week ago today, all our assets were frozen because of my visit to Japan for the sole purpose of visiting my invalid brother who lived in this country for over thirty years, and also because I am an alien. During this past week, a majority of the leading Japanese aliens were taken away to the Terminal islands for investigation,... On the other hand, my wife, who is an American citizen, received many telephone calls from her American church friends asking about her welfare.... This kindness and thoughtfulness impresses us deeply.... Yes my entire household is wholeheartedly supporting the cause of freedom and democracy for which the United States stands.... - Hori, Dec 15, 1941 Arriving back in Pasadena from Mexico, Millikan responded to Mr. Hori [60, 43:1016]: Your letter touched me deeply. I shall be glad to talk over the situation with you anytime, and if I can be of any assistance in making the difficult situation in which you and other loyal members of your group find yourselves I shall want to do it. Mrs. Millikan, too, has been wondering whether she could be of help to members of your group. - Millikan, Jan 6, 1942 A former Caltech student Koichi Kido sent a plea to Millikan on January 28 [60, 28:452]: You, Dr. Millikan, are a champion of democracy. You, sir, are a champion of human freedom. Will you fight to help maintain them in these United States?... 
There is the probability that many of us will be moved inland very shortly....There is the final possibility that all of us will not only be moved inland but be interned for the duration.... We who are citizens of the United States just as you are and who have no political ties with Japan (dual citizenship for instance) have the same rights and privileges as you do. Everyone who has any knowledge of the Constitution and the Bill of Rights is aware of this fact.... Who will defend us? - Koichi Kido, Jan 28, 1942 Moved by Koichi Kido's letter, Millikan wrote to the Pasadena Chamber of Commerce and City Board of Directors [60, 28:454]. Pearl Harbor awakened this country from its idle dream of isolated security. It is very important that we do not go to the other extreme and take hysterical, instead of considered, action in the matter of defense.... It is particularly important that we do not go to any unwise wholesale schemes in trying to protect ourselves against the dangers arising from our residents of enemy country origin.... To adopt any wholesale policy of putting them all in concentration camps would be an action which would defeat its own purpose, not only in weakening the defense industries in which they are so useful, but also in arousing justified resentment because of the unfair treatment of loyal American citizens and thus increasing, rather than decreasing the danger of sabotage. - Millikan, Feb 4, 1942 Millikan responded to Mr. Kido [60, 28:454]. [I] appreciate fully the difficult situation in which you, the American citizens of Japanese parentage, are placed by the action of the Japanese government. I have been doing all I can to get discrimination and sanity into the treatment of the many who are likely to suffer unjustly in this difficult situation. If there is anything in particular I can do to help you I hope you will let me know. 
- Millikan, Feb 6, 1942 By mid-February, through two of its press releases, Millikan had become aware of the Fair Play Committee in the California Bay Area, which opposed discrimination against Japanese civilians. (The organization was to go through several renamings, but this article consistently calls it the Fair Play Committee.) The organization's first press release in October 1941 (before Pearl Harbor) called upon "fair-minded Californians to combat discrimination against their fellow residents of Japanese race" [48]. The first paragraph of the press release stated [48]. ... popular resentment toward Japan may find expression in greater discrimination or even physical violence against fellow-residents of Japanese extraction, distrust of the Japanese Government being transferred to all persons of Japanese race. A moment's thought will show that such animus would be not only un-American, but also a menace to public welfare and the good name of the state. - David Barrows, Fair Play, Oct 1, 1941. On February 15, 1942, Millikan wrote to the Fair Play organization with an interest "in securing cooperation in Los Angeles County." Millikan's overture was welcomed. On March 3, 1942, Millikan was appointed a vice president of the committee. Millikan was now a senior officer in an organization fighting for the fair treatment of Japanese Americans. ### 2.3 Fair Play The Fair Play Committee was an organization formed in California to defend the rights of Japanese Americans. It was founded in the fall of 1941 by David Barrows and Galen Fisher. Barrows was a political scientist and former president of the University of California. The historian Robert Shaffer has written [83]: Galen Merriam Fisher (1873-1955) was probably the most significant and consistent white organizer of opposition during World War II to the wholesale incarceration... of Japanese Americans. As a former missionary in Japan...
Fisher was the key founder in 1941 of the [Fair Play Committee].... Throughout the war Fisher wrote numerous articles criticizing the mistreatment of Japanese Americans. - Robert Shaffer, Densho Encyclopedia From the beginning, the committee had strong university ties, especially with Berkeley. Robert Sproul, the president of the University of California, became the honorary chairman. Provost Monroe Deutsch and several Berkeley professors were also active participants. The Fair Play advisory board consisted of a group of highly influential political, business, educational, and religious leaders, including Stanford's Chancellor Ray Lyman Wilbur; its Dean of the Graduate School of Business, Jackson; former California Governor, C. C. Young; the President of the California State Chamber of Commerce, A. Lundburg; former President of the State Bar Association, G. Hagar; the Mayor of Berkeley, F. Gaines; and many others. The Committee's executive secretary, Ruth Kingman, had access to the corridors of government power, meeting with key government figures such as California State Attorney General Bob Kenny, U.S. Attorney General Biddle, WRA Director Myer, Assistant Secretary of War McCloy, FDR's daughter Anna Roosevelt Boettiger, and some members of Congress.7 Judicious and steadfast, she always seemed to know what to suggest next to nudge policy makers one step further into alignment with the goals of Fair Play. The committee was loyal to the United States and the war effort and worked as a "moderating influence on both public opinion and government authorities, and helped avert mob violence against Japanese residents" [48]. The committee was not a relief organization. It worked to change policy and public opinion through speakers, educational materials, and corrections to the press.
A membership drive leaflet affirmed its purpose: Footnote 7: Robert Cozzens, the Assistant National Director of the WRA, said that “Ruth Kingman was to see us at least once a week and generally a couple of times a week.” Kingman worked a lot with state Attorney General Bob Kenny, who was “said to have influenced Warren finally to accept the inevitable” [18]. The fundamental purpose of the Committee is to support the principles enunciated in the Constitution of the United States, and to that end to maintain, unimpaired, the liberties guaranteed in the Bill of Rights. The Committee believes: 1. That attacks upon the rights of any minority tend to undermine the rights of the majority. 2. That attempts to deprive any law-abiding citizen of his citizenship because of racial descent are contrary to fundamental American principles and jeopardize the citizenship of others.... 3. That it is un-American to penalize persons of Japanese descent in the United States solely for the crimes of the Government and military cast of Japan. - Fair Play [16] The page footer of Fair Play stationery quoted FDR: "Americanism is not, and never was, a matter of race or ancestry" [25]. Fair Play advocated a policy of _dispersed relocation_ for Japanese civilians after the war, a policy in agreement with the War Relocation Authority [85]. The policy had three noteworthy components. (1) The policy opposed segregation into Japanese ghettos. (2) For practical reasons, "it is convinced that there will never be a mass return of evacuees to the West Coast." (3) Finally, "the right of loyal Japanese to come back [to the West Coast], if they so elect, cannot be denied without a denial of all that America has hitherto meant to racial and religious minorities, of all that it has symbolized for the hopes of humanity" [85]. During the war, while working to change policy and public attitudes, Fair Play held that Japanese-American rights must ultimately prevail.
However, recognizing the strong public opposition, the committee did not insist on their immediate return. Millikan publicly supported Fair Play's policy of dispersed relocation. In fact, dispersed relocation is how history played out. According to the _Washington Post_, "When at last the Army rescinded its exclusion order about 57,500 evacuees moved back to their former homes in the West Coast states. But about 51,800 settled eastward in new homes" (March 28, 1946) [7]. Even before rescission, under Myer's lenient policies, a significant fraction of Japanese had left the internment camps and resettled outside California [6][99]. ### 2.4 Investigation into Pasadena's Fair Play The Pasadena chapter of Fair Play came under political attack in December 1943. Leading the attack was California State Assemblyman Chester Gannon, who ran the California State Assembly Committee on the Japanese Problem, which opposed the return of Japanese-Americans to California. Anti-Japanese sentiments were flaring up, because of reports of Japanese riots at the Tule segregation camp the previous month.8 Accusing the Pasadena chapter of waging a pro-Japanese campaign, Gannon organized hearings at the State Building in Los Angeles and subpoenaed the entire executive committee of the Pasadena chapter. The committee grandstanded to intimidate Fair Play and to portray the Pasadena chapter in a bad light. In reaction to Gannon committee excesses, the _Los Angeles Times_ published an editorial against the committee's tendency to "browbeat and abuse witnesses." "When they turn themselves into witch-burning agencies... they go far afield" [5]. A December 1943 article "Inquisition in Los Angeles" in _Time Magazine_ described the hearings as a "legislative romp into U.S.-Jap baiting" [4]. Footnote 8: Accounts of the Tule camp events of November 1943 vary widely. Myer believed that “no violence was planned” [3].
Barbara Takei, the author of a book on the Tule camp, has written that the Japanese “riots” were actually a “peaceful show of support” and dismissed the newspaper reports of rioting as “sensationalized tales” [92]. The events of November led to the imposition of martial law at the Tule camp, which lasted into January 1944. When it was realized that Millikan had not received a subpoena, an attorney for Gannon's committee called Millikan on the telephone for a statement, which was read into the record of the hearings. For the record, "Dr. Millikan stated that he was familiar with the statement to this committee by Mrs. Maynard F. Thayer [Pasadena Fair Play chapter chair] and was in hearty accord with it" [17]. ### 2.5 Kuroki and Sproul speeches In an oral interview about the Fair Play Committee, Ruth Kingman recalled that there were two events in 1944 that "marked the first real change in the attitude of the state" of California: the Kuroki speech at the San Francisco Commonwealth Club9 and the Sproul speech in southern California [49]. Footnote 9: Today, “the Commonwealth Club of California is the nation’s oldest and largest public affairs forum... Martin Luther King, Ronald Reagan, Bill Clinton and Bill Gates have all given landmark speeches at the Club” [8]. In decades past, it was viewed as an influential group of business and professional men in San Francisco and northern California. Ben Kuroki was an American citizen of Japanese descent in the United States Army Air Force who flew a total of 58 combat missions over Europe, North Africa, and Japan during World War II. Monroe Deutsch, who was on the Fair Play executive committee and the president of the San Francisco Commonwealth Club, arranged for Kuroki to speak. He spoke frankly about his combat experience and the discrimination he faced. [L]oyal Americans of Japanese descent are entitled to the democratic rights which Jefferson propounded, Washington fought for and Lincoln died for.
In my own case, I have almost won the battle against intolerance; I have many close friends in the Army now - my best friends, as I am theirs - where two years ago I had none. But I have by no means completely won that battle. Especially now, after the widespread publicity given the recent atrocity stories,10 I find prejudice once again directed against me, and neither my uniform nor the medals which are visible proof of what I have been through, have been able to stop it. - Ben Kuroki, Feb 4, 1944 [51] Kuroki received a standing ovation. Many in the audience were weeping. The publicity was resoundingly positive, even from the Hearst and McClatchy newspapers [49]. The second event was the University of California President Sproul's speech in Los Angeles. Katherine Kaplan, the Fair Play executive for southern California, decided to organize a Los Angeles chapter. Her husband Joseph Kaplan, a physicist at UCLA and friends with Sproul, persuaded Sproul to speak at a luncheon in Los Angeles to launch the local chapter. At Sproul's suggestion, Katherine Kaplan asked Millikan to be the "chief sponsor of the event and to act as Master of Ceremonies." Millikan agreed [43][49]. Sproul called for increased tolerance: The barometer of tolerance toward the evacuees is still too low on this Coast, and the opposition is still vehement and unscrupulous. We need your help... to create an acceptance by the California public of the enlightened way of dealing with law-abiding persons even though they are members of an unpopular minority. -Sproul, June 29, 1944 [85] Sproul's speech "was considered probably the best single statement made all during the war on the status of Japanese Americans" [49]. Katherine Kaplan called it "magnificent!" The speech, which was made into a pamphlet and became an authoritative statement of Fair Play policy, received "the same degree of favorable publicity as Sergeant Kuroki got up North" [49]. Thousands of copies of the pamphlet were distributed [101]. 
Ruth Kingman spoke about the changes in public opinion after the Kuroki and Sproul speeches [49]. From then on, we got opposition, but very little hate opposition. We got a great deal of support for constitutional rights; the rights of men in uniform; or the rights of people who were doing a job for the country which the public would never have accepted before as being the prerogative of anybody of Japanese ancestry. Opponents didn't talk very often anymore. Some did, but by and large there was no further extensive or rabid talk about "They'll never come back!" It was, sort of, "Well, when they come back." - Ruth Kingman ### 2.6 Millikan's participation Millikan's participation in the Fair Play Committee should not be exaggerated. There was a limit to what he could contribute. Caltech, which was going through major transformations during the war, needed Millikan's decisive leadership. Caltech became "practically a factory for the production of war weapons," building more than a million rockets [60, 43:564][63, p 248]. At the same time, many were on leave for war work. It was a challenging time to lead Caltech. Despite being a busy man, Millikan contributed what he could to Fair Play. He was the master of ceremonies at Sproul's speech, an event that helped turn the tide of public opinion; his name inspired the Pasadena chapter, the most active chapter outside the Bay Area; he helped organize the Los Angeles chapter and became a member of the chapter's executive committee, while still serving on the committee's central advisory board [43, p 51]. On September 29, 1944, not long before Roosevelt issued the proclamation ending internment, the Pasadena chapter of Fair Play sponsored a talk by WRA Director Myer at the local public library auditorium. The political factions on internment policy were complicated and take some time to unravel. As mentioned earlier, Myer had become opposed to internment and pushed to dismantle the program.
However, politicians and public opinion hindered his efforts. Fair Play supported Myer's belief in the Bill of Rights for all citizens. Antagonistic Hearst newspapers accused the WRA of coddling the Japanese. Many wanted Director Myer dismissed and replaced with a hard-liner. Millikan, introducing Myer to an overflowing audience, quoted parts of Sproul's speech, emphasized the need to preserve the Bill of Rights, and denied Hearst newspaper accusations. Myer spoke of softening public attitudes. As reported by the _Los Angeles Times_, "A changing attitude on the part of the public will make the return of all Japanese to all sections of the country an easier job from here on" [6]. Myer won the audience over. On December 18, 1944, the day after the proclamation from FDR that ended internment, the chairman of the community council at the Wyoming Heart Mountain relocation center (one of the ten Japanese internment camps) sent Millikan a thank-you note. The Japanese community council, which met twice a week, formed the leadership of the Heart Mountain center. Dear Dr. Millikan: May we take this opportunity to thank you for your untiring efforts in bringing the principles of this nation into proper perspective to the people in regards to the evacuation of Japanese from the West Coast. We are in receipt of the good news today of the lifting of the restriction on Japanese by the Western Defense Command. We realize that you played no small part in realizing this very important move. -Minejiro Hayashida, Dec 18, 1944 [60, 43:931] The letter shows that Millikan's influence was felt directly in the internment camp. Many Japanese-Americans thanked Fair Play in letters. In 1946, after its purpose was completed, Fair Play was dissolved. ### 2.7 Postwar scientific missions to Japan Millikan's wartime involvement with Japanese policy came primarily through the Fair Play Committee. 
In addition to his Fair Play activism, Millikan also had a small indirect scientific connection with postwar Japan. At the end of the war, Millikan communicated with the physicist Alan Waterman about scientific missions to Japan, organized through the Office of Scientific Research and Development (OSRD). This section describes the origin of these scientific missions and Millikan's assistance in recruiting Japanese-American scientists and engineers. During the war, Vannevar Bush, who reported directly to FDR, was the head of the Office of Scientific Research and Development, a wartime agency created to conduct scientific research for the military. This U.S. federal government agency coordinated almost all wartime military research and development. Even the Manhattan Project to develop atomic bombs was initially under the OSRD. Within the OSRD, Bush created the Office of Field Service (OFS), and appointed Karl Compton chief and Alan Waterman deputy. The OFS provided "civilian technical expertise needed by military commands in the field, particularly those in the Pacific." Its employees were largely scientists and engineers - especially physicists, electrical engineers, and communications experts [82, p 17]. The OSRD and its field office were directed by renowned names in science. Bush, who had been a dean and vice president at MIT and an early researcher in analog computers, became a visionary of information technology. The physicist Karl Compton, the brother of the Nobel laureate Arthur Compton, was president of MIT for 18 years. After the war, Waterman, a physicist, became the first director of the National Science Foundation. He is remembered today through the prestigious Alan T. Waterman Award for scientists. The OSRD was reorganized when the war in Europe ended. Compton was transferred to Manila to establish a Pacific branch of the OSRD, and Waterman replaced Compton as OFS chief. 
There were plans to expand the Manila office to more than two hundred scientists and engineers. However, the day after Compton arrived in Manila, on August 6, 1945, the atomic bomb was dropped on Hiroshima. Japan surrendered days later, on August 14 (Victory over Japan Day or V-J Day). The plans to expand the Manila office were abruptly cancelled. Instead, efforts were immediately redirected toward postwar scientific expeditions to Japan. According to Compton, "every branch of Army, Navy, Air Force began immediately after V-J Day to get technical investigating teams" to evaluate Japan's scientific capabilities. Some of the teams were to number as many as 750. Compton and Edward Moreland (who was Bush's successor as dean of engineering at MIT) themselves led one of the earliest expeditions, which resulted in a massive 850-page scientific report [82, p 17][41]. It was at this moment of intense OSRD activity in the Pacific that Waterman sent Millikan an urgent request. Waterman asked Millikan for information about "former Japanese students enrolled in scientific and engineering studies" at Caltech [60, 31:1003]. The Office of Scientific Research had an apparently urgent need for Japanese-speaking scientists and engineers. Caltech's registrar and alumni office promptly produced 44 names, which Millikan forwarded to Waterman. The Waterman-Millikan correspondence will matter in what follows, because of the way it was later misinterpreted. ## 3 Dishonor Anthony Platt is a retired professor of American history, public policy, and social sciences. He is the author of the book _Bloodlines_, published in 2006, which is the source of false and inflammatory accusations about Millikan [75].11 This section analyzes some of the claims in _Bloodlines_, especially those related to Japanese internment.
Although false, some of these accusations became part of the official report of Caltech's _Committee on Naming and Recognition_ and were among the reasons given to strip Millikan of honors [79]. ### 3.1 Bloodlines The book _Bloodlines_ is structured around the history of the original typescript of the Nuremberg Laws, signed by Hitler, sealed with bright red swastikas, and enacted by Nazi Germany in 1935 [75]. The laws established the black-white-red swastika as the national flag, declared that only those of German or related blood were eligible for citizenship, and prohibited marriage between German and Jew. The original Nuremberg document fell into the hands of General George S. Patton Jr. in Germany at the conclusion of World War II, as a spoil of war. When he returned to America, Patton was welcomed by a homecoming parade along the streets of Los Angeles on June 9, 1945. Two days later in Pasadena, Patton presented to Millikan the Nuremberg Laws, which were placed in the vault of the Huntington Library for safekeeping. At the time, Millikan was the chairman of the board of trustees at the Huntington Library. A historic photograph shows Patton and Millikan standing together under a portrait of George Washington, the document in hand. From there, Patton traveled to Washington D.C. to meet with President Truman [75, p 102]. A central accusation of Platt's book is that several of those who handled the original Nuremberg Laws - including Patton, Millikan, and some trustees at Huntington Library - formed an enclave of Nazi sympathizers. Platt provocatively claimed that Millikan's "ideological assumptions were the same as those that guided the Nuremberg Laws...."12 Platt imagined Patton and Millikan "in 1934 shaking their heads in agreement as they read an item in the U.S.-published _Eugenical News_, reprinted from the Nazi press, about how 'large German cities' were being 'literally swamped by...
Jewish physicians.'" Platt's accusation that Millikan was ideologically aligned with Nazi Germany is false and takes little effort to refute. His tale linking Patton and Millikan to the Nazi press is purely fictional and is inconsistent with Millikan's character. Basic intellectual honesty requires Platt not to hide primary sources that discredit his own interpretation. One convincing way to see that _Bloodlines_ misrepresents Millikan is to read Millikan's own words. Millikan's autobiography contains several statements about Hitler [63]. Every single statement is enumerated as follows, to allow Millikan's views on Nazism to appear without editorial selection. On page 90, Millikan asks "is not the greatest menace... defined as what Mussolini, Hitler and Lenin have done to Italy, Germany and Russia?" On page 116, Millikan speaks of the "degradation of Germany under Hitler." On page 254, Hitler is called a gangster. On page 256, Millikan writes that Hitler "could have been permanently checked," "had we been in the League of Nations in 1936, prepared to do our part...." On page 258, Millikan wrote that "for the sake of overthrowing Hitler we became Russia's ally,..." Finally, on page 277, Hitler is called a maniac. Not a single one of these statements supports the claim that Millikan was a Nazi sympathizer. Starting in the mid-1930s (and to a lesser extent in the 1920s) Millikan warned America of the twin dangers of pacifism and isolationism. "Long before Pearl Harbor he [Millikan] saw what was coming and led the movement against isolationism in California" (_Los Angeles Times_, 1950) [9]. Although the _Los Angeles Times_ report might be hyperbolic, as the decade progressed, he became increasingly publicly vocal through many speeches, publications, and radio addresses on the evils of fascism and totalitarianism. Millikan did not hold back. As an admired Nobel laureate, he had enormous influence, which he directed towards this pressing cause.
The documentary evidence is abundant. Millikan's evangelism against fascism had international reach. In his address "India and the War" on June 12, 1940, Millikan made an appeal to India, following his eight-month cosmic-ray expedition to India. He spoke with the eloquence of a senior statesman [60, 67:730]: In no war in history have the fundamental issues of the struggle been made more clear either with respect to India, the United States, or any peace loving people, for they have been stated unmistakably in _Mein Kampf_. Or, if words are not considered sufficient evidence, and one thinks there is any chance that the expressed purposes will not be put into practice, the continuous stream of acts of perfidy, barbarism, and dishonor which have accompanied the inhuman treatment of the Jews and the successive rape of the liberties of the adjoining little countries of Austria, Czechoslovakia, Poland, Denmark, Norway, Holland, Luxembourg, and Belgium make crystal clear what modern civilization the world over can expect from the triumph of the Nazis. Between the ideology of conquest and that of rational, peaceful change there is no possible compromise.... This is the time for every American and every Indian and every peace loving man everywhere to exert every ounce of influence he can to prevent the destruction of civilization and the return of the horrible tyrannies and despotisms that have cursed mankind through all history. - Millikan, June 12, 1940 ### 3.2 Committee on Naming and Recognition (CNR) A movement to have Millikan and others stripped of honors at Caltech (such as the library named after Millikan) started during the summer of the 2020 _Black Lives Matter_ protests amid the widespread toppling of statues.
The Black Scientists and Engineers at Caltech (BSEC) sent a petition to the Caltech community on June 25, 2020 [79, p 36]: By now we are all well-aware of the global protests calling for police reform following the graphic murders of Ahmaud Arbery, Breonna Taylor, George Floyd, and countless others at the hands of the officers whose supposed duty is to protect and serve. -BSEC The BSEC called on Caltech to "reform the long-standing causes of racial bias which have disproportionately hurt racially minoritized members of the Caltech community" and called on the Board of Trustees to "rename the buildings which currently honor Nazis, racists, and eugenicists: Millikan, Watson, Ruddock, Chandler." A separate petition with many signatures was submitted by Michael Chwe (a Caltech alum who is now a political scientist at UCLA) in July 2020. Here are the opening sentences of Chwe's petition [79, p 39]: As members and friends of the Caltech community, we believe that Caltech cannot honor individuals who actively supported and encouraged crimes against humanity. Therefore we call for Caltech to rename all buildings, spaces, and programs named after Robert A. Millikan,...- Chwe petition Caltech President Thomas Rosenbaum formed a Committee on Naming and Recognition (CNR) to examine the issues in the petitions. The committee issued a final report on December 17, 2020, recommending "that Caltech remove the names Millikan, Chandler, Gosney, Munro, Robinson, and Ruddock from all Institute assets and honors." The Caltech Board of Trustees endorsed the recommendation. President Rosenbaum wrote that "renaming buildings is a symbolic act, but one that has real consequences in creating a diverse and inclusive environment." A follow-up article was published in _Nature_ on November 10, 2021 [91]. One student is quoted, "I find it important to rename the buildings just because I don't want to have that constant reminder that the people who built this institution didn't want me to exist."
Another student said of the Millikan library, "we shouldn't be idolizing people with horrible views of the world." It is striking to find such emotionalism coming from Caltech students and published in _Nature_. It is beyond the scope of the research here to examine all of the claims of the CNR report and the _Nature_ article. The scope of this section is restricted to statements about Millikan's wartime attitudes toward Japanese Americans. ### 3.3 Bloodlines on internment policy This subsection analyzes statements from the book _Bloodlines_ about Millikan's activism for Japanese-American rights. Millikan's activism in the Fair Play Committee for the protection of Japanese rights does not square with _Bloodlines's_ claim that Millikan was ideologically a Nazi. According to _Bloodlines_, "on the 'Japanese Problem' during World War II, Millikan took a more complicated, but ultimately opportunistic position" [75, p 126]. Millikan's position was not complicated: his support of the Fair Play Committee was unwavering throughout the war. Millikan was not opportunistic in the sense of seeking self-gain; nor did his actions lack ethical principle. Most criticisms by Platt of Millikan on the Japanese issue are non-specific and suggest that Platt held the entire Fair Play Committee in low regard. Two sentences from _Bloodlines_ are particularly relevant. Platt wrote [75, p 126]: In 1943, Millikan told counsel for an Assembly committee investigating the dangers of treason that he favored dispersal throughout the country of California's Japanese at the end of the war. - Platt The sentence refers to Millikan's telephone conversation during Assemblyman Gannon's committee hearings, as discussed earlier. 
The sentence is a peculiar way to describe the Gannon hearings because of all that it leaves out - that the Gannon committee and Fair Play were political adversaries, that Fair Play was the target of the investigation, that Millikan was called up because of his support of Fair Play, that the committee had the tendency to browbeat witnesses, and that _Time Magazine_ called the hearings an inquisition. The mention of "dispersal" in Platt's sentence is a reference to Fair Play's policy of dispersed relocation, which is discussed in detail above. The policy declared a Japanese right of return to California as soon as the exclusion order was rescinded, while recognizing practical reasons for voluntary partial dispersal outside California. Millikan endorsed this policy. The sentence from _Bloodlines_ makes Fair Play's position sound sinister by failing to mention its insistence on Japanese-American rights and by setting the encounter at a "committee investigating the dangers of treason." The insertion of the dangers of treason into the context is a red herring: Fair Play's policy was entirely unrelated to dangers of treason. Indeed, Fair Play waged a public relations campaign against false but widespread accusations of Japanese-American disloyalty. From its very first press release, Fair Play stated that it would be un-American to transfer a "distrust of the Japanese Government" "to all persons of Japanese race." The CNR copied verbatim Platt's sentence on the Gannon investigation into its report, without citing _Bloodlines_ as its source. Here is the second relevant sentence from _Bloodlines_[75, p 126]: After the war ended, Millikan did not hesitate to turn over to military intelligence the names and known address of all students of Japanese background who had studied at Caltech between 1929 and 1944. - Platt The sentence has an ominous feel to it. 
Millikan almost sounds like a wartime collaborator with military intelligence against Japanese Americans, except that cannot be, because of the timing after the war. Platt did not give context, but this article has supplied extensive context in the section on postwar scientific missions to Japan. Platt left out essential details: the request came from a long-time acquaintance, the physicist Waterman; the request asked specifically for scientists and engineers; the request came from the Office of Scientific Research and Development, which employed many scientists and engineers; the request was written on V-J day and was urgent. The CNR copied Platt's sentence into its report, without citing _Bloodlines_ as its source. The sentence was copied verbatim, except for a tiny but significant change. Instead of writing, "after the war ended," the CNR wrote "ca. 1945." Crucially, in the CNR report, it becomes possible to interpret Millikan's action as occurring before the end of the war. A game of telephones is in play here, where a message becomes increasingly garbled with each repetition. The original context of postwar scientific missions to Japan is described in detail earlier. Platt leaves out essential context and makes the Waterman-Millikan correspondence sound ominous but strangely anachronistic, because of the timing after the war. The CNR report modified the date so that the exchange was no longer unambiguously after the war. The final stage of the game of telephones is provided by Nidhi Subbaraman, writing for _Nature_. The year has been altered from 1945 to 1942, from the end of the war to the beginning of internment. No trace remains of the original historical context of postwar scientific missions to Japan. The Waterman-Millikan correspondence on scientific recruitment was falsely reported the following way in _Nature_ [91].
During the Second World War, as the United States began a nationwide effort to imprison civilians living in the country who had Japanese ancestry, Millikan collected the names and addresses of Japanese students who had studied at Caltech in the previous two decades and passed the list to the US military. - Subbaraman, Nature 2021

This is simply false. From the syntax of Subbaraman's sentence, it can be recognized as derived from _Bloodlines_ as modified by the CNR report. However, the game of telephones has fully corrupted the meaning. The corruption progressed from original sources, to Platt, to the CNR report, and finally to _Nature_. At each stage, the meaning changed unjustifiably in the same direction: always to injure Millikan's reputation and never to bring him favor.

## 4 Eugenics in 1938

According to Caltech President Thomas Rosenbaum, "The most intense concerns at Caltech center on Robert A. Millikan," because he "lent his name and his prestige to a morally reprehensible eugenics movement that already had been discredited scientifically during his time" [80]. This section analyzes the evidence in the CNR report in support of the claim that the eugenics movement had been scientifically discredited by 1938 [79].13 The discussion is restricted in scope to scientific claims and does not treat the moral and political dimensions of the eugenics movement. The word _eugenics_ evokes many connotations; a starting point is the Oxford English Dictionary definition of eugenics: "the study of how to arrange reproduction within a human population to increase the occurrence of heritable characteristics regarded as desirable." From this starting point, the definition diverges in many directions [98, p 44][50]. The CNR report presents statements from three of Millikan's scientific contemporaries (Lancelot Hogben, Hermann Joseph Muller, and Thomas Hunt Morgan) to establish by their authority that eugenics had "fallen into disrepute" by the late 1930s [79].
### 4.1 Hogben

The first of the CNR committee's three authorities against eugenics was the medical statistician Lancelot Hogben (and author of the 1936 best-seller _Mathematics for the Million_). According to the CNR report, "In 1931, geneticist Lancelot Hogben declared that 'all the verifiable data eugenicists had accumulated on the inheritance of mental traits could "be written on the back of a postage stamp"'" [84][79]. (The quotation marks from the CNR report have been preserved, which requires triple nesting.) There is a serious problem with this quotation: it is a fabrication. Hogben declared no such thing. In fact, what Hogben actually wrote was that

all existing and genuine knowledge about the way in which the physical characteristics of human communities are related to their cultural capabilities can be written on the back of a postage stamp.... there is as yet no biological knowledge bearing on the social capabilities of different 'races'... - Hogben

(from Hogben's 1937 preface to _Half-caste_ [20] and reprinted in [40, p 47]) In the preface to the book _Half-caste_, Hogben was building an argument that children of mixed marriages should be afforded the same cultural advantages as other children. The subject matter was not eugenics, and it was reckless scholarship to alter Hogben's statement to make it appear to be. Hogben himself wrote more than a postage stamp's worth about eugenics, which is the subject of the entire last chapter of his book _Genetic Principles in Medicine and Social Sciences_ (1931) [39]. Hogben's principled stance of scientific detachment kept him from making policy recommendations. Nonetheless, despite the detachment, Hogben most certainly did not view eugenics as scientifically unfeasible. He noted that "Eugenics was defined by [Francis] Galton as the study of agencies under social control which may improve or impair the racial qualities of future generations.
With such a proposal it is difficult to see that any reasonable person would disagree" [39, pp 209-210]. Hogben wrote that he "would prefer to use the term 'genetic therapy' for the legitimate province of applied human genetics," because of negative political associations that the term _eugenics_ had acquired by 1931. Nevertheless, he observed, eugenists had begun "to write with greater caution" in the previous two decades. "[W]e can agree about certain disorders which practically all comparatively healthy people would wish to remove" [39, p 213]. In California, eugenic practice took the form of a large sterilization program. The state government ran several hospitals for the care of those with mental illness or intellectual disabilities. During the years 1909-1979, over twenty thousand patients at those hospitals were sterilized [87]. The "operations were ordered at the discretion of the hospital superintendents," as authorized by California law [98]. About three-quarters of sterilizations in California in 1936 were performed by request or consent of the patient or guardian [31, Popenoe:18:01] [98].14

Footnote 14: Calculations of consent rates were based on signed consent forms. However, standards of written consent in the 1930s compare unfavorably with ethical and legal standards of informed medical consent today. In some cases, hospital release was contingent on consent to sterilization [88].
In an analysis of the reasons for lack of consent among Spanish-surnamed patients during the years 1935-1944, the most frequent reason was that "no consenter was..."

### 4.2 Muller

The second of the CNR committee's three authorities against eugenics was the geneticist Hermann Joseph Muller. "Muller was convinced that his eugenic ideas could only flourish in a socialistic state" [1, p 351]. Marxism and eugenics went hand in hand: Marxism to fix the environment and eugenics to fix heredity. Socialism claimed dominance over eugenics because revolution was nigh at hand. "In place of the economic conditions imposed by the class struggle, entirely new conditions will be substituted.... True eugenics can then first come into its own,... Thus it is up to us, if we want eugenics that functions, to work for it in the only way now practicable, by first turning our hand to help throw over the incubus of the old, outworn society" [66]. True to his beliefs, Muller, who was born in New York, established a lab in the Soviet Union. His timing could have hardly been worse. "It was not long before he realized that the conditions for the development of a Bolshevik eugenics were less promising than he had assumed" [73, p 579]. Muller's eugenic manifesto _Out of the Night_, which was published in 1936, did not please Joseph Stalin, and he fled the Soviet Union under inauspicious circumstances. According to the historian Garland E.
Allen, Muller's book _Out of the Night_ "occupies a significant place in the history of eugenic writing" [1]. Muller wrote "Thus we see that only the eugenics of the new society, freed of the traditions of caste, of slavery, of colonialism, can be a thoroughgoing and true eugenics" [68, p 150].15

Footnote 15: Magnus Hirschfeld held eugenic views similar to those of Muller and quoted this particular sentence in his 1938 book _Racism_ [38, p 174].

In time to come, the best thought of the race will necessarily be focused on the problems of evolution - not of the evolution gone by, but of the evolution still to come - and on the working out of genetic methods, eugenic ideals, yes, on the invention of new characteristics, organs, and biological systems that will work out to further the interests, the happiness, the glory of the god-like beings whose meagre foreshadowings we present ailing creatures are. - H. J. Muller, 1936 [68, p 156]

The geneticist Elof Carlson, who received his PhD under Muller's mentorship, discovered evidence that Muller had suggested in private that Stalinesque coercion might be used should voluntary programs fail. In Carlson's words, Muller envisioned that eugenic "controls might be imposed as a second step, just as, after the Soviet Revolution, the land was given to the peasants, but when agriculture remained as backward as under the czars, Stalin had to impose a strict control over the land, collectivizing the farms,..." [13, p 186]. Hogben, Muller, and Caltech geneticist Dobzhansky all signed the "Geneticists' Manifesto" at the 1939 International Congress of Genetics. An article about Ronald Fisher's involvement in eugenics states that this document was signed by 23 leading geneticists, including some with strongly left-wing political views like J.B.S. Haldane, H.J. Muller and Lancelot Hogben. It started with the question "How could the world's population be improved most effectively genetically?"
They went on to say that "the raising of the level of the average of the population nearly to that of the highest now existing in isolated individuals... would, as far as purely genetic considerations are concerned, be possible within a comparatively small number of generations". This goes far beyond the proposal of Fisher's,... and shows that eugenic ideas were widely held across the political spectrum at the time (see Paul 1984 [73] for further discussion). - Bodmer et al. 2021 [12]

The manifesto called for a process of "conscious selection" to replace natural selection in human evolution. "The most important genetic characteristics" for conscious selection should be health, intelligence, and prosocial behavior [19]. Some - including Muller, J.B.S. Haldane, and Julian Huxley - "continued to argue the case for eugenics into the 1960s...." [73, p 589]. Just months before his death in 1967, in his last public address, Muller proposed gene selection to enhance cooperative behavior among humans [1, p 352][67].

### 4.3 Morgan

The committee's final scientific witness against eugenics was Thomas H. Morgan. The CNR report states that "By this time [1937], many geneticists - including Nobel laureate and Caltech professor Thomas H. Morgan - had already denounced eugenics for its lack of scientific merit" [79, p 21]. "In 1925, in his book _Evolution and Genetics_, Morgan criticized eugenics for its interpretation of 'feeble-mindedness' and its insistence on the genetic basis for such characterological traits" [79, p 13]. Indeed, Morgan's book does contend that based on available evidence it would be "extravagant to pretend to claim that there is a single Mendelian factor for this condition" of feeble-mindedness [65, p 201]. He stated that until certain questions "are better understood it is impossible to know how far observed differences are innate and how far acquired" [65, p 200]. On display here is Morgan's modesty in the face of enduring questions about heritability.
He then expressed his opinions more fully.

...ders, however they may have arisen. In fact, this is attempted at present on a somewhat extensive scale by the segregation into asylums of the insane and feeble-minded. I should hesitate to recommend the incarceration of all the relatives if the character is suspected of being recessive, or of their children if a dominant.... How long and how extensively this casual isolation of adults would have to go on to produce any considerable decrease in defectives, no informed person would, I should think, be willing to state. - T. H. Morgan [65, p 206]

In summary, in 1925 Morgan tentatively believed that there were significant individual differences that "are probably strictly genetic." He did not condemn eugenic practices in the asylums. However, he hesitated to recommend extending incarceration to relatives. Morgan agreed to join the board of the Human Betterment Foundation (HBF), a eugenic research and advocacy group based in Pasadena, California.16

Footnote 16: For unexplained reasons, the committee chose to judge Morgan and Millikan by entirely different standards: one man deemed good and the other evil.

In a letter to board member William B. Munro on March 17, 1942, the founder Ezra S. Gosney wrote,

Dr. T. H. Morgan has promised me that he would serve. I was very glad to find that he approves our 8-page pamphlet, and in reply to my request for constructive criticism he said "I would not change a word in the pamphlet, it is all right."
He has manifested more interest in our work than I had expected him to show. - Gosney, 1942 [31, 01:04:26]

The eight-page pamphlet refers to _Human Sterilization Today_, authored by Gosney in 1937 [30]. Morgan was voted unanimously onto the board, but because of Gosney's death later that year, Morgan's participation was cut short. However, by approving the pamphlet, Morgan surpassed Millikan in eugenic piety. There is no evidence that Millikan ever read the pamphlet or endorsed it. Board members were free to disagree: board membership did not imply endorsement of the pamphlet.17

Footnote 17: According to Gosney, "Dr. Munro was against our first print [of _Human Sterilization Today_], Dr. Robert Freeman favored..." [31, 01:01:23-25].

### 4.4 Millikan

Above all, the eugenics movement was a _big tent_ that transcended national boundaries and political ideologies [10][86]. In 1938, Haldane felt that these questions "cut right across the usual political divisions" [35, p 9]. "Many eugenicists had very low regard for one [an]other and routinely disagreed with others in the community" [98, p 45]. These squabbles did not signal the end of the movement. Further evidence could be presented that the scientific community was not united against the feasibility of eugenic interventions in 1938. However, the committee's case has been amply refuted by its own witnesses, and it would be fruitless to continue. Where did Millikan fit into the eugenics movement? Millikan was a bit player. His name does not appear in the definitive histories of eugenics, and biographies of Millikan do not mention eugenics. Caltech emeritus professor Daniel J. Kevles, who is a leading authority on both Millikan and eugenics, did not mention Millikan in his history of eugenics [47]. Overwhelmingly, Millikan's most significant contribution to the biological sciences was his part in establishing Caltech's division of biology [45]. Caltech was his legacy. The following context is relevant.

1.
In 1938, Millikan became a board member of the Human Betterment Foundation.18 He did not attend the annual board meetings. Millikan's non-participation is documented in the form of signed proxy-vote slips that are in the Caltech archives for those years that the board met [31]. By the time Millikan joined, the foundation was dying. In its final stage, the foundation's only significant activities were the management of its real estate assets and the distribution of old pamphlets.19

Footnote 19: The foundation owned a six-story loft building in Los Angeles, business property in San Bernardino, and ranch property [31, Castle:08:14:60].

2. The foundation closed down after Gosney's death in 1942, and its assets were eventually donated to Caltech. In consultation with Gosney's daughter, it was Millikan who redirected the funds away from sterilization research [64].

3. Millikan and other public intellectuals of the 1930s sponsored an enormous number of civic organizations, events, and causes. The vast array of causes that Millikan endorsed has never been catalogued. His Nachlass, which is more than 125,000 pages, records many of them [60]. For Millikan, the HBF was a drop in the ocean. Millikan's permission to use his name was often no more than words to the effect "I am willing to agree to the use of my name as you suggest" [60, 45:433]. Sometimes these endorsements were recognized by a seat on the board of these organizations. These honorary positions do not generally mean that Millikan governed the organizations. Millikan viewed this system of patronage as wholesome, but it was imperfect at best. Millikan once complained that "executive committees over which the sponsors have no control make commitments and take actions which the sponsors might warmly oppose" [60, 41:183].
Privately, Millikan murmured that John Dewey and Albert Einstein endorsed causes they knew little about, yet Millikan was also guilty.20 Millikan grumbled, "One of my friends tells me that my name is being so used... as to let the public think that I am thoroughly familiar with its activities and am one of its sponsors, when the facts are that I have known nothing whatever about the management or the program... for something like twenty years...." "I cannot look up even a tenth part of the institutions seeking my sponsorship" [60, 42:359].

4. The CNR committee did not have access to Millikan's own statements on eugenics. As a result, the report was largely speculative. The committee relied on private words of praise (phrases such as "magnificent job") from Millikan to Gosney to infer that Millikan's beliefs were the same as Gosney's [79, p 16].21 However, it is rash to infer too much about the concordance of heartfelt beliefs when a university president praises a wealthy donor, especially when the letters were written in response to Gosney's "generous gift" and "philanthropic enterprises." After the CNR report was issued, two statements about eugenics in Millikan's own words have surfaced. These direct statements from Millikan supersede the committee's speculations.

Footnote 21: In a 1945 letter to Clarence Gamble, the heir of the Procter and Gamble fortune, Millikan referred to the establishment of the Human Betterment Foundation as "epoch-making" [60, 28:441].

5. In his first statement, as reported by the Los Angeles Times in 1925, Millikan spoke out in strong opposition to eugenics. He denounced the race degeneration theories of Albert Edward Wiggam and Lothrop Stoddard [100][89]. Millikan maintained, "We can't control the germ plasm but we can control education" and consequently, education was the "supreme problem" and a "great duty" [69].

6.
Millikan's second statement on eugenics appeared in a 1939 article _Science and the World of Tomorrow_, which contained his forecast of how science might change "life in America fifty or a hundred years hence" [61]. He readily admitted "the possibility that something so completely foreign to my thinking may happen as to make any prognosis that I may hazard now look ridiculous in the years to come,..." This article was written not long after he joined the Human Betterment Foundation (HBF). If we are looking for the message that Millikan might have delivered to the HBF had he ever attended a board meeting, it is here that we should look. He wrote,

I have no doubt that in the field of public health the control of disease, the cessation of the continuous reproduction of the unfit, etc., big advances will be made, but here I am not a competent witness, and I find on the whole those who are the most competent and informed the most conservative. - Millikan, 1939 [61]

There is no mention of specific methods such as sterilization. There is no suggestion of the use of coercion. He speaks in the future tense, "advances will be made" in the coming fifty or hundred years. He is cautious. He humbly professes his lack of expertise. That brief statement is all. No other statements by Millikan on eugenics have surfaced. Based on available evidence, it appears that Millikan was actually milder in his views than the committee's three witnesses against him. We might suppose that Caltech geneticists would have been foremost in Millikan's mind when he deferred to the conservative views of "those who are most competent and informed." Chief among them would be Morgan, who served alongside Millikan for many years on Caltech's executive council. Morgan's views are documented above. Overall, Caltech geneticists voiced caution.22

Footnote 22: Dobzhansky, who signed the 1939 manifesto, wrote in 1946 that a sterilization program may "take centuries or even millennia...
It is, perhaps, not too selfish to say that posterity should be allowed to tackle its own problems and to hope that it may have better means for doing so than we have" [22]. In 1965, Alfred Sturtevant contended that eugenics was fraught with uncertainties and difficulties [90, pp 130-132] [52].

7. Millikan was not opposed to birth control.23 In a private letter, Millikan told the birth control proponent Margaret Sanger that "I approve of your movement," but he declined to make a public endorsement [60, 45:851]. Like Millikan, Sanger has been accused of having eugenic sympathies.

## 5 Conclusions

Millikan was one of the greatest experimental physicists in the world during the early decades of the twentieth century. He made fundamental contributions to the isolation and measurement of the electron charge, the experimental verification of Einstein's photoelectric equation, the measurement of Avogadro's number and Planck's constant, the study of spectral lines of ionized atoms, and the understanding of cosmic rays. He spurred the growth of American science through his many contacts with Bell Labs, through the almost countless number of graduate students he supervised, and through his best-selling physics textbook. At the head of Caltech during the first 24 years of its existence, Millikan oversaw its rapid maturation into one of the outstanding technical institutions of the world. The CNR report's review of Millikan's scientific achievements was deficient, and this article has aimed to remedy that deficiency. About Millikan's famous oil drop experiment, the Caltech report seems to have no institutional memory. The report quoted the first sentence from an _Encyclopedia Britannica_ article about the experiment, not even making it to the second sentence.
Concerning the photoelectric effect, the Caltech report exhibited a similar lack of institutional memory, by going no further than the one-sentence banner statement from NobelPrize.org. In brief, Millikan's greatest scientific achievements were compressed to two stock sentences. By contrast, the CNR committee members cared abundantly for their own reputations by including seven pages of their bios in the report. Throughout World War II, Millikan advocated for the rights of Japanese Americans. At a time when public opinion had turned overwhelmingly against Japanese Americans, Greta and Robert Millikan were both moved by compassion toward their Japanese friends and acquaintances. When she received a letter from her friend Elizabeth Ozawa, who was in a detention camp at Tulare, Greta wrote in her diary that "things are far from right - we must keep alert and busy on our Fair Play Committee" [60, 80:p 132]. Robert Millikan wrote to Hori that "your letter touched me deeply" and to Kido that he had been doing all he could to assist "the many who are likely to suffer unjustly in this difficult situation." For Millikan, who habitually executed his plans through a broad network of committees, it was natural to reach out to a committee that shared his views on Japanese-American rights. He became a vice president of Fair Play, which worked tirelessly to change public opinion on Japanese-American issues and had a significant impact on public policy. This article has made an extended commentary on Platt's ideas because of his ultimate influence on the decision to strip Millikan of honors. His fingerprints are everywhere. Platt was cited five times in Michael Hiltzik's _Los Angeles Times_ article, "Caltech faces reckoning," which made the petition against Millikan known to a large public [37]. 
The _LA Times_ published Platt's false accusation that a quota existed at Caltech during Millikan's tenure "allowing for the appointment of only one Jewish full-time faculty member per year." More than two full pages (out of eight total pages) of the archivist's report to the CNR committee were direct quotations from Platt's book [15]. Michael Chwe's presentation to the CNR committee quoted Platt's book four times [14]. Like Platt, Chwe imagined far-fetched associations between Millikan and Nazi Germany. Platt's accusations that "California's elite" had "fascist sympathies" set the tone of moral condemnation. Through an escalation of rhetoric, Millikan's thoughts and deeds came to be seen as "reprehensible" (Thomas Rosenbaum), "horrible" (Daniel Mukasa, Caltech President of BSEC), and "crimes against humanity" (Michael Chwe). The historian Kirt von Daacke has called for "us" collectively to atone for this history. Regarding the history of sterilization in California, the science historian Alex Wellerstein has stated plainly, "We do not find Nazis in California mental health institutions" [98]. Platt's book _Bloodlines_ is an unreliable source for Millikan scholarship. It is false that Millikan was ideologically aligned with the Nazi Nuremberg Laws. The alleged association between Millikan and Nazi Germany is abundantly refuted by Millikan's own words as found in many speeches, publications, and radio addresses. On Japanese policy, Platt omitted key parts of Fair Play's recommendation of dispersed relocation, which fully recognized the right of Japanese Americans to resettle after the war wherever in the United States they pleased. He falsely described the physicist Waterman's recruitment of Japanese scientists for postwar expeditions to Japan, making Millikan seem to be a military informant against Japanese Americans. All in all, Platt is more a storyteller than a careful historian. 
_Bloodlines_ became the only source in Caltech's CNR report on the issue of Japanese internment. The report selected the worst tidbits from _Bloodlines_ for verbatim inclusion. Remarkably, the committee selected the worst parts from the worst secondary source, without consulting any primary sources on Japanese internment. Here, "worst" is meant both in the sense of historically unreliable and in the sense of hostile toward Millikan. The report did not mention Millikan's significant participation in Fair Play. Finding itself at the final stage of a game of telephone, _Nature_ falsely accused Millikan of colluding in a military roundup of Japanese for imprisonment at the beginning of the war. Not only is this accusation false, it is a complete reversal of historical fact; Millikan was one of a small minority who actively promoted Japanese-American rights during the war. Millikan received the Kansha Award, which recognizes "individuals who aided Japanese Americans during World War II" [72]. According to Rosenbaum, Caltech's most intense concern was Millikan's association with a eugenics movement that had already been discredited. The CNR committee report made similar accusations against Millikan; he "failed to perform the due diligence" and was "derelict in this duty" to ensure that the HBF had its "science right" [79]. Alas, it was the CNR committee that failed to perform due diligence. This article examined the three authorities that the CNR committee cited to prove that the eugenics movement had been discredited scientifically by 1938. In a dramatic denouement, it turns out that all three of the CNR committee's authorities had various degrees of eugenic involvement at that time. 
Two of them signed the pro-eugenic Geneticists' Manifesto in 1939, which proclaimed "the truth that both environment and heredity constitute dominating and inescapable complementary factors in human wellbeing, but factors both of which are under the potential control of man and admit of unlimited and interdependent progress" [19]. In the committee's portrayal, "the hereditary nature of human behavior and character had fallen into disrepute within various quarters..." by 1937 [79]. Millikan, who believed that education could be controlled but not the germ plasm, did not hold strong hereditarian opinions. The CNR committee was quick to condemn the hereditarians of that era, but it would be rash to pass sentence, without grounding that judgment in recent science. The statistical concept of heritability provides a scale that quantifies environmental and genetic influence. Evidence in support of rather high heritability of intelligence and various human traits is presented in [76][95][34]. Richard J. Haier, who pioneered the use of neuroimaging in intelligence research, has written, "Although the full role of genes is not yet known, the evidence for major genetic involvement in intelligence is overwhelming" [34]. To be clear, nobody proposes a return to the California sterilization practices of the 1930s. According to Wellerstein, sterilization rates in California declined sharply in the early 1950s. The practice died not with a bang but a whimper. "No one took credit for killing the practice, and no one at the time appears to have noticed that it had ended." "The horror we attach to sterilizations today, and to eugenics in general, did not become widespread until the 1970s, with the rise of interest in patient autonomy, women's rights," among other reasons [98, pp 49-51]. During earlier decades, the largest institutional force in moral opposition to eugenics had been the Roman Catholic Church [23]. 
The moral outrage at Caltech grew out of the Black Lives Matter movement and was directed toward the cause of "dismantling Caltech's legacy of white supremacy" [81][79]. The landscape has changed in other irreversible ways: birth control and genetic engineering have advanced far beyond the capabilities of the sterilization era [59]. The committee relied on fabricated quotations from its authorities to buttress its case. To be clear, the committee was not the original source of the fabricated quotations it used. However, the committee failed to detect that the false words ring false. The committee, which was free to present evidence of whatever form it pleased, chose to portray Millikan in the worst possible light. It is telling that the evidence unravels so sensationally. Rosenbaum stated that the committee reached its conclusions "by close reading of primary sources" [80]. Rosenbaum's statement is not credible. We end this article with a postscript on merit and diversity. This article has been written with a focus on Millikan. However, the larger message of the CNR report is diversity. The word _diversity_ and its inflections appear 78 times in the 77 page CNR report. The word appears as many as nine times per page. The report, which is hosted by the Caltech diversity website, makes repeated reminders that Caltech has an "ongoing effort to forge a diverse and inclusive community." The Pulitzer Prize winning journalist Daniel Golden has written on admission practices at elite American universities. Caltech was unique among the most elite. Not long ago, Caltech boasted that on matters of admission, it made "no concessions to wealth, and it won't sacrifice merit for diversity's sake" [28, p 278]. David Baltimore, who was the president of Caltech and a member of the CNR committee, assured Golden that "Caltech would never compromise its standards. 'People should be judged not by their parentage and wealth but by their skills and ability,... 
Any school that I'm associated with, I want to be a meritocracy'" [28, p 284]. Never say never. The era of uncompromising standards at Caltech has come to an end. The _Los Angeles Times_ reported on August 31, 2023 that Caltech is making historic changes to its admission standards. "In a groundbreaking step, the campus announced Thursday that it will drop admission requirements for calculus, physics, and chemistry courses for students who don't have access to them and offer alternative paths...." "Data... showed a significant racial gap in access to those classes." Caltech's executive director of undergraduate admissions explained the new policy in these terms: "'I think that we're really in a time where institutions have to decide if everything that they've been saying about diversity and inclusion is true,' she said, noting that the challenge is especially acute now that the U.S. Supreme Court has banned affirmative action. 'Is this something fundamental about who we are as an institution... or is this something that was just really nice window dressing'" [96]. The action against Millikan has been one campaign within a much larger political movement. Millikan himself had this to say about those who engage in mean-spirited attacks against America's finest [60, 70:535]:

To attempt to spread poison over the United States with respect to the characters and motives of the finest, ablest and most public spirited men whom America has recently produced is resorting to a method which, it seems to me, all men of honesty and refinement can only abhor and detest. - Millikan

To be sure, Caltech has stirred up a hornets' nest. ### Acknowledgments The author gives special thanks to J. Goodstein, P. Collopy, D. Kevles, M. Johnston, B. Palais, R. Warne, B. Charlesworth and other members of the Fisher Memorial Trust, and the Caltech archives. The author bears sole responsibility for content.
2309.14307
A post-selection algorithm for improving dynamic ensemble selection methods
Dynamic Ensemble Selection (DES) is a Multiple Classifier Systems (MCS) approach that aims to select an ensemble for each query sample during the selection phase. Even with the proposal of several DES approaches, no particular DES technique is the best choice for different problems. Thus, we hypothesize that selecting the best DES approach per query instance can lead to better accuracy. To evaluate this idea, we introduce the Post-Selection Dynamic Ensemble Selection (PS-DES) approach, a post-selection scheme that evaluates ensembles selected by several DES techniques using different metrics. Experimental results show that using accuracy as a metric to select the ensembles, PS-DES performs better than individual DES techniques. PS-DES source code is available in a GitHub repository
Paulo R. G. Cordeiro, George D. C. Cavalcanti, Rafael M. O. Cruz
2023-09-25T17:25:39Z
http://arxiv.org/abs/2309.14307v2
# A post-selection algorithm for improving dynamic ensemble selection methods ###### Abstract Dynamic Ensemble Selection (DES) is a Multiple Classifier Systems (MCS) approach that aims to select an ensemble for each query sample during the selection phase. Even with the proposal of several DES approaches, no particular DES technique is the best choice for different problems. Thus, we hypothesize that selecting the best DES approach per query instance can lead to better accuracy. To evaluate this idea, we introduce the Post-Selection Dynamic Ensemble Selection (PS-DES) approach, a post-selection scheme that evaluates ensembles selected by several DES techniques using different metrics. Experimental results show that using accuracy as a metric to select the ensembles, PS-DES performs better than individual DES techniques. PS-DES source code is available in a GitHub repository1. Footnote 1: Paulo R.G. Cordeiro is with Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil ([email protected])\({}^{2}\)George D.C. Cavalcanti is with Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil ([email protected])\({}^{3}\)Rafael M.O. Cruz is with Departement de genie logiciel et des TI, Ecole de Technologie Superieure, Montreal, Canada ([email protected]) ## I Introduction Multiple Classifier Systems (MCS) are often used to improve the accuracy and reliability of machine learning models [1]. MCS has three main phases: generation, selection, and combination. In the generation phase, a pool of classifiers is created by training base classifiers with techniques such as Bagging [2], different models, and variations of learning algorithms [3]. The selection phase selects the most competent classifiers in the pool to predict a given query sample. Two main approaches for selecting classifiers are static selection [4] and dynamic selection (DS) [1]. 
In the static approach, the selection is performed during training and works on the classifiers' overall performance on a validation set. In contrast, the DS approach selects the classifiers on the fly based on their competence in predicting a specific query sample. When only one classifier is selected, it is called Dynamic Classifier Selection (DCS), and when more than one classifier is selected, it is called Dynamic Ensemble Selection (DES). Examples of DES techniques include META-DES [5], Dynamic Ensemble Selection Performance (DES-P) [6], and K-Nearest Oracles Union (KNORA-U) [4]. The last phase of an MCS is combination, also called integration. In this phase, the output of all the classifiers selected is combined to produce a final prediction. The combination or integration of the predictions can be done in various ways, including voting and weighting [7]. Currently, research efforts in DES are focused on proposing new methods to improve phases, such as generation [8], selection [9], and combination [10]. Additionally, there have been attempts to apply DES to other areas of knowledge [11]. Despite the progress that has been made, no DES technique is suitable for all problems. This is in line with the statistical rationale for MCS [12], which suggests that combining multiple classifiers increases the likelihood of finding the optimal result for any given problem. However, to the authors' knowledge, the field still lacks techniques that work on evaluating the ensembles selected by DES methods and explores the advantages of pre-selected ensembles to obtain better performance. Aiming to address this gap in DES' research field, we pose the following research question: "How to analyze ensembles selected by different DES techniques and choose the one having the highest correct prediction potential?" To investigate this question, we propose the Post-Selection Dynamic Ensemble Selection (PS-DES) approach. 
PS-DES is based on the assumption that different selection criteria may lead to different selected ensembles, and the best criteria used to select an ensemble may differ on an instance-to-instance basis. PS-DES aims to analyze and choose the best ensemble from a set of ensembles generated by various DES techniques to obtain more reliable predictions. Therefore, our proposal works as a post-selection scheme, i.e., it performs after the selection phase of different DES methods and before the combination phase. Moreover, the best ensemble is selected based on a new concept of ensemble potential proposed in this work. In contrast to the selection criteria employed in many DES methods such as META-DES [5] that work by estimating the quality or competence of each model, the proposed ensemble potential evaluates whether the final selected ensemble of classifiers is reliable. We propose three approaches based on classical performance estimation metrics for measuring the ensemble potential: Accuracy, F-score, and Matthews Correlation Coefficient. Experiments over 20 classification datasets and considering three different performance evaluation metrics demonstrate that the post-selection scheme based on the ensemble potential leads to systematic improvement in classification performance over state-of-the-art DES methods. Thus, the evaluation of the pre-selected ensemble capabilities should not be neglected. The rest of the paper is organized as follows: Section II shows a literature review on DES. Section III presents our proposal. Section IV shows the experimental setup. The results are discussed in Section V, and Section VI presents the conclusions. ## II Literature review Typically, the development of a DES involves three stages. Firstly, in the generation phase, a pool of classifiers is generated. Secondly, in the selection phase, a subset of classifiers (ensemble) is chosen from the pool created in the generation phase. 
Finally, the classifiers in the ensemble are combined to classify a given query sample in the combination, or integration, phase. During DES' generation phase, a pool of classifiers is created, denoted as \(P=\{C_{1},C_{2},\ldots,C_{M}\}\), where \(M\) represents the number of classifiers in the pool. The classifiers in the pool must exhibit both diversity and accuracy. Diversity [12] refers to the property that the classifiers should not make the same prediction mistakes, as this is crucial to cover the feature space adequately. Several approaches can be used to generate a pool. These approaches include using different distributions of the training set, such as Bagging [2], using different parameters for the same base classifier (e.g., variations in the number of neighbors in a k-Nearest Neighbors algorithm), or using different base classifiers altogether, which are called heterogeneous ensembles [3]. Heterogeneous ensembles tend to be more diverse than homogeneous ones due to their different mathematical formulations, which typically result in different classification results [13]. The second phase of developing a DES is selection, which aims to choose a subset of classifiers (\(P^{\prime}\subseteq P\)), also known as an ensemble. There are two approaches to selection, namely static and dynamic [1]. A fixed subset of classifiers is chosen for all test samples in the static approach. In contrast, the dynamic approach, called Dynamic Ensemble Selection (DES), involves selecting a subset of the pool for each query sample \(\mathbf{x}_{q}\). In dynamic selection, classifiers are chosen based on some criteria, given the pool created in the previous phase. Among the criteria found in the literature are the Oracle approach, as seen in KNORA-E, KNORA-U [4], and K-Nearest Output Profile (KNOP) [14], accuracy-based methods, such as DES Performance (DES-P) [6], and meta-learning, as in the case of META-DES [5]. 
These criteria are typically computed from the Region of Competence (RoC), a local region for a query sample (\(\mathbf{x}_{q}\)), denoted as \(\theta_{\mathbf{x}_{q}}\), which is a fundamental concept in dynamic selection approaches. The RoC is usually obtained by applying k-NN or clustering methods to a validation set (DSEL) or the training set itself, such that \(\theta_{\mathbf{x}_{q}}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{k}\}\), where \(k\) is the size of the RoC. The final phase of a DES is integration, also called aggregation or combination, which involves combining the classifiers selected in the selection phase when multiple classifiers are chosen. Techniques used in this phase include majority vote, product rule, and sum rule [12]. It is worth noting that research papers related to DES do not focus on assessing ensembles that DES techniques have already generated. Elmi and Eftekhari [9] presented a solution by utilizing the selection phase of DES approaches. However, their proposed approach only allowed for layer-by-layer ensemble selection and did not allow the evaluation of the collective output of all ensembles generated by DES methods. ## III Post-Selection Dynamic Ensemble Selection The proposed Post-Selection Dynamic Ensemble Selection (PS-DES) is based on the notion of potential, the capability of an ensemble selected by a given DS technique to make a correct prediction. Consequently, it works as a post-processing scheme for ensembles chosen according to different criteria (e.g., meta-learning, Oracle, accuracy). In addition, this proposal aims to evaluate the quality or potential of a selected ensemble, which contrasts with the current DES methods that build an ensemble by selecting multiple competent classifiers individually without trying to characterize the selected dynamic ensemble. This approach consists of three phases: **(1)** pool generation and setup, **(2)** post-selection, and **(3)** combination. 
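The Region of Competence lookup described above amounts to a plain k-nearest-neighbors query against the validation set; the following stdlib-only sketch is illustrative (the helper name and toy data are ours, not from the paper's implementation):

```python
import math

def region_of_competence(x_q, dsel, k):
    """Indices of the k samples in dsel closest to the query x_q
    (Euclidean distance), i.e. theta_{x_q} with k neighbors."""
    by_distance = sorted(range(len(dsel)),
                         key=lambda i: math.dist(x_q, dsel[i]))
    return by_distance[:k]

# toy validation set DSEL and a query at the origin
dsel = [(0.0, 0.0), (1.0, 1.0), (0.1, 0.0), (5.0, 5.0), (0.0, 0.2)]
roc = region_of_competence((0.0, 0.0), dsel, k=3)  # -> [0, 2, 4]
```

In PS-DES, all DES techniques share one such RoC computation per query, which is what keeps the overhead of running several selection criteria low.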
### _Phase 1: Pool generation and DES' setup_ The initial stage of PS-DES involves generating a pool of classifiers and configuring the DES techniques. First, Bagging generates multiple bootstraps (\(T^{b}\)) from the original dataset (\(T\)), where \(b\) is the number of bootstraps. Then, a pool of \(b\times m\) classifiers, denoted as \(P=\{C_{1}^{1},C_{1}^{2},\ldots,C_{m}^{b}\}\), is constructed by training each of the \(m\) classifiers (\(C_{1},\ldots,C_{m}\)) on each of the \(b\) bootstraps generated by Bagging. Finally, any DES techniques specified by the user, including META-DES, KNORA-U, and DES-P, are initialized using the same pool \(P\) as input. These techniques can be used to select an intermediate ensemble later validated by our post-processing scheme to obtain the optimal one. All DES approaches are consolidated into the set \(DES_{set}=\{des_{1},des_{2},\ldots,des_{n}\}\). ### _Phase 2: Selection_ This phase seeks to identify the optimal ensemble from a set of ensembles generated by different DES techniques, and Algorithm 1 shows its pseudo-code. Given a query sample (\(\mathbf{x}_{q}\)), the validation dataset \(DSEL\), and a set of DES techniques (\(DES_{set}\)), this phase involves selecting several dynamic ensembles \(P^{\prime}\), each one created using a different DES method, and assessing their effectiveness in order to determine which ensemble is most likely to perform well for the given \(\mathbf{x}_{q}\). This phase begins by computing the Region of Competence (RoC) (\(\theta_{\mathbf{x}_{q}}\)) for the query sample \(\mathbf{x}_{q}\) using the k-NN algorithm. It is essential to note that all techniques in \(DES_{set}\) utilize the same RoC (\(\theta_{\mathbf{x}_{q}}\)) based on the k-NN, thereby reducing the computational burden of implementing multiple selection criteria. 
Then, for each technique in \(DES_{set}\), a set of classifiers is selected according to its competence estimation and selection criterion, forming the ensemble \(P^{\prime}\) (lines 5 and 6). Subsequently, the potential of the generated ensemble \(P^{\prime}\) is evaluated (line 7). As the class label of \(\mathbf{x}_{q}\) is unknown, the potential assumes that the output class of \(\mathbf{x}_{q}\) corresponds to the majority vote of the ensemble. Consequently, it computes the potential of this ensemble by assessing the proportion of methods in it that contribute to this decision. For instance, given a binary classification problem and an ensemble with seven base classifiers, \(P^{\prime}=\{C_{1},\ldots,C_{7}\}\) selected by a given DES technique, and let \(y_{P^{\prime}}=\{0,1,0,0,1,1,1\}\) be the predictions of the base classifier for the given \(\mathbf{x}_{q}\). The majority vote would give the class \(1\) as the answer. The potential is then estimated based on a classical performance metric comparing the ensemble majority vote and the votes of each classifier using a performance metric. If accuracy is used to calculate \(P^{\prime}\) potential, the value would be \(pot_{des}=0.57\). If the F-score is chosen as the potential metric, the \(P^{\prime}\) potential is \(pot_{des}=0.73\). After evaluating the potential of all possible ensembles, the one that obtained the highest value, \(P^{sel}\), is returned as the selected one for the combination step. ### _Phase 3: Combination_ Once the \(P^{sel}\) selection is complete, Phase 3 begins, which is responsible for combining the classifiers in \(P^{sel}\), using techniques such as majority vote or sum rule. ## IV Experimental setup **Datasets.** The experiments were conducted using 20 datasets from the UCI Machine Learning Repository [15], which vary in sample size, dimensions, number of classes, and Imbalance Ratio (IR) (Table I). 
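The potential computation and the post-selection step of Algorithm 1 described in Section III can be sketched end-to-end. This is one plausible stdlib-only reading (the DES names below are placeholders, and the F-score interpretation treats the majority label as the positive "truth"); it is chosen so that it reproduces the worked seven-classifier example, \(pot_{des}=0.57\) for accuracy and \(0.73\) for F-score:

```python
from collections import Counter

def potential(votes, metric="accuracy"):
    """Potential of a selected ensemble: each base classifier's vote is
    compared against the ensemble's own majority vote for the query."""
    majority = Counter(votes).most_common(1)[0][0]
    agree = sum(v == majority for v in votes)
    if metric == "accuracy":
        return agree / len(votes)
    if metric == "fscore":
        # Majority label taken as the positive class: precision is 1
        # (no vote equal to the majority is "wrong"), recall is the
        # agreement fraction, so F1 = 2r / (1 + r).
        recall = agree / len(votes)
        return 2 * recall / (1 + recall)
    raise ValueError(metric)

# ensembles pre-selected by two hypothetical DES techniques for one query
candidate_votes = {"des_a": [0, 1, 0, 0, 1, 1, 1],
                   "des_b": [1, 1, 1, 0, 1, 1, 1]}
best = max(candidate_votes, key=lambda d: potential(candidate_votes[d]))
```

The combination phase then simply takes the majority vote of the winning ensemble (here `des_b`, whose classifiers agree more strongly with their own majority).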
Each dataset \(T\) is split into three parts: training (50%), \(DSEL\) (25%), and testing (25%). This split is stratified, meaning that the proportions of the classes between the three datasets are maintained. For each dataset, we run 30 replications, changing the distribution of the sets (holdout) to obtain the average values for the evaluated metrics. The data is scaled using the Standard Scaler (also known as Z-score normalization [16]). **Phase 1.** First, for Bagging, 100 bootstraps were used for all experiments, consistent with previous studies [1, 4, 5]. For the pool generation, three base classifiers (Perceptron, Logistic Regression, and Naive Bayes) were selected for the experiments. As they have different mathematical foundations and low computational costs [1, 10, 17], they are suitable for building a diverse and lightweight pool of classifiers. Thus, the classifier pool (\(P\)) consisted of 300 classifiers (3 base classifiers \(\times\) 100 bootstraps). Since the focus of the research was not on optimizing each base model's hyperparameters, the default hyperparameter values from scikit-learn were used. Four DES approaches (KNORA-U, KNOP, DES-P, and META-DES) were chosen due to their application of various selection criteria (e.g., Oracle, accuracy, meta-learning). These approaches showed superior performance in a recent empirical study [1]. We applied these DES methods' default hyperparameter configurations from the DESlib 0.3 library [18] to guarantee experiment consistency. Moreover, the same pool of classifiers was utilized to fairly compare all DES techniques. **Phase 2.** The Region of Competence (RoC) was calculated by applying k-Nearest Neighbors (k-NN) with k = 7, as suggested in [1]. To assess the performance of the ensembles, we employed a range of evaluation metrics, including accuracy, F-score, and Matthews Correlation Coefficient (MCC). 
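Of the three metrics just listed, MCC is the least standard; for reference, it can be computed from binary confusion-matrix counts as follows (textbook formula, not the authors' code):

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient from binary confusion-matrix
    counts; ranges from -1 (total disagreement) to +1 (perfect)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike accuracy, MCC stays near zero for a classifier that simply favors the majority class of an imbalanced dataset, which is why it is considered here alongside the F-score.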
Accuracy is a popular metric for DES techniques, although it may not be suitable for imbalanced datasets (i.e., high IR). Meanwhile, F-score and MCC are more suitable for such datasets. F-score is advantageous in scenarios where both recall and precision matter, since these two metrics are used in its calculation. The MCC considers false-negative rates in its formulation, which can be of interest to specific problems. The PS-DES variants are labeled according to the metric used to calculate the potential: accuracy (PS-DES-acc), F-score (PS-DES-F), and Matthews Correlation Coefficient (PS-DES-MCC). To assess whether the proposed metrics perform better than random selection, we also conducted an experiment that randomly selected the best ensemble (PS-DES-Random). **Phase 3.** Finally, majority voting was used as a combination approach since individual DES techniques usually apply it [1]. ## V Results and discussions The proposed method is evaluated based on three metrics: accuracy (Table II), F-score (Table III), and MCC (Table IV). Upon examining the tables, our results indicate that PS-DES-acc outperforms all the other approaches in all metrics. The PS-DES-acc obtained the best rank considering all performance metrics, followed by the variant using the F-score metric for computing the ensemble's potential. These results are interesting since, even though the final proposal may be evaluated regarding a different performance metric (e.g., F-score or MCC), using accuracy as the metric for computing the ensemble potential is more advantageous. Analogously, MCC obtained the lowest ranking among all PS-DES variants even in the scenario where MCC is used as a performance evaluation metric to compute the overall method performance. This result indicates no relation between the metrics selected for calculating ensemble potential and the same metric applied to evaluate the approaches. 
For accuracy, F-score, and MCC, the chosen metric for calculating the potential does not interfere with the approach's evaluation. Nevertheless, according to these tables, the average ranking of PS-DES approaches is systematically better when compared to individual DS techniques (e.g., META-DES). Thus, the proposed post-processing selection scheme indeed leads to more robust dynamic ensemble selection systems. However, to see if such a difference in performance is significant, we need to go further and perform a more fine-grained analysis by comparing pairs of techniques over multiple datasets. Hence, we also conducted an analysis based on the number of wins, ties, and losses (w/t/l) obtained by a control technique and the Wilcoxon signed rank test with a confidence level of 95%. Results of these pairwise comparisons are presented in Tables V, VI, and VII for the PS-DES-acc, PS-DES-F and PS-DES-MCC methods, respectively. The pairwise statistical analysis of PS-DES-acc shows it outperforms KNORA-U, KNOP, META-DES, Random, and PS-DES-MCC regarding accuracy (Table V). No significant difference is observed between PS-DES-acc and DES-P or PS-DES-F. However, considering the presence of datasets with \(IR>1\), it is necessary to consider F-score and MCC. The F-score analysis reveals that PS-DES-acc outperforms all DES individual techniques and Random, with no significant difference to PS-DES-F and PS-DES-MCC. For MCC, PS-DES-acc performs exceptionally well and obtains significantly better results compared to all techniques apart from DES-P and PS-DES-F. Ultimately, this variant based on accuracy for computing the ensemble potential obtained more victories than all other models, regardless of the performance metric used. The statistical analysis of PS-DES-F (Table VI) indicates that it performs better than KNORA-U, KNOP, and Random on all three metrics. However, no statistical difference is found for MCC when compared with DES-P. 
However, the win-tie-loss analysis demonstrates that the PS-DES-F systematically obtained more wins against the state-of-the-art DES techniques and the random selection scheme (between 13 to 15 wins over the 20 datasets). In contrast, the analysis of PS-DES-MCC (Table VII) presents the worst results compared to PS-DES-acc and PS-DES-F. Based on Wilcoxon's test analysis, PS-DES-MCC scores better than KNORA-U and KNOP only in F-score. For all other metrics and comparisons, the hypothesis that PS-DES-MCC scores better cannot be confirmed. In summary, the results indicate that PS-DES-acc and PS-DES-F yield comparable outcomes. Still, PS-DES-acc holds a slight advantage over its competitor, particularly when it is compared against the state-of-the-art DES methods. ## VI Conclusion This work proposed a new Dynamic Ensemble Selection (DES) method: Post-Selection Dynamic Ensemble Selection (PS-DES). This method is based on the idea that the optimal criteria for ensemble selection may differ at the instance level, leading to ensembles with different qualities or "potentials". To this end, the approach evaluates the potential of ensembles chosen by various DES techniques to determine which is more suitable for labeling a given instance. Experiments demonstrate no direct correlation between the metrics applied for calculating the ensemble potential and for evaluating the approaches, as the PS-DES-acc was found to achieve the best overall results in all cases. Additionally, PS-DES was consistently superior to the existing state-of-the-art DES techniques, which implies that evaluating the selected ensembles as a collective is more important than assessing and choosing each base classifier separately, as is the trend in most DES methods. Thus, post-processing approaches in DES are vital, and future work will explore new metrics for measuring the ensemble's potential. 
## Acknowledgment The authors would like to thank the Instituto Federal de Pernambuco, Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), Fundacao de Amparo a Ciencia e Tecnologia de Pernambuco (FACEPE), and the Natural Sciences and Engineering Research Council of Canada (NSERC).
2309.12776
Electric control of optically-induced magnetization dynamics in a van der Waals ferromagnetic semiconductor
Electric control of magnetization dynamics in two-dimensional (2D) magnetic materials is an essential step for the development of novel spintronic nanodevices. Electrostatic gating has been shown to greatly affect the static magnetic properties of some van der Waals magnets, but the control over their magnetization dynamics is still largely unexplored. Here we show that the optically-induced magnetization dynamics in the van der Waals ferromagnet Cr$_2$Ge$_2$Te$_6$ can be effectively controlled by electrostatic gates, with a one order of magnitude change in the precession amplitude and over 10% change in the internal effective field. In contrast to the purely thermally-induced mechanisms previously reported for 2D magnets, we find that coherent opto-magnetic phenomena play a major role in the excitation of magnetization dynamics in Cr$_2$Ge$_2$Te$_6$. Our work sets the first steps towards electric control over the magnetization dynamics in 2D ferromagnetic semiconductors, demonstrating their potential for applications in ultrafast opto-magnonic devices.
Freddie Hendriks, Rafael R. Rojas-Lopez, Bert Koopmans, Marcos H. D. Guimaraes
2023-09-22T10:39:07Z
http://arxiv.org/abs/2309.12776v1
Electric control of optically-induced magnetization dynamics in a van der Waals ferromagnetic semiconductor ###### Abstract Electric control of magnetization dynamics in two-dimensional (2D) magnetic materials is an essential step for the development of novel spintronic nanodevices. Electrostatic gating has been shown to greatly affect the static magnetic properties of some van der Waals magnets, but the control over their magnetization dynamics is still largely unexplored. Here we show that the optically-induced magnetization dynamics in the van der Waals ferromagnet Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) can be effectively controlled by electrostatic gates, with a one order of magnitude change in the precession amplitude and over 10% change in the internal effective field. In contrast to the purely thermally-induced mechanisms previously reported for 2D magnets, we find that coherent opto-magnetic phenomena play a major role in the excitation of magnetization dynamics in Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\). Our work sets the first steps towards electric control over the magnetization dynamics in 2D ferromagnetic semiconductors, demonstrating their potential for applications in ultrafast opto-magnonic devices. magnetization dynamics, two-dimensional magnets, electric control, magneto-optics, van der Waals materials + Footnote †: preprint: AIP/123-QED ## I Introduction Ever since the experimental confirmation of magnetism in two-dimensional (2D) van der Waals (vdW) materials [1; 2], researchers have tried to understand their fundamentals and to utilise their unique properties for new technologies, such as novel spintronic devices for information storage and processing [3; 4; 5; 6]. The use of magnetization dynamics is particularly interesting since it provides an energy efficient route to transfer and process information [7; 8; 9; 10; 11]. 
A key challenge in this field, named magnonics, is the effective control over the magnetization and its dynamics using electrostatic means, allowing for energy efficient, on-chip, reconfigurable magnonic circuit elements [12; 13; 14]. For conventional (three-dimensional) systems this control has been shown to be very promising to reduce the energy barriers for writing magnetic bits using spin-orbit torques [15; 16]. Nonetheless, the effect is still relatively modest [17; 18; 19; 20]. In contrast, 2D magnetic semiconductors provide an ideal platform for electric manipulation of magnetization. Their low density of states and high surface-to-volume ratio allow for an effective control over the magnetic parameters in these systems, such as the magnetic anisotropy and saturation magnetization [21; 22; 23; 24; 25]. Additionally, 2D magnetic semiconductors offer a bridge to another exciting field: the combination of optics and magnetism. These materials have been shown to possess strong light-matter interaction and high magneto-optic coefficients, whose strength can be further tuned by the use of vdW heterostructures [26; 27; 28; 29; 30; 31]. These properties make 2D magnetic semiconductors ideal for the merger of two emerging fields: magnonics and photonics. Most works on the electric control of magnetization in vdW magnets have focused on their magneto-static properties, such as the magnetic anisotropy, saturation magnetization and Curie temperature, in both metallic [32; 33; 34; 24; 25; 32; 35] and semiconducting [21; 22; 23; 24; 25] materials. In contrast, their magnetization dynamics have only recently started to receive more attention, and have been studied using microwave driven magnetic resonance [36; 37; 38; 39; 40; 41; 42] or time-dependent magneto-optic techniques [43; 44; 45; 46; 47; 48; 49; 50; 51]. The latter were used on antiferromagnetic bilayer CrI\({}_{3}\) to show that its magnetic resonance frequency can be electrically tuned by tens of GHz [52]. 
Nonetheless, the electric control over the optical excitation of magnetization and its subsequent dynamics in 2D ferromagnets remains to be explored. Here we show that the magnetization dynamics of the vdW semiconductor Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) (CGT) can be efficiently controlled by electrostatic gating. Using ultrafast (fs) laser pulses we bring the magnetization out of equilibrium and study its dynamics with high temporal resolution through the magneto-optic Faraday effect. Using both top and bottom electrostatic gates, we independently control the gate-induced change in the charge carrier density (\(\Delta n\)) and the electric displacement field (\(\Delta D\)) in the CGT, and show that both have drastic effects on the optically-induced oscillation amplitudes and a more modest effect on its frequency. Finally, we observe a strong asymmetric behavior of the magnetization oscillation amplitudes with respect to a reversal of the external magnetic field, which is also strongly affected by both \(\Delta n\) and \(\Delta D\). This asymmetry can be explained by a strong influence of coherent opto-magnetic phenomena, such as the inverse Cotton-Mouton effect and photo-induced magnetic anisotropy, on the excitation of the magnetization dynamics. ## II Device structure and measurement techniques Our sample consists of a CGT flake, encapsulated in hexagonal boron nitride (hBN), with thin graphite layers as top gate, back gate, and contact electrodes, as depicted in Fig. 1a and b (see Methods for more details). The measurements were performed at low temperatures (10 K), with the sample mounted at 50 degrees with respect to the magnetic field axis for transmission measurements. The laser light is parallel to the magnetic field axis. We use the time-resolved magneto-optic Faraday effect to monitor the magnetization dynamics in our system using a single-color pump-probe setup similar to the one described in [53; 54] (more information in Methods). 
The process of optical excitation of magnetization dynamics in van der Waals magnets has been previously reported as purely thermal [43; 44; 45; 46; 47; 52], similar to many studies on conventional metallic thin-films [55; 56; 57]. Here we find strong evidence that coherent opto-magnetic phenomena also play an important role in the excitation of the magnetization dynamics. The detailed microscopic description of how the magnetization dynamics is induced is described later in the article, but in short, the excitation of the magnetization dynamics can be described as follows (Fig. 1c): In equilibrium (1), the magnetization \(\mathbf{M}\) points along the total effective magnetic field \(\mathbf{H}_{\text{eff}}\), which is the sum of the external field (\(\mathbf{H}_{\text{ext}}\)), and the internal effective field (\(\mathbf{H}_{\text{int}}\)) caused by the magnetocrystalline anisotropy (\(K_{\text{u}}\)) and shape anisotropy. For CGT, \(\mathbf{H}_{\text{int}}\) points out-of-plane [2; 45; 58], meaning that \(K_{\text{u}}\) dominates over the shape anisotropy. The linearly polarized pump pulse interacts with the sample (2), reducing the magnetization and changing the magnetocrystalline anisotropy through the mechanisms mentioned above, which causes \(\mathbf{M}\) to cant away from equilibrium. Since \(\mathbf{M}\) and \(\mathbf{H}_{\text{eff}}\) are not parallel anymore, this results in a precession of \(\mathbf{M}\) around \(\mathbf{H}_{\text{eff}}\), while they both recover to their equilibrium value as the sample cools. Figure 1: **Magnetization dynamics in a CGT based heterostructure.****a**, Illustration of time-resolved Faraday ellipticity measurements, combined with an optical micrograph of the sample (the scale bar is 10 \(\mu\)m). The CGT flake is outlined in blue. **b**, Schematic of the layers comprising the sample, including electrical connections for gating. **c**, Process of laser-induced magnetization precession (see main text). **d**, Time-resolved Faraday ellipticity traces at \(\mu_{0}H_{\text{ext}}=100\) mT for three different values of \(\Delta n\) with \(\Delta D=0\). A vertical offset was added for clarity. **e**, RMS power of the frequency spectrum of the oscillations in the data shown in **d**. Different transparencies indicate different values of \(H_{\text{ext}}\). ## III Gate control of magnetization dynamics The dual-gate geometry of our device allows for the independent control of both the charge carrier density and the perpendicular electric field. The dependence of \(\Delta n\) and \(\Delta D\) on the top and back gate voltages - \(V_{t}\) and \(V_{b}\), respectively - is derived in the Methods. The change in the Fermi level induced by \(\Delta n\) is expected to affect the magnetic anisotropy of CGT due to the different Cr \(d\)-orbitals composition of the electronic bands [59]. The effect of \(\Delta D\) is, however, more subtle. The inversion symmetry breaking caused by \(\Delta D\) can allow for an energy shift of the (initially degenerate) electronic bands, potentially also modulating the magnetization parameters. Additionally, the perpendicular electric field can induce a non-uniform distribution of charge carriers along the thickness of the CGT flake, leading to \(\Delta n\)-induced local changes in the magnetization parameters. Typical results from the time-resolved Faraday ellipticity (TRFE) measurements for different values of \(\Delta n\), with \(\Delta D=0\), are shown in Fig. 1d. For \(\Delta t<0\) the signal is constant, since the magnetization is at its steady state value. All traces show a sharp increase at \(\Delta t=0\), indicating a fast laser-induced dynamics. For \(\Delta t>0\), the TRFE traces show oscillations, indicating a precession of the magnetization induced by the pump pulse. 
We observe that the magnetization dynamics strongly depends on \(\Delta n\), with the amplitude, frequency and starting phase of the oscillations in the TRFE signal all being affected. The most striking observation is that the amplitude of the TRFE signal increases by more than a factor of seven when \(\Delta n\) is changed from \(1.2\times 10^{13}\) cm\({}^{-2}\) to \(-1.8\times 10^{13}\) cm\({}^{-2}\). The observations of modulation of both the amplitude and starting phase of the oscillations hint at a change in the pump excitation process. The change in oscillation frequency due to \(\Delta n\) is better visible in the Fourier transform of the signals, shown in Fig. 1e (see Methods for details on the Fourier transform). This analysis clearly shows that both the frequency and amplitude of the magnetization precession are tuned by \(\Delta n\). All these observations point to an effective control of the (dynamic) magnetic properties of CGT by electrostatic gating. The origin of the electric control of the magnetization dynamics can be further understood by analyzing the precession frequency at various magnetic fields and values of \(\Delta n\) (Fig. 2a). For magnetic fields below 250 mT we observe a significant shift of the frequency (4 - 10%) by changing the charge carrier density. This is clearly visible in the inset of Fig. 2a, which shows a close-up of the data up to 150 mT. The change in precession frequency for different values of \(\Delta n\) strongly points towards a modulation of the magnetization parameters of CGT as a function of the Fermi level, controlled by \(\Delta n\). A quantitative analysis of the oscillation frequency (\(f\)) as a function of \(H_{\rm ext}\) can be used to extract the magnetization dynamics parameters of the device. 
Our data is well described by the ferromagnetic resonance mode obtained from the Landau-Lifshitz-Gilbert (LLG) equation with negligible damping [60]: \[f=\frac{g\mu_{B}\mu_{0}}{2\pi\hbar}\sqrt{|{\bf H}_{\rm eff}|\left(|{\bf H}_{\rm eff}|-H_{\rm int}\sin^{2}\left(\theta_{M}\right)\right)}, \tag{1}\] where \(g\) is the Landé g-factor, \(\mu_{B}\) the Bohr magneton, \({\bf H}_{\rm eff}={\bf H}_{\rm ext}+H_{\rm int}\cos(\theta_{M})\hat{\bf z}\), with \(H_{\rm int}=2K_{\rm u}/(\mu_{0}M_{\rm s})-M_{\rm s}\), \(M_{\rm s}\) the saturation magnetization, and \(\theta_{M}\) the angle between \({\bf M}\) and the sample normal (z-direction). The angle \(\theta_{M}\) is calculated by minimizing the magnetic energy in the presence of an external field, perpendicular magnetic anisotropy, and shape anisotropy [45]. We obtain the \(g\)-factor and \(H_{\rm int}\) of the CGT by fitting the \(f\) versus \(H_{\rm ext}\) data (e.g. the data presented in Fig. 2a) using Eq. (1), as explained in the Methods. This yields \(g\approx 1.89\) with no clear dependence on \(\Delta n\) or \(\Delta D\), which is in agreement with (albeit slightly lower than) the values reported for CGT [40; 42; 45] (see Supplementary Sections 4 and 5 for more details). We also find no clear dependence of the precession damping time (\(\tau_{\rm osc}\)) on \(\Delta n\) or \(\Delta D\). The intrinsic Gilbert damping we obtain from our measurements is about \(6\times 10^{-3}\) (see Supplementary Section 6), in line with values found in literature [41; 45]. The internal effective field shows a clear dependence on both \(V_{t}\) and \(V_{b}\), as shown in Fig. 2c, with values similar to the ones found in other studies [45]. Upon comparing Fig. 2c to 2b, one notices that the gate dependence of \(H_{\rm int}\) is very similar to that of the precession frequency at \(\mu_{0}H_{\rm ext}=100\) mT. This suggests that the gate dependence of the precession frequency is caused by the gate-induced change in \(H_{\rm int}\). 
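Eq. (1) is straightforward to evaluate numerically. The sketch below expresses the fields as \(B=\mu_{0}H\) in tesla (an equivalent rewriting of Eq. (1)); the parameter values used in the test are illustrative, not fitted values from this work.

```python
import math

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.055e-34   # reduced Planck constant, J*s

def precession_frequency(g, B_eff, B_int, theta_M):
    """Eq. (1) with fields written as B = mu0*H in tesla.

    B_eff   : magnitude of the total effective field (T)
    B_int   : internal effective field mu0*H_int (T)
    theta_M : angle between M and the sample normal (rad)
    Returns the precession frequency f in Hz.
    """
    return (g * MU_B / (2 * math.pi * HBAR)) * math.sqrt(
        B_eff * (B_eff - B_int * math.sin(theta_M) ** 2))
```

With \(g=2\), \(B_{\rm eff}=1\) T, and \(\mathbf{M}\) along the sample normal (\(\theta_{M}=0\)) this reduces to the familiar \(\approx 28\) GHz/T gyromagnetic ratio.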
From the dependence of \(H_{\rm int}\) on \(V_{t}\) and \(V_{b}\), we extract its behavior as a function of \(\Delta n\) and \(\Delta D\), shown in Fig. 2d and e. We observe that \(H_{\rm int}\) decreases with both increasing \(\Delta n\) and \(\Delta D\). The dependence of \(H_{\rm int}\) on \(\Delta n\) is consistent with theoretical calculations [59], showing that \(K_{\rm u}\), and therefore \(H_{\rm int}\), is reduced upon increasing the electron density in the same order as what we achieve in our sample. The \(\Delta n\) dependence of \(H_{\rm int}\) is also consistent with the dependence of the coercive field obtained from static measurements (see Supplementary Section 3), providing further evidence that the change in \(f\) is driven by a change in \(K_{\rm u}\). Now we turn our attention to the large modulation of the oscillations in the TRFE measurements with varying gate voltage, as shown in Fig. 1d and 1e. Here we attribute this change in magneto-optical signal amplitude to an actual increase in amplitude of the magnetization precession (increase in the precession angle) and not to an increase in the strength of the Faraday effect. This is supported by our observation that the time-resolved measurements for different combinations of gate voltages are not simply scaled - i.e. the amplitude of the oscillations and their (ultrafast demagnetization) background scale differently. A detailed discussion can be found in Supplementary Section 7. Figure 3a clearly shows that the magnetization precession amplitude is mostly affected by \(\Delta n\), and to a much lesser extent by \(\Delta D\). The precession amplitude versus \(H_{\rm ext}\) for various values of \(\Delta n\) is presented in Fig. 3c. We note that for \(|\mu_{0}H_{\rm ext}|<50\) mT the magnetization is not completely saturated (see Supplementary Section 3), which can lead to multi-domain formation [61] and a deviation from the general trend. 
We find that not only the precession amplitude for a given \(H_{\text{ext}}\) is strongly modulated by \(\Delta n\), but its decaying trend with \(H_{\text{ext}}\) is also strongly affected. Additionally, we observe another interesting effect: the amplitude shows an asymmetry in the sign of the applied magnetic field, which is also dependent on \(\Delta n\). The latter is unexpected, especially since the observed precession frequency is symmetric in \(H_{\text{ext}}\) (see Supplementary Section 9). A similar precession frequency for opposite magnetic fields indicates that the magnetocrystalline anisotropy and the saturation magnetization are independent of the sign of \(H_{\text{ext}}\). Therefore, we conclude that the origin of the modulation of the precession amplitude is related to the excitation mechanism of the magnetization precession (see Supplementary Section 10 for the complete discussion). To get further insight into the microscopic mechanisms involved in the optical excitation of magnetization dynamics, we analyze the magnetic field dependence of the starting phase (\(\phi_{0}\)) of the precessions for different values of \(V_{t}\) and \(V_{b}\) (Fig. 3b). Unlike the amplitude, we find that \(\phi_{0}\) depends on both \(\Delta n\) and \(\Delta D\). As can be seen in Fig. 3d, the behavior of \(\phi_{0}\) with \(H_{\text{ext}}\) is also modulated by \(\Delta n\). For a purely thermal excitation of the magnetization dynamics one would expect \(\phi_{0}(-H_{\text{ext}})=\pi+\phi_{0}(H_{\text{ext}})\) in our geometry. Nonetheless, we observe that \(\phi_{0}\) for positive and negative magnetic fields differ by less than \(\pi\). Moreover, \(\Delta n\) seems to also affect the trend of how \(\phi_{0}\) approaches the values at high magnetic fields. 
Combined with the observed asymmetry of the precession amplitude, our data strongly suggests that the optical excitation of the magnetization dynamics is not dominated by a thermal excitation (\(\Delta\)K mechanism) as previously reported for other van der Waals magnets [43; 44; 45; 46; 47; 52]. Figure 2: **Gate-dependence of precession frequency and internal effective field.****a**, Frequency of oscillations as a function of external magnetic field, for different values of \(\Delta n\). The circles are the frequencies extracted from the TRFE data for \(\Delta t>26\) ps. Solid lines are best fits of Eq. (1). _Inset_: Close-up of the data for low fields, showing the frequency shift due to gating. The error bars are smaller than the markers. **b**, Frequency of the oscillations in the TRFE signal at \(\mu_{0}H_{\text{ext}}=100\) mT for various values of top (\(V_{t}\)) and back gate voltages (\(V_{b}\)). The black and gray arrows indicate, respectively, the directions of constant \(\Delta D\) and varying \(\Delta n\), and of constant \(\Delta n\) and varying \(\Delta D\). For other values of \(H_{\text{ext}}\) see Supplementary Fig. S10. **c**, Internal effective field as a function of \(V_{t}\) and \(V_{b}\). **d**, **e**, The dependence of the internal effective field on \(\Delta n\) for fixed \(\Delta D\) (**d**) and on \(\Delta D\) for fixed \(\Delta n\) (**e**), with solid lines to guide the eye. The traces are taken along the dotted lines indicated in **c**. ## IV Opto-magnetic effects Coherent opto-magnetic mechanisms provide possible alternatives for the optical excitation of magnetization dynamics in CGT. Here we find that our data can be explained by two of these mechanisms that are compatible with a linearly-polarized pump pulse, the inverse Cotton-Mouton effect (ICME) [62; 63; 64] and photo-induced magnetic anisotropy (PIMA) [65], in addition to the conventional (thermal) \(\Delta\)K mechanism [66]. 
The ICME, which could be described by impulsive stimulated Raman scattering, relies on the generation of an effective magnetic field upon interaction with linearly polarized light in a magnetized medium [62; 63; 64; 67; 68]. This effective magnetic field is proportional to both the light intensity and magnetization. For pulsed laser excitation, the ICME generates a strong impulsive change in \(\mathbf{H}_{\mathrm{eff}}\) that results in a fast rotation of the magnetization. Therefore, this effect can cause the amplitude of the precession to be asymmetric in \(H_{\mathrm{ext}}\)[64; 68]. Figure 3e illustrates how the ICME could result in an asymmetric magnetic field dependence of the amplitude. For simplicity we only consider the \(y\)-component of the generated effective magnetic field, since this component is responsible for the asymmetry. (1) A sample with perpendicular magnetic anisotropy is subject to an external magnetic field \(\mathbf{H}_{\mathrm{ext}}\) (\(-\mathbf{H}_{\mathrm{ext}}\)) in the \(xz\)-plane, pointing in the positive (negative) direction of both axes. In equilibrium, the magnetization points along the total effective field, as indicated by the light gray arrow. During laser pulse excitation, the ICME results in an effective magnetic field along the \(y\)-axis, rotating the magnetization either towards the \(z\)-axis or the \(x\)-axis, depending on the sign of \(\mathbf{H}_{\mathrm{ext}}\). Additionally, the ultrafast demagnetization process leads to a reduction of the magnetization. (2) After the laser pulse, the magnetization precesses around the total effective field that is comprised of \(\mathbf{H}_{\mathrm{ext}}\) and \(\mathbf{H}_{\mathrm{int}}\). Depending on the sign of \(\mathbf{H}_{\mathrm{ext}}\), the ICME has either rotated \(\mathbf{M}\) towards or away from \(\mathbf{H}_{\mathrm{eff}}\), resulting in different precession amplitudes. 
The second coherent mechanism for laser-induced magnetization dynamics is PIMA, which leads to a step-like change in \(\mathbf{H}_{\mathrm{eff}}\) due to pulsed laser excitation [64]. This mechanism has been reported to arise from an optical excitation of nonequivalent lattice sites (e.g. dopants and impurities), which effectively redistributes the ions and hence changes the magnetic anisotropy [69; 70; 71]. Unlike the ICME, the PIMA mechanism is not expected to lead to an asymmetry of the magnetization precession amplitude upon a reversal of \(H_{\mathrm{ext}}\), because it is present for times much longer than the period of precession and therefore acts as a constant change of the effective magnetic field [65; 70; 71; 72]. Figure 3: **Gate dependence of magnetization precession amplitude and phase.****a**, **b**, Gate dependence of the amplitude (**a**) and starting phase (**b**) of the oscillations in the TRFE measurements at \(\mu_{0}H_{\mathrm{ext}}=100\) mT. For other values of \(H_{\mathrm{ext}}\) see Supplementary Figs. S8 and S9. **c**, **d**, External magnetic field dependence of the amplitude (**c**) and starting phase (**d**) of the oscillations for different values of \(\Delta n\) at \(\Delta D=0\). The values are extracted from the TRFE data for \(\Delta t>26\) ps. **e**, Schematics of the inverse Cotton-Mouton effect for opposite directions of \(H_{\mathrm{ext}}\). The magnetization direction is depicted by a red arrow, the external magnetic field in blue, the effective magnetic field induced by the ICME and the effective field are shown in cyan. The \(xz\)-plane is highlighted by the shaded region. All three discussed mechanisms for inducing magnetization precession - ICME, PIMA and the \(\Delta\)K mechanism - are affected by electrostatic gating. The optomagnetic effects can be affected through a change in e.g. the polarization-dependent refractive index and the occupation of charge states of ions and impurities. 
Additionally, the \(\Delta\)K mechanism can be affected by the changes in charge relaxation pathways through, for example, electron-electron and electron-phonon interactions. We find that the combination of the above mechanisms can describe quantitatively the starting phase and qualitatively the amplitude of the observed magnetic field dependence shown in Fig. 3 (see Supplementary Section 11). The balance between these three mechanisms affects the magnetic field dependence of the amplitude and the starting phase, increases or decreases the asymmetry in the induced precession amplitude, and changes the steepness of the starting phase versus magnetic field graph. Therefore, since our data shows a change in these properties, we conclude that the relative strength of the mechanisms for excitation of magnetization precession is effectively controlled by electrostatic gating. ## V Conclusions We envision that the electric control over the optically-induced magnetization precession amplitude demonstrated here can be applied to devices which make use of spin wave interference for signal processing [12; 13; 14]. This should lead to an efficient electric control over the mixing of spin waves, leading to an easier on-chip implementation of combined magnonic and photonic circuits. Even though the control over the precession frequency shown here is still modest (\(\approx\)10%), we believe it can be further enhanced by the use of more effective electrostatic doping, such as using high-\(\kappa\) dielectrics or ionic-liquid gating which is capable of achieving over one order of magnitude higher changes in carrier densities than the ones reported here [21; 24; 25; 32; 73]. We note that due to the non-monotonic behavior of the magnetic anisotropy energy with changes in charge carrier density, one might expect more drastic changes in \(H_{\text{int}}\) for larger changes in \(\Delta n\). 
This control over the magnetic anisotropy can then be used for the electrostatic guiding and confinement of spin waves, leading to an expansion of the field of quantum magnonics. Finally, the presence of coherent optical excitation of magnetization dynamics we observed in CGT should also lead to a more energy-efficient optical control of magnetization [74]. Therefore, the electric control over magnetization dynamics in CGT shown here provides the first steps towards the implementation of vdW ferromagnets in magneto-photonic devices that make use of spin waves to transport and process information. ## VI Methods ### Sample fabrication The thin hBN and graphite flakes are exfoliated from bulk crystals (HQ graphene) on an oxidized silicon wafer (285 nm oxide thickness). The CGT flakes are exfoliated in the same way in an inert (nitrogen gas) environment glove box with less than 0.5 ppm oxygen and water to prevent degradation. The flakes are selected using optical contrast and stacked using a polycarbonate/polydimethylsiloxane stamp by a dry transfer van der Waals assembly technique [75]. First, an hBN flake (21 nm thick) is picked up, followed by the CGT flake. Next, a thin graphite flake is picked up to make electrical contact with a corner of the CGT, and extends beyond the picked-up hBN flake. After this, a second hBN flake (20 nm thick) is picked up, followed by a thin graphite flake that functions as the back gate electrode. This stack is then transferred to an optically transparent fused quartz substrate. Finally, a thin graphite flake is transferred on top of the stack to function as the top gate electrode. The device is then contacted by Ti/Au (5/50 nm) electrodes fabricated using conventional electron-beam lithography and thin metallic film deposition techniques. ### Measurement setup All measurements are done at 10 K under low-pressure (20 mbar) Helium gas. 
The sample is mounted at an angle, such that the sample normal makes an angle of 50 degrees with the external magnetic field and the laser propagation direction. The \(\sim\)200 fs long laser pulses are generated by a mode-locked Ti:Sapphire oscillator (Spectra-Physics MaiTai), at a repetition rate of 80 MHz. After a power dump, the pulses are split in an intense pump and a weaker probe pulse by a non-polarizing beam splitter. The pump beam goes through a mechanical delay stage, allowing us to modify the time-delay between pump and probe by a change in the optical path length. To allow for a double-modulation detection [53; 54], the pump beam goes through an optical chopper working at 2173 Hz. The polarization of the pump is set to be horizontal (p-polarized with respect to the sample), to allow us to block the pump beam through a polarization filter at the detection stage. The initially linearly polarized probe pulse goes through a photoelastic modulator (PEM) which modulates the polarization of the light at 50 kHz. A non-polarizing beam splitter is used to merge the pump and probe beams on parallel paths, with a small separation between them. From here, they are focused onto the sample by an aspheric cold lens with a numerical aperture of 0.55. The probe spot size (Full Width at Half Maximum) is \(\sim\)1.8 \(\mu\)m and the pump spot size is \(\sim\)3.4 \(\mu\)m, both elongated by a factor of \(1/\sin(50^{\circ})\) because the laser hits the sample at \(50^{\circ}\) with respect to the sample normal. The fluence of the pump and probe pulses are \(F_{\text{pump}}=25\ \mu\text{J/cm}^{2}\) and \(F_{\text{probe}}=5.7\ \mu\text{J/cm}^{2}\), respectively. The transmitted light is collimated by an identical lens on the opposite side of the sample and leaves the cryostat. The pump beam is blocked and the probe beam is sent to a detection stage consisting of a quarter wave plate, a polarization filter, and an amplified photodetector. 
The quarter wave plate and the polarization filter are adjusted until they compensate for the change in polarization caused by the optical components between the PEM and the detection stage, ensuring that our signals are purely due to the rotation or ellipticity of the probe polarization induced by our samples. The first and second harmonic of the signal (50 or 100 kHz) obtained at the photodetector are then proportional to the change in ellipticity and rotation due to the Faraday effect of the sample. For static magneto-optic Faraday effect measurements we have blocked the pump beam before reaching the sample. ### Calculating \(\Delta n\) and \(\Delta D\) from the gate voltages The gate-induced change in charge carrier density (\(\Delta n\)) and displacement field (\(\Delta D\)) at the CGT are calculated from the applied gate voltages using a parallel plate capacitor model. The displacement field generated by the top (\(D_{\text{t}}\)) and back (\(D_{\text{b}}\)) gates is given by \(D_{i}=\varepsilon_{\text{hBN}}E_{i}=\frac{1}{2}\sigma_{\text{free},i}\), where \(i\) denotes \(t\) or \(b\), \(\varepsilon_{\text{hBN}}=3.8\varepsilon_{0}\) is the hBN dielectric constant [76] with \(\varepsilon_{0}\) the vacuum permittivity, and \(\sigma_{\text{free}}\) the free charge per unit area. The applied top and back gate voltages are related to \(\sigma_{\text{free}}\) by \(V_{\text{i}}=-\int D_{\text{i}}/\varepsilon\,\text{d}z\). This equation, combined with the condition of charge neutrality, gives the following 3 relations: \[V_{t}/d_{t} =\frac{\sigma_{t}-\sigma_{\text{CGT}}-\sigma_{b}}{2\varepsilon_{ \text{hBN}}},\] \[V_{b}/d_{b} =\frac{\sigma_{b}-\sigma_{\text{CGT}}-\sigma_{t}}{2\varepsilon_{ \text{hBN}}},\] \[0 =\sigma_{t}+\sigma_{b}+\sigma_{\text{CGT}},\] where \(d_{\text{t,b}}\) denotes the thickness of the top (21 nm) and bottom (20 nm) hBN flakes, and \(\sigma_{i}\) the free charge per unit area in the top gate (\(t\)), back gate (\(b\)), and the CGT flake. 
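Before writing out the closed-form solution, the three relations above can be checked numerically. The sketch below uses the permittivity, hBN thicknesses, and elementary charge quoted in this section; the function names are illustrative.

```python
# Numerical cross-check of the parallel-plate gate model above.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
EPS_HBN = 3.8 * EPS0        # hBN dielectric constant used in the text
E_CHARGE = 1.602e-19        # elementary charge, C
D_T, D_B = 21e-9, 20e-9     # top / bottom hBN thickness, m

def gate_charges(V_t, V_b):
    """Solve the three relations for (sigma_t, sigma_b, sigma_CGT).

    Charge neutrality gives sigma_CGT = -(sigma_t + sigma_b); substituting
    it into the first two relations decouples them.
    """
    sigma_t = EPS_HBN * V_t / D_T
    sigma_b = EPS_HBN * V_b / D_B
    return sigma_t, sigma_b, -(sigma_t + sigma_b)

def delta_n_cm2(V_t, V_b):
    """Gate-induced carrier density at the CGT, in cm^-2."""
    return gate_charges(V_t, V_b)[2] / E_CHARGE * 1e-4

def delta_D_over_eps0_per_nm(V_t, V_b):
    """(sigma_b - sigma_t) / (2 * eps0), in nm^-1."""
    s_t, s_b, _ = gate_charges(V_t, V_b)
    return (s_b - s_t) / (2 * EPS0) * 1e-9
```

Applying one volt to a single gate reproduces the numerical conversion factors quoted at the end of this subsection.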
Solving this set of equations yields: \[\sigma_{t} =\varepsilon_{\text{hBN}}V_{t}/d_{t},\] \[\sigma_{b} =\varepsilon_{\text{hBN}}V_{b}/d_{b},\] \[\Delta n =\sigma_{\text{CGT}}/e =-\frac{\varepsilon_{\text{hBN}}}{e}\left(\frac{V_{t}}{d_{t}}+\frac{V_{b}}{d_{b}}\right),\] where \(e\) is the positive elementary charge. Note that for positive gate voltages, a negative charge carrier density is induced in the CGT. For the gate-induced change in the displacement field at the CGT layer, we get: \[\Delta D=(\sigma_{b}-\sigma_{t})/2=-\varepsilon_{\text{hBN}}(V_{t}/d_{t}-V_{b}/d_{b})/2\] Filling in the values for the thickness of the hBN flakes and dielectric constant of hBN gives \(\Delta D/\varepsilon_{0}\) and \(\Delta n\) at the CGT: \[\Delta n =-(1.00V_{t}+1.05V_{b})\times 10^{12}\,\text{V}^{-1}\text{cm}^{-2}\] \[\Delta D/\varepsilon_{0} =-(0.090V_{t}-0.095V_{b})\,\text{nm}^{-1}.\] Throughout the main text we use \(\Delta D/\varepsilon_{0}\) instead of \(\Delta D\) for easier comparison of our values of the gate-induced change in the displacement field with values mentioned in other works. Note that we use the conversion factor \(\varepsilon_{0}\) and not the permittivity of CGT. Therefore, the values for the \(\Delta D\) that we report are the equivalent electric field values in _vacuum_, not in CGT. ### Windowed Fourier transform The RMS power spectra of the TRFE oscillations shown in Fig. 1e are calculated from the TRFE measurements using a windowed Fourier transform. The type of window used for this calculation is the Hamming window, which extends from \(\Delta t=0\) up to the last data point. 
The RMS power spectrum (\(P_{\text{RMS}}(f)\)) of the TRFE oscillations is calculated as \[P_{\text{RMS}}(f)=\Bigg{(}\sum_{\Delta t>0} \left[W_{\text{Ham}}(\Delta t)y(\Delta t)\sin(2\pi f\Delta t)\right]^{2}+\] \[\left[W_{\text{Ham}}(\Delta t)y(\Delta t)\cos(2\pi f\Delta t)\right]^{2}\Bigg{)}^{1/2},\] where \(W_{\text{Ham}}\) is the Hamming window, \(y\) the data points of the TRFE measurements, and \(f\) the frequency. ### Determining the \(g\)-factor and \(H_{\text{int}}\) The Landé \(g\)-factor and \(H_{\text{int}}\) can be extracted by fitting the magnetic field dependence of the precession frequencies with Eq. (1). The values of \(g\) and \(H_{\text{int}}\) obtained from the fit were, in most cases, strongly correlated. Therefore, we first determined \(g\) by fitting the data for \(\mu_{0}H_{\text{ext}}\geq 125\) mT, since \(g\) is most sensitive to the slope at high fields. This yields \(g=1.89\pm 0.01\). If we further allow for an additional uncertainty in the mounting angle of the sample, the \(g\)-factor can change by \(\sim 0.1\). Then we determine \(H_{\text{int}}\) by fitting Eq. (1) for all remaining measurements while fixing \(g=1.89\). We note that the values for \(H_{\text{int}}\) do depend on the exact value of \(g\), but the modulation due to electrostatic gating is not affected, as is shown in Supplementary Section 4. ### Extracting the magnetization precession parameters from the TRFE measurements We extract the amplitude, frequency, and starting phase of the oscillations in the TRFE measurements by fitting the data for \(\Delta t>26\) ps with the phenomenological formula [45; 60] \[y= y_{0}+ae^{-\Delta t/\tau_{\rm osc}}\cos{(2\pi f\Delta t-\phi_{0})}\] \[+A_{\rm l}e^{-\Delta t/\tau_{\rm l}}+A_{\rm s}e^{-\Delta t/\tau_{\rm s}}. \tag{2}\] This formula describes a phase-shifted sinusoid on top of a double exponential background. 
The background captures the demagnetization and remagnetization of the CGT, while the sinusoid describes the magnetization precession. ## VII Data Availability The raw data and the data underlying the figures in the main text are publicly available through the data repository Zenodo at [https://doi.org/10.5281/zenodo.8321758](https://doi.org/10.5281/zenodo.8321758). ## VIII Acknowledgments We thank Bart J. van Wees for critically reading the manuscript and providing valuable feedback, and we thank J. G. Holstein, H. Adema, H. de Vries, A. Joshua and F. H. van der Velde for their technical support. This work was supported by the Dutch Research Council (NWO) through grants STU.019.014 and OCENW.XL21.XL21.058, the Zernike Institute for Advanced Materials, the research program "Materials for the Quantum Age" (QuMat, registration number 024.005.006), which is part of the Gravitation program financed by the Dutch Ministry of Education, Culture and Science (OCW), and the European Union (ERC, 2D-OPTOSPIN, 101076932). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. The device fabrication and nanocharacterization were performed using Zernike NanoLabNL facilities. ## IX Author Information M.H.D.G. conceived and supervised the research. F.H. designed and fabricated the samples, performed the measurements, analyzed the data, and calculated the effect of coherent excitations on the magnetization precession under M.H.D.G.'s supervision. F.H. and R.R.R.L. built and tested the measurement setup. F.H., M.H.D.G., and B.K. discussed the data and provided the interpretation of the results. F.H. and M.H.D.G. co-wrote the manuscript with input from all authors. ## X Ethics Declaration ### Competing interests The authors declare no competing interests.
2308.16639
Security Allocation in Networked Control Systems under Stealthy Attacks
This paper considers the problem of security allocation in a networked control system under stealthy attacks. The system is comprised of interconnected subsystems represented by vertices. A malicious adversary selects a single vertex on which to conduct a stealthy data injection attack with the purpose of maximally disrupting a distant target vertex while remaining undetected. Defense resources against the adversary are allocated by a defender on several selected vertices. First, the objectives of the adversary and the defender with uncertain targets are formulated in a probabilistic manner, resulting in an expected worst-case impact of stealthy attacks. Next, we provide a graph-theoretic necessary and sufficient condition under which the cost for the defender and the expected worst-case impact of stealthy attacks are bounded. This condition enables the defender to restrict the admissible actions to dominating sets of the graph representing the network. Then, the security allocation problem is solved through a Stackelberg game-theoretic framework. Finally, the obtained results are validated through a numerical example of a 50-vertex networked control system.
Anh Tung Nguyen, André M. H. Teixeira, Alexander Medvedev
2023-08-31T11:16:56Z
http://arxiv.org/abs/2308.16639v2
# Security Allocation in Networked Control Systems under Stealthy Attacks ###### Abstract This paper considers the problem of security allocation in a networked control system under stealthy attacks in which the system is comprised of interconnected subsystems represented by vertices. A malicious adversary selects a single vertex on which to conduct a stealthy data injection attack to maximally disrupt the local performance while remaining undetected. On the other hand, a defender selects several vertices on which to allocate defense resources against the adversary. First, the objectives of the adversary and the defender with uncertain targets are formulated in probabilistic ways, resulting in an expected worst-case impact of stealthy attacks. Next, we provide a graph-theoretic necessary and sufficient condition under which the cost for the defender and the expected worst-case impact of stealthy attacks are bounded. This condition enables the defender to restrict the admissible actions to a subset of available vertex sets. Then, we cast the problem of security allocation in a Stackelberg game-theoretic framework. Finally, the contribution of this paper is highlighted by utilizing the proposed admissible actions of the defender in the context of large-scale networks. A numerical example of a 50-vertex networked control system is presented to validate the obtained results. Cyber-physical security, networked control system, Stackelberg game, stealthy attack. ## I Introduction Networked control systems are ubiquitous in modern societies, with examples including transportation networks, power systems, and water distribution networks. These systems, utilizing non-proprietary information and communication technologies such as public Internet and wireless communication, are exposed to the threat of cyber attacks [1, 2, 3], which can cause severe financial and societal consequences. 
For instance, an Iranian industrial control system and a Ukrainian power grid have witnessed the catastrophic consequences of malware such as Stuxnet in 2010 [2] and Industroyer in 2016 [3], respectively. Thus, in light of these alarming realities, the issue of security has acquired unprecedented significance in the realm of control systems. In terms of cyber attacks on control systems, deception attacks that undermine the integrity of control systems have emerged as an area of increasing scholarly interest. For example, Pang and Liu have proposed an encryption-based predictive control mechanism to counteract and mitigate such attacks [4]. Another form of deception attacks, replay attacks, has been unmasked by physical watermarking [5, 6] and multiplicative watermarking [7]. Meanwhile, the development of stealthy attacks on control systems has been made to evade the most advanced detection schemes [8, 9, 10, 11]. Upon review of the above existing studies [4, 5, 6, 7, 8, 9, 10, 11], we notice that they have concentrated on secure estimation and secure control from either the defender's or the adversary's perspective. Nonetheless, it is crucial to note that both parties are confronted with similar challenges, as the defender has limited resources to counteract malicious activities, while the adversary also faces energy and detectability constraints when executing attacks. As a result, addressing the security problem within a unified framework that encompasses both the defender and the adversary is of utmost importance. Game theory offers a unified framework to consider the objectives and actions of both strategic players, namely the defender and the adversary [12]. It also allows us to deal with the robustness and security of cyber-physical systems within the common well-defined framework of \(\mathcal{H}_{\infty}\) robust control design [13]. 
Further, many other concepts of games describing networked systems subjected to cyber attacks such as matrix games [13, 14, 15], dynamic games [16], stochastic games [17], and network monitoring games [18] have been recently studied. Recent studies [13, 15, 19, 20] have utilized the common concept of zero-sum games to address the problem of input attacks on cyber-physical systems. Control systems exposed to cyber attacks have also been extensively investigated through game-theoretic approaches [16, 17, 18]. However, these approaches have not accounted for the deployment of detectors in an effort to increase the detection of cyber attacks. This creates a significant gap in knowledge which must be addressed in order to enhance security measures. One such effort to close the aforementioned gap has been presented in a game-theoretic formulation outlined by Pirani et al. [21]. The game payoff in [21] has been formulated by combining the maximum \(\mathcal{L}_{2}\) gains of multiple outputs with respect to a single input representing the attack signal. On the one hand, these multiple \(\mathcal{L}_{2}\) gains are evaluated separately and thus may be attained for different optimal input signals. Further, the utilization of a maximum gain for characterizing the detectability corresponds to an optimistic perspective, where the adversary attempts to maximize the energy of the detection output, instead of the opposite. Therefore, in order to address the critical issue of cyber security and develop a security metric against cyber attacks, it is imperative to thoroughly investigate the optimal placement of sensors in a networked system to minimize the impact of cyber attacks while maintaining maximum detectability. Additionally, the above existing studies [13, 14, 15, 16, 18, 20, 21] investigated the security problem by letting the defender and the adversary select their actions simultaneously. 
However, this formulation may not always be applicable in practical situations where an adversary probably moves after observing the action of the defender. To deal with this limitation, a game-theoretic Stackelberg framework [22] offers a more practical solution. In this framework, after analyzing possible attack scenarios, the defender, the so-called _leader_, has the power to select and announce their action first, knowing that the malicious adversary bases their actions on the leader's decision. Then, the malicious adversary, the so-called _follower_, finds the best response to the defender's action. This paper considers a continuous-time networked control system, associated with an undirected connected graph, under stealthy attacks involving two strategic agents: a malicious adversary and a defender. The system is comprised of multiple interconnected one-dimensional subsystems, referred to as vertices, in which a single performance vertex is selected to represent the local performance of the entire network. The purpose of the adversary is to maliciously degrade the local performance without being detected. To pursue this purpose, the adversary selects one vertex on which to launch a stealthy data injection attack on its input. Meanwhile, the defender allocates defense resources by selecting a set of monitor vertices to measure their outputs with the aim of alleviating the attack impact. Given the strategic nature of both agents, we investigate the optimal selection of the monitor vertices using the Stackelberg game-theoretic approach described above. By leveraging the concept of the Stackelberg game in [22], we can elucidate the complex interplay between the two agents and identify their best actions. Figure 1 visualizes the above-defined game in a networked control system. The contributions of this paper are the following: 1. 
A novel objective function, the expected output-to-output gain, is proposed to capture the expected worst-case impact of stealthy attacks with an uncertain performance vertex. 2. We cast the security allocation problem in a Stackelberg game-theoretic framework with the defender as the leader and the malicious adversary as the follower. 3. We propose a control design for which we provide a graph-theoretic necessary and sufficient condition under which the defender guarantees the boundedness of the cost and the expected worst-case impact of stealthy attacks. 4. Leveraging the uncertainty of the attack and performance vertices, we show that the necessary and sufficient condition in 3) restricts the admissible choice of monitor sets to be dominating sets of the graph. 5. We highlight the advantage of the proposed security allocation scheme in the context of large-scale networks. The remainder of this paper is organized as follows. Section II provides the description of a networked control system under stealthy attacks, the worst-case impact of stealthy attacks caused by the malicious adversary, and the cost for the defender. Thereafter, Section III investigates the boundedness of the cost for the defender and the worst-case impact of stealthy attacks caused by the malicious adversary. The investigation allows us to restrict the admissible actions of the defender, which is presented at the end of Section III. In Section IV, by employing the Stackelberg game-theoretic framework, we formulate the optimal actions for the malicious adversary and the defender. In Section V, the effectiveness of the proposed security allocation scheme in terms of computational complexity is highlighted, especially in large-scale networks. Section VI presents a numerical example to validate the obtained results. Section VII concludes the paper. We conclude this section by providing the notation to be used throughout this paper. 
Figure 1: An illustration of a networked control system with the (green) performance vertex under a stealthy attack. While the defender selects the (blue) monitor vertices on which to place a sensor at each monitor vertex, the adversary selects the (red) attack vertex on which to conduct a stealthy attack.

**Notation:** the set of real positive numbers is denoted as \(\mathbb{R}_{+}\); \(\mathbb{R}^{n}\) and \(\mathbb{R}^{n\times m}\) stand for sets of real \(n\)-dimensional vectors and \(n\)-row \(m\)-column matrices, respectively. Let us define \(e_{i}\in\mathbb{R}^{n}\) with all zero elements except the \(i\)-th element, which is set to \(1\). A continuous linear time-invariant (LTI) system with the state-space model \(\dot{x}(t)=\bar{A}x(t)+\bar{B}u(t)\), \(y(t)=\bar{C}x(t)+\bar{D}u(t)\) is denoted as \(\bar{\Sigma}\triangleq(\bar{A},\bar{B},\bar{C},\bar{D})\). Given the norm \(\left\|x\right\|_{\mathcal{L}_{2}(0,T]}^{2}\triangleq\frac{1}{T}\int_{0}^{T}\left\|x(t)\right\|_{2}^{2}\,\mathrm{d}t\), we simplify the notation to \(\left\|x\right\|_{\mathcal{L}_{2}}^{2}\) if the time horizon \((0,T]\) is clear from the context. The space of square-integrable functions is defined as \(\mathcal{L}_{2}\triangleq\left\{f:\mathbb{R}_{+}\rightarrow\mathbb{R}\mid\left\|f\right\|_{\mathcal{L}_{2}(0,\infty)}^{2}<\infty\right\}\) and the extended space as \(\mathcal{L}_{2e}\triangleq\left\{f:\mathbb{R}_{+}\rightarrow\mathbb{R}\mid\left\|f\right\|_{\mathcal{L}_{2}(0,T]}^{2}<\infty,\ \forall\ 0<T<\infty\right\}\). For a vector \(x\in\mathbb{R}^{n}\), \(\left\|x\right\|_{0}\) denotes the number of non-zero elements in the vector \(x\). Let \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E},A)\) be a graph with the set of \(N\) vertices \(\mathcal{V}=\{1,2,...,N\}\), the set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), and the adjacency matrix \(A=[a_{ij}]\). 
For any \((i,j)\in\mathcal{E},\ i\neq j\), the element of the adjacency matrix \(a_{ij}\) is positive, while for \((i,j)\notin\mathcal{E}\) or \(i=j\), \(a_{ij}=0\). The degree of vertex \(i\) is denoted as \(\Delta_{i}=\sum_{j=1}^{N}a_{ij}\) and the degree matrix of graph \(\mathcal{G}\) is defined as \(\Delta=\text{diag}\big{(}\Delta_{1},\Delta_{2},\ldots,\Delta_{N}\big{)}\), where \(\text{diag}\) stands for a diagonal matrix. The Laplacian matrix is defined as \(L=[\ell_{ij}]=\Delta-A\). Further, \(\mathcal{G}\) is called an undirected connected graph if and only if the matrix \(A\) is symmetric and the algebraic multiplicity of zero as an eigenvalue of \(L\) is one. The set of all neighbours of vertex \(i\) is denoted as \(\mathcal{N}_{i}=\{j\in\mathcal{V}\ |\ (i,j)\in\mathcal{E}\}\). We denote the subset \(\mathcal{V}_{-i}\triangleq\mathcal{V}\setminus\{i\}\). ## II Problem formulation In this section, we first describe a networked control system under stealthy attacks in the presence of a defender and a malicious adversary. The malicious adversary conducts a stealthy data injection attack on the input of a vertex with the purpose of degrading the local performance of the system. Meanwhile, the defender aims to alleviate the attack impact on the system by placing sensors at several vertices. In the remainder of this section, we analyze the worst-case impact of stealthy attacks on the system based on the output-to-output gain security metric, which will be utilized to formulate the objectives of the adversary and the defender. ### Networked control system under stealthy attacks Consider an undirected connected graph \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E},A)\) with \(N\) vertices; the state-space model of a one-dimensional vertex \(i\) is described by: \[\dot{x}_{i}(t)=u_{i}(t),\ \ i\in\big{\{}1,\ 2,\ldots,\ N\big{\}}, \tag{1}\] where \(x_{i}(t)\in\mathbb{R}\) is the state of vertex \(i\). 
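The graph quantities above (degree matrix, Laplacian, connectivity test) can be assembled directly from the adjacency matrix; a small sketch on a hypothetical 4-vertex path graph (example data ours):

```python
import numpy as np

def degree_and_laplacian(A):
    """Degree matrix Delta and Laplacian L = Delta - A of an undirected graph."""
    Delta = np.diag(A.sum(axis=1))
    return Delta, Delta - A

def is_connected(L, tol=1e-9):
    """An undirected graph is connected iff 0 is a simple eigenvalue of L."""
    eigs = np.linalg.eigvalsh(L)
    return int(np.sum(np.abs(eigs) < tol)) == 1

# Path graph 1-2-3-4 with unit edge weights
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Delta, L = degree_and_laplacian(A)
```

Each row of the Laplacian sums to zero, and the path graph passes the connectivity test.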
The local performance of the entire network is evaluated via the output energy of a vertex \(\rho\in\mathcal{V}\) over a given, possibly infinite, time horizon, denoted as \(\left\|x_{\rho}\right\|_{\mathcal{L}_{2}}^{2}\). Each vertex \(i\in\mathcal{V}\) is controlled by the following control law: \[u_{i}(t)=-\theta_{i}x_{i}(t)+\sum_{j\in\mathcal{N}_{i}}\big{(}x_{j}(t)-x_{i}(t)\big{)}, \tag{2}\] where \(\theta_{i}\in\mathbb{R}_{+}\) is an adjustable self-loop control gain at vertex \(i\). This self-loop control gain will be used to improve the security of the entire network later in this paper. For convenience, let us denote \(x(t)\) as the state of the entire network, \(x(t)=\big{[}x_{1}(t),\ x_{2}(t),\ldots,\ x_{N}(t)\big{]}^{\top}\). To prepare for malicious activities, the defender selects a subset of the vertex set \(\mathcal{V}\) as a set of monitor vertices (say \(\mathcal{M}=\{m_{1},m_{2},\ldots,m_{|\mathcal{M}|}\}\)) on which to place a sensor at each selected monitor vertex. For practical reasons, the number of utilized sensors should be constrained. Let us denote \(n_{s}\) as the sensor budget, i.e., the maximum number of utilized sensors: \(|\mathcal{M}|\leq n_{s}\). On the other hand, the malicious adversary selects a vertex \(a\in\mathcal{V}\) on which to conduct an additive time-dependent attack signal \(\zeta(t)\in\mathbb{R}\), where \(\zeta\in\mathcal{L}_{2e}\), at its input as follows: \[u_{a}(t)=-\theta_{a}x_{a}(t)+\sum_{j\in\mathcal{N}_{a}}\big{(}x_{j}(t)-x_{a}(t)\big{)}+\zeta(t). \tag{3}\] The purpose of the malicious adversary is to maximally disrupt the local performance of the entire network, represented as the output energy of the unknown performance vertex \(\rho\), while remaining stealthy to the defender. In practice, the location of the performance vertex \(\rho\) should not be revealed publicly, leading to the following reasonable assumption. 
**Assumption 1**: _The performance vertex \(\rho\) and attack vertex \(a\) are distinct, i.e., \(a\in\mathcal{V}_{-\rho}\) and \(\rho\in\mathcal{V}_{-a}\). \(\triangleleft\)_ The system model (1) under the control law (2)-(3) can be rewritten in the presence of the attack signal at the vertex \(a\in\mathcal{V}_{-\rho}\) with outputs of the performance vertex \(\rho\) and outputs observed at the monitor vertices \(m_{k}\in\mathcal{M}\) \[\dot{x}(t) =-\bar{L}x(t)+e_{a}\zeta(t), \tag{4}\] \[y_{\rho}(t) =e_{\rho}^{\top}x(t),\] (5) \[y_{\mathcal{M}}(t) =C_{\mathcal{M}}^{\top}x(t), \tag{6}\] where \(C_{\mathcal{M}}=[e_{m_{1}},e_{m_{2}},\ldots,e_{m_{|\mathcal{M}|}}]\), \(\bar{L}=L+\Theta\), and \(\Theta=\text{diag}(\theta_{1},\theta_{2},\ldots,\theta_{N})\). The Laplacian matrix \(L\) is associated with the undirected connected graph \(\mathcal{G}\) and \(\theta_{i}\in\mathbb{R}_{+},\ \forall i\in\mathcal{V}\), which ensures that all the eigenvalues of the matrix \(\bar{L}\) are real and positive. This property of \(\bar{L}\) ensures that the state of the network \(x(t)\) asymptotically converges to the origin in the attack-free case, allowing us to employ the following assumption. **Assumption 2**: _The system (4) is at its equilibrium \(x_{e}=0\) before being affected by the attack signal \(\zeta(t)\). \(\triangleleft\)_ In the scope of this study, we mainly focus on the stealthy data injection attack, defined in the following. Consider the above structure of the continuous LTI system (4)-(6), which we denote as \(\Sigma_{\rho\mathcal{M}}\triangleq(-\bar{L},e_{a},[e_{\rho},C_{\mathcal{M}}]^{\top},0)\), with the performance output \(y_{\rho}(t)=e_{\rho}^{\top}x(t)\) and the monitor outputs \(y_{m_{k}}(t)=e_{m_{k}}^{\top}x(t),\ \forall m_{k}\in\mathcal{M}\). 
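The closed-loop model (4)–(6) is straightforward to assemble and simulate; a sketch with a hypothetical 4-vertex path graph, uniform self-loop gains, attack vertex \(a=1\), monitor vertex \(m_{1}=4\) (0-based indices in the code), and a simple forward-Euler integration — all of these choices are ours, for illustration only:

```python
import numpy as np

def lbar(A, theta):
    """Closed-loop matrix Lbar = L + Theta from adjacency A and gains theta."""
    L = np.diag(A.sum(axis=1)) - A
    return L + np.diag(theta)

A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
Lbar = lbar(A, 0.5 * np.ones(4))

a, m = 0, 3                    # attack vertex 1, monitor vertex 4 (0-based)
e_a = np.eye(4)[:, a]

# Forward-Euler simulation of xdot = -Lbar x + e_a zeta(t), x(0) = 0
h, x, monitor_energy = 1e-3, np.zeros(4), 0.0
for k in range(5000):
    zeta = np.sin(2.0 * np.pi * 0.5 * k * h)   # an arbitrary attack signal
    x = x + h * (-Lbar @ x + e_a * zeta)
    monitor_energy += h * x[m] ** 2            # accumulate ||y_m||^2
```

With \(\theta_{i}>0\), all eigenvalues of \(\bar{L}\) are positive, so the simulation is stable and the attack signal leaves a nonzero energy footprint at the monitor vertex.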
The input signal \(\zeta(t)\) of the system \(\Sigma_{\rho\mathcal{M}}\) is called a stealthy data injection attack if the monitor outputs satisfy \(\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}<\delta_{m_{k}}\) for all \(m_{k}\in\mathcal{M}\), in which \(\delta_{m_{k}}>0\) is given for each corresponding monitor vertex \(m_{k}\) and called an alarm threshold. This means that the adversary is said to be detected if there exists at least one monitor vertex \(m_{k}\in\mathcal{M}\) whose output energy crosses its corresponding alarm threshold \(\delta_{m_{k}}\). Further, the impact of the stealthy data injection attack is measured via the energy of the performance vertex \(\rho\) over the horizon \([0,T]\), i.e., \(\left\|y_{\rho}\right\|_{\mathcal{L}_{2}[0,T]}^{2}\). The worst-case impact of the stealthy data injection attack conducted by the malicious adversary on the local performance will be further investigated. Then, this worst-case attack impact will be utilized to formulate the objectives of the adversary and the defender in the following subsection. ### Worst-case impact of stealthy attacks Since the performance vertex \(\rho\) is unknown to the adversary, we investigate the attack impact for all possible locations of the performance vertex in this subsection. We start by considering a fixed performance vertex. #### For a fixed performance vertex \(\rho\) According to _Assumption 1_, given a fixed performance vertex \(\rho\), the adversary selects an attack vertex \(a\in\mathcal{V}_{-\rho}\) while the defender selects a set of monitor vertices \(\mathcal{M}\) (\(|\mathcal{M}|\leq n_{s}\)). 
The worst-case impact of stealthy attacks on the fixed performance vertex \(\rho\) is formulated as follows: \[J_{\rho}(a,\mathcal{M})\triangleq\sup_{x(0)=0,\ \zeta\in\mathcal{L}_{2e}}\left\|y_{\rho}\right\|_{\mathcal{L}_{2}}^{2} \tag{7}\] \[\text{s.t.}\qquad\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}\leq\delta_{m_{k}},\ \forall m_{k}\in\mathcal{M}.\] The dual problem of (7) is given as follows: \[\inf_{\gamma_{m_{k}}>0}\left[\sup_{x(0)=0,\ \zeta\in\mathcal{L}_{2e}}\Big{(}\left\|y_{\rho}\right\|_{\mathcal{L}_{2}}^{2}-\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}\Big{)}\right.\\ \left.+\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\delta_{m_{k}}\right]. \tag{8}\] The dual problem (8) is bounded only if \(\left\|y_{\rho}\right\|_{\mathcal{L}_{2}}^{2}-\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}\leq 0,\ \forall\zeta\in\mathcal{L}_{2e}\) and \(x(0)=0\), which results in the following minimization problem: \[J_{\rho}(a,\mathcal{M})=\min_{\gamma_{m_{k}}>0}\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\delta_{m_{k}}, \tag{9}\] \[\text{s.t.}\quad\left\|y_{\rho}\right\|_{\mathcal{L}_{2}}^{2}-\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}\leq 0,\] \[x(0)=0,\ \forall\zeta\in\mathcal{L}_{2e}.\] The strong duality can be proven by utilizing S-Procedure [23, Ch.4]. Recalling the key results in dissipative system theory for linear systems with quadratic supply rates [24], the constraint in (9) can be translated into a linear matrix inequality [25, Prop. 
1] as follows: \[J_{\rho}(a,\mathcal{M})= \min_{\gamma_{m_{k}}>0,\ P=P^{\top}\geq 0}\ \sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\delta_{m_{k}} \tag{10}\] \[\text{s.t.}\ \left[\begin{array}{cc}-\bar{L}P-P\bar{L}&Pe_{a}\\ e_{a}^{\top}P&0\end{array}\right]+\left[\begin{array}{cc}e_{\rho}\\ 0\end{array}\right]\left[\begin{array}{cc}e_{\rho}^{\top}&0\end{array}\right]\] \[-\sum_{m_{k}\in\mathcal{M}}\gamma_{m_{k}}\left[\begin{array}{cc}e_{m_{k}}\\ 0\end{array}\right]\left[\begin{array}{cc}e_{m_{k}}^{\top}&0\end{array}\right]\leq 0.\] To guarantee the existence of a solution to the optimization problem (10), we need to show the feasibility of the optimization problem (7). This feasibility will be studied in Section III after we formulate the expected worst-case impact of stealthy attacks in the case of a probabilistic performance vertex in the following subsection. #### For a probabilistic performance vertex Due to the importance of the performance vertex \(\rho\), its location should not be revealed publicly. Thus, to investigate the worst-case impact of stealthy attacks (7), the malicious adversary considers the location of the performance vertex \(\rho\) in a probabilistic way, described by conditional probabilities: given an attack vertex \(a\in\mathcal{V}\), the conditional probability \(\pi^{a}(\rho|a)\ (0<\pi^{a}(\rho|a)<1,\ \forall\rho\neq a)\) stands for the belief of the malicious adversary in the location of the performance vertex \(\rho\). To neutralize the malicious adversary, the defender selects a probability distribution over the target vertex of the malicious adversary, analogously denoted by \(\pi^{d}(\rho|a)\ (0<\pi^{d}(\rho|a)<1,\ \forall\rho\neq a)\). According to _Assumption 1_, one has \(\sum_{\rho\in\mathcal{V}_{-a}}\pi^{a}(\rho|a)=1\) and \(\sum_{\rho\in\mathcal{V}_{-a}}\pi^{d}(\rho|a)=1\). Considering the uncertain objectives of the two agents in probabilistic ways leads to their objective functions in the following. 
When the defender selects the set of monitor vertices \(\mathcal{M}\) and the adversary selects attack vertex \(a\), the defender desires to minimize the following cost: \[R(a,\mathcal{M})\triangleq\mathfrak{c}(|\mathcal{M}|)+\sum_{\rho\in\mathcal{ V}_{-a}}\pi^{d}(\rho|a)J_{\rho}(a,\mathcal{M}), \tag{11}\] where \(\mathfrak{c}(|\mathcal{M}|)\) is a cost for the number of utilized sensors. This sensor-to-cost function \(\mathfrak{c}(|\mathcal{M}|)\) has the following properties: 1) it significantly increases as the number of utilized sensors increases, and 2) it is bounded for any monitor set \(\mathcal{M}\subseteq\mathcal{V}\). Meanwhile, the malicious adversary desires to maximize the following expected worst-case impact of stealthy attacks: \[Q(a,\mathcal{M})\triangleq\sum_{\rho\in\mathcal{V}_{-a}}\pi^{a}(\rho|a)J_{\rho} (a,\mathcal{M}). \tag{12}\] From (7), \(J_{\rho}(a,\mathcal{M})\) is non-negative for every pair of attack vertex \(a\) and monitor set \(\mathcal{M}\). Thus, the cost \(R(a,\mathcal{M})\) and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) are bounded when the worst-case impact of stealthy attacks (7) on every performance vertex \(\rho\in\mathcal{V}_{-a}\) is bounded. In the next section, we will present how the defender finds a set of admissible monitor vertices \(\mathcal{M}\) that guarantees the boundedness of the worst-case impact of stealthy attacks (7) for every attack vertex. **Remark 1**: _In a similar scenario, another objective function based on \(\mathcal{L}_{2}\)-gain for both the adversary and the defender has been proposed in [21, Sec. 3]. The objective function in [21, Sec. 3] was formulated in terms of the maximal \(\mathcal{L}_{2}\)-gains from the attack vertex \(a\) to the performance vertex \(\rho\) and from the attack vertex \(a\) to the monitor vertex \(m_{k}\). More specifically, the objective function in [21, Sec. 
3] is given by_ \[W_{\rho}(a,m_{k})=\sup_{\|\zeta\|_{\mathcal{L}_{2}}\neq 0}\frac{\|y_{\rho}\|_{\mathcal{L}_{2}}^{2}}{\|\zeta\|_{\mathcal{L}_{2}}^{2}}-\lambda\sup_{\|\zeta\|_{\mathcal{L}_{2}}\neq 0}\frac{\|y_{m_{k}}\|_{\mathcal{L}_{2}}^{2}}{\|\zeta\|_{\mathcal{L}_{2}}^{2}},\ (\lambda\geq 0).\] _The above objective \(W_{\rho}(a,m_{k})\) also considers two different outputs \(y_{\rho}(t)\) and \(y_{m_{k}}(t)\), but note that the output energies are maximized separately, thus leading to two different optimal input signals \(\zeta(t)\) in general cases. By contrast, our objective function (7) considers the worst-case impact of stealthy attacks that is simultaneously characterized by the multiple outputs \(y_{\rho}(t)\) and \(y_{m_{k}}(t)\) with respect to a single input signal \(\zeta(t)\). \(\triangleleft\) ## III Characterizing the set of monitor vertices In this section, we first provide an upper bound of the worst-case impact of stealthy attacks (7). The feasibility of this upper bound is guaranteed by a necessary and sufficient condition. From the investigation of this upper bound, we provide a graph-theoretic necessary and sufficient condition under which the cost (11) and the expected worst-case impact (12) are bounded. This condition, then, allows us to limit the admissible actions of the defender. In the remainder of this section, we show how the defender characterizes their admissible actions. ### Evaluating the worst-case impact of stealthy attacks The following lemma states a key property of the worst-case impact of stealthy attacks (7). 
**Lemma 1**: _Consider the continuous LTI system \(\Sigma_{\mathcal{M}}=(-\bar{L},e_{a},C_{\mathcal{M}}^{\top},0)\) with a given performance vertex \(\rho\), an attack vertex \(a\in\mathcal{V}_{-\rho}\), and a non-empty monitor vertex set \(\mathcal{M}\); the worst-case impact (7) has an upper bound:_ \[J_{\rho}(a,\mathcal{M})\ \leq\ \underline{J}_{\rho}(a,\mathcal{M}), \tag{13}\] _where_ \[\underline{J}_{\rho}(a,\mathcal{M})=\min_{m_{k}\in\mathcal{M}}\left\{\begin{array}{rl}\sup_{x(0)=0,\ \zeta\in\mathcal{L}_{2e}}&\|y_{\rho}\|_{\mathcal{L}_{2}}^{2}\\ \text{s.t.}&\|y_{m_{k}}\|_{\mathcal{L}_{2}}^{2}\leq\delta_{m_{k}}\end{array}\right\}.\triangleleft \tag{14}\] The proof is postponed to Appendix B. **Lemma 1** enables us to guarantee the boundedness of the worst-case impact of stealthy attacks (7) by considering the isolated worst-case impact of stealthy attacks (14) at a single monitor vertex \(m_{k}\in\mathcal{M}\). Next, as the first stage in the investigation of the boundedness of the worst-case impact of stealthy attacks (14), we adopt a result in [26]. Inspired by [26, Th. 2], the feasibility of the optimization problem (14) is related to the invariant zeros of \(\Sigma_{\rho}\triangleq(-\bar{L},e_{a},e_{\rho}^{\top},0)\) and \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0)\), which are defined as follows. **Definition 1** (Invariant zeros): _Consider the strictly proper LTI system \(\bar{\Sigma}\triangleq(\bar{A},\bar{B},\bar{C},0)\) where \(\bar{A},\bar{B}\), and \(\bar{C}\) are real matrices with appropriate dimensions. A tuple \((\bar{\lambda},\bar{x},\bar{g})\in\mathbb{C}\times\mathbb{C}^{N}\times\mathbb{C}\) is a zero dynamics of \(\bar{\Sigma}\) if it satisfies_ \[\left[\begin{array}{cc}\bar{\lambda}I-\bar{A}&-\bar{B}\\ \bar{C}&0\end{array}\right]\left[\begin{array}{c}\bar{x}\\ \bar{g}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right],\quad\bar{x}\neq 0. 
\tag{15}\] _In this case, a finite \(\bar{\lambda}\) is called a finite invariant zero of the system \(\bar{\Sigma}\). Further, the strictly proper system \(\bar{\Sigma}\) always has at least one invariant zero at infinity [27, Ch. 3]. \(\triangleleft\)_ More specifically, to guarantee the boundedness of the worst-case impact of stealthy attacks (14), let us state the following lemma. **Lemma 2** ([26, Th. 2]): _Consider the continuous LTI systems \(\Sigma_{\rho}\triangleq(-\bar{L},e_{a},e_{\rho}^{\top},0)\) and \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0),\ \forall m_{k}\in\mathcal{M}\). The optimization problem (14) is feasible if, and only if, there exists at least one system \(\Sigma_{m_{k}}\) such that its unstable invariant zeros are also invariant zeros of \(\Sigma_{\rho}\)._ \(\triangleleft\)__ The proof follows directly from the result in [26, Th. 2]. The result in **Lemma 2** enables us to investigate the invariant zeros of \(\Sigma_{m_{k}}\). Let us adopt the following lemma from our previous work [20], which considers the finite invariant zeros of \(\Sigma_{m_{k}}\). **Lemma 3** ([20, Lem. 4.4]): _Consider a networked control system associated with an undirected connected graph \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E},A)\), whose closed-loop dynamics is described in (4). Suppose that the networked control system is driven by the stealthy data injection attack at a single attack vertex \(a\) and observed by a single monitor vertex \(m_{k}\), resulting in the state-space model \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0)\). Then, there exist self-loop control gains \(\theta_{i},\ \forall i\in\{1,2,\ldots,N\},\) in (2) such that \(\Sigma_{m_{k}}\) has no finite unstable invariant zero. \(\triangleleft\)_ The proof is postponed to Appendix B.
**Lemma 3** enables us to carefully design the control law (2) such that, for every pair of an input vertex \(a\) and an output vertex \(m_{k}\), the corresponding LTI system \(\Sigma_{m_{k}}=(-\bar{L},e_{a},e_{m_{k}}^{\top},0)\) has no finite unstable invariant zero. Hence, it remains to investigate the infinite invariant zeros of the systems \(\Sigma_{m_{k}},\ \forall m_{k}\in\mathcal{M}\), in the following subsection.

### Infinite invariant zeros

We investigate the infinite invariant zeros of the systems \(\Sigma_{\rho}\) and \(\Sigma_{m_{k}},\ \forall m_{k}\in\mathcal{M}\). In the investigation, we make use of known results connecting the infinite invariant zeros mentioned in _Definition 1_ and the relative degree of a linear system, which is defined below.

**Definition 2** (Relative degree [28, Ch. 13]): _Consider the strictly proper LTI system \(\bar{\Sigma}\triangleq(\bar{A},\bar{B},\bar{C},0)\) where \(\bar{A}\in\mathbb{R}^{n\times n}\), \(\bar{B}\), and \(\bar{C}\) are real matrices with appropriate dimensions. The system \(\bar{\Sigma}\) is said to have relative degree \(r\ (1\leq r\leq n)\) if the following conditions are satisfied_ \[\bar{C}\bar{A}^{k}\bar{B}=0,\ \ 0\leq k<r-1,\qquad\bar{C}\bar{A}^{r-1}\bar{B}\neq 0. \tag{16}\] \(\triangleleft\)

**Remark 2**: _For a strictly proper SISO system, the relative degree coincides with the degree of its infinite invariant zero; equivalently, it equals the difference between the degrees of the denominator and numerator polynomials of its transfer function. \(\triangleleft\)_

Building on this connection, the boundedness of (14) is characterized by the following theorem, where \(r_{(\rho,a)}\) and \(r_{(m_{k},a)}\) denote the relative degrees of \(\Sigma_{\rho}\) and \(\Sigma_{m_{k}}\), respectively.

**Theorem 1**: _Consider the continuous LTI systems \(\Sigma_{\rho}\triangleq(-\bar{L},e_{a},e_{\rho}^{\top},0)\) and \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0),\ \forall m_{k}\in\mathcal{M}\). The optimization problem (14) admits a finite solution if, and only if, there exists at least one monitor vertex \(m_{k}\in\mathcal{M}\) such that_ \[r_{(m_{k},a)}\leq r_{(\rho,a)}. \tag{17}\] _The proof is postponed to Appendix C. \(\triangleleft\)_

By _Theorem 1_, the boundedness of (14) is guaranteed by finding at least one monitor vertex \(m_{k}\in\mathcal{M}\) that fulfills the condition (17). The following subsection presents how to find such a monitor set \(\mathcal{M}\).

**Remark 3**: _Let us consider the continuous LTI system \(\Sigma_{\mathcal{M}}=(-\bar{L},e_{a},C_{\mathcal{M}}^{\top},0)\) whose input is at the vertex \(a\) and whose outputs are at the monitor vertices \(m_{k}\in\mathcal{M}\). By employing the definition of the relative degree of single-input-multiple-output systems, adapted from [29], the relative degree of the system \(\Sigma_{\mathcal{M}}\) is the least relative degree from its input to a single monitor vertex.
Thus, we need to find at least one monitor vertex \(m_{k}\) that fulfills the condition (17), resulting in the boundedness of (14). This result eventually allows us to guarantee that the worst-case impact of stealthy attacks in (7) is bounded according to the property in (13). \(\triangleleft\)

### Dominating sets

Consider a subset \(\mathcal{M}\subseteq\mathcal{V}\) whose cardinality is not greater than the sensor budget \(n_{s}\), the maximum number of available sensors, i.e., \(\mathcal{M}=\{m_{1},m_{2},\ldots,m_{|\mathcal{M}|}\}\) and \(|\mathcal{M}|\leq n_{s}\). A monitor set \(\mathcal{M}\) is admissible if it contains at least one monitor vertex \(m_{k}\in\mathcal{M}\) such that this vertex \(m_{k}\) fulfills the necessary and sufficient condition (17) in Theorem 1. Such a set \(\mathcal{M}\) is a dominating set, which is defined below.

**Definition 3** (Dominating set): _Given an undirected graph \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E},A)\), a subset of the vertex set \(\mathcal{D}\subseteq\mathcal{V}\) is called a dominating set if, for every vertex \(u\in\mathcal{V}\setminus\mathcal{D}\), there is a vertex \(v\in\mathcal{D}\) such that \((u,v)\in\mathcal{E}\). \(\triangleleft\)_

The following lemma presents a necessary and sufficient condition under which a subset of the vertex set is a dominating set.

**Lemma 4**: _Consider an undirected graph \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E},A)\). A subset \(\mathcal{M}\subseteq\mathcal{V}\) is a dominating set of \(\mathcal{V}\) if, and only if, the following condition holds_ \[\left\|\mathcal{C}(\mathcal{M})\right\|_{0}=N, \tag{18}\] _where \(\mathcal{C}(\mathcal{M})=\sum_{m_{k}\in\mathcal{M}}(A+I)e_{m_{k}}\) and \(N\) is the cardinality of the vertex set \(\mathcal{V}\). \(\triangleleft\)_

The proof is postponed to Appendix A. By investigating all the subsets of \(\mathcal{V}\), we can find all the dominating sets that fulfill the condition (18).
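Condition (18) is straightforward to check numerically. Below is a minimal sketch (using numpy, on a hypothetical 4-vertex path graph; the function name and example graph are illustrative, not from the original) that tests whether a candidate monitor set is dominating by counting the nonzero entries of \(\mathcal{C}(\mathcal{M})=\sum_{m_{k}\in\mathcal{M}}(A+I)e_{m_{k}}\):

```python
import numpy as np

def is_dominating(A, monitor_set):
    """Check condition (18): M is dominating iff ||C(M)||_0 = N,
    where C(M) = sum_{m in M} (A + I) e_m."""
    N = A.shape[0]
    c = sum((A + np.eye(N))[:, m] for m in monitor_set)
    return np.count_nonzero(c) == N

# Hypothetical example: path graph 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

print(is_dominating(A, {1, 2}))  # True: every vertex is in {1,2} or adjacent to it
print(is_dominating(A, {0}))     # False: vertex 3 is neither in {0} nor adjacent to it
```

The check costs only a few vector additions per candidate set, which is what makes the exhaustive enumeration over subsets in the next section tractable.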
Let us make use of the following assumption. **Assumption 3**: _The vertex set \(\mathcal{V}\) has at least one dominating set such that it contains at most \(n_{s}\) elements. \(\triangleleft\)_ Based on _Assumptions 1-3_ and the above results in _Lemma 1_ and _Theorem 1_, we are now ready to state the following theorem that provides a graph-theoretic necessary and sufficient condition under which the cost (11) and the expected worst-case impact of stealthy attacks (12), caused by the stealthy data injection attack at an arbitrary attack vertex \(a\), are bounded. **Theorem 2**: _Suppose that Assumptions 1-3 hold. Consider the networked control system (4) associated with an undirected connected graph \(\mathcal{G}\) where the system has the stealthy data injection attack (3) at the input of an arbitrary attack vertex \(a\) and outputs (6) at monitor vertices \(m_{k}\in\mathcal{M}\). The cost \(R(a,\mathcal{M})\) in (11) and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) in (12) are bounded if, and only if, the monitor set \(\mathcal{M}\) is a dominating set of \(\mathcal{G}\). \(\triangleleft\)_ _Proof:_ Let us consider the continuous LTI systems \(\Sigma_{\rho}\triangleq(-\bar{L},e_{a},e_{\rho}^{\top},0)\) and \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0),\ \forall m_{k}\in\mathcal{M}\). The systems have the same stealthy data injection attack at the input of an arbitrary attack vertex \(a\), but \(\Sigma_{\rho}\) has an output at an arbitrary performance vertex \(\rho\) and \(\Sigma_{m_{k}}\) has an output at the monitor vertex \(m_{k}\). Based on _Definition 2_, _Assumption 1_ guarantees that the relative degree of \(\Sigma_{\rho}\) is not lower than one, i.e., \(r_{(\rho,a)}\geq 1\). We begin by proving sufficiency. _Assumption 3_ ensures that there exists at least one dominating set that has at most \(n_{s}\) elements. Therefore, the defender selects the monitor set \(\mathcal{M}\) as one such dominating set.
According to _Definitions 2-3_, there exists at least one system \(\Sigma_{m_{k}}\), where its input is at an arbitrary attack vertex \(a\) and its output is at the monitor vertex \(m_{k}\) (\(m_{k}\in\mathcal{M}\)), such that its relative degree is not greater than one, i.e., \(r_{(m_{k},a)}\leq 1\). Based on the above observation, one has \[r_{(m_{k},a)}\leq 1\leq r_{(\rho,a)},\] fulfilling the condition (17). From the results in _Theorem 1_ and _Lemma 1_, the satisfaction of (17) allows us to guarantee the boundedness of the worst-case impact of stealthy attacks (7). Therefore, the cost \(R(a,\mathcal{M})\) and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) are bounded based on their definitions in (11)-(12). For necessity, let us present a contradiction argument by assuming that the cost \(R(a,\mathcal{M})\) and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) are bounded while the monitor set \(\mathcal{M}\) is not a dominating set of \(\mathcal{G}\). Based on the definitions of \(Q(a,\mathcal{M})\) and \(R(a,\mathcal{M})\) in (11)-(12), they are bounded if, and only if, \(J_{\rho}(a,\mathcal{M})\) is bounded for all pairs of \(\rho\) and \(a\). Since the attack vertex \(a\) can be chosen arbitrarily and the monitor set \(\mathcal{M}\) is not a dominating set, the attack vertex \(a\) can be chosen such that it does not belong to \(\mathcal{M}\) and none of its neighbors belongs to \(\mathcal{M}\), resulting in \(r_{(m_{k},a)}>1\ \forall m_{k}\in\mathcal{M}\). On the other hand, the adversary considers all the possibilities of the performance vertex \(\rho\) including \((\rho,a)\in\mathcal{E}\), resulting in \(r_{(\rho,a)}=1\). The above observation gives us \[r_{(\rho,a)}=1<r_{(m_{k},a)},\ \ \forall m_{k}\in\mathcal{M}, \tag{19}\] violating the necessary and sufficient condition (17). 
Hence, for this particular pair of \(\rho\) and \(a\), the worst-case impact of stealthy attacks \(J_{\rho}(a,\mathcal{M})\) is unbounded, contradicting the assumption. \(\blacksquare\)

**Lemma 4** enables us to determine whether a subset of \(\mathcal{V}\) is a dominating set. On the other hand, _Theorem 2_ allows us to restrict the admissible actions of the defender to dominating sets of \(\mathcal{V}\). This step is beneficial to the defender in selecting monitor vertices such that the cost (11) and the expected worst-case impact of stealthy attacks (12) are always bounded. More detail on how the defender and the malicious adversary select their actions will be given in the following section.

**Remark 4**: _Regarding the concept of dominating sets, Lemma 4 is an alternative presentation of Definition 3. We can easily see their equivalence through the proof of Lemma 4 in Appendix A. Given the graph configuration consisting of vertices and edges, Definition 3 allows us to characterize dominating sets. Meanwhile, Lemma 4 provides an algebraic condition that allows us to find dominating sets when the adjacency matrix and the canonical basis vectors, representing single vertices, are given. \(\triangleleft\)_

## IV Stackelberg Security Game

In this section, to assist the defender and the malicious adversary in selecting their best actions, we employ the Stackelberg game-theoretic framework where the defender acts as _the leader_ and the malicious adversary acts as _the follower_ of the game. Subsequently, we provide an algorithm to illustrate the procedure of how the two agents seek their best actions.

### Game setup

To investigate the best actions of the defender and the adversary, we assume that they are two strategic players in a game. The defender can select at most \(n_{s}\) monitor vertices on which to place one sensor at each selected vertex with the purpose of detecting malicious activities.
Given _Assumption 3_, let us denote the set of dominating sets as \(\mathbb{D}\), where each dominating set has at most \(n_{s}\) elements, i.e., \(\mathbb{D}=\{\mathcal{M}\ |\ \mathcal{M}\subseteq\mathcal{V},\ |\mathcal{M}|\leq n_{s},\ \mathcal{M}\ \text{satisfies (18)}\}\). This set \(\mathbb{D}\) is chosen as the action space of the defender. Meanwhile, the malicious adversary is able to select any vertex at which to conduct the stealthy data injection attack, i.e., the action space of the malicious adversary is \(\mathbb{A}=\mathcal{V}\), the vertex set. Based on the catastrophic consequences caused by infamous malware such as Stuxnet and Industroyer [2, 3], the defender should decide their defense strategy regardless of the presence of malicious adversaries, since the defender does not know when adversaries appear. Thus, it is reasonable to let the defender select and announce their action publicly before the presence of the adversary [30, 31]. The defender is called _the leader_ of the Stackelberg game [22]. The purpose of the defender is to minimize the cost function \(R(a,\mathcal{M})\) in (11). Subsequently, after observing the leader's action, the adversary, with full system knowledge, selects their action with the purpose of maximizing the expected worst-case impact of stealthy attacks (12). The adversary is called _the follower_. Let us summarize the resources and the purposes of the defender and the malicious adversary as follows:

#### 4.1.1 **Model knowledge**

The defender and the malicious adversary have the following information: they know the vertex set \(\mathcal{V}\), the edge set \(\mathcal{E}\), the self-loop gains \(\theta_{i}\ (\forall i\in\mathcal{V})\), the alarm thresholds \(\delta_{i}\ (\forall i\in\mathcal{V})\), the sensor budget \(n_{s}\), and the cost for the number of utilized sensors \(\mathfrak{c}(|\mathcal{M}|)\).
On the other hand, given an attack vertex \(a\), the defender and the adversary have their own probability distributions over the performance vertex \(\rho\), i.e., the defender considers the malicious target through the conditional probability \(\pi^{d}(\rho|a)\) while the adversary assumes the location of the performance vertex through the conditional probability \(\pi^{a}(\rho|a)\).

#### 4.1.2 **Action space**

The malicious adversary can select any attack vertex \(a\in\mathcal{V}\), where we assume that the attack vertex is distinct from the performance vertex \(\rho\), i.e., \(a\neq\rho\). Meanwhile, due to the sensor budget \(n_{s}\) and the boundedness of the cost (11) and the expected worst-case impact of stealthy attacks (12) given by _Theorem 2_, the defender is only allowed to select an element of \(\mathbb{D}\), which contains the dominating sets.

#### 4.1.3 **Objective**

The defender wants to minimize the cost function \(R(a,\mathcal{M})\) in (11) by selecting an optimal dominating set \(\mathcal{M}^{\star}\in\mathbb{D}\), knowing that the malicious adversary bases their action on the defender's decision. Given the action of the defender, the malicious adversary desires to maximize the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) in (12) by selecting an optimal attack vertex \(a^{\star}(\mathcal{M}^{\star})\in\mathbb{A}=\mathcal{V}\). Thus, the leader considers the following problem.

**Problem 1**: _The defender is required to select an optimal dominating set \(\mathcal{M}^{\star}\in\mathbb{D}\) that minimizes the cost (11). \(\triangleleft\)_

The components of the Stackelberg game between the defender and the malicious adversary are summarized in Table 1. We cast _Problem 1_ in the Stackelberg game-theoretic framework with the defender as the leader, who selects and announces their action first, and the malicious adversary as the follower. This Stackelberg game always admits an optimal action [22], which is defined below.
**Definition 4** (Stackelberg optimal action [32]): _If there exists a mapping \(\mathcal{T}:\mathbb{D}\rightarrow\mathbb{A}\) such that, for any fixed \(\mathcal{M}\in\mathbb{D}\), one has \(Q(\mathcal{T}\mathcal{M},\mathcal{M})\geq Q(a,\mathcal{M})\) for all \(a\in\mathbb{A}\), and if there exists \(\mathcal{M}^{\star}\in\mathbb{D}\) such that \(R(\mathcal{T}\mathcal{M}^{\star},\mathcal{M}^{\star})\leq R(\mathcal{T}\mathcal{M},\mathcal{M})\) for all \(\mathcal{M}\in\mathbb{D}\), then the pair \((a^{\star}(\mathcal{M}^{\star}),\mathcal{M}^{\star})\in\mathbb{A}\times\mathbb{D}\), where \(a^{\star}(\mathcal{M}^{\star})=\mathcal{T}\mathcal{M}^{\star}\), is called a Stackelberg optimal action with the defender as the leader and the adversary as the follower of the game. \(\triangleleft\)_

Based on _Definition 4_, we first analyze the Stackelberg optimal action and then provide an algorithm that finds it in the following subsection.

### Stackelberg optimal action

Recalling _Problem 1_ and _Definition 4_, the defender finds their optimal action by solving the following optimization problem: \[\mathcal{M}^{\star}=\arg\min_{\mathcal{M}\in\mathbb{D}}\ R(a^{\star}(\mathcal{M}),\mathcal{M}), \tag{20}\] where \[a^{\star}(\mathcal{M})=\arg\max_{a\in\mathbb{A}}\ Q(a,\mathcal{M}). \tag{21}\] After observing the defender's optimal action, the adversary finds their optimal action by solving the following optimization problem: \[a^{\star}(\mathcal{M}^{\star})=\arg\max_{a\in\mathbb{A}}\ Q(a,\mathcal{M}^{\star}). \tag{22}\] One can verify that the optimal solution \((a^{\star}(\mathcal{M}^{\star}),\mathcal{M}^{\star})\) found through the optimization problems (20)-(22) is equivalent to the one in _Definition 4_. Finally, the procedure of finding the Stackelberg optimal action \((a^{\star}(\mathcal{M}^{\star}),\mathcal{M}^{\star})\) for the adversary and the defender is summarized in _Algorithm 1_.
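The bilevel search (20)-(22) can be read as a brute-force enumeration: for each dominating set the defender anticipates the follower's best response, then picks the set minimizing their own cost. A minimal sketch follows, assuming (hypothetically) that the payoff tables \(R\) and \(Q\) have already been computed and are stored as dictionaries keyed by pairs \((a,\mathcal{M})\); the function and variable names are illustrative:

```python
def stackelberg_optimal_action(D, A_space, R, Q):
    """Leader (defender) minimizes R(a*(M), M), anticipating that the
    follower (adversary) best-responds by maximizing Q(a, M) as in (21)."""
    best_response = {M: max(A_space, key=lambda a: Q[(a, M)]) for M in D}
    M_star = min(D, key=lambda M: R[(best_response[M], M)])
    return best_response[M_star], M_star

# Hypothetical toy instance: two dominating sets, two candidate attack vertices
D = [("m1",), ("m2",)]          # dominating sets, represented as tuples
A_space = ["a1", "a2"]          # candidate attack vertices
Q = {("a1", ("m1",)): 5.0, ("a2", ("m1",)): 3.0,
     ("a1", ("m2",)): 1.0, ("a2", ("m2",)): 2.0}
R = {("a1", ("m1",)): 10.0, ("a2", ("m1",)): 9.0,
     ("a1", ("m2",)): 8.0, ("a2", ("m2",)): 7.0}

a_star, M_star = stackelberg_optimal_action(D, A_space, R, Q)
print(a_star, M_star)  # a2 ('m2',)
```

Here the adversary's best response to \(("m1",)\) is \(a_{1}\) (with leader cost 10) and to \(("m2",)\) is \(a_{2}\) (with leader cost 7), so the leader commits to \(("m2",)\).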
\begin{table} \begin{tabular}{|l|l|} \hline **Component** & **Description** \\ \hline \hline **Players** & Defender and Adversary \\ \hline **Action Space** & Defender: \(\mathbb{D}=\{\mathcal{M}\ |\ \mathcal{M}\subseteq\mathcal{V},|\mathcal{M}|\leq n_{s},\ \mathcal{M}\ \text{satisfies (18)}\}\) \\ & Adversary: \(\mathbb{A}=\{a\ |\ a\in\mathcal{V}\}\) \\ \hline **Game Payoff** & Defender minimizes \(R(a,\mathcal{M})\) defined in (11) \\ **\& Goal** & Adversary maximizes \(Q(a,\mathcal{M})\) defined in (12) \\ \hline **Information** & Defender takes action first \\ **Structure** & Adversary responds to Defender’s action \\ \hline \end{tabular} \end{table} Table 1: Components of the Stackelberg security game between a defender and a malicious adversary.

```
Input: The vertex set \(\mathcal{V}\), the edge set \(\mathcal{E}\), the self-loop gains \(\theta_{i}\) and the alarm thresholds \(\delta_{i}\), \(\forall i\in\mathcal{V}\), the sensor budget \(n_{s}\), the cost of utilized sensors \(\mathsf{c}(|\mathcal{M}|)\), and the conditional probabilities \(\pi^{a}(\rho|a)\) and \(\pi^{d}(\rho|a)\). The defender is the leader and the malicious adversary is the follower of the Stackelberg security game.
Output: The best set of monitor vertices \(\mathcal{M}^{\star}\) and the best attack vertex \(a^{\star}(\mathcal{M}^{\star})\).
Initialize: \(\mathbb{D}=\emptyset\)
 1: for every subset \(\mathcal{M}\subset\mathcal{V}\) where \(|\mathcal{M}|\leq n_{s}\) do
 2:   if \(\mathcal{M}\) fulfills the condition (18) then append \(\mathcal{M}\) to \(\mathbb{D}\)
 3:   end if
 4: end for
 5: for every pair of \(\mathcal{M}\in\mathbb{D}\) and \(a\in\mathcal{V}\) do
 6:   for every performance vertex \(\rho\in\mathcal{V}_{-a}\) do
 7:     solve (10) to obtain the worst-case impact of stealthy attacks \(J_{\rho}(a,\mathcal{M})\).
 8:   end for
 9:   Compute the cost for the defender \(R(a,\mathcal{M})\) through (11) and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) through (12).
10: end for
11: For each action \(\mathcal{M}\in\mathbb{D}\) by the defender, find the best response \(a^{\star}(\mathcal{M})\) by solving (21).
12: The defender (the leader) selects their best action \(\mathcal{M}^{\star}\) by solving (20).
13: The malicious adversary (the follower) selects their best response \(a^{\star}(\mathcal{M}^{\star})\) by solving (22).
```
**Algorithm 1** Stackelberg optimal action

## V Computational Complexity

In this section, we highlight the benefits of characterizing admissible monitor sets as dominating sets for the computation, especially in large-scale networked control systems. The defender is allowed to select at most \(n_{s}\) monitor vertices at which to place sensors. Thus, if the defender selects \(k\ (k\leq n_{s})\) vertices, the defender has \(k\)-combinations of the vertex set \(\mathcal{V}\). Next, we compute the number of all the possible subsets of the vertex set \(\mathcal{V}\) in which each subset has at most \(n_{s}\) elements. Let us denote the number of possible subsets of the vertex set \(\mathcal{V}\) as \(S(N,n_{s})\), where \(N\) is the number of vertices in the network and \(n_{s}\) is the sensor budget. This number \(S(N,n_{s})\) can be computed as follows: \[S(N,n_{s})=\sum_{k=1}^{n_{s}}\binom{N}{k}. \tag{23}\] This number \(S(N,n_{s})\) grows dramatically when either the number of vertices \(N\) or the sensor budget \(n_{s}\) increases due to \(S(N,n_{s})=\mathcal{O}(N^{n_{s}})\), where \(\mathcal{O}\) stands for Big O notation. Let us take some numerical examples to illustrate the above claim. For \(n_{s}=3\) and \(N=50\), one has \(S(50,3)=\sum_{k=1}^{3}\binom{50}{k}=20875\); for \(n_{s}=3\) and \(N=100\), one has \(S(100,3)=\sum_{k=1}^{3}\binom{100}{k}=166750\); for \(n_{s}=4\) and \(N=50\), one has \(S(50,4)=\sum_{k=1}^{4}\binom{50}{k}=251175\). It is noticed that \(S(100,3)\) is almost eight times as much as \(S(50,3)\) when we just double the number of vertices.
On the other hand, \(S(50,4)\) is almost twelve times as much as \(S(50,3)\) when the sensor budget slightly increases from \(3\) to \(4\). An illustration of the dramatic increase of \(S(N,n_{s})\) with respect to \(N\) (blue dashed-dotted line) can be found in Figure 2, where it has the same slope as \(\mathcal{O}(N^{3})\) (red dashed line). In Figure 2, we also conduct Monte-Carlo simulations with 500 samples to count the number of dominating sets with respect to the size of the graph \(N\), which is denoted by the black dashed-dotted line. In the Monte-Carlo simulations, we examine Erdős–Rényi random undirected connected graphs \(G(N,q)\) where \(N\) is the number of vertices and an edge is included to connect two vertices with probability \(q=0.5\) [33]. Since the number \(S(N,n_{s})\) in (23) represents the possible actions available to the defender, the defender should examine all of these actions to seek their optimal action against the malicious adversary. However, as the number of vertices increases, the number of possible actions grows significantly (see Figure 2), making it increasingly difficult to investigate large-scale systems. In contrast, the number of dominating sets typically decreases with respect to the size of random graphs (see an example in Figure 2), greatly alleviating the number of optimization problems to be solved. The above illustration highlights the advantages of the proposed scheme. In the following, we show that the concept of dominating sets allows us not only to alleviate the computational burden discussed above but also to guarantee the existence of the Stackelberg optimal action. Let us assume that the defender selects a monitor set \(\mathcal{M}\) that is not a dominating set of the graph \(\mathcal{G}\) (see Definition 3).
Based on Definition 3, there exists at least one vertex \(a\in\mathcal{V}\) satisfying the following properties: 1) the vertex \(a\) does not belong to the monitor set \(\mathcal{M}\), and 2) there is no vertex \(m_{k}\in\mathcal{M}\) such that \((m_{k},a)\in\mathcal{E}\). These two properties imply that the relative degree of the system \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0)\) is greater than one, i.e., \(r_{(m_{k},a)}>1\) for all \(m_{k}\in\mathcal{M}\) (see Remark 3). On the other hand, if the performance vertex \(\rho\) is unfortunately selected as a neighbor of the attack vertex \(a\), the relative degree of the system \(\Sigma_{\rho}\triangleq(-\bar{L},e_{a},e_{\rho}^{\top},0)\) is one, i.e., \(r_{(\rho,a)}=1\). Based on the above assumptions, one has \[r_{(m_{k},a)}>1=r_{(\rho,a)},\ \forall m_{k}\in\mathcal{M},\] violating the necessary and sufficient condition (17) in _Theorem 1_. Consequently, the optimization problem (14) does not admit a finite solution, and the boundedness of the worst-case impact of stealthy attacks (7) is not guaranteed. Thus, the expected worst-case impact of stealthy attacks (12) might be infinite, resulting in no solution to (21). Since finding a solution to the optimization problem (21) is the first step (line 11 in _Algorithm 1_) of seeking the Stackelberg optimal action, we are unable to proceed to the last two steps of _Algorithm 1_ (lines 12-13) that find the best actions for the defender and the adversary.

Figure 2: Given the sensor budget \(n_{s}=3\), the number of subsets of the vertex set \(\mathcal{V}\) with respect to the number of vertices has the same slope as \(\mathcal{O}(N^{3})\). The number of dominating sets is given through Monte-Carlo simulation with 500 samples.

In the next section, we demonstrate the effectiveness of the proposed security allocation scheme with the notion of dominating sets through a numerical example.
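The subset counts quoted above can be reproduced directly from (23); a short script using only the standard library (the function name is illustrative):

```python
from math import comb

def num_candidate_sets(N, ns):
    """S(N, n_s) in (23): number of nonempty subsets of V with at most n_s elements."""
    return sum(comb(N, k) for k in range(1, ns + 1))

print(num_candidate_sets(50, 3))   # 20875
print(num_candidate_sets(100, 3))  # 166750
print(num_candidate_sets(50, 4))   # 251175
```

These are exactly the figures compared against the number of dominating sets in Figure 2.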
## VI Numerical Example

In the first part of this section, _Algorithm 1_ is run step by step to validate the result in _Theorem 2_ and to find the Stackelberg optimal action for the defender and the malicious adversary in an example. In the remainder of this section, the alleviation of the computational burden is examined. To demonstrate the obtained results, let us consider an example of a \(50\)-vertex networked control system depicted in Figure 3. Parameters of the system are selected as follows: \(\theta_{i}=0.5\), \(\delta_{i}=1,\ \forall i\in\mathcal{V}\); the cost for the number of utilized sensors is set as \(\mathsf{c}(|\mathcal{M}|)=e^{\kappa|\mathcal{M}|}\) where \(\kappa=0.2\); the beliefs of the defender and the malicious adversary in the location of the performance vertex given an attack vertex are assumed to be uniformly distributed, i.e., \(\pi^{a}(\rho|a)=1/(N-1)\) and \(\pi^{d}(\rho|a)=1/(N-1)\); and the sensor budget is \(n_{s}=3\).

### _The Stackelberg optimal action_

First, we start at _lines 1-4_ of _Algorithm 1_. By investigating all the subsets \(\mathcal{M}\subset\mathcal{V}\) where \(|\mathcal{M}|\leq n_{s}\), twenty subsets satisfy the necessary and sufficient condition (18), i.e., they are dominating sets. One of those dominating sets is illustrated in Figure 3, where the elements of the dominating set are coded blue. From Figure 3, let us consider a system \(\Sigma_{m_{k}}\triangleq(-\bar{L},e_{a},e_{m_{k}}^{\top},0)\) where \(e_{a}\) represents the input at any vertex and \(e_{m_{k}}\) represents the monitor output at a blue vertex. One can verify that there exists at least one blue vertex such that the relative degree of \(\Sigma_{m_{k}}\) is never greater than one. Thus, the cost for the defender and the expected worst-case impact of stealthy attacks conducted at the input of an arbitrary vertex are always bounded according to the result in _Theorem 2_.
This result, then, allows us to proceed to _lines 5-10_ of _Algorithm 1_ to compute the worst-case impact of stealthy attacks \(J_{\rho}(a,\mathcal{M})\), the cost \(R(a,\mathcal{M})\) defined in (11) for the defender, and the expected worst-case impact of stealthy attacks \(Q(a,\mathcal{M})\) defined in (12) for the malicious adversary. Through simulation, the maximum cost for the defender and the maximum expected worst-case impact of stealthy attacks are obtained as follows: \(R(a,\mathcal{M})\leq 50.2456\) and \(Q(a,\mathcal{M})\leq 48.4235\) for an arbitrary pair of a vertex \(a\in\mathcal{V}\) and a dominating set \(\mathcal{M}\), which verifies the result in _Theorem 2_. Finally, let us go from _line 11_ to _line 13_ of _Algorithm 1_ to find the Stackelberg optimal action for the defender and the malicious adversary. The optimal action \(\mathcal{M}^{\star}\) for the defender consists of the three blue vertices in Figure 3, yielding the minimum cost \(R(a^{\star}(\mathcal{M}^{\star}),\mathcal{M}^{\star})=49.7985\). Given such an optimal action \(\mathcal{M}^{\star}\), the malicious adversary chooses the red vertex \(a^{\star}(\mathcal{M}^{\star})\) in Figure 3, which allows them to maximize the expected worst-case impact of stealthy attacks at \(Q(a^{\star}(\mathcal{M}^{\star}),\mathcal{M}^{\star})=47.9764\).

### _Computational complexity_

As discussed above, the \(50\)-vertex networked control system (see Figure 3) gives us twenty dominating sets where the sensor budget is three (\(n_{s}=3\)). This number is far smaller than the number of subsets of the vertex set with at most \(n_{s}\) elements, i.e., \(S(50,3)=20875\). There are \(50\) possibilities for the attack vertex \(a\), leading to \(49\) possibilities for the performance vertex \(\rho\) due to \(\rho\neq a\) (see _Assumption 1_).
Thus, we only need to solve \(20\times 50\times 49=49000\) optimization problems compared to \(20875\times 50\times 49=51143750\) optimization problems for investigating all the possible monitor sets. We conducted an experiment solving the optimization problem (10) using CVX in MATLAB version 2021a [34] on a personal computer with an Intel Core i7-10700 2.9 GHz CPU and 16 GB of DDR4 RAM. It took an average of \(5.12\) seconds to solve the optimization problem (10) once. Hence, the proposed algorithm based on the concept of dominating sets took approximately \(70\) hours instead of \(72738\) hours.

Figure 3: 50-vertex graph where the optimal monitor vertices are coded blue and the optimal attack vertex is coded red.

## VII Conclusion

In this paper, we investigated the security allocation problem in a networked control system faced with a stealthy data injection attack. The uncertain performance vertex allowed us to formulate the objective functions of the defender and the adversary by considering probabilistic locations of the local performance. We presented a necessary and sufficient condition based on dominating sets under which the defender guarantees the boundedness of their cost and of the expected worst-case impact of stealthy attacks. Since the defender should decide their action regardless of the presence of the adversary, we cast the security allocation problem in the Stackelberg game-theoretic framework. Then, we provided an algorithm describing the procedure of finding the Stackelberg optimal action with the defender as the leader and the adversary as the follower of the game. The advantage of the proposed security allocation scheme was highlighted in the context of large-scale networks via a discussion on the computational burden and several numerical simulations.

## Appendix A Proof of Lemma 1

Showing (13) is trivial when the monitor vertex set \(\mathcal{M}\) has only one vertex.
We assume that \(\mathcal{M}\) has more than one monitor vertex. From the worst-case impact of stealthy attacks (7), let us introduce the following optimization problem, obtained by removing all constraints except the one corresponding to a monitor vertex \(m_{k}\in\mathcal{M}\): \[J_{\rho}(a,m_{k})=\sup_{x(0)=0,\ \zeta\in\mathcal{L}_{2e}} \left\|y_{\rho}\right\|_{\mathcal{L}_{2}}^{2}\] (24) s.t. \[\left\|y_{m_{k}}\right\|_{\mathcal{L}_{2}}^{2}\leq\delta_{m_{k}}.\] The design of the optimization problem (24) tells us that its feasible set contains the feasible set of the optimization problem (7). Further, the two optimization problems (7) and (24) have the same objective function. This implies that \(J_{\rho}(a,\mathcal{M})\leq J_{\rho}(a,m_{k})\) for all \(m_{k}\in\mathcal{M}\), directly resulting in (13). \(\blacksquare\)

## Appendix B Proof of Lemma 3

Let us denote a tuple \((\bar{\lambda}_{m_{k}},\bar{x}_{m_{k}},\bar{g}_{m_{k}})\in\mathbb{C}\times\mathbb{C}^{N}\times\mathbb{C}\) as a zero dynamics of \(\Sigma_{m_{k}}\), where a finite \(\bar{\lambda}_{m_{k}}\) is called a finite invariant zero of \(\Sigma_{m_{k}}\). From _Definition 1_, one has that the tuple \((\bar{\lambda}_{m_{k}},\bar{x}_{m_{k}},\bar{g}_{m_{k}})\) satisfies \[\left[\begin{array}{cc}\bar{\lambda}_{m_{k}}I+\bar{L}&-e_{a}\\ e_{m_{k}}^{\top}&0\end{array}\right]\left[\begin{array}{c}\bar{x}_{m_{k}}\\ \bar{g}_{m_{k}}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right]. \tag{25}\] The above equation is rewritten as \[\left[\begin{array}{cc}(\bar{\lambda}_{m_{k}}-\theta_{0})I+\bar{L}+\theta_{0}I&-e_{a}\\ e_{m_{k}}^{\top}&0\end{array}\right]\left[\begin{array}{c}\bar{x}_{m_{k}}\\ \bar{g}_{m_{k}}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right], \tag{26}\] where \(\theta_{0}\in\mathbb{R}_{+}\) is a uniform offset self-loop control gain.
From (26), the finite value \((\bar{\lambda}_{m_{k}}-\theta_{0})\in\mathbb{C}\) is an invariant zero of a new state-space model \(\tilde{\Sigma}_{m_{k}}\triangleq(-\bar{L}-\theta_{0}I,e_{a},e_{m_{k}}^{\top},0)\). For all \(\bar{\lambda}_{m_{k}}\in\mathbb{C}\) satisfying (26), the control gain \(\theta_{0}\) can be adjusted such that \(\theta_{0}>\text{Re}(\bar{\lambda}_{m_{k}})\), so that \(\tilde{\Sigma}_{m_{k}}\) has no finite unstable zero. Then, the self-loop control gains \(\theta_{i},\ i\in\{1,2,\ldots,N\},\) in (2) are tuned with \(\theta_{0}\) such that the system \(\Sigma_{m_{k}}\) is identical to \(\tilde{\Sigma}_{m_{k}}\). By this tuning procedure, the system \(\Sigma_{m_{k}}\) also has no finite unstable invariant zero. ## Appendix C Proof of Theorem 1 The result in _Lemma 2_ enables us to investigate invariant zeros of the systems \(\Sigma_{\rho}\) and \(\Sigma_{m_{k}},\ \forall m_{k}\in\mathcal{M}\). Based on _Lemma 3_, \(\Sigma_{m_{k}}\) has no finite unstable invariant zero, which leaves us to analyze the infinite invariant zeros of those systems. Recalling the equivalence between the relative degree of a SISO system and the degree of its infinite zero (see _Remark 2_), a necessary condition to guarantee the feasibility of the optimization problem (14) is that there exists at least one system \(\Sigma_{m_{k}}\) (\(m_{k}\in\mathcal{M}\)) such that the number of its infinite invariant zeros is not greater than that of the system \(\Sigma_{\rho}\). This implies \(r_{(m_{k},a)}\leq r_{(\rho,a)}\). For sufficiency, it remains to show that if \(r_{(m_{k},a)}\leq r_{(\rho,a)}\), all the infinite zeros of the system \(\Sigma_{m_{k}}\) are also infinite zeros of the system \(\Sigma_{\rho}\). The following proof is adapted from our previous results in [13, Th. 7]. In the investigation, we make use of the definition of infinite invariant zeros in [35, Def. 2.4]. 
We investigate infinite zeros of \(\Sigma_{m_{k}}\) and \(\Sigma_{\rho}\) by starting from their transfer functions with zero initial states \[G_{(\rho,a)}(s) =e_{\rho}^{\top}(sI+\bar{L})^{-1}e_{a}=\frac{P_{(\rho,a)}(s)}{Q(s)},\] \[G_{(m_{k},a)}(s) =e_{m_{k}}^{\top}(sI+\bar{L})^{-1}e_{a}=\frac{P_{(m_{k},a)}(s)}{Q (s)}, \tag{27}\] where \(s\in\mathbb{C}\) is the Laplace complex variable. Based on _Remark 2_, \(P_{(\rho,a)}(s)\), \(P_{(m_{k},a)}(s)\), and \(Q(s)\) are polynomials of degrees \(N-r_{(\rho,a)}\), \(N-r_{(m_{k},a)}\), and \(N\), respectively. Let us denote \(z_{\tau}=\sigma_{\tau}+j\omega_{\tau}\in\mathbb{C},\ \tau\in\{1,2,\ldots,r_{(m_{k},a)}\}\) with infinite modulus as infinite invariant zeros of \(\Sigma_{m_{k}}\). Indeed, the zero \(z_{\tau}\) (\(1\leq\tau\leq r_{(m_{k},a)}\)) is an infinite invariant zero of maximal degree \(r_{(m_{k},a)}\) of the system \(\Sigma_{m_{k}}\)[35, Def. 2.4] if it satisfies \[\lim_{\|z_{\tau}\|\to\infty}z_{\tau}^{q}G_{(m_{k},a)}(z_{\tau})=0, \ (0\leq q\leq r_{(m_{k},a)}-1),\] \[\lim_{\|z_{\tau}\|\to\infty}z_{\tau}^{r_{(m_{k},a)}}G_{(m_{k},a)}( z_{\tau})\neq 0. \tag{28}\] Further, with \(0\leq q\leq r_{(m_{k},a)}-1\), we also have \[\lim_{\|z_{\tau}\|\to\infty}z_{\tau}^{q}G_{(\rho,a)}(z_{\tau})=\lim_{\|z_{\tau}\| \to\infty}\frac{z_{\tau}^{q}P_{(\rho,a)}(z_{\tau})}{Q(z_{\tau})}=0. \tag{29}\] The limit (29) holds because the numerator \(z_{\tau}^{q}P_{(\rho,a)}(z_{\tau})\) is a polynomial of degree \(N-r_{(\rho,a)}+q\leq N-1<N\), where \(N\) is the degree of the denominator polynomial \(Q(z_{\tau})\). This implies that any infinite zero \(z_{\tau}\) of maximal degree \(r_{(m_{k},a)}\) of the system \(\Sigma_{m_{k}}\) is also an infinite zero of degree \(r_{(m_{k},a)}\) of the system \(\Sigma_{\rho}\). 
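The relative-degree condition \(r_{(m_{k},a)}\leq r_{(\rho,a)}\) in Theorem 1 can be checked numerically: for a SISO system \((A,b,c)\) the relative degree is the smallest \(r\) with \(cA^{r-1}b\neq 0\). The sketch below is an illustration of this check (not the paper's code); the path-graph example and the use of a plain Laplacian without the self-loop offset \(\Theta\) are assumptions made for simplicity.

```python
def relative_degree(L, a, m):
    """Smallest r with e_m^T (-L)^(r-1) e_a != 0 for the SISO system
    (A, b, c) = (-L, e_a, e_m^T); by Remark 2 this equals the degree of
    the system's infinite zero. Returns None if no r <= n is found."""
    n = len(L)
    negL = [[-v for v in row] for row in L]
    x = [0.0] * n
    x[a] = 1.0                                   # b = e_a
    for r in range(1, n + 1):
        if abs(x[m]) > 1e-12:                    # c A^(r-1) b = (A^(r-1) b)[m]
            return r
        x = [sum(negL[i][j] * x[j] for j in range(n)) for i in range(n)]
    return None

# Laplacian of the path graph 0-1-2-3
L = [[ 1, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  1]]
print(relative_degree(L, a=0, m=1))  # 2
print(relative_degree(L, a=0, m=3))  # 4
```

On this example the relative degree grows with the hop distance between the attack and monitor vertices, which is why a far-away monitor can fail the feasibility condition.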
## Appendix D Proof of Lemma 4 Let us decompose \(\mathcal{C}(\mathcal{M})=\mathcal{C}_{A}(\mathcal{M})+\mathcal{C}_{I}(\mathcal{M})\) where \(\mathcal{C}_{A}(\mathcal{M})=\sum_{m_{k}\in\mathcal{M}}Ae_{m_{k}}\) and \(\mathcal{C}_{I}(\mathcal{M})=\sum_{m_{k}\in\mathcal{M}}e_{m_{k}}\). The \(i\)-th entry of \(\mathcal{C}_{I}(\mathcal{M})\) is \(0\) if vertex \(i\) does not belong to \(\mathcal{M}\) and \(1\) if vertex \(i\) belongs to \(\mathcal{M}\). The \(i\)-th entry of \(\mathcal{C}_{A}(\mathcal{M})\) is \(0\) if no neighbor of vertex \(i\) belongs to \(\mathcal{M}\) and a non-zero value if at least one neighbor of vertex \(i\) belongs to \(\mathcal{M}\). Thus, the \(i\)-th entry of \(\mathcal{C}(\mathcal{M})\) is \(0\) if neither vertex \(i\) nor any of its neighbors belongs to \(\mathcal{M}\), and a non-zero value if vertex \(i\) or one of its neighbors belongs to \(\mathcal{M}\). If condition (18) is fulfilled, the vector \(\mathcal{C}(\mathcal{M})\) has no zero entry. This implies that an arbitrary vertex in \(\mathcal{V}\) is either a vertex of \(\mathcal{M}\) or a neighbor of a vertex of \(\mathcal{M}\), so \(\mathcal{M}\) is a dominating set.
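The test in Lemma 4 is a direct vector computation: form \(\mathcal{C}(\mathcal{M})=\sum_{m\in\mathcal{M}}(A+I)e_{m}\) and check for zero entries. A minimal sketch (illustrative function and graph, not the paper's code) follows; it also reproduces the count of all monitor sets of size at most three on a 50-vertex graph quoted in the computational discussion.

```python
from math import comb

def is_dominating(adj, M):
    """Check condition (18): C(M) = sum_{m in M} (A + I) e_m has no zero
    entry, i.e. every vertex is in M or adjacent to a vertex in M.
    adj: adjacency matrix as a list of lists, M: iterable of vertex indices."""
    n = len(adj)
    c = [0] * n
    for m in M:
        c[m] += 1                      # I * e_m : vertex m itself
        for i in range(n):
            c[i] += adj[i][m]          # A * e_m : neighbors of m
    return all(ci != 0 for ci in c)

# 5-vertex path graph 0-1-2-3-4
adj = [[0, 1, 0, 0, 0],
       [1, 0, 1, 0, 0],
       [0, 1, 0, 1, 0],
       [0, 0, 1, 0, 1],
       [0, 0, 0, 1, 0]]
print(is_dominating(adj, {1, 3}))  # True: every vertex is within one hop
print(is_dominating(adj, {0}))     # False: vertices 2, 3, 4 are uncovered

# Monitor sets of size at most 3 in a 50-vertex graph (cf. Section VI):
print(comb(50, 1) + comb(50, 2) + comb(50, 3))  # 20875
```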
2309.16510
Variations of the HCO$^{+}$, HCN, HNC, N$_2$H$^+$ and NH$_{3}$ deuterium fractionation in high-mass star-forming regions
We use spectra and maps of the $J=1-0$ and $J=2-1$ DCO$^{+}$, DCN, DNC, $\rm N_2D^+$ lines and $1_{11}-1_{01}$ ortho- and para-NH$_{2}$D lines, obtained with the IRAM-30m telescope, as well as observations of their hydrogenated isotopologues to study deuteration processes in five high-mass star-forming regions. The temperature was estimated from CH$_3$CCH lines, also observed with the IRAM-30m telescope, and from NH$_3$ lines, observed with the 100-m radio telescope in Effelsberg, as well as using the integrated intensity ratios of the $J=1-0$ H$^{13}$CN and HN$^{13}$C lines and their main isotopologues. Applying a non-LTE radiative transfer model with RADEX, the gas density and the molecular column densities were estimated. D/H ratios are $0.001-0.05$ for DCO$^{+}$, $0.001-0.02$ for DCN, $0.001-0.05$ for DNC and $0.02-0.4$ for NH$_{2}$D. The D/H ratios decrease with increasing temperature in the range of $\rm 20-40 \,K$ and slightly vary at densities $n(\rm H_2) \sim 10^4-10^6\, cm^{-3}$. The deuterium fraction of $\rm N_2H^{+}$ is $0.008-0.1$ at temperatures in the range of $\rm 20-25\, K$ and at a density of $\sim 10^5\, \rm cm^{-3}$. We also estimate relative abundances and find $ \sim 10^{-11}-10^{-9}$ for DCO$^{+}$ and DNC, $ \sim 10^{-11}-10^{-10}$ for $\rm N_2D^+$ and $ \sim 10^{-10}-10^{-8}$ for NH$_{2}$D. The relative abundances of these species decrease with increasing temperature. However, the DCN/H$_2$ ratio is almost constant ($\sim 10^{-10}$). The observational results agree with the predictions of chemical models (although in some cases there are significant differences).
A. G. Pazukhin, I. I. Zinchenko, E. A. Trofimova, C. Henkel, D. A. Semenov
2023-09-28T15:16:33Z
http://arxiv.org/abs/2309.16510v1
Variations of the HCO\({}^{+}\), HCN, HNC, N\({}_{2}\)H\({}^{+}\) and NH\({}_{3}\) deuterium fractionation in high-mass star-forming regions ###### Abstract We use spectra and maps of the \(J=1-0\) and \(J=2-1\) DCO\({}^{+}\), DCN, DNC, N\({}_{2}\)D\({}^{+}\) lines and \(1_{11}-1_{01}\) ortho- and para-NH\({}_{2}\)D lines, obtained with the IRAM-30m telescope, as well as observations of their hydrogenated isotopologues to study deuteration processes in five high-mass star-forming regions. The temperature was estimated from CH\({}_{3}\)CCH lines, also observed with the IRAM-30m telescope, and from NH\({}_{3}\) lines, observed with the 100-m radio telescope in Effelsberg, as well as using the integrated intensity ratios of the \(J=1-0\) H\({}^{13}\)CN and HN\({}^{13}\)C lines and their main isotopologues. Applying a non-LTE radiative transfer model with RADEX, the gas density and the molecular column densities were estimated. D/H ratios are 0.001-0.05 for DCO\({}^{+}\), 0.001-0.02 for DCN, 0.001-0.05 for DNC and 0.02-0.4 for NH\({}_{2}\)D. The D/H ratios decrease with increasing temperature in the range of 20-40 K and slightly vary at densities \(n(\mathrm{H}_{2})\sim 10^{4}-10^{6}\) cm\({}^{-3}\). The deuterium fraction of N\({}_{2}\)H\({}^{+}\) is 0.008-0.1 at temperatures in the range of 20-25 K and at a density of \(\sim 10^{5}\) cm\({}^{-3}\). We also estimate relative abundances and find \(\sim 10^{-11}-10^{-9}\) for DCO\({}^{+}\) and DNC, \(\sim 10^{-11}-10^{-10}\) for N\({}_{2}\)D\({}^{+}\) and \(\sim 10^{-10}-10^{-8}\) for NH\({}_{2}\)D. The relative abundances of these species decrease with increasing temperature. However, the DCN/H\({}_{2}\) ratio is almost constant (\(\sim 10^{-10}\)). The observational results agree with the predictions of chemical models (although in some cases there are significant differences). 
keywords: ISM: abundances - ISM: molecules - Stars: formation - Stars: massive - astrochemistry ## 1 Introduction Our knowledge of high-mass star formation (HMSF) and early evolution of massive stars is still far from being satisfactory. HMSF regions are rare and located at large distances; hence, understanding the physical and chemical processes involved is important (e.g., Tan et al., 2014). One of the questions is related to deuterium fractionation in these regions. The deuterium fraction \(D_{\mathrm{frac}}\) is the ratio of the abundances of a deuterated molecule and its hydrogenated counterpart. The observed abundance of deuterated molecules in star-forming regions is higher than the initial D/H ratio \(\sim 10^{-5}\) (Oliveira et al., 2003). The abundance of deuterium in interstellar molecules increases because the forward reaction (deuteration) occurs without a thermal barrier, while the reverse reaction (removal of D) has an energy barrier (e.g., Turner, 2001): \[\mathrm{H}_{3}^{+}+\mathrm{HD} \rightleftarrows \mathrm{H}_{2}\mathrm{D}^{+} +\mathrm{H}_{2}+230\mathrm{\ K}, \tag{1}\] \[\mathrm{CH}_{3}^{+}+\mathrm{HD} \rightleftarrows \mathrm{CH}_{2}\mathrm{D}^{+}+\mathrm{H}_{2}+370\mathrm{\ K}, \tag{2}\] \[\mathrm{C}_{2}\mathrm{H}_{2}^{+}+\mathrm{HD} \rightleftarrows \mathrm{C}_{2}\mathrm{HD}^{+}+\mathrm{H}_{2}+550\mathrm{\ K}. \tag{3}\] Reaction (1) is efficient at temperatures of \(\sim\)10-30 K, while reactions (2), (3) are efficient at temperatures up to \(\sim\)80 K. At densities above \(10^{5}\) cm\({}^{-3}\) and temperatures below 10 K, the freezing out of gaseous species such as CO onto grain surfaces also causes deuteration enhancement, since the depletion of CO suppresses the destruction of the ions in (1), (2) and (3) (e.g., Caselli et al., 1999). There are chemical models that describe the formation of deuterated molecules (Turner, 2001; Roueff et al., 2007; Albertsson et al., 2013; Sipila et al., 2015, 2019). 
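The steepness of the temperature dependence implied by reactions (1)-(3) can be illustrated by the equilibrium enhancement factor \(\exp(\Delta E/kT)\) for an exothermic exchange reaction. This is only a zeroth-order sketch: real \(D_{\mathrm{frac}}\) values are set by the full reaction kinetics and destruction channels, not by this equilibrium limit.

```python
from math import exp

def enhancement(delta_e_k, T):
    """Equilibrium boost factor exp(dE / kT) for an exothermic exchange
    reaction with zero-point energy difference dE/k (in K) at temperature T."""
    return exp(delta_e_k / T)

# Reaction (1), dE/k = 230 K: the forward/backward rate imbalance collapses
# as T rises, which is why deuteration of H3+ shuts off above ~30 K.
for T in (10, 30, 80):
    print(T, enhancement(230, T))
```

At 10 K the factor is enormous (of order \(10^{10}\)), at 30 K it is of order \(10^{3}\), and at 80 K only of order 10, consistent with reaction (1) being efficient at \(\sim\)10-30 K.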
N\({}_{2}\)D\({}^{+}\), DCO\({}^{+}\), DNC and NH\({}_{2}\)D are mainly formed via the low temperature pathway (1), while DCN can be formed at high temperatures via the reactions (2) and (3). The reaction (1) with H\({}_{3}^{+}\) at temperatures around 30 K begins to proceed in the reverse direction, and the deuterium fraction decreases with increasing temperature. This effect of deuterium fractionation is observed in low-mass as well as in high-mass star-forming regions. The \(D_{\mathrm{frac}}\) varies with temperature and can be used as an evolutionary indicator. For instance, Crapsi et al. (2005) have carried out a survey of N\({}_{2}\)H\({}^{+}\) and N\({}_{2}\)D\({}^{+}\) towards 31 low-mass starless cores using the IRAM-30m telescope. They recognized that high deuterium fractionation of N\({}_{2}\)H\({}^{+}\) characterises the most evolved, or "prestellar", starless cores. For massive star-forming regions, Fontani et al. (2011) observed rotational transitions of N\({}_{2}\)D\({}^{+}\) and N\({}_{2}\)H\({}^{+}\) and derived the deuterium fraction in 27 cores with the IRAM-30m telescope. They concluded that the N\({}_{2}\)D\({}^{+}\)-to-N\({}_{2}\)H\({}^{+}\) column density ratio can be used as an evolutionary indicator. Moreover, for the same regions, Fontani et al. (2014, 2015) estimated NH\({}_{2}\)D/NH\({}_{3}\) and DNC/HNC. \(D_{\rm frac}\)(NH\({}_{3}\)) is on average above 0.1 and does not change significantly with evolutionary phase. For DNC/HNC, they have found no statistically significant differences among the three evolutionary groups of objects such as high-mass starless cores (HMSCs), high-mass protostellar objects (HMPOs) and ultracompact HII regions (UCHIIs). Additionally, observing 18 massive clumps (IRDCs and HMPOs) with the Nobeyama Radio Observatory 45 m telescope, Sakai et al. (2012) found that the DNC/HNC ratio does not depend only on the current kinetic temperature. 
With a chemical model, they suggested that the DNC/HNC ratio also depends on the history of the starless-core phase, such as its duration. Gerner et al. (2015) observed a sample of 59 high-mass star-forming regions in different evolutionary phases: starting with IRDCs via HMPOs to hot molecular cores (HMCs) and finally UCHII regions. They found that the D/H ratios of DNC, DCO\({}^{+}\), and N\({}_{2}\)D\({}^{+}\) show decreasing trends with evolutionary stages, despite high standard deviations of the ratios within individual stages. However, DCN/HCN shows a maximum in the HMC phase. N\({}_{2}\)D\({}^{+}\) was detected in only a few IRDCs and HMPOs. Trofimova et al. (2020) have undertaken a survey of 60 massive star forming regions in the DCN, DNC, DCO\({}^{+}\) and N\({}_{2}\)D\({}^{+}\) lines using the 20-m Onsala radio telescope. N\({}_{2}\)D\({}^{+}\) was detected in only two sources; the other deuterated molecules were detected in about 1/3 of the sources. They have found that the abundances relative to H\({}_{2}\) of DCN and DNC and the DCN/HCN ratio are almost constant at temperatures of \(\sim\)15-55 K, while DCO\({}^{+}\)/H\({}_{2}\) decreases with increasing temperature. Using the Mopra-22m and the IRAM-30m telescopes, Wienen et al. (2021) observed NH\({}_{2}\)D at 86 and 110 GHz towards over 900 high-mass clumps discovered by the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL). They did not find a correlation between the NH\({}_{3}\) deuteration and evolutionarily relevant physical tracers such as rotational temperature. There are only a few mapping surveys to study the deuterium fractionation. For instance, using the IRAM-30m telescope, Feng et al. (2019) have imaged two high-mass protostellar clumps that show different evolutionary stages in IRDC G28.34+0.06. They have found that the deuteration of N\({}_{2}\)H\({}^{+}\) is more efficient than that of HCO\({}^{+}\), HCN, and HNC. 
Deuteration is favoured in the chemically younger clump with its colder and denser environment. The NH\({}_{2}\)D abundance is almost independent of environmental differences. Pillai et al. (2012) obtained maps of the ortho-H\({}_{2}\)D\({}^{+}\)(1\({}_{10}\)-1\({}_{11}\)) and N\({}_{2}\)H\({}^{+}\)(4-3) lines with the James Clerk Maxwell Telescope (JCMT), and of N\({}_{2}\)D\({}^{+}\)(3-2) and dust continuum with the Submillimeter Array (SMA), in the DR21 filament of Cygnus X. The H\({}_{2}\)D\({}^{+}\) emission is widely associated with dust emission and N\({}_{2}\)D\({}^{+}\); however, the H\({}_{2}\)D\({}^{+}\) peaks are offset from the dust and N\({}_{2}\)D\({}^{+}\)(3-2) peaks. Tan et al. (2013) obtained maps of N\({}_{2}\)D\({}^{+}\)(3-2) and DCO\({}^{+}\)(3-2) emission from four IRDCs using ALMA. In addition, Coutens et al. (2014) detected the HDO and H\({}_{2}^{18}\)O transitions with the Herschel/HIFI instrument, the IRAM 30-m telescope, and the CSO towards the HMSF region G34.26+0.15. The radial variation of the deuterium fraction was determined using a 1D non-LTE radiative transfer code. The HDO/H\({}_{2}\)O ratio is estimated to be \(\sim 3.5-7.5\times 10^{-4}\) in the hot core (\(\sim\)200 K) and \(\sim 1.0-2.2\times 10^{-4}\) in the colder envelope (\(\sim\)100 K). In a recent study, Redaelli et al. (2021) performed ALMA mapping observations of the continuum emission at 0.8 mm and of the ortho-H\({}_{2}\)D\({}^{+}\)(1\({}_{10}\)-1\({}_{11}\)) line towards two infrared-dark massive clumps. They found that ortho-H\({}_{2}\)D\({}^{+}\) is an ideal tracer of cold (\(\sim\) 10 K) and dense (\(\sim\) 10\({}^{6}\) cm\({}^{-3}\)) gas. In this work, we study the physical and chemical conditions of high-mass star-forming regions, using observations with the IRAM-30m radio telescope and the 100-m radio telescope in Effelsberg. We investigate the spatial distribution of abundances, temperature and density. 
We also wish to compare observational results with model predictions. We use the integrated intensity ratios of the \(J=1-0\) HCN and HNC lines and their \({}^{13}\)C bearing isotopologues as a temperature indicator. Assuming optically thin molecular emission, we estimate the gas density and the molecular column densities using a non-LTE radiative transfer model applying the RADEX code. We derive and discuss abundances of the deuterated molecules DCO\({}^{+}\), DCN, DNC, N\({}_{2}\)D\({}^{+}\) and NH\({}_{2}\)D as functions of gas properties such as temperature and density. We also discuss the spatial distribution of molecules. To obtain abundances, we derive H\({}_{2}\) column densities from 850 \(\mu\)m SCUBA dust emission. Previous works were mainly based on single-dish pointing surveys, while our study is one of the first including maps to study the deuterium fractionation in HMSF regions. The paper is organized as follows. Section 2 describes the observations and data reduction. The results and discussions are presented in Sections 3 and 4. A summary of the main conclusions is presented in Section 5. ## 2 Observations and Data Reduction ### Observations at the 30-m radio telescope of the Institut de Radioastronomie Millimetrique (IRAM) In September 2019, with the 30-m radio telescope of the Institut de Radioastronomie Millimetrique (IRAM), we observed five massive star forming regions at wavelengths of 2 and 3-4 mm (in the framework of the project 041-19). The sources are selected from the sample of a previous survey conducted at Onsala (Trofimova et al., 2020), possessing relatively strong emission in the lines of deuterated molecules and having different gas temperatures. The list of sources is given in Table 1. The source position list is mainly based on a galactic H\({}_{2}\)O maser catalogue (Palagi et al., 1993; Valdettaro et al., 2001; Ladeyschikov et al., 2019). The L1287 source position is associated with IRAS 00338+6312. 
For S187 the central position corresponds to the submillimetre dust emission peak and the N\({}_{2}\)H\({}^{+}\) peak (Zinchenko et al., 2009) associated with the massive pre-main-sequence star S187H\(\alpha\) (Zavagno et al., 1994). Table 2 contains the list of the observed molecular lines with some spectroscopic parameters. Only one source, DR21(OH), was observed at \(\sim\)110 GHz in the para-NH\({}_{2}\)D line due to limited observing time. Transition frequencies and upper level energies are taken from The Cologne Database for Molecular Spectroscopy (CDMS)1 (Muller et al., 2005). Footnote 1: [http://cdms.de](http://cdms.de) The full beam width at half maximum at the observed frequencies ranged from \(\sim 36\arcsec\) to \(\sim 17\arcsec\). Antenna temperatures \(T_{\rm A}^{*}\) were converted to the main beam brightness temperature \(T_{\rm mb}\), using the main beam efficiency \(B_{\rm eff}\), which was determined by Ruze's formula in accordance with the IRAM recommendations2 and ranged from 0.72 to 0.82. The minimum system noise temperatures were \(\sim\) 100 K in the 3 mm range and \(\sim\) 200 K in the 2 mm range. Footnote 2: [https://publicwiki.iram.es/Iram30mEfficiencies](https://publicwiki.iram.es/Iram30mEfficiencies) Observations were carried out in the on-the-fly (OTF) mode over a mapping area of \(200\arcsec\times 200\arcsec\) in total power mode. The reference position was chosen with a shift of \(10\arcmin\) in right ascension. In some extended sources, i.e. DR21(OH) and NGC7538, two partially overlapping areas were observed. The pointing accuracy was checked periodically by observations of nearby continuum sources. ### Observations at the Max-Planck-Institut fur Radioastronomie with the Effelsberg 100-m radio telescope On 9 December 2019 we observed with the 100-m telescope near Effelsberg (Germany) the H\({}_{2}\)O maser transition at a frequency of 22 GHz, as well as the ammonia inversion lines \((J,K)=(1,1)\), (2,2) and (3,3). 
The full beam width at half maximum was \(\sim 40\arcsec\). The measurements were carried out by the method of continuous mapping using a \(K\)-band receiver in the secondary focus with two 300 MHz bands, including the H\({}_{2}\)O lines in one band and NH\({}_{3}\) in the other band. \(5\arcmin\times 5\arcmin\) maps were obtained at a scanning rate of \(20\arcsec\) per second in right ascension; intervals between scans were \(15\arcsec\). The reference position was shifted by \(+15\arcmin\) in azimuth. Weather conditions included light rain with low wind speeds (\(\sim\)2 m s\({}^{-1}\)). The results are presented in the main beam temperature scale \(T_{\rm mb}\). The source NGC7027 was used for calibration with a flux density \(S_{\nu}\) of 4.7 Jy at 22 GHz, taking into account the annual change since 1990 (Ott et al., 1994). ### Archival data Ammonia emission from L1287 was taken from observations with the Effelsberg-100m telescope in 1995 (Zinchenko et al., 1997). We used ammonia observations towards the interstellar filament WB 673 with the Effelsberg-100m telescope in 2019 from Ryabukhina et al. (2022). We also used data from the KEYSTONE survey (Keown et al., 2019), mapping ammonia emission across giant molecular clouds (Cygnus X North, NGC7538) with the 100-m Green Bank Telescope. The N\({}_{2}\)H\({}^{+}\) column densities were adopted from observations with the 20-m OSO telescope and the 15-m SEST telescope (Pirogov et al., 2003). The continuum data, used as a dust distribution indicator, were obtained with SCUBA at the James Clerk Maxwell Telescope (JCMT) at 850 \(\mu\)m (Di Francesco et al., 2008). ### Data reduction The GILDAS/CLASS software3 was used for data reduction. All datasets were smoothed to the same spatial resolution of \(40\arcsec\). The spectra were fitted with Gaussian profiles using the LMFIT package (Newville et al., 2014). 
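Smoothing all datasets to the common \(40\arcsec\) resolution amounts to convolving each map with a Gaussian kernel, since Gaussian beams add in quadrature under convolution. A minimal sketch of the kernel-size computation (an illustration, not the reduction scripts used here):

```python
from math import sqrt

def smoothing_kernel_fwhm(theta_orig, theta_target):
    """FWHM (same units as the inputs) of the Gaussian kernel that degrades
    a beam of theta_orig to the coarser resolution theta_target:
    theta_target^2 = theta_orig^2 + theta_kernel^2."""
    if theta_target < theta_orig:
        raise ValueError("target resolution must be coarser than the original")
    return sqrt(theta_target**2 - theta_orig**2)

# IRAM-30m beams of ~17-36 arcsec smoothed to the common 40 arcsec grid
for beam in (17.0, 27.0, 36.0):
    print(beam, round(smoothing_kernel_fwhm(beam, 40.0), 1))
```

Note that for the \(36\arcsec\) beam the required kernel (\(\approx 17\arcsec\)) is much smaller than the target resolution, so the smoothing barely degrades those maps.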
In the analysis, integrated intensity was obtained from the Gaussian profile area with their errors as the fit errors. For the spectra with hyperfine structure we assume that the widths of all components are equal, and the spacings between them are known. This allows us to determine the optical depth of the main group of hyperfine components, \(\tau\). Assuming that the ratios of hyperfine components correspond to LTE conditions, the optical depth of the main line can be determined from the ratio of the observed main and satellite line intensities: Footnote 3: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS) \[\frac{T_{\rm mb}(m)}{T_{\rm mb}(s)}=\frac{1-\exp(-\tau(m))}{1-\exp(-a\tau(m))}, \tag{4}\] where \(T_{\rm mb}\) is the main-beam temperature, \(a\) is the ratio of the satellite and main line strengths using the statistical weights. We adopted \(a=0.28\) and 0.22 for the inner and outer satellites of NH\({}_{3}\)(\(1,1\)) and o-NH\({}_{2}\)D(\(1_{11}\)-\(1_{01}\)) respectively. For H\({}^{13}\)CN and DCN, we used \(a=0.6\) for the \(F=1-1\) and \(F=2-1\) hyperfine components, and \(a=0.2\) for the \(F=0-1\) and \(F=2-1\) components. It should be noted that two velocity components at \(\sim-4\) and \(\sim 0\) km s\({}^{-1}\)are observed in the source DR21(OH) (see details in Schneider et al., 2010). In the reduction, the components were separated, and only the \(\sim-4\) km s\({}^{-1}\)component has been used for the analysis, since it is stronger and is detected throughout the source. 
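Equation (4) has no closed-form solution for \(\tau(m)\), but the left-hand side decreases monotonically from \(1/a\) (as \(\tau\to 0\)) to 1 (as \(\tau\to\infty\)), so the optical depth can be recovered numerically. A minimal sketch using bisection (illustrative code, not the actual reduction pipeline; the example ratio is invented):

```python
from math import exp

def tau_from_ratio(R, a, tau_max=30.0, tol=1e-8):
    """Solve (1 - exp(-tau)) / (1 - exp(-a*tau)) = R for tau (Eq. 4).

    R = T_mb(main)/T_mb(satellite); a = satellite-to-main line strength ratio.
    A solution exists only for 1 < R < 1/a; found here by bisection on the
    monotonically decreasing left-hand side.
    """
    f = lambda t: (1 - exp(-t)) / (1 - exp(-a * t)) - R
    lo, hi = 1e-6, tau_max
    if not (f(lo) > 0 > f(hi)):
        raise ValueError("ratio outside the solvable range (1, 1/a)")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# NH3(1,1) inner satellites, a = 0.28; a hypothetical main/satellite ratio of 2.5
print(round(tau_from_ratio(2.5, 0.28), 2))
```

As \(R\to 1/a\) the solution tends to zero (optically thin), and as \(R\to 1\) it diverges, matching the limiting behaviour of Eq. (4).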
Packages NumPy (Harris et al., 2020), LMFIT (Newville et al., 2014), astropy (Astropy Collaboration et al., 2022), matplotlib (Hunter, 2007), and SciPy (Virtanen et al., 2020) were used for the fitting, numerical calculations and plotting. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Source & RA(J2000) & Dec(J2000) & \(V_{lsr}\) & \(d\) & Note \\ & (\({}^{h}\) \({}^{m}\) \({}^{s}\)) & (\({}^{\circ}\) \(\arcmin\) \(\arcsec\)) & (km s\({}^{-1}\)) & (kpc) & \\ \hline L1287 & 00:36:47.5 & 63:29:02.1 & \(-17.7\) & 0.9\({}^{a}\) & G121.30+0.66, IRAS 00338+6312 \\ S187 & 01:23:15.4 & 61:49:43.1 & \(-14.0\) & 1.4\({}^{b}\) & G126.68\(-\)0.81, IRAS 01202+6133 \\ S231 & 05:39:12.9 & 35:45:54.0 & \(-16.6\) & 1.6\({}^{c}\) & G173.48+2.45, IRAS 05358+3543 \\ DR21(OH) & 20:39:00.6 & 42:22:48.9 & \(-3.8\) & 1.5\({}^{d}\) & G81.72+0.57 \\ NGC7538 & 23:13:44.7 & 61:28:09.7 & \(-57.6\) & 2.7\({}^{e}\) & G111.54+0.78, IRAS 23116+6111 \\ \hline \end{tabular} Distances to sources are quoted from \({}^{a}\)Rygl et al. (2010), \({}^{b}\)Russeil et al. (2007), \({}^{c}\)Burns et al. (2015), \({}^{d}\)Rygl et al. (2012), \({}^{e}\)Moscadelli et al. (2009). \end{table} Table 1: List of sources. \begin{table} \begin{tabular}{l c c c} \hline \hline Molecule & Transition & Rest frequency & \(E_{u}/k\) \\ & & (MHz) & (K) \\ \hline NH\({}_{3}\) & \((J,K)=(1,1)\) & 23694.495 & 23.4 \\ & \((J,K)=(2,2)\) & 23722.634 & 64.9 \\ & \((J,K)=(3,3)\) & 23870.128 & 123.5 \\ DCO\({}^{+}\) & \(J=1-0\) & 72039.354 & 3.5 \\ & \(J=2-1\) & 144077.319 & 10.4 \\ DCN & \(J=1-0,F=2-1\) & 72414.694 & 3.5 \\ & \(J=2-1,F=2-1\) & 144828.002 & 10.4 \\ DNC & \(J=1-0\) & 76305.699 & 3.7 \\ & \(J=2-1\) & 152609.744 & 10.9 \\ N\({}_{2}\)D\({}^{+}\) & \(J=1-0\) & 77109.243 & 3.7 \\ & \(J=2-1\) & 154217.011 & 11.1 \\ CH\({}_{3}\)CCH & \(J_{K}=5_{3}-4_{3}\) & 85442.601 & 77.3 \\ & \(J_{K}=5_{2}-4_{2}\) & 85450.766 & 41.2 \\ & \(J_{K}=5_{1}-4_{1}\) & 85455.667 & 19.5 \\ & \(J_{K}=5_{0}-4_{0}\) & 85457.300 & 12.3 \\ NH\({}_{2}\)D ortho & \(J_{K_{a}K_{c}}=1_{11}-1_{01},F=2-2\) & 85926.278 & 20.7 \\ NH\({}_{2}\)D para & \(J_{K_{a}K_{c}}=1_{11}-1_{01},F=2-2\) & 110153.594 & 21.3 \\ H\({}^{13}\)CN & \(J=1-0,F=2-1\) & 86339.921 & 4.1 \\ H\({}^{13}\)CO\({}^{+}\) & \(J=1-0\) & 86754.288 & 4.2 \\ HN\({}^{13}\)C & \(J=1-0\) & 87090.825 & 4.2 \\ HCN & \(J=1-0,F=2-1\) & 88631.602 & 4.3 \\ HCO\({}^{+}\) & \(J=1-0\) & 89188.525 & 4.3 \\ HNC & \(J=1-0\) & 90663.568 & 4.4 \\ CH\({}_{3}\)CCH & \(J_{K}=9_{3}-8_{3}\) & 153790.772 & 101.9 \\ & \(J_{K}=9_{2}-8_{2}\) & 153805.461 & 65.8 \\ & \(J_{K}=9_{1}-8_{1}\) & 153814.276 & 44.1 \\ & \(J_{K}=9_{0}-8_{0}\) & 153817.215 & 36.9 \\ \hline \end{tabular} \end{table} Table 2: Observed molecular lines. ## 3 Results ### Maps and spectra In Figs. 1 and A1 of the Appendix we show the integrated intensity maps to compare the spatial distribution of molecules. The dust and thus the gas column density distribution are represented by the 850 \(\mu\)m SCUBA continuum emission. 
In general, all hydrogenated molecules show emission peaks at a position coincident with the main dust emission peak and the IRAS source position. The deuterated molecules present various distributions. DCN, unlike DNC, DCO\({}^{+}\) and NH\({}_{2}\)D, shows emission peaks consistent with the hydrogenated isotopologues. In DR21(OH) and NGC7538, the DCN emission is stronger than DCO\({}^{+}\) and DNC, but in L1287, S231 and S187, DCO\({}^{+}\) provides the strongest emission among the deuterated molecules. Notably, in S187, the NH\({}_{2}\)D line is stronger than NH\({}_{3}\) by a factor of \(\sim 3\), but its peak is located at the edge of the map. Additionally, the N\({}_{2}\)D\({}^{+}\) emission is weak and not detected in NGC7538. In Fig. 2 we show the spectra extracted at the 850 \(\mu\)m main dust continuum peak. For HCO\({}^{+}\), HCN and HNC, the \({}^{12}\)C lines are affected by self-absorption, thus the \({}^{13}\)C lines are analysed in this paper. In DR21(OH) both velocity components mentioned above (see sect. 2.4) can be seen. The line widths range from \(\sim\)1 km s\({}^{-1}\) (S187) to \(\sim\)3 km s\({}^{-1}\) (DR21(OH)). In S187 the line width is comparable to the velocity resolution. In general, the hydrogenated molecular lines show stronger emission by a factor of \(\sim 2\); in S187, on the contrary, the lines of the deuterated isotopologues are stronger. In addition, we also estimated the main line optical depth from the hyperfine structure of DCN(1-0) and H\({}^{13}\)CN(1-0). The optical depths at the emission peaks were found to be low, \(\tau\ll 1\). Furthermore, the main line optical depths of o-NH\({}_{2}\)D(1\({}_{11}\)-1\({}_{01}\)) and NH\({}_{3}\)(1,1) at the emission peaks are \(\tau\sim 1\). As a future perspective, we plan to investigate the kinematics and dynamics of the gas. ### Kinetic temperature from observations of CH\({}_{3}\)CCH In Askne et al. (1984) and Bergin et al. 
(1994) it was shown that the rotational temperature of CH\({}_{3}\)CCH gives a good estimate of the gas kinetic temperature at gas density \(n\ga 10^{3-4}\) cm\({}^{-3}\) (transitions \(J=5-4\) and \(J=6-5\) were considered). This is explained by the fact that, due to the low dipole moment (\(\mu=0.78\) D), the CH\({}_{3}\)CCH molecule is easily thermalized under such conditions. Gas densities in our sources are above this threshold. Thus, the CH\({}_{3}\)CCH lines in our data can be a good gas kinetic temperature indicator. Rotational (and, accordingly, kinetic) temperature is determined from the rotation diagram method. It is assumed here that the emission is optically thin and the background radiation can be neglected. The rotational diagrams were constructed using the \(J=5-4\) and \(J=9-8\) transitions of the CH\({}_{3}\)CCH molecule. To estimate the rotational temperature, we smooth the \(J=5-4\) and \(J=9-8\) line maps to the same angular resolution of 40''. The K-ladder spectra were fitted with Gaussian profiles, assuming that the widths of all components are equal, and the spacings between them are known. ### Kinetic temperature from NH\({}_{3}\) observations Ammonia inversion transitions have a complex hyperfine structure with several components grouped in five lines, a central one as well as inner and outer satellites (see the lowest row of spectra in Fig. 2). Optical depths and rotational temperatures were determined using the methods described in Ho & Townes (1983). The spectra were fitted with Gaussian profiles. In the \((J,K)=\)(1,1) transition line widths of all hyperfine components were assumed to be equal, and the spacings between them are known. As described in section 2.4, the optical depth \(\tau(1,1,m)\) is derived from the ratio of the observed main and satellite line intensities according to Eq. (4). 
Thus, the rotational temperature can be obtained from the ratio of the main group component intensities of the (1,1) and (2,2) transitions using the equation (Ho & Townes, 1983): \[T_{\rm rot}=-41.5\Big{/}\ln\left[\frac{-0.282}{\tau(1,1,m)}\ln\left(1-\frac{T_{\rm mb}(2,2,m)}{T_{\rm mb}(1,1,m)}\,(1-\exp(-\tau(1,1,m)))\right)\right]. \tag{5}\] The kinetic temperature values were obtained using the equation from Tafalla et al. (2004): \[T_{\rm kin}=\frac{T_{\rm rot}}{1-\frac{T_{\rm rot}}{41.5}\ln\Big{[}1+1.1\exp \Big{(}-\frac{16}{T_{\rm rot}}\Big{)}\Big{]}}. \tag{6}\] ### Kinetic temperature from the integrated intensity HCN/HNC ratio It is known that the HCN/HNC abundance ratio strongly depends on the kinetic temperature (e.g., Hirota et al., 1998). In Hacar et al. (2020) it was proposed to use the intensity ratio of the \(J=1-0\) HCN and HNC lines as a temperature indicator, based on observations of the integral shaped filament in Orion. Following Pazukhin et al. (2022), we found a correlation between the integrated intensity ratio H\({}^{13}\)CN/HN\({}^{13}\)C and the kinetic temperature expressed in terms of the Boltzmann distribution (see Fig. 3, left panel). We use the following equation as a temperature indicator: \[\frac{\rm H^{13}CN}{\rm HN^{13}C}=40\times\exp\left(\frac{-64}{T_{\rm kin}} \right). \tag{7}\] As can be seen in Fig. 3, this fit is somewhat different from that found in Pazukhin et al. (2022). The difference can be explained by the larger dataset compared to the previous work. Figure 3 (right) shows the kinetic temperature obtained from the NH\({}_{3}\) and CH\({}_{3}\)CCH transitions in comparison with the temperatures estimated from the integrated intensity ratios of the \(J=1-0\) H\({}^{13}\)CN and HN\({}^{13}\)C lines and their main isotopologues. 
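The two temperature estimators above are simple closed-form expressions: Eq. (7) inverts to \(T_{\rm kin}=-64/\ln({\rm ratio}/40)\), and Eq. (6) maps the NH\({}_{3}\) rotational temperature to a kinetic temperature. A minimal sketch (illustrative code with an invented example ratio, not the analysis scripts):

```python
from math import exp, log

def tkin_from_hcn_hnc(ratio):
    """Invert Eq. (7): ratio = 40 * exp(-64 / T_kin), valid for ratio < 40."""
    return -64.0 / log(ratio / 40.0)

def tkin_from_trot(T_rot):
    """Eq. (6) (Tafalla et al. 2004): kinetic temperature from the
    NH3 (1,1)/(2,2) rotational temperature."""
    return T_rot / (1.0 - (T_rot / 41.5) * log(1.0 + 1.1 * exp(-16.0 / T_rot)))

print(round(tkin_from_hcn_hnc(3.0), 1))  # H13CN/HN13C = 3 -> ~25 K
print(round(tkin_from_trot(20.0), 1))
```

Both estimators give comparable values around 25 K for these inputs, consistent with the agreement between the methods discussed in the text for the 20-40 K range.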
In general, the T\({}_{\rm kin}\)(HCN/HNC) values show a good agreement with the estimates derived from the CH\({}_{3}\)CCH and NH\({}_{3}\) lines in the range of 20 to 40 K with deviations of \(\la 5\) K. We further expanded the temperature maps by combining the observational data from isotopologues H\({}^{13}\)CN and HN\({}^{13}\)C with observations from the main isotopologues, as suggested by Beuther et al. (2022). In those source regions where the H\({}^{13}\)CN or HN\({}^{13}\)C lines become too weak, the intensity ratio of the main isotopologues is used. Figs. 5 and A2 in the Appendix represent the temperature maps. The HCN/HNC maps demonstrate a good agreement with the estimates derived from the CH\({}_{3}\)CCH and NH\({}_{3}\) lines. The temperature gradient is clearly visible, with both low-temperature and high-temperature regions being traceable. Temperature peaks coincide spatially with both the dust continuum emission and the IR source position. In addition, the maps extend beyond the temperature maps derived from the CH\({}_{3}\)CCH and NH\({}_{3}\) lines. The T\({}_{\rm kin}\)(HCN/HNC) of individual objects are discussed in Sect. 3.7. ### Non-LTE analysis Assuming non-LTE and optically thin molecular emission, the gas density and the molecular column density can be estimated using the off-line RADEX code (van der Tak et al., 2007). Energy levels, statistical weights, Einstein A-coefficients and collision rate coefficients were taken from the Leiden Atomic and Molecular Database (LAMDA)4(Schoier et al., 2005). Also collisional data were adopted from the BASECOL database5(Dubernet et al., 2013). For D and H isotopologues, we used collision coefficients calculated for DCO\({}^{+}\)-H\({}_{2}\) and HCO\({}^{+}\)-p-H\({}_{2}\) by Denis-Alpizar et al. (2020), HCN-p-H\({}_{2}\) and HNC-p-H\({}_{2}\) by Hernandez Vera et al. (2017), NH\({}_{3}\)-p-H\({}_{2}\) by Bouhafs et al. (2017), NH\({}_{2}\)D-p-H\({}_{2}\) by Daniel et al. 
(2014) and N\({}_{2}\)H\({}^{+}\)-p-H\({}_{2}\) by Balança et al. (2020). Footnote 4: [https://home.strw.leidenuniv.nl/~moldata/](https://home.strw.leidenuniv.nl/~moldata/) Footnote 5: [https://basecol.vamdc.eu](https://basecol.vamdc.eu) We determined integrated intensity ratios of the 1\(-\)0 and 2\(-\)1 lines of DCO\({}^{+}\), DCN, DNC and compared them with model values calculated with RADEX. We built model grids with kinetic temperatures in the range \(T_{\rm kin}=5-80\) K, H\({}_{2}\) volume densities in the range \(n({\rm H}_{2})=10^{3}-10^{8}\) cm\({}^{-3}\), and total column densities of \(10^{12}\) cm\({}^{-2}\). At this column density the optical depths are low in all lines, so the line intensity ratios would not change if the column densities were actually smaller. Note that estimates of the H\({}_{2}\) volume densities decrease with increasing optical depth of the 1\(-\)0 and 2\(-\)1 transitions. Thus, we use model intensity ratios of optically thin lines, which depend only weakly on the column density at \(\lesssim\) 10\({}^{13}\) cm\({}^{-2}\). To derive \(n({\rm H}_{2})\) we minimized the following chi-squared statistic: \[\chi^{2}=\left(\frac{R_{i}^{\rm obs}-R_{i}^{\rm mod}}{\sigma_{i}^{R}}\right)^ {2}+\left(\frac{T_{i}^{\rm obs}-T_{i}^{\rm mod}}{\sigma_{i}^{\rm T}} \right)^{2}, \tag{8}\] where \(R_{i}^{\rm obs}\) and \(R_{i}^{\rm mod}\) are the 2\(-\)1/1\(-\)0 ratios from observations and models, \((\sigma_{i}^{\rm R})^{2}\) is the sum of the squared rms noise of the 1\(-\)0 and 2\(-\)1 lines, \(T_{i}^{\rm obs}\) and \(T_{i}^{\rm mod}\) are the kinetic temperatures from observations and models and \(\sigma_{i}^{\rm T}\) is the uncertainty of \(T_{i}^{\rm obs}\). The volume densities \(n({\rm H}_{2})\) were obtained as the average over all models satisfying \(\chi^{2}\leq\chi^{2}_{\rm min}+3.84\) (95 per cent confidence level), and the errors were found from the standard deviation. Native data files are available in the LAMDA database only for DCO\({}^{+}\) and NH\({}_{2}\)D.
In the cases of DCN, DNC and N\({}_{2}\)D\({}^{+}\) the data files of their hydrogenated isotopologues were used. This can lead to biases in the density estimates. Indeed, in Fig. 4 we show that the density estimates with the HCO\({}^{+}\) data file are higher than the estimates with the DCO\({}^{+}\) data file by a factor of 3. Hence, to improve the results for DCN and DNC we modified the molecular data files by substituting the frequencies, energy levels and Einstein A-coefficients for the deuterated isotopologues. The data for the \({}^{13}\)C isotopologues were also appended. We suppose that the difference in the \(n(\mathrm{H_{2}})\) estimates is due not only to the collision rate coefficients, but also to these parameters. The results of our substitution are shown in Fig. 4. As mentioned above, all maps were smoothed to 40'' and regridded to the same grid size. After that, using the integrated 1\(-\)0 line intensity, the kinetic temperature and the calculated \(n(\mathrm{H_{2}})\), the column density was obtained for each pixel in the map. In the analysis we use only the kinetic temperature derived from the integrated intensity ratios of the HCN and HNC isotopologue lines (see Sect. 3.4).

Figure 1: The integrated intensity maps for L1287. The integrated intensity is obtained by integrating the line in main beam temperature intensity units. The contours show the continuum emission from 850 \(\mu\)m SCUBA data. The levels start from 5% to 95% of the peak intensity of 6.5 mJy beam\({}^{-1}\) in steps of 15%. The star-shaped marker indicates the IRAS source position. Sources, transitions and the velocity range are shown in the upper left corner of each panel. The beam sizes are shown in the bottom left corner of each panel. A scale bar representing a linear scale of 0.3 pc is shown on the bottom-right corner of the first frame. The maps of the other sources are presented in the Appendix (Fig. A1).
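The grid selection described above can be sketched as follows. The model grid here is a placeholder list standing in for RADEX output, and a conventional additive \(\chi^{2}\) over the ratio and temperature residuals is used:

```python
def mean_density(models, R_obs, sig_R, T_obs, sig_T):
    """models: iterable of (n_H2, T_mod, R_mod) grid points, where
    R_mod is the model 2-1/1-0 ratio.  Models within
    chi2_min + 3.84 (95 per cent confidence) are averaged; returns
    the mean n(H2) and its standard deviation."""
    chi2 = [((R_obs - R) / sig_R) ** 2 + ((T_obs - T) / sig_T) ** 2
            for (_, T, R) in models]
    lim = min(chi2) + 3.84
    sel = [n for (n, _, _), c in zip(models, chi2) if c <= lim]
    mean = sum(sel) / len(sel)
    std = (sum((x - mean) ** 2 for x in sel) / len(sel)) ** 0.5
    return mean, std
```

In practice the `(n_H2, T_mod, R_mod)` tuples would be filled by running RADEX over the \(T_{\rm kin}=5-80\) K, \(n({\rm H}_{2})=10^{3}-10^{8}\) cm\({}^{-3}\) grid.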
Since the line ratios of the \({}^{12}\)C to \({}^{13}\)C bearing isotopologues indicate that the HCO\({}^{+}\), HCN and HNC lines are optically thick, the lines of their optically thin isotopologues H\({}^{13}\)CO\({}^{+}\), H\({}^{13}\)CN, HN\({}^{13}\)C were used. To do this, the obtained column densities of the optically thin isotopologues were multiplied by a coefficient obtained from the carbon isotope ratio \(\frac{{}^{12}\mathrm{C}}{{}^{13}\mathrm{C}}=4.8\times\mathrm{R_{GC}}+20.8\)(Yan et al., 2023), where \(\mathrm{R_{GC}}\) is the Galactocentric distance in kpc. For \(N(\mathrm{NH_{3}})\) and \(N(\mathrm{NH_{2}}\mathrm{D})\), we assumed ortho-to-para ratios of 1 and 3, respectively, which correspond to the nuclear spin statistical weights. To derive the \(D_{\mathrm{frac}}(\mathrm{NH_{3}})\) and \(N(\mathrm{N_{2}}\mathrm{D^{+}})\) values we use the mean value of \(n(\mathrm{H_{2}})\) obtained from DCO\({}^{+}\), DCN and DNC.

### H\({}_{2}\) column density

We derived H\({}_{2}\) column densities from dust maps obtained with the SCUBA Legacy Catalogue at 850 \(\mu\)m and a resolution of 22.9''. The maps were smoothed to 40'' resolution to provide an optimal match with the IRAM-30m and Effelsberg-100m data. Following Hildebrand (1983), the H\({}_{2}\) column density is related to the dust emission by: \[N_{\rm H_{2}}=\eta\frac{S_{\nu}}{B_{\nu}(T_{\rm dust})\Omega\kappa_{\nu}\mu m_{ \rm H}}, \tag{9}\] where \(\eta=100\) is the gas-to-dust ratio, \(B_{\nu}(T)\) is the Planck function, \(S_{\nu}\) is the observed flux, \(\Omega\) is the beam solid angle, \(T_{\rm dust}\) is the dust temperature, \(m_{\rm H}\) is the mass of a hydrogen atom and \(\mu=2.8\) is the mean molecular weight of the interstellar medium.

Figure 2: Spectra extracted at the main dust emission peak from 850 \(\mu\)m SCUBA data. Source names are shown at the top of each column. Transitions are shown in the upper left corner of the first column. The system velocity is shown as a gray dashed line in each panel.
The dust opacities used were \(\kappa_{\nu}=1.82\,{\rm cm^{2}\,g^{-1}}\) at 850 \(\mu\)m (Ossenkopf & Henning 1994). Then, the column density values, \(N_{\rm H_{2}}\) in cm\({}^{-2}\), are derived using the equation in useful units (Kauffmann et al. 2008): \[N_{\rm H_{2}}=2.02\times 10^{24}\left({\rm e}^{14.39/(\lambda T_{dust})}-1 \right)\frac{\lambda^{3}\,S_{\nu}^{\rm beam}}{\kappa_{\nu}\,\theta^{2}}, \tag{10}\] where \(S_{\nu}^{\rm beam}\) is the observed flux in mJy beam\({}^{-1}\), \(\theta\) is the beam size in arcsec and \(\lambda\) is the wavelength in mm. To derive the H\({}_{2}\) column density we assumed that \(T_{\rm dust}\) is 20 K, based on results from Pazukhin et al. (2022). They compared the gas kinetic temperature with the dust temperature, which was taken from the open database[6] according to data from the Herschel telescope (Marsh et al. 2015, 2017). The \(T_{\rm dust}\) values were in the range \(\sim\)18-25 K and no significant correlation between the gas and dust temperatures was found. In addition, we estimated the mean density based on the measured N(H\({}_{2}\)), assuming the FWHM of the 850 \(\mu\)m continuum emission as the source size and a Gaussian source profile. These mean densities are given in Sect. 3.7.

Figure 3: The kinetic temperature derived from NH\({}_{3}\) (blue) and CH\({}_{3}\)CCH (red) versus the integrated intensity ratio HCN/HNC (left panel) and the T\({}_{\rm kin}\)(HCN/HNC) (right panel). The fitting results are represented by the black curve. The black dashed curve shows the fitting results from Pazukhin et al. (2022).

Figure 4: (a) Curves of the constant HCO\({}^{+}\) (red lines), H\({}^{13}\)CO\({}^{+}\) (green lines) and DCO\({}^{+}\) (blue lines) \(J\)=(2-1)/(1-0) intensity ratios on the \(T_{\rm kin}-n\) (H\({}_{2}\)) plane from the RADEX model calculations for optically thin conditions. (b) Same for HCN, H\({}^{13}\)CN and DCN; (c) HNC, HN\({}^{13}\)C and DNC.
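Equation (10) is straightforward to evaluate in code; the defaults below follow the values adopted in the text (\(T_{\rm dust}=20\) K, \(\lambda=0.85\) mm, \(\kappa_{\nu}=1.82\) cm\({}^{2}\) g\({}^{-1}\)), and the example flux is only illustrative:

```python
import math

def n_h2(S_mjy_beam, theta_arcsec, T_dust=20.0, lam_mm=0.85, kappa=1.82):
    """H2 column density in cm^-2 from Eq. (10), Kauffmann et al. (2008).
    S_mjy_beam: flux in mJy/beam; theta_arcsec: beam FWHM in arcsec;
    lam_mm: wavelength in mm; kappa: dust opacity in cm^2/g."""
    return (2.02e24 * (math.exp(14.39 / (lam_mm * T_dust)) - 1.0)
            * lam_mm ** 3 * S_mjy_beam / (kappa * theta_arcsec ** 2))

# illustrative only: a 6.5 mJy/beam peak in a 40'' beam at 850 um
# gives a column density of order a few times 10^21 cm^-2
peak_column = n_h2(6.5, 40.0)
```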
### Individual objects

#### 3.7.1 L1287

The kinetic temperature at the dust emission peak is \(\approx 25\) K and decreases to \(\approx 20\) K towards the southeast along the dust emission ridge (Figure 5). The gas density at the centre is \(\sim 10^{5}\) cm\({}^{-3}\) and slightly decreases to the southeast, as does the temperature (Figure 6). Based on the dust emission, the mean density is \(3\times 10^{4}\) cm\({}^{-3}\), with a source size of 0.2 pc. The minimum value of the deuterium fraction for HCO\({}^{+}\), HCN and HNC is observed towards the centre. The \(D_{\rm frac}\) values are 0.006-0.02 for HCO\({}^{+}\), 0.008-0.01 for HCN, 0.015-0.02 for HNC and decrease with rising temperature (Figure 7). The \(D_{\rm frac}\)(NH\({}_{3}\)) values are 0.03-0.1 (Figure 8).

Figure 5: Maps for L1287. (a) Combined integrated intensity ratios HCN/HNC and H\({}^{13}\)CN/HN\({}^{13}\)C; (b) the kinetic temperature derived using Eq. 7; (c) the kinetic temperature derived using CH\({}_{3}\)CCH transitions and (d) the kinetic temperature derived using NH\({}_{3}\) data. The star-shaped marker and contours are the same as in Fig. 1. The maps of the other sources are presented in the Appendix (Fig. A2).

Figure 6: Maps of \(n\)(H\({}_{2}\)) for L1287. The H\({}_{2}\) volume densities are derived using (a) the DCO\({}^{+}\) line ratio, (b) the DCN line ratio and (c) the DNC line ratio. The star-shaped marker and contours are the same as in Fig. 1. The maps of the other sources are presented in the Appendix (Fig. A3).

#### 3.7.2 S187

The kinetic temperature towards the central dust emission peak is \(\approx 20\) K and increases to \(\approx 25\) K at the IRAS source position (Figure A2). The gas density towards the IRAS source position is \(\la 10^{5}\) cm\({}^{-3}\) and is close to the value at the centre position (Figure A3). The mean density is \(3\times 10^{3}\) cm\({}^{-3}\), determined towards the central dust emission peak, with a size of 0.3 pc. Notably, DCO\({}^{+}\), unlike DCN and DNC, shows emission at the IRAS source position. The \(D_{\rm frac}\) values at the central peak are 0.04 for HCO\({}^{+}\), 0.02 for HCN, and 0.05 for HNC (Figure A4). The \(D_{\rm frac}\)(HCO\({}^{+}\)) values at the IRAS source position are 0.02. The mean \(D_{\rm frac}({\rm NH_{3}})\) value is 0.23 near the central dust emission peak (Figure A5).

#### 3.7.3 S231

The kinetic temperature at the IRAS source position is 26-30 K. It is close to the value at the central dust emission peak and decreases to \(\approx\) 23 K to the northwest along the dust emission distribution (Figure A2). The gas density is \(\sim 10^{5}\) cm\({}^{-3}\) at the centre and almost throughout the entire region where dust emission is seen (Figure A3). The mean density is \(10^{4}\) cm\({}^{-3}\), with a source size of 0.4 pc. There are few D/H values for HCO\({}^{+}\) in the centre. The \(D_{\rm frac}\) values are 0.003-0.03 for HCO\({}^{+}\), 0.004-0.008 for HCN, 0.007-0.035 for HNC and decrease with rising temperature (Figure A4). The \(D_{\rm frac}\)(NH\({}_{3}\)) values are 0.03-0.2 (Figure A5).

#### 3.7.4 DR21(OH)

The kinetic temperature at the main dust emission peak is \(\approx\) 30 K and decreases to \(\approx\) 20 K to the north along the dust emission distribution (Figure A2). The gas density is \(\lesssim 10^{6}\) cm\({}^{-3}\) and decreases to \(\sim 10^{5}\) cm\({}^{-3}\) almost throughout the dust emission distribution (Figure A3). The mean density is \(7\times 10^{4}\) cm\({}^{-3}\), with a source size of 0.3 pc. Notably, the DCO\({}^{+}\) line shows weak emission near the main dust emission peak. The \(D_{\rm frac}\) values are 0.004-0.02 for HCO\({}^{+}\), 0.008-0.04 for HNC, 0.03-0.25 for NH\({}_{3}\) and decrease with rising temperature. The \(D_{\rm frac}({\rm HCN})\) values are 0.005-0.01 and weakly correlate with temperature (Figs. A4, A5).
#### 3.7.5 NGC7538

The kinetic temperature at the IRAS source position is \(\sim\) 40 K; at the dust peak in the south it is 30 K and decreases to \(\lesssim 25\) K to the southwest (Figure A2). The gas density at the dust emission peaks is \(\lesssim 10^{6}\) cm\({}^{-3}\) and decreases to \(10^{5}\) cm\({}^{-3}\), as does the temperature (Figure A3). The mean densities towards the north and south peaks are \(3\times 10^{4}\) cm\({}^{-3}\), with a source size of 0.6 pc. There are few D/H values for HCO\({}^{+}\) observed towards the centre. The \(D_{\rm frac}\) values for HNC are 0.008 at the dust emission peaks and increase to 0.02; the D/H values of NH\({}_{3}\) are 0.02-0.3 and decrease with rising temperature. The \(D_{\rm frac}({\rm HCN})\) values are 0.002-0.005 almost throughout the map. The mean \(D_{\rm frac}({\rm HCO^{+}})\) is 0.005 in the lower temperature region (Figs. A4, A5).

Figure 7: Maps of \(D_{\rm frac}\) for L1287. The deuterium fractions are derived using (a) DCO\({}^{+}\)/HCO\({}^{+}\) ratios, (b) DCN/HCN ratios and (c) DNC/HNC ratios. The star-shaped marker and contours are the same as in Fig. 1. The maps of the other sources are presented in the Appendix (Fig. A4).

Figure 8: Maps of \(D_{\rm frac}({\rm NH_{3}})\) for L1287. The star-shaped marker and contours are the same as in Fig. 1. The maps of the other sources are presented in the Appendix (Fig. A5).

### Dependence of deuteration on physical parameters

We determined abundances of the deuterated molecules relative to H\({}_{2}\) (Figure 9). These relative abundances are \(\sim 10^{-11}-10^{-9}\) for DCO\({}^{+}\) and DNC, \(\sim 10^{-11}-10^{-10}\) for N\({}_{2}\)D\({}^{+}\) and \(\sim 10^{-10}-10^{-8}\) for NH\({}_{2}\)D. The relative abundances of these species decrease with increasing temperature. However, the DCN/H\({}_{2}\) ratio is almost constant (\(\sim 10^{-10}\)). In Fig. 10 we show the correlation of the deuterium fraction, the kinetic temperature and the volume density.
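The correlations discussed below are quantified with Spearman's rank coefficient \(r_{\rm s}\); a minimal dependency-free sketch (tie-free ranking for simplicity — `scipy.stats.spearmanr` handles ties properly — with thresholds following the "strong"/"moderate"/"weak" scheme adopted in this section):

```python
def spearman_r(x, y):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def strength(rs):
    """Classify |r_s| into the strength categories used here."""
    a = abs(rs)
    if a > 0.5:
        return "strong"
    if a > 0.3:
        return "moderate"
    if a > 0.1:
        return "weak"
    return "none"
```

A perfectly decreasing \(D_{\rm frac}(T_{\rm kin})\) trend, for example, gives \(r_{\rm s}=-1\) and is classified as "strong".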
We also show model predictions by Roueff et al. (2007) and Turner (2001) (see details in Sect. 4.3). We estimate the Spearman rank correlation7 coefficient \(r_{\rm s}\) between the D/H ratios, \(T_{\rm kin}\) and \(n({\rm H}_{2})\). We define the correlation as "strong" (\(|r_{\rm s}|>0.5\)), "moderate" (\(0.3<|r_{\rm s}|<0.5\)) and "weak" (\(0.1<|r_{\rm s}|<0.3\)). Footnote 7: [https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.spearmanr.html) _HCO\({}^{+}\)._ The deuterium fraction of HCO\({}^{+}\) is in the range from 0.001 to 0.05. \(D_{\rm frac}\) and \(T_{\rm kin}\) show a "strong" anticorrelation at temperatures of 20-30 K. \(D_{\rm frac}\) and \(n({\rm H}_{2})\) show a "weak" anticorrelation at volume densities of \(10^{4}-10^{5}\) cm\({}^{-3}\). The obtained values are consistent with the model predictions by Turner (2001). The model values by Roueff et al. (2007) decrease more slowly than the observational data. _HCN._ The deuterium fraction of HCN is in the range of 0.001-0.02. \(D_{\rm frac}\) and \(T_{\rm kin}\) show a "strong" anticorrelation. The correlation of \(D_{\rm frac}\) and \(n({\rm H}_{2})\) is "moderate". The \(n({\rm H}_{2})\) values are in the range of \(10^{4}-10^{6}\) cm\({}^{-3}\). The obtained values agree with the model data at densities \(\la 10^{5}\) cm\({}^{-3}\). _HNC._ The deuterium fraction of HNC is in the range of 0.002-0.05. Both \(D_{\rm frac}-T_{\rm kin}\) and \(D_{\rm frac}-n({\rm H}_{2})\) show a "strong" anticorrelation at temperatures of 20-40 K and \(n({\rm H}_{2})\sim 10^{4}-10^{6}\) cm\({}^{-3}\). The obtained values agree with the model curves. _NH\({}_{3}\)._ The deuterium fraction of NH\({}_{3}\) decreases from 0.4 to 0.02. The \(D_{\rm frac}\)-\(T_{\rm kin}\) anticorrelation is "moderate", while the \(D_{\rm frac}\)-\(n({\rm H}_{2})\) anticorrelation is "strong".
The obtained values are higher than the model predictions by Turner (2001). The model values by Roueff et al. (2007) agree with the observational data. _N\({}_{2}\)H\({}^{+}\)._ There are few D/H values for N\({}_{2}\)H\({}^{+}\). The deuterium fraction of N\({}_{2}\)H\({}^{+}\) is 0.008-0.1 at temperatures of 20-25 K and at a density of \(\sim 10^{5}\) cm\({}^{-3}\). The observed values show similar trends to the model by Turner (2001), but are larger than those of the model by Roueff et al. (2007).

## 4 Discussion

### Comparison of the different estimates of the H\({}_{2}\) volume density

We obtained the H\({}_{2}\) volume densities using both the dust continuum emission and the non-LTE analysis. The densities derived from the molecular excitation analysis (\(\sim 10^{5}\) cm\({}^{-3}\)) are on average an order of magnitude higher than the mean densities (\(\sim 10^{4}\) cm\({}^{-3}\)). This is a typical situation (e.g., Zinchenko et al. 1994, 1998), which can probably be explained by small-scale clumpiness in the sources.

### The ortho-to-para ratio of NH\({}_{2}\)D

To derive the total column density of NH\({}_{2}\)D we used o-NH\({}_{2}\)D lines, assuming an ortho-to-para ratio of 3:1. The p-NH\({}_{2}\)D line at 110 GHz was also detected in DR21(OH). We decided to verify whether this assumption is correct. For this, integrated intensity o-/p- ratios were obtained; the resulting mean value is 2.6, with a standard deviation of 0.7. Fontani et al. (2015) found similar results for high-mass star-forming samples covering different evolutionary phases. They compared integrated intensities and derived a mean value of \(2.6\pm 0.6\). Wienen et al. (2021) determined a median ortho-to-para column density ratio of \(3.7\pm 1.2\) from the ATLASGAL survey. From the integrated intensity ratios they also derived a mean value of \(2.6\pm 0.8\), which is close to our results. Additionally, Sipilä et al.
(2019) constructed models of deuterium and spin-state chemistry in order to model the low-temperature environment in starless and pre-stellar cores. They found that the models predict an NH\({}_{2}\)D ortho-to-para ratio of 2.

### Comparison of deuteration with other results

Here we review some of the main modelling and observational works and highlight the differences with our results. To study the chemistry of deuterated species, Turner (2001) used a model containing 9930 reactions and 610 species. The model described the dependence of the molecular D/H ratios on temperature, density, ionization rate, extinction, epoch, and elemental abundances. At temperatures of 20-35 K and a density of \(10^{5}\) cm\({}^{-3}\) the predicted molecular D/H ratios are shown in Fig. 10. Roueff et al. (2005) presented a steady-state model of the gas-phase chemistry. At temperatures of 20-50 K and a density of \(10^{5}\) cm\({}^{-3}\) the molecular D/H ratios decrease from 0.1 to 0.002 for the discussed species. Subsequently, in Roueff et al. (2007) the abundance ratios were analysed as functions of temperature and density for the "standard" low-metal set, which gives the best results for dark cold clouds, and the "warm-core" set, in which higher values of heavy elements are used to account for partial evaporation of grain mantles. These "warm-core" models are illustrated in Fig. 10. Albertsson et al. (2013) calculated a deuterium fractionation model and applied it to diverse interstellar sources. For prestellar objects the model predicted 0.01-1 for DCO\({}^{+}\)/HCO\({}^{+}\) and N\({}_{2}\)D\({}^{+}\)/N\({}_{2}\)H\({}^{+}\), as well as 0.001-0.01 for DCN/HCN, DNC/HNC and NH\({}_{2}\)D/NH\({}_{3}\). The DCO\({}^{+}\) and N\({}_{2}\)D\({}^{+}\) values are greater than our results. Also, the model prediction for NH\({}_{2}\)D/NH\({}_{3}\) is lower than our observational data. Sipilä et al.
(2015) developed gas-grain models in which the deuterium fraction of ammonia was \(\sim\)0.1 after \(10^{5}\) yr at a density of \(10^{5}\) cm\({}^{-3}\) and a temperature of 15 K. Sipilä et al. (2019) presented a chemical model for deuterium chemistry. They found that the highest ammonia D/H ratios were 0.1-1 at \(\sim 10^{5}\) cm\({}^{-3}\) and after \(10^{5}\) yr. Sakai et al. (2012) conducted a survey towards 18 massive clumps. For the five high-mass protostellar objects at a temperature of \(\sim\)25 K the mean DNC/HNC ratio was 0.0095. Fontani et al. (2011) found that N\({}_{2}\)D\({}^{+}\)/N\({}_{2}\)H\({}^{+}\) and \(T_{\rm kin}\) are slightly anti-correlated. The mean \(D_{\rm frac}\) decreases from \(\sim\)0.26 in the HMSCs to \(\sim\)0.04 in the HMPOs and UCHIIs. Their results for the HMSCs with temperatures below 20 K are greater than our observed data. Fontani et al. (2014) observed DNC(1-0) and HN\({}^{13}\)C(1-0) towards 22 massive star-forming cores. They found an average \(D_{\rm frac}\)(HNC) of 0.01 with no significant correlation between the three evolutionary groups of sources. Fontani et al. (2015) observed NH\({}_{2}\)D and NH\({}_{3}\) towards previously observed massive star-forming regions. \(D_{\rm frac}\)(NH\({}_{3}\)) was 0.01-1 at temperatures of 10-30 K and does not change significantly from the earliest to the most evolved phases. Additionally, for S231, values of 0.498 and 0.191 were found. Gerner et al. (2015) observed a sample of 59 sources including different evolutionary groups. They found D/H ratios of 0.0004-0.02 for DCO\({}^{+}\), 0.003-0.03 for DCN, 0.001-0.02 for DNC and 0.001-0.01 for N\({}_{2}\)D\({}^{+}\). They also reported that the D/H ratios of DNC, DCO\({}^{+}\) and N\({}_{2}\)D\({}^{+}\) decrease with time, while DCN/HCN peaks at the HMC stage. They also modelled and observed relative abundances with respect to H\({}_{2}\).
The observed values were \(\sim 10^{-11}\) for \(X\)(DCO\({}^{+}\)), \(X\)(DCN), \(X\)(DNC) and \(X\)(N\({}_{2}\)D\({}^{+}\)), mostly close to the model results. However, the model values of \(X\)(N\({}_{2}\)D\({}^{+}\)) were a factor of 10\({}^{3}\) lower than indicated by observations of the infrared dark cloud (IRDC) phase. Our obtained relative abundances are greater by an order of magnitude. Feng et al. (2019) obtained maps towards young high-mass star-forming clumps in G28.34+0.06. As the temperature decreases from 20 to 14 K, the molecular D/H ratios become 0.03-0.06 for N\({}_{2}\)D\({}^{+}\), 0.004-0.006 for DCO\({}^{+}\), 0.006-0.01 for DCN, 0.008-0.013 for DNC and 0.01-0.005 for NH\({}_{2}\)D. Our estimates at \(\sim\)20 K are greater than these results. Trofimova et al. (2020) carried out a survey of 60 massive star-forming regions (including our sources) in the DCN, DNC, DCO\({}^{+}\) and N\({}_{2}\)D\({}^{+}\) lines using the 20-m Onsala radio telescope. The CH\({}_{3}\)CCH \(J=5-4\) transitions were used to estimate the rotational temperature. For L1287, they obtained 0.065 for DCO\({}^{+}\) and 0.017 for DCN at 34.4 K; 0.007 for DCN at 40.2 K in S231; 0.01 for DCN at 29.6 K in DR21(OH); and 0.007 for DCN at 47.7 K in NGC7538. These values are comparable to our results. However, our temperature estimates are \(\sim\)10 K lower than theirs in L1287 and S231. Wienen et al. (2021) observed a sample of high-mass clumps discovered by the ATLASGAL survey covering various evolutionary phases of massive star formation. The NH\({}_{2}\)D-to-NH\({}_{3}\) column density ratios range from 0.007 to 1.6. They did not find a correlation between the NH\({}_{3}\) deuteration and an evolutionary tracer such as the rotational temperature in the range 10-25 K. Li et al. (2022) observed NH\({}_{2}\)D at 110 GHz towards 50 Galactic massive star-forming regions. An excitation temperature \(T_{\rm ex}\) of 18.75 K was used to calculate column densities.
The deuterium fractionation ranges from 0.0006 to 0.043. The D/H ratio is 0.043 in DR21 and 0.015 in NGC7538. These values are close to our results. We summarise the observed D/H values and the results of previous works in Fig. 11.

## 5 Conclusions

Using observations with the IRAM-30m radio telescope and the 100-m radio telescope in Effelsberg, we have obtained the spatial distributions of the \(J=1-0\) and \(J=2-1\) DCO\({}^{+}\), DCN, DNC, N\({}_{2}\)D\({}^{+}\) lines and the 1\({}_{11}-\)1\({}_{01}\) ortho- and para-NH\({}_{2}\)D lines in five high-mass star-forming regions. We have derived deuterium fractions as functions of gas properties such as temperature and density. This has been combined with H\({}_{2}\) column densities from 850 \(\mu\)m SCUBA dust continuum maps. The results are as follows: 1. The deuterated molecules show different spatial distributions. DCN, unlike DNC, DCO\({}^{+}\) and NH\({}_{2}\)D, shows emission peaks consistent with the hydrogenated isotopologues. Notably, in S187, the NH\({}_{2}\)D line shows emission stronger by a factor of \(\sim 3\) than NH\({}_{3}\), but it is located at the edge of the map. The N\({}_{2}\)D\({}^{+}\) emission is weak in most of the sources. 2. The kinetic temperature was estimated from the CH\({}_{3}\)CCH and NH\({}_{3}\) lines, as well as using the integrated intensity ratios of the \(J=1-0\) H\({}^{13}\)CN and HN\({}^{13}\)C lines and their main isotopologues. The T\({}_{\rm kin}\)(HCN/HNC) maps show a good agreement with the estimates derived from the CH\({}_{3}\)CCH and NH\({}_{3}\) lines in the range of 20 to 40 K. Using the \(J\)=2-1/1-0 integrated line ratios and the T\({}_{\rm kin}\)(HCN/HNC) with the RADEX code, we have estimated the gas density. Densities are \(\sim 10^{4}-10^{6}\) cm\({}^{-3}\). 3.
The abundances relative to H\({}_{2}\) are \(\sim 10^{-9}-10^{-11}\) for DCO\({}^{+}\) and DNC, \(\sim 10^{-10}-10^{-11}\) for N\({}_{2}\)D\({}^{+}\) and \(\sim 10^{-8}-10^{-10}\) for NH\({}_{2}\)D. The relative abundances of these species decrease with increasing temperature. However, the DCN/H\({}_{2}\) ratio is almost constant (\(\sim 10^{-10}\)). 4. We calculated the total column density of deuterated molecules and their hydrogenated isotopologues to determine the deuterium fraction. The \(D_{\rm frac}\) values are 0.001-0.05 for DCO\({}^{+}\), 0.001-0.02 for DCN, 0.001-0.05 for DNC and 0.02-0.4 for NH\({}_{2}\)D. The D/H ratios decrease with increasing temperature in the range of 20-40 K and vary only slightly at \(n({\rm H}_{2})\sim 10^{4}-10^{6}\) cm\({}^{-3}\). The deuterium fraction of N\({}_{2}\)H\({}^{+}\) is 0.008-0.1 in the temperature range of 20-25 K and at a density of \(\sim 10^{5}\) cm\({}^{-3}\). In addition, we compared these results with the model predictions and observations from the literature (see Figs. 10, 11). The observational results agree with the predictions of chemical models (although in some cases there are significant differences). The range of \(D_{\rm frac}\) values is mostly consistent with those found in other works. 5. In DR21(OH), para-NH\({}_{2}\)D at 110 GHz and ortho-NH\({}_{2}\)D at 86 GHz were detected. We derived the integrated intensity ortho-to-para ratio of NH\({}_{2}\)D. The resulting mean value is 2.6, with a standard deviation of 0.7.

Figure 9: Dependence of the relative abundances on the kinetic temperature for the DCO\({}^{+}\), DCN, DNC, NH\({}_{2}\)D and N\({}_{2}\)D\({}^{+}\) molecules.

## Acknowledgements

This study was supported by the Russian Science Foundation grant No. 22-22-00809. The research is based on observations made within the 041-19 project with the 30-m telescope, as well as observations with the 100-m MPIfR telescope (Max-Planck-Institut für Radioastronomie) in Effelsberg.
IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain). We acknowledge the staff of both observatories for their support in the observations. The authors are grateful to the anonymous reviewer for useful comments that improved the quality of the manuscript.

## Data Availability

The original data obtained with the IRAM-30m are available in the IRAM Data Archive. The data underlying this article will be shared on reasonable request to the corresponding author.
2309.10161
Biexact von Neumann algebras
We introduce the notion of biexactness for general von Neumann algebras, naturally extending the notion from group theory. We show that biexactness implies solidity for von Neumann algebras, and that many of the examples of solid von Neumann algebras contained in the literature are, in fact, biexact. We also give examples of certain crossed products arising from Gaussian actions that are solid but not biexact, and we give examples of certain $q$-Gaussian von Neumann algebras that are strongly solid but not biexact. The techniques developed involve studying a certain weak form of nuclear embeddings, and we use this setting to give a new description of weak exactness for von Neumann algebras, which allows us to answer several open problems in the literature about weakly exact von Neumann algebras.
Changying Ding, Jesse Peterson
2023-09-18T21:18:00Z
http://arxiv.org/abs/2309.10161v1
# Biexact von Neumann algebras ###### Abstract. We introduce the notion of biexactness for general von Neumann algebras, naturally extending the notion from group theory. We show that biexactness implies solidity for von Neumann algebras, and that many of the examples of solid von Neumann algebras contained in the literature are, in fact, biexact. We also give examples of certain crossed products arising from Gaussian actions that are solid but not biexact, and we give examples of certain \(q\)-Gaussian von Neumann algebras that are strongly solid but not biexact. The techniques developed involve studying a certain weak form of nuclear embeddings, and we use this setting to give a new description of weak exactness for von Neumann algebras, which allows us to answer several open problems in the literature about weakly exact von Neumann algebras. ## 1. Introduction Recall that a finite von Neumann algebra \(M\) is solid if, for any diffuse von Neumann subalgebra \(A\subset M\), the relative commutant \(A^{\prime}\cap M\) is amenable. A seminal result of Ozawa [1] is that free group factors are solid and, as a consequence, every nonamenable subfactor of a free group factor is prime. Ozawa's proof, which is based on \(\mathrm{C}^{*}\)-algebra theory, holds more generally for the class of biexact groups [1, Definition 15.1.2], which are groups that admit a topologically amenable action on a certain small-at-infinity boundary. Biexact groups have since been shown to have many applications to the theory of von Neumann algebras [1, 1, 2, 3]. Combining the techniques of biexact groups with Popa's deformation/rigidity theory has produced a number of striking results where certain structural properties of a von Neumann algebra can be recovered as in [1, 2, 3, 4, 5, 6, 7, 8]. In this article we introduce the notion of biexactness in the setting of von Neumann algebras, thereby allowing for the previous results to be put in a systematic framework. 
This allows for a more integrated use of \(\mathrm{C}^{*}\) and von Neumann algebraic techniques, and allows us to find a common von Neumann algebraic setting for many solidity results such as those obtained in [1, 2, 1, 1, 1, 1, 1, 10]. This then leads to natural extensions of these results to a larger class of von Neumann algebras. To describe biexactness for von Neumann algebras, we first recall in greater detail the corresponding property for groups (see [1, Chapter 15] for full details). Given a countable group \(\Gamma\), we have commuting actions of \(\Gamma\) on \(\ell^{\infty}\Gamma\) given by left and right multiplication, respectively, i.e., \(L_{t}(f)(x)=f(t^{-1}x)\) and \(R_{t}(f)(x)=f(xt)\) for \(f\in\ell^{\infty}\Gamma\) and \(x,t\in\Gamma\). We denote \[S(\Gamma)=\{f\in\ell^{\infty}\Gamma\ |\ f-R_{t}(f)\in c_{0}\Gamma,\ \forall\ t\in\Gamma\},\] which is a left-invariant \(\mathrm{C}^{*}\)-subalgebra of \(\ell^{\infty}\Gamma\). By Gelfand duality, we may identify \(S(\Gamma)\) with continuous functions on its spectrum \(\overline{\Gamma}\), and we then have an action of \(\Gamma\) on \(\overline{\Gamma}\) by homeomorphisms. The space \(\overline{\Gamma}\) is called the small-at-infinity compactification of \(\Gamma\). The group \(\Gamma\) is biexact if the action \(\Gamma\curvearrowright\overline{\Gamma}\) is topologically amenable in the sense of [1], which is equivalent to the inclusion \(C_{\lambda}^{*}\Gamma\subset C(\overline{\Gamma})\rtimes_{r}\Gamma\) being nuclear. Topological amenability is inherited through factor maps, and so in practice one usually checks biexactness of a group via natural boundary type actions, e.g., hyperbolic groups acting on their Gromov boundary. Let now \(M\) be a von Neumann algebra and \(X\subset\mathbb{B}(\mathcal{H})\) an operator \(M\)-bimodule. 
Magajna introduced in [10, 11] the \(M\)-topology on \(X\), which is a locally convex topology intermediate between the uniform and ultraweak topologies (see Section 3 for the precise definition). As \(\mathbb{B}(L^{2}M)\) is both an operator \(M\)-bimodule and an \(M^{\prime}\)-bimodule, we may let \(\mathbb{K}^{\infty,1}(M)\subset\mathbb{B}(L^{2}M)\) denote the closure of \(\mathbb{K}(L^{2}M)\) under both the \(M\)-topology and the \(M^{\prime}\)-topology. In the case when \(M\) has a normal faithful trace \(\tau\), we may view \(M\subset L^{2}(M,\tau)\subset L^{1}(M,\tau)\), and then by [1, Proposition 3.1] we have that \(\mathbb{K}^{\infty,1}(M)\) is the closure of \(\mathbb{K}(L^{2}M)\) in the topology given by the norm \(\|T\|_{\infty,1}=\sup_{x\in M,\|x\|\leq 1}\|T\tilde{x}\|_{1}\). In [1] we introduced a von Neumann analog of the small-at-infinity compactification as \[\mathbb{S}(M)=\{T\in\mathbb{B}(L^{2}M)\mid[T,a]\in\mathbb{K}^{\infty,1}(M), \forall\ a\in M^{\prime}\}.\] The advantage of working with the generalized compact operators from \(\mathbb{K}^{\infty,1}(M)\) is that to check that an operator \(T\) is contained in \(\mathbb{S}(M)\), it suffices to check that \([T,a]\in\mathbb{K}^{\infty,1}(M)\) for \(a\) in some generating set of \(M\) (generated as a von Neumann algebra). In particular, if \([T,a]\) is compact for each element \(a\) in some generating set, then \(T\in\mathbb{S}(M)\). This shows that the space \(\mathbb{S}(M)\) is robust enough so that the inclusion \(M\subset\mathbb{S}(M)\) may have interesting properties. For example, if \(M=L\Gamma\) for a group \(\Gamma\), then we have a natural inclusion \(S(\Gamma)\rtimes_{\tau}\Gamma\subset\mathbb{S}(L\Gamma)\). Also, the space \(\mathbb{S}(M)\) is large enough so that it admits non-trivial \(1\)-cohomology [1, Theorem 7.3]. 
A consequence of this robustness, though, is that the space \(\mathbb{S}(M)\) is not a \(\mathrm{C}^{*}\)-algebra, but is rather an \(M\)-system, i.e., an operator system that is also an operator \(M\)-bimodule. Since \(\mathbb{S}(M)\) is an operator \(M\)-bimodule, we may again consider the \(M\)-topology on \(\mathbb{S}(M)\). We say that the von Neumann algebra \(M\) is biexact if the inclusion \(M\subset\mathbb{S}(M)\) is \(M\)-nuclear in the sense that there exist nets of u.c.p. maps \(\phi_{i}:M\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{S}(M)\) so that for each \(x\in M\) we have \(\psi_{i}\circ\phi_{i}(x)\to x\) in the \(M\)-topology. This notion is flexible enough to show that many von Neumann algebras already studied in the literature are biexact. In particular, we show in Theorem 7.17 that all von Neumann algebras satisfying condition \((AO)^{+}\) or strong property \((AO)\) are biexact. We therefore have that each of the following classes of von Neumann algebras is contained in the class of biexact von Neumann algebras:

1. Amenable von Neumann algebras.
2. Group von Neumann algebras associated to locally compact second countable biexact groups (Proposition 6.2 in [1]).
3. Von Neumann algebras of universal orthogonal and unitary discrete quantum groups (Proposition 3.1.2 in [12]).
4. Free Araki-Woods von Neumann algebras (The proof of Theorem 4.4 in [1]).
5. \(q\)-Gaussian von Neumann algebras associated to finite-dimensional real Hilbert space (Section 4 in [12] for some \(q\), and [13] in general).
6. Free products of biexact von Neumann algebras with respect to normal faithful states (Proposition 6.16).
7. Von Neumann subalgebras with expectation of biexact von Neumann algebras (Proposition 6.10).
8. Amplifications and commutants of biexact von Neumann algebras (Proposition 6.5).
In the case of a group von Neumann algebra associated to a discrete group \(M=L\Gamma\), we have a canonical diagonal embedding of \(\ell^{\infty}\Gamma\) into \(\mathbb{B}(\ell^{2}\Gamma)\), as well as a normal conditional expectation \(E_{\ell^{\infty}\Gamma}:\mathbb{B}(\ell^{2}\Gamma)\to\ell^{\infty}\Gamma\). We show that \(E_{\ell^{\infty}\Gamma}(\mathbb{S}(L\Gamma))\subset S(\Gamma)\), and so if \(L\Gamma\) is biexact, then by considering the positive type functions \(\Gamma\ni t\mapsto E_{\ell^{\infty}\Gamma}(\psi_{i}\circ\phi_{i}(\lambda_{t})\lambda_{t}^{*})\in S(\Gamma)\) we deduce that the action \(\Gamma\curvearrowright\overline{\Gamma}\) is amenable. The converse implication also holds by using local reflexivity of \(C_{\lambda}^{*}\Gamma\) to show that nuclearity of the inclusion \(C^{*}_{\lambda}\Gamma\subset C(\overline{\Gamma})\rtimes_{r}\Gamma\) can be upgraded to give \(L\Gamma\)-nuclearity for the inclusion \(L\Gamma\subset\mathbb{S}(L\Gamma)\). A consequence, therefore, of this new framework is that we deduce that biexactness for groups is stable under \(W^{*}\)-equivalence (see Corollary 6.3 for the complete details), i.e., if \(\Gamma\) and \(\Lambda\) are countable groups such that \(L\Gamma\cong L\Lambda\), then \(\Gamma\) is biexact if and only if \(\Lambda\) is biexact. This should be compared with Sako's result [10] where he shows that biexactness for groups is stable under measure equivalence. One of the technical achievements of our approach (carried out in Section 4) is that, in fact, we do not need to use local reflexivity of \(C^{*}_{\lambda}\Gamma\) directly. If we have a von Neumann algebra \(M\) and an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra \(M_{0}\), we consider a version of the \(M\)-topology called the \((M_{0}\subset M)\)-topology that takes the dense \(\mathrm{C}^{*}\)-algebra \(M_{0}\) into account.
If the inclusion \(M_{0}\subset\mathbb{S}(M)\) is \((M_{0}\subset M)\)-nuclear, we then have a general upgrading result showing that \(M\) is biexact. While this general result is not needed to show that biexactness is stable under \(W^{*}\)-equivalence, it is crucial in showing how biexactness interacts with general constructions. For instance, this allows us to show that the free product of biexact von Neumann algebras (with respect to arbitrary normal faithful states) is again biexact. This also allows us to show that tensor products of biexact von Neumann algebras satisfy a relative biexactness property from which one deduces the unique prime decomposition results of Ozawa and Popa [11]. This holds, in fact, for general biexact von Neumann algebras, and puts this into the framework of Houdayer and Isono [13] so that unique prime decomposition results can be deduced in the type III setting as well (see Corollary 6.15). If we consider the larger \(M\)-bimodule \(\mathbb{B}(L^{2}M)\) instead of \(\mathbb{S}(M)\), then \((M_{0}\subset M)\)-nuclearity for the inclusion \(M\subset\mathbb{B}(L^{2}M)\) is related to Isono's notion of \(M_{0}\) being weakly exact in \(M\)[12]. In Section 5 we make this connection explicit, and as an application we show in Corollary 5.11 that the free product (with respect to arbitrary normal faithful states) of weakly exact von Neumann algebras is again weakly exact, which was previously shown by Isono to hold in many cases. We also use this perspective to provide a new characterization of weak exactness for von Neumann algebras, answering Problem 10.4.3 from [1] (see Theorem 5.8). Biexact von Neumann algebras in general also have many of the indecomposability type properties that are known to hold for the group von Neumann algebras associated to biexact groups. We show a biexact von Neumann algebra \(M\) is always full, solid, and properly proximal.
Moreover, if \(M\) is finite and weakly amenable, then \(M\) is also strongly solid in the sense of Ozawa and Popa [11]. It was shown in [10] that if \(\Gamma\) is a nonamenable group and we have an embedding \(L\Gamma\subset L\mathbb{F}_{2}\), then it follows that \(\Gamma\) is properly proximal and consequently is not inner amenable (an alternate proof of this fact was also found recently in [13]). Since biexactness is inherited by subalgebras it follows that \(\Gamma\) is even biexact. In fact, what we establish is a general principle that higher-rank lattices cannot "\(W^{*}\)-embed" into rank one lattices. Specifically, if \(G\) is a connected real semisimple Lie group with finite center and \(\mathbb{R}-\mathrm{rank}(G)\geq 2\), and if \(\Gamma<G\) is a lattice, then by [14, Exercise 8.2.7], together with [10, 12], it follows that \(\Gamma\) is not biexact and hence \(L\Gamma\) cannot embed into \(L\Lambda\) for any lattice \(\Lambda\) in a rank one connected real simple Lie group with finite center. It was shown recently by Caspers [11] that if \(\mathcal{H}\) is an infinite dimensional real Hilbert space and \(-1<q<1\) with \(q\neq 0\), then the \(q\)-Gaussian von Neumann algebra \(M_{q}(\mathcal{H})\) is not isomorphic to a free group factor. Caspers' result shows, in fact, that \(M_{q}(\mathcal{H})\) is not biexact, and hence it then follows that \(M_{q}(\mathcal{H})\) cannot even embed into a free group factor. This last fact can also be deduced directly from [1], together with Theorem 7.20 below, which is independent of the results in the earlier sections. We remark that this can also be deduced by combining Caspers' work with Corollary 3.5 in [15]. Despite the fact that \(M_{q}(\mathcal{H})\) is not biexact when \(-1<q<1\), \(q\neq 0\), and \(\mathcal{H}\) is infinite dimensional, we show in Section 8 that \(M_{q}(\mathcal{H})\) shares many of the same indecomposability results that biexact von Neumann algebras have.
In particular, it is strongly solid and every von Neumann subalgebra not having an amenable summand is properly proximal. Strong solidity for \(M_{q}(\mathcal{H})\) was shown previously by Avsec [1] in the case when \(\mathcal{H}\) is finite-dimensional. Our proof in the infinite dimensional case uses deformation/rigidity techniques, together with techniques developed in [1], to reduce the problem to the finite-dimensional case where Avsec's results may then be applied. In Section 8 we also give another source of solid von Neumann algebras that are not biexact through the use of Gaussian actions. Specifically, we show that if \(\Gamma\) is a biexact group and \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\) is an orthogonal representation satisfying \(\pi\nprec\lambda\), but \(\pi^{\otimes k}\prec\lambda\) for some \(k>1\), then the crossed product of the Gaussian action \(A_{\mathcal{H}\,\overline{\otimes}\,\ell^{2}\mathbb{N}}\rtimes_{\sigma_{\pi\otimes 1}}\Gamma\) is solid, but is not biexact. Solidity of this von Neumann algebra is a consequence of Boutonnet's solid ergodicity result from [1], and in Section 8 we also give a relatively direct proof of this result.

### Acknowledgements

We thank Cyril Houdayer and Amine Marrakchi for useful comments.

## 2. Preliminaries and notation

For C\({}^{*}\)-algebras or operator systems \(A\) and \(B\), we denote the algebraic tensor product by \(A\odot B\) and the minimal tensor product by \(A\otimes B\). We will use the abbreviation c.c. for completely contractive, c.p. for completely positive, c.c.p. for contractive completely positive, and u.c.p. for unital completely positive. For a von Neumann algebra \(M\), we denote by \((M,L^{2}M,J,\mathfrak{B})\) its standard form [10].
If \(M\) is \(\sigma\)-finite and we have a normal faithful state \(\mu\) on \(M\), then the standard form may be realized as \((M,L^{2}(M,\mu),J_{\mu},\mathfrak{B}_{\mu})\), where the representation of \(M\) on \(L^{2}(M,\mu)\) is given by the GNS-construction, \(J_{\mu}\) is modular conjugation on \(L^{2}(M,\mu)\), and \(\mathfrak{B}_{\mu}\) is the positive part of \(L^{2}(M,\mu)\). We denote by \(\mu^{1/2}\in L^{2}(M,\mu)\) the canonical cyclic vector so that \(\mu(x)=\langle x\mu^{1/2},\mu^{1/2}\rangle\) for \(x\in M\). The centralizer algebra with respect to \(\mu\) is denoted by \(M^{\mu}\). For von Neumann algebras \(M\) and \(N\), a normal Hilbert \(M\)-\(N\) bimodule is a Hilbert space \(\mathcal{H}\), together with a unital \(*\)-representation \(\pi_{\mathcal{H}}\) of the algebraic tensor product \(M\odot N^{\mathrm{op}}\) such that \(\pi_{\mathcal{H}}\) is normal and faithful when restricted to \(M\) and \(N^{\mathrm{op}}\) separately. We will write simply \(x\xi y=\pi_{\mathcal{H}}(x\otimes y^{\mathrm{op}})\xi\), for \(x\in M\), \(y\in N\), and \(\xi\in\mathcal{H}\). A normal Hilbert \(M\)-\(N\) bimodule \(\mathcal{H}\) is weakly contained in another normal Hilbert \(M\)-\(N\) bimodule \(\mathcal{K}\) (written \(\mathcal{H}\prec\mathcal{K}\)) if the identity map on \(M\odot N^{\mathrm{op}}\) extends to a \(*\)-homomorphism from \(C^{*}(\pi_{\mathcal{K}}(M\odot N^{\mathrm{op}}))\) to \(C^{*}(\pi_{\mathcal{H}}(M\odot N^{\mathrm{op}}))\). The trivial bimodule for \(M\) is given by \(L^{2}M\) with the bimodule structure \(x\xi y=xJy^{*}J\xi\), and the coarse \(M\)-\(N\) bimodule is given by \(L^{2}M\,\overline{\otimes}\,L^{2}N\) with the bimodule structure given by the canonical representation of \(M\otimes N^{\mathrm{op}}\) in \(\mathbb{B}(L^{2}M\,\overline{\otimes}\,L^{2}N)\). We will say that a von Neumann subalgebra \(N\subset M\) is with expectation if there exists a normal faithful conditional expectation \(E:M\to N\). 
Given a normal faithful conditional expectation \(E:M\to N\), and a normal faithful semifinite weight \(\psi_{N}\) on \(N\), we obtain a normal faithful semifinite weight on \(M\) by \(\psi=\psi_{N}\circ E\), and this gives rise to an inclusion \(L^{2}(N,\psi_{N})\subset L^{2}(M,\psi)\). The Jones projection \(e_{N}:L^{2}(M,\psi)\to L^{2}(N,\psi_{N})\) is the corresponding orthogonal projection. The Jones projection does not depend on the normal faithful semifinite weight \(\psi_{N}\) [12, Appendix A] and so we may consider it abstractly as a coisometry \(e_{N}:L^{2}M\to L^{2}N\). The Jones projection \(e_{N}\) satisfies

1. \(e_{N}xe_{N}^{*}=E(x)\), for \(x\in M\).
2. \(e_{N}J_{M}=J_{N}e_{N}\).

If we have a group action \(\Gamma\curvearrowright^{\sigma}M\) by automorphisms, then by [10, Theorem 3.2] there exists a unique unitary representation \(\sigma^{0}:\Gamma\to\mathcal{U}(L^{2}M)\) so that for each \(t\in\Gamma\) we have

1. \(\sigma_{t}(x)=\sigma_{t}^{0}x(\sigma_{t}^{0})^{*}\), for \(x\in M\).
2. \(J\sigma_{t}^{0}=\sigma_{t}^{0}J\).
3. \(\sigma_{t}^{0}(\mathfrak{B})=\mathfrak{B}\).

We call the representation \(\sigma^{0}:\Gamma\to\mathcal{U}(L^{2}M)\) the Koopman representation associated to the action \(\sigma\). If we have an action \(\Gamma\curvearrowright A\) of a group on a C\({}^{*}\)-algebra, then there exists a faithful nondegenerate representation \(A\subset\mathbb{B}(\mathcal{H})\) and a covariant representation \(\pi:\Gamma\to\mathcal{U}(\mathcal{H})\). The reduced crossed-product \(A\rtimes_{r}\Gamma\) is the C\({}^{*}\)-subalgebra of \(\mathbb{B}(\mathcal{H}\,\overline{\otimes}\,\ell^{2}\Gamma)\) generated by \(\{(a\otimes 1)(\pi_{t}\otimes\lambda_{t})\mid a\in A,t\in\Gamma\}\), where \(\lambda:\Gamma\to\mathcal{U}(\ell^{2}\Gamma)\) denotes the left-regular representation. This C\({}^{*}\)-algebra is independent of the covariant representation, and we identify \(A\) with \(A\otimes\mathbb{C}\) so that \(A\subset A\rtimes_{r}\Gamma\).
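As a routine check, not spelled out above (we write \(\alpha_{t}\) for the action of \(t\in\Gamma\) on \(A\), a notation not fixed in the text), covariance of \(\pi\) means \(\pi_{t}a\pi_{t}^{*}=\alpha_{t}(a)\) for \(a\in A\) and \(t\in\Gamma\), and this shows that the linear span of the generators above is a \(*\)-subalgebra:

\[(a\otimes 1)(\pi_{s}\otimes\lambda_{s})\,(b\otimes 1)(\pi_{t}\otimes\lambda_{t})=(a\alpha_{s}(b)\otimes 1)(\pi_{st}\otimes\lambda_{st}),\qquad\big((a\otimes 1)(\pi_{t}\otimes\lambda_{t})\big)^{*}=(\alpha_{t^{-1}}(a^{*})\otimes 1)(\pi_{t^{-1}}\otimes\lambda_{t^{-1}}),\]

so that \(A\rtimes_{r}\Gamma\) is simply the norm closure of this span.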
If we have an action \(\Gamma\curvearrowright M\) of a group on a von Neumann algebra, then we denote by \(M\rtimes\Gamma\subset\mathbb{B}(L^{2}M\overline{\otimes}\ell^{2}\Gamma)\) the von Neumann crossed-product, which is the von Neumann completion of \(M\rtimes_{r}\Gamma\subset\mathbb{B}(L^{2}M\overline{\otimes}\ell^{2}\Gamma)\). We denote by \(u_{t}\) the unitary \(\sigma_{t}^{0}\otimes\lambda_{t}\in M\rtimes_{r}\Gamma\), although if \(M=\mathbb{C}\), then we have \(u_{t}=\lambda_{t}\) and we will use both notations interchangeably. The von Neumann algebra \(M\rtimes\Gamma\) is standardly represented on \(L^{2}M\,\overline{\otimes}\,\ell^{2}\Gamma\), and we may explicitly compute the modular conjugation operator to see that it satisfies \[J_{M\rtimes\Gamma}(\xi\otimes\delta_{t})=\sigma_{t^{-1}}^{0}J_{M}\xi\otimes\delta_{t^{-1}}\] for \(\xi\in L^{2}M\) and \(t\in\Gamma\). If we use the canonical identification \(L^{2}M\,\overline{\otimes}\,\ell^{2}\Gamma\cong\oplus_{t\in\Gamma}L^{2}M\), then for \(x\in M\) and \(t\in\Gamma\) we have \[J_{M\rtimes\Gamma}\,x\,J_{M\rtimes\Gamma}=\oplus_{t\in\Gamma}J_{M}\sigma_{t}(x)J_{M},\qquad\quad J_{M\rtimes\Gamma}\,u_{t}\,J_{M\rtimes\Gamma}=1\otimes\rho_{t}, \tag{1}\] where \(\rho:\Gamma\to\mathcal{U}(\ell^{2}\Gamma)\) denotes the right-regular representation. If \(M\) is a von Neumann algebra, \(p,q\in\mathcal{P}(M)\) are nonzero projections and \(B\subset pMp\), \(Q\subset qMq\) are von Neumann subalgebras, then we say that \(B\) embeds with expectation into \(Q\) inside of \(M\) and write \(B\preceq_{M}Q\) if there exist projections \(e\in\mathcal{P}(B)\), \(f\in\mathcal{P}(Q)\), a nonzero partial isometry \(v\in eMf\) and a unital normal \(*\)-homomorphism \(\phi:eBe\to fQf\) such that the inclusion \(\phi(eBe)\subset fQf\) is with expectation and \(bv=v\phi(b)\) for all \(b\in B\).
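As a consistency check of the second identity in (1) (a direct computation we include here; it is not carried out in the text), one can evaluate both sides on an elementary tensor: for \(s,t\in\Gamma\) and \(\xi\in L^{2}M\),

\[J_{M\rtimes\Gamma}\,u_{s}\,J_{M\rtimes\Gamma}(\xi\otimes\delta_{t})=J_{M\rtimes\Gamma}\big(\sigma_{s}^{0}\sigma_{t^{-1}}^{0}J_{M}\xi\otimes\delta_{st^{-1}}\big)=\sigma_{ts^{-1}}^{0}J_{M}\sigma_{s}^{0}\sigma_{t^{-1}}^{0}J_{M}\xi\otimes\delta_{ts^{-1}},\]

and since \(J_{M}\) commutes with each \(\sigma_{r}^{0}\) while \(\sigma_{ts^{-1}}^{0}\sigma_{s}^{0}\sigma_{t^{-1}}^{0}=\sigma_{e}^{0}=1\), the right-hand side is \(\xi\otimes\delta_{ts^{-1}}=(1\otimes\rho_{s})(\xi\otimes\delta_{t})\), as claimed.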
By a fundamental theorem of Popa [11, Theorem 2.1], for a tracial von Neumann algebra \(M\), we have \(B\not\preceq_{M}Q\) if and only if there exists a net \(\{u_{i}\}_{i}\subset\mathcal{U}(B)\) so that \(E_{Q}(au_{i}b)\to 0\) in the ultrastrong-\({}^{*}\) topology for all \(a,b\in M\). Popa's theorem has been extended by Houdayer and Isono [12, Theorem 4.3] to the case when \(B\) is finite and both \(B\) and \(Q\) are with expectation.

## 3. A relative topology on C\({}^{*}\)-bimodules

In this section we collect some needed background information, and we introduce a generalization of Magajna's \(M\)-\(N\)-topology on operator bimodules [14]. Many of the results here are adaptations of current techniques in the literature, although a few results (e.g., Lemma 3.7) use new techniques. Let \(M_{0}\) and \(N_{0}\) be C\({}^{*}\)-algebras. A (concrete) operator \(M_{0}\)-\(N_{0}\)-bimodule consists of a concrete operator space \(X\subset\mathbb{B}(\mathcal{H})\), together with two faithful nondegenerate representations \(\pi:M_{0}\to\mathbb{B}(\mathcal{H})\) and \(\rho:N_{0}\to\mathbb{B}(\mathcal{H})\) so that \(X\) is a \(\pi(M_{0})\)-\(\rho(N_{0})\)-bimodule, whose bimodule structure is given by composition of operators. A (concrete) operator \(M_{0}\)-system consists of a concrete operator system \(X\subset\mathbb{B}(\mathcal{H})\), together with a faithful nondegenerate representation \(\pi:M_{0}\to\mathbb{B}(\mathcal{H})\) so that \(X\) is also a \(\pi(M_{0})\)-bimodule. Note that we then have \(M_{0}\cong\pi(M_{0})\subset X\). We say that \(X\) is a dual operator \(M_{0}\)-\(N_{0}\)-bimodule (resp. dual operator \(M_{0}\)-system) if \(X\) may be taken as above to be ultraweakly closed. If \(M_{0}=M\) and \(N_{0}=N\) are von Neumann algebras, then the operator \(M\)-\(N\)-bimodule (resp. operator \(M\)-system) is normal if this concrete realization can be made so that \(\pi\) and \(\rho\) (resp. \(\pi\)) are normal representations.
More generally, we will say that an operator \(M_{0}\)-\(N_{0}\)-bimodule (resp. operator \(M_{0}\)-system) is \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-normal (resp. \((M_{0}\subset M)\)-normal) if the concrete realization can be made so that \(\pi\) and \(\rho\) (resp. \(\pi\)) extend to normal representations of \(M\) and \(N\) (resp. \(M\)). Let \(X\) be a normal operator \(M\)-\(N\)-bimodule for von Neumann algebras \(M\) and \(N\). For each pair of positive functionals \(\omega\in M_{*}\) and \(\rho\in N_{*}\) we have an associated seminorm on \(X\) given by \[s^{\rho}_{\omega}(x)=\inf\{\omega(a^{*}a)^{1/2}\|y\|\rho(b^{*}b)^{1/2}\mid x=a^{*}yb,a\in M,b\in N,y\in X\}.\] Following Magajna [10], the topology induced by these seminorms will be called the \(M\)-\(N\)-topology, or simply the \(M\)-topology if \(M=N\). If \(S\subset X\) is a subset, then we denote the closure of \(S\) in the \(M\)-\(N\)-topology by \(\overline{S}^{M-N}\). A linear functional \(\varphi\in X^{*}\) is \(M\)-\(N\)-normal (or simply \(M\)-normal if \(M=N\)) if for each \(a\in X\) the linear functionals \(M\ni x\mapsto\varphi(xa)\) and \(N\ni y\mapsto\varphi(ay)\) are normal. We let \(X^{M\sharp N}\) denote the space of \(M\)-\(N\)-normal linear functionals and we call the resulting weak topology on \(X\) the weak \(M\)-\(N\)-topology (or simply the weak \(M\)-topology if \(M=N\)). By [10, Theorem 3.7], \(X^{M\sharp N}\) coincides with the space of \(M\)-\(N\)-topology continuous linear functionals on \(X\). More generally, we may consider a similar topology that takes ultraweakly dense C\({}^{*}\)-subalgebras into consideration. Let \(M\) and \(N\) be von Neumann algebras, let \(M_{0}\subset M\) and \(N_{0}\subset N\) be ultraweakly dense C\({}^{*}\)-subalgebras with \(1_{M}\in M_{0}\) and \(1_{N}\in N_{0}\), and let \(X\) be an operator \(M_{0}\)-\(N_{0}\)-bimodule.
We may equip \(M_{0}\) with the ultraweak topology from \(M\) and we denote by \(M_{0}^{\sharp}\subset M_{0}^{*}\) the space of normal functionals \(M_{*}\), restricted to \(M_{0}\). We similarly define \(N_{0}^{\sharp}\). Given positive functionals \(\omega\in M_{0}^{\sharp}\) and \(\rho\in N_{0}^{\sharp}\), we may consider a seminorm on \(X\) given by \[s^{\rho}_{\omega}(x)=\inf\{\omega(a^{*}a)^{1/2}\|y\|\rho(b^{*}b)^{1/2}\mid x=a^{*}yb,a\in C_{n}(M_{0}),b\in C_{n}(N_{0}),y\in\mathbb{M}_{n}(X),n\in\mathbb{N}\}.\] The triangle inequality for \(s^{\rho}_{\omega}\) follows from the standard argument showing that the Haagerup norm on tensor products satisfies the triangle inequality. In general, if \(M_{0}\) and \(N_{0}\) do not necessarily contain units, we may use the same formula above to define \(s^{\rho}_{\omega}\) on the subspace \(M_{0}XN_{0}\). Note that if we set \(\tilde{M}_{0}=M_{0}+\mathbb{C}1_{M}\) and \(\tilde{N}_{0}=N_{0}+\mathbb{C}1_{N}\), then the seminorm \(s^{\rho,M_{0}}_{\omega,N_{0}}\) on \(M_{0}XN_{0}\) obtained by viewing this as an operator \(M_{0}\)-\(N_{0}\) bimodule agrees with the corresponding seminorm \(s^{\rho,\tilde{M}_{0}}_{\omega,\tilde{N}_{0}}\) obtained by viewing \(M_{0}XN_{0}\) as an operator \(\tilde{M}_{0}\)-\(\tilde{N}_{0}\) bimodule. Indeed, we clearly have \(s^{\rho,\tilde{M}_{0}}_{\omega,\tilde{N}_{0}}\leq s^{\rho,M_{0}}_{\omega,N_{0}}\), and if \(x=a^{*}yb\in M_{0}XN_{0}\) with \(a\in C_{n}(\tilde{M}_{0})\) and \(b\in C_{n}(\tilde{N}_{0})\), then letting \(\{e_{i}\}_{i}\) and \(\{f_{j}\}_{j}\) denote approximate identities for \(M_{0}\) and \(N_{0}\) respectively we have \(s^{\rho,M_{0}}_{\omega,N_{0}}(e_{i}a^{*}ybf_{j})\leq\omega(e_{i}a^{*}ae_{i})^{1/2}\|y\|\rho(f_{j}b^{*}bf_{j})^{1/2}\).
Since \(\lim_{i\to\infty}\omega(e_{i}a^{*}ae_{i})=\omega(a^{*}a),\lim_{j\to\infty}\rho(f_{j}b^{*}bf_{j})=\rho(b^{*}b)\), and \(\lim_{i\to\infty,j\to\infty}\|x-e_{i}xf_{j}\|=0\), it then follows that \(s^{\rho,M_{0}}_{\omega,N_{0}}\leq s^{\rho,\tilde{M}_{0}}_{\omega,\tilde{N}_{0}}\). For a general operator \(M_{0}\)-\(N_{0}\) bimodule we may then define \(s^{\rho}_{\omega}\) on all of \(X\) by the formula \[s^{\rho}_{\omega}(x)=\max\left\{\inf_{z\in M_{0}XN_{0}}\|x-z\|,s^{\rho,\tilde{M}_{0}}_{\omega,\tilde{N}_{0}}(x)\right\}.\] We call the topology on \(X\) induced by the family of seminorms \(\{s^{\rho}_{\omega}\mid\omega\in(M_{0})_{+}^{\sharp},\rho\in(N_{0})_{+}^{\sharp}\}\) the \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology (or simply the \((M_{0}\subset M)\)-topology when \(M_{0}=N_{0}\) and \(M=N\)). In the case when \(M_{0}=M\) and \(N_{0}=N\), the following lemma is contained in [10, Lemma 3.1].

**Lemma 3.1**.: _Let \(M\) and \(N\) be von Neumann algebras, let \(M_{0}\subset M\) and \(N_{0}\subset N\) be ultraweakly dense \(\mathrm{C}^{*}\)-subalgebras, let \(X\) be an operator \(M_{0}\)-\(N_{0}\)-bimodule, and \(Y\subset X\) an operator \(M_{0}\)-\(N_{0}\)-subbimodule. Then the restriction of the \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology on \(X\) to \(Y\) coincides with the \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology on \(Y\)._

Proof.: Given \(\omega\in(M_{0})_{+}^{\sharp}\) and \(\rho\in(N_{0})_{+}^{\sharp}\), we denote by \(s^{\rho}_{\omega,X}\) and \(s^{\rho}_{\omega,Y}\) the above seminorms on \(X\) and \(Y\), respectively. It is clear that \(s^{\rho}_{\omega,X}(y)\leq s^{\rho}_{\omega,Y}(y)\) for any \(y\in Y\). To see the reverse, suppose first that \(1_{M}\in M_{0}\) and \(1_{N}\in N_{0}\) and \(y\in Y\) is such that we have a decomposition \(y=a^{*}xb\) with \(x\in\mathbb{M}_{n}(X)\), \(a\in C_{n}(M_{0})\), and \(b\in C_{n}(N_{0})\).
For \(k\geq 1\) we let \(f_{k}:[0,\infty)\to[0,\infty)\) denote the function given by \(f_{k}(t)=\begin{cases}1/k&0\leq t\leq 1/k\\ t&t>1/k\end{cases}\), then \(y_{k}=f_{k}(|a|)^{-1}a^{*}xbf_{k}(|b|)^{-1}\in Y\) with \(\|y_{k}\|\leq\|x\|\), and so \[s^{\rho}_{\omega,Y}(y)\leq\omega(f_{k}(|a|)^{2})^{1/2}\|x\|\rho(f_{k}(|b|)^{2})^{1/2}.\] Since \(f_{k}(|a|)^{2}\to|a|^{2}\) and \(f_{k}(|b|)^{2}\to|b|^{2}\) in norm, we then have \(s^{\rho}_{\omega,Y}(y)\leq s^{\rho}_{\omega,X}(y)\). In the non-unital case we notice that if \(\{e_{i}\}_{i}\) and \(\{f_{j}\}_{j}\) are approximate units for \(M_{0}\) and \(N_{0}\), respectively, then we have \(\inf_{z\in M_{0}XN_{0}}\|x-z\|=\lim_{i\to\infty,j\to\infty}\|x-e_{i}xf_{j}\|\) for \(x\in X\), and so if \(y\in Y\), then we have \(\inf_{z\in M_{0}XN_{0}}\|y-z\|=\inf_{z\in M_{0}YN_{0}}\|y-z\|\). The general result then follows from the unital case.

**Remark 3.2**.: If \(1_{M}\in M_{0}\), \(1_{N}\in N_{0}\) and we are given an operator \(M_{0}\)-\(N_{0}\)-bimodule \(X\), we may also consider the function \[\tilde{s}^{\rho}_{\omega}(x)=\inf\{\omega(a^{2})^{1/2}\|y\|\rho(b^{2})^{1/2}\mid x=ayb,a\in(M_{0})_{+},b\in(N_{0})_{+},y\in X\}.\] The same argument above shows that if \(Y\subset X\) is an \(M_{0}\)-\(N_{0}\)-subbimodule and \(y\in Y\), then the value of \(\tilde{s}^{\rho}_{\omega}(y)\) does not depend on whether we consider \(y\) as an element in \(Y\) or \(X\). In the case when \(X=\mathbb{B}(\mathcal{H})\), we may use polar decomposition in \(\mathbb{B}(\mathcal{H})\) to see that \(\tilde{s}^{\rho}_{\omega}=s^{\rho}_{\omega}\). Since every operator \(M_{0}\)-\(N_{0}\)-bimodule is a subbimodule of \(\mathbb{B}(\mathcal{H})\) for some Hilbert space, it then follows that \(s^{\rho}_{\omega}=\tilde{s}^{\rho}_{\omega}\) in general. When \(M_{0}=N_{0}\), \(M=N\), and \(\omega=\rho\), we use the notation \(s_{\omega}\) in place of \(s^{\omega}_{\omega}\).
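One elementary observation, recorded here for orientation (it is not part of the original text): when \(\omega\) and \(\rho\) are states, the trivial decomposition \(x=1\cdot x\cdot 1\) in the definition of \(s^{\rho}_{\omega}\) gives

\[s^{\rho}_{\omega}(x)\leq\omega(1)^{1/2}\,\|x\|\,\rho(1)^{1/2}=\|x\|,\]

so each seminorm is dominated by the operator norm, and the \(M\)-\(N\)-topology is coarser than the norm topology, consistent with it being intermediate between the uniform and ultraweak topologies.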
Continuing as above with \(M\) and \(N\) von Neumann algebras, \(M_{0}\subset M\), \(N_{0}\subset N\) ultraweakly dense \(\mathrm{C}^{*}\)-subalgebras, and \(X\) an operator \(M_{0}\)-\(N_{0}\)-bimodule, we denote by \(X^{M_{0}\sharp N_{0}}\) (or simply \(X^{\sharp}\) if no confusion will arise) the space of functionals \(\varphi\in X^{*}\) such that for any \(x\in X\) the map \(M_{0}\times N_{0}\ni(a,b)\mapsto\varphi(axb)\) is separately ultraweak continuous. We call the \(\sigma(X,X^{M_{0}\sharp N_{0}})\)-topology on \(X\) the weak \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology (or simply the weak \((M_{0}\subset M)\)-topology if \(M_{0}=N_{0}\) and \(M=N\)). The following proposition is from [10, Theorem 3.7], and we include a proof for convenience. **Proposition 3.3**.: _In the above setting, a functional \(\varphi\in X^{*}\) is continuous in the \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology if and only if \(\varphi\in X^{M_{0}\sharp N_{0}}\)._ Proof.: The backward direction is clear. To see the forward direction, note that \[M_{0}\times X\times N_{0}\ni(a,x,b)\mapsto\varphi(axb)\in\mathbb{C}\] is a completely bounded multilinear map. It follows from the Christensen-Paulsen-Sinclair representation theorem (see, e.g., [13, Theorem 1.5.4]) that there exist \(*\)-representations \(\pi_{1}:M_{0}\to\mathbb{B}(\mathcal{K}_{1})\), \(\pi_{2}:N_{0}\to\mathbb{B}(\mathcal{K}_{2})\), a complete contraction \(\phi:X\to\mathbb{B}(\mathcal{H})\), bounded operators \(T_{1}:\mathcal{H}\to\mathcal{K}_{1}\), \(T_{2}:\mathcal{K}_{2}\to\mathcal{H}\), and vectors \(\xi_{i}\in\mathcal{K}_{i}\) such that \(\varphi(axb)=\langle\pi_{1}(a)T_{1}\phi(x)T_{2}\pi_{2}(b)\xi_{2},\xi_{1}\rangle\). We let \(p_{1}\in M_{0}^{**}\) (resp. \(p_{2}\in N_{0}^{**}\)) denote the support projection for the inclusion \(M_{0}\to M\) (resp. \(N_{0}\to N\)). 
Since for any \(x\in X\) the map \(M_{0}\times N_{0}\ni(a,b)\mapsto\varphi(axb)\) is separately ultraweakly continuous, it then follows that we have \(\varphi(axb)=\langle\pi_{1}(a)T_{1}\phi(x)T_{2}\pi_{2}(b)p_{2}\xi_{2},p_{1}\xi_{1}\rangle\) for \(x\in X\), \(a\in M_{0}\), and \(b\in N_{0}\). Then we have \(|\varphi(axb)|\leq\|T_{1}\|\|T_{2}\|\eta_{1}(aa^{*})^{1/2}\|x\|\eta_{2}(b^{*}b)^{1/2}\), where \(\eta_{i}(\cdot)=\langle\pi_{i}(\cdot)p_{i}\xi_{i},p_{i}\xi_{i}\rangle\). Since \(\eta_{1}\in M_{0}^{\sharp}\) and \(\eta_{2}\in N_{0}^{\sharp}\), it is then easy to conclude that \(\varphi\) is continuous in the \((M_{0}\subset M)\)-\((N_{0}\subset N)\)-topology.

The following result is in the same spirit as the Noncommutative Egorov Theorem.

**Lemma 3.4**.: _Let \(M\) be a von Neumann algebra, \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra and \(X\) an operator \(M_{0}\)-bimodule. Suppose for each \(1\leq k\leq n\), \(\{x_{i}^{k}\}_{i\in I}\subset X\) is a net converging to \(0\) in the \((M_{0}\subset M)\)-topology. Then, for any finite subset \(F=\{\varphi_{j}\mid 1\leq j\leq m\}\subset(M_{0}^{\sharp})_{+}\), we may find a net of positive contractions \(1-z_{i}\in M_{0}\), such that \(\|z_{i}x_{i}^{k}z_{i}\|\to 0\) and \(\varphi_{j}(1-z_{i})\to 0\) for any \(1\leq j\leq m\). When \(M_{0}=M\), we may choose \(\{z_{i}\}\) to be projections._

Proof.: Note first that it suffices to consider the case when \(\{x_{i}^{k}\}_{i\in I}\subset M_{0}XM_{0}\). For each \(i\), \(1\leq k\leq n\) and \(1\leq j\leq m\), we may then find a decomposition \(x_{i}^{k}=a_{i,j,k}^{*}y_{i,j,k}b_{i,j,k}\) with \(a_{i,j,k},b_{i,j,k}\in M_{0}\), \(y_{i,j,k}\in X\) and \(\varphi_{j}(a_{i,j,k}^{*}a_{i,j,k})^{1/2}=\|y_{i,j,k}\|=\varphi_{j}(b_{i,j,k}^{*}b_{i,j,k})^{1/2}\to 0\). We fix \(N>1\) and let \(f_{N}:[0,\infty)\to[0,1]\) denote a piecewise linear continuous function such that \(f_{N}(t)=1\) if \(t\leq 1/N\), and \(f_{N}(t)=0\) if \(t\geq 2/N\).
We then define \[z_{i}=f_{N}\left(\sum_{j=1}^{m}\sum_{k=1}^{n}(a_{i,j,k}^{*}a_{i,j,k}+b_{i,j,k}^{*}b_{i,j,k})\right)\in M_{0}.\] Note that \(0\leq z_{i}\leq 1\), and \[\|z_{i}a_{i,j,k}^{*}\|^{2}\leq\|z_{i}\left(\sum_{j=1}^{m}\sum_{k=1}^{n}(a_{i,j,k}^{*}a_{i,j,k}+b_{i,j,k}^{*}b_{i,j,k})\right)z_{i}\|\leq 2/N.\] Similarly we have \(\|b_{i,j,k}z_{i}\|^{2}\leq 2/N\), and it follows that \(\lim_{i\to\infty}\|z_{i}x_{i}^{k}z_{i}\|=0\) for any \(1\leq k\leq n\). Also, note that we have \(1-z_{i}\leq N\sum_{j=1}^{m}\sum_{k=1}^{n}(a_{i,j,k}^{*}a_{i,j,k}+b_{i,j,k}^{*}b_{i,j,k})\) and hence \(\lim_{i\to\infty}\varphi_{j}(1-z_{i})=0\). When \(M_{0}=M\), replace \(f_{N}\) with \(1_{[0,1/N]}\) so that \(z_{i}\) is a projection in \(M\). 

The next lemma will allow us to adapt the standard c.c. to u.c.p. perturbation result to the setting of the \(M\)-topology. 

**Lemma 3.5**.: _Let \(M\) be a von Neumann algebra, and let \(M_{0}\subset M\) be an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Let \(E\) be an operator \(M_{0}\)-system, and for \(i\in I\) suppose \(E_{i}\) is an operator system and \(\phi_{i}:E_{i}\to E\) is a net of c.c. maps such that \(1_{E}-\phi_{i}(1_{E_{i}})\to 0\) in the \((M_{0}\subset M)\)-topology. Then, for any finite subset of states \(F\subset M_{0}^{\sharp}\), there exist c.c. maps \(\psi_{i}:E_{i}\to E\) such that \(\|1_{E}-\psi_{i}(1_{E_{i}})\|\to 0\) and_ \[\lim_{i}\sup_{x\in(E_{i})_{1}}s_{\omega}^{\rho}((\psi_{i}-\phi_{i})(x))=0,\] _for any \(\omega,\rho\in F\)._ 

Proof.: Since \(1_{E}-\phi_{i}(1_{E_{i}})\to 0\) in the \((M_{0}\subset M)\)-topology, it follows as in the proof of Lemma 3.4 that there exist \(1-z_{i}\in M_{0}\) with \(0\leq z_{i}\leq 1\) such that \(\lim_{i}\omega(1-z_{i}^{2})=0\) for any \(\omega\in F\) and such that \(\|z_{i}(1_{E}-\phi_{i}(1_{E_{i}}))z_{i}\|\to 0\). 
For each \(i\), fix a state \(\eta_{i}\in E_{i}^{*}\) and define \(\psi_{i}:E_{i}\to E\) by \(\psi_{i}(x)=z_{i}\phi_{i}(x)z_{i}+(1-z_{i}^{2})\eta_{i}(x)=\left(\begin{smallmatrix}z_{i}\\ (1-z_{i}^{2})^{1/2}\end{smallmatrix}\right)^{*}\left(\begin{smallmatrix}\phi_{i}(x)&0\\ 0&\eta_{i}(x)\end{smallmatrix}\right)\left(\begin{smallmatrix}z_{i}\\ (1-z_{i}^{2})^{1/2}\end{smallmatrix}\right)\). Then, \(\psi_{i}\) is c.c. and satisfies \(\|1_{E}-\psi_{i}(1_{E_{i}})\|\to 0\). Moreover, for \(x\in E_{i}\) and \(\omega,\rho\in F\) we have \[s_{\omega}^{\rho}(\phi_{i}(x)-\psi_{i}(x))\leq\omega(1-z_{i}^{2})\|x\|+s_{\omega}^{\rho}(\phi_{i}(x)-z_{i}\phi_{i}(x)z_{i})\] \[=\omega(1-z_{i}^{2})\|x\|+s_{\omega}^{\rho}\left(\left(\begin{smallmatrix}1\\ -iz_{i}\end{smallmatrix}\right)^{*}\left(\begin{smallmatrix}\phi_{i}(x)&0\\ 0&\phi_{i}(x)\end{smallmatrix}\right)\left(\begin{smallmatrix}1\\ iz_{i}\end{smallmatrix}\right)\right)\] \[\leq\omega(1-z_{i}^{2})\|x\|+\omega(1-z_{i}^{2})^{1/2}\|x\|\rho(1-z_{i}^{2})^{1/2}.\] 

**Lemma 3.6**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra, \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra and \(\psi_{i}:E_{i}\to\mathbb{B}(\mathcal{H})\) a net of c.c. maps from operator systems \(E_{i}\) to \(\mathbb{B}(\mathcal{H})\). Suppose \(\psi_{i}(1_{E_{i}})\to 1\) in the \((M_{0}\subset M)\)-topology. Then, for any finite set of states \(F\subset M_{0}^{\sharp}\), there exists a net of u.c.p. maps \(\phi_{i}:E_{i}\to\mathbb{B}(\mathcal{H})\) such that \(\lim_{i}\sup_{x\in(E_{i})_{1}}s_{\omega}^{\rho}((\phi_{i}-\psi_{i})(x))=0\) for any \(\omega,\rho\in F\)._ 

Proof.: By Lemma 3.5, we have c.c. maps \(\psi_{i}^{\prime}:E_{i}\to\mathbb{B}(\mathcal{H})\) such that \(\psi_{i}^{\prime}(1_{E_{i}})\to 1\) in norm and \(\lim_{i}\sup_{x\in(E_{i})_{1}}s_{\omega}^{\rho}((\psi_{i}-\psi_{i}^{\prime})(x))=0\) for any \(\omega,\rho\in F\). 
Replacing \(\psi_{i}^{\prime}(\cdot)\) with \((\psi_{i}^{\prime}(\cdot)+\psi_{i}^{\prime}(\cdot)^{*})/2\), we may further assume \(\psi_{i}^{\prime}\) is self-adjoint. Apply [1, Corollary B.9] and we obtain a u.c.p. map \(\phi_{i}:E_{i}\to\mathbb{B}(\mathcal{H})\) such that \(\|\phi_{i}-\psi_{i}^{\prime}\|_{cb}\leq 2\|\psi_{i}^{\prime}(1_{E_{i}})-1\|\). It then follows that for any \(x\in E_{i}\), \[s_{\omega}^{\rho}(\phi_{i}(x)-\psi_{i}(x)) \leq 2\|\psi_{i}^{\prime}(1_{E_{i}})-1\|\|x\|+s_{\omega}^{\rho}( \psi_{i}^{\prime}(x^{*})^{*}-\psi_{i}(x))/2+s_{\omega}^{\rho}(\psi_{i}^{\prime }(x)-\psi_{i}(x))/2\] \[\leq 2\|\psi_{i}^{\prime}(1_{E_{i}})-1\|\|x\|+\omega(1-z_{i}^{2}) \|x\|+\omega(1-z_{i}^{2})^{1/2}\|x\|\rho(1-z_{i}^{2})^{1/2}.\] ### \(M\)-\(\mathrm{C}^{*}\)-algebras and normal biduals If \(M_{0}\) is a \(\mathrm{C}^{*}\)-algebra, then an \(M_{0}\)-\(\mathrm{C}^{*}\)-algebra consists of a \(\mathrm{C}^{*}\)-algebra \(A\), together with a faithful nondegenerate \(*\)-homomorphism from \(M_{0}\) into the multiplier algebra \(M(A)\). If \(M_{0}=M\) is a von Neumann algebra, then an \(M\)-\(\mathrm{C}^{*}\)-algebra \(A\) is normal if \(M(A)\) is a normal operator \(M\)-system. More generally, if we have an ultraweakly dense \(\mathrm{C}^{*}\)-algebra \(M_{0}\subset M\), then an \(M_{0}\)-\(\mathrm{C}^{*}\)-algebra \(A\) is \((M_{0}\subset M)\)-normal if \(M(A)\) is \((M_{0}\subset M)\)-normal as an operator \(M_{0}\)-system. If \(M\) is a von Neumann algebra and \(M_{0}\subset M\) is an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra, and if we have a \((M_{0}\subset M)\)-normal \(M_{0}\)-\(\mathrm{C}^{*}\)-algebra \(A\), then we let \(p_{\mathrm{nor}}\in M_{0}^{**}\subset A^{**}\) denote the projection corresponding to the support of the identity representation \(M_{0}\to M\). 
On the predual \((p_{\mathrm{nor}}A^{**}p_{\mathrm{nor}})_{*}\) we may consider the restriction map to \(A\), and this gives rise to an operator space isomorphism \((p_{\mathrm{nor}}A^{**}p_{\mathrm{nor}})_{*}\cong A^{\sharp}\) (see [1, Section 2]). The dual map then allows us to equip \(A^{\sharp*}\) with a von Neumann algebraic structure so that \[A^{\sharp*}\cong p_{\mathrm{nor}}A^{**}p_{\mathrm{nor}}.\] Note that \(p_{\mathrm{nor}}\) commutes with \(M_{0}\subset M(A)\subset A^{**}\), and so this isomorphism preserves the natural \(M_{0}\)-bimodule structures on \(A^{\sharp*}\) and \(p_{\mathrm{nor}}A^{**}p_{\mathrm{nor}}\), so we will view \(A^{\sharp*}\) as a von Neumann algebra that contains \(M\) as a von Neumann subalgebra. It is worth noting that the canonical mapping of \(A\) into \(A^{\sharp*}\) is only a complete order isomorphism, rather than a \(*\)-isomorphism, since \(p_{\mathrm{nor}}\) need not be central in \(A^{**}\) if \(A\) is not generated as a \(\mathrm{C}^{*}\)-algebra by \(M_{0}\) and \(M_{0}^{\prime}\cap A\). The von Neumann algebra \(A^{\sharp*}\) may be seen as the universal version of the construction considered in [1, Proposition 3.1]. If \(A\) is unital and \(E\subset A\) a normal operator \(M_{0}\)-subsystem, then we may equip \(E^{\sharp*}\) with a dual normal operator \(M\)-system structure, by identifying it as an operator \(M\)-subsystem of \(A^{\sharp*}\). We will denote by \(i_{E}:E\to E^{\sharp*}\) the canonical complete order isomorphism. 
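As a quick sanity check on this construction (standard facts, and not needed for what follows), one can compute the normal bidual in the degenerate case \(A=M_{0}=M\):

```latex
% Degenerate case A = M_0 = M of the normal bidual construction.
% A functional \varphi on M lies in M^{M \sharp M} precisely when
% (a,b) \mapsto \varphi(axb) is separately ultraweakly continuous;
% taking x = 1 shows that \varphi must be normal, and conversely every
% normal functional has this property.  Hence
\[
  M^{\sharp} = M_{*}.
\]
% Moreover, p_nor is central in M^{**} (it is the support projection of
% the normal representation id : M \to M), and compressing by it
% recovers M itself:
\[
  M^{\sharp *} \;\cong\; p_{\mathrm{nor}} M^{**} p_{\mathrm{nor}} \;\cong\; M,
\]
% with the canonical map i_M : M \to M^{\sharp *} the identity.
```

In particular, for \(A=M\) the complete order embedding \(A\to A^{\sharp*}\) is a \(*\)-isomorphism, consistent with the caveat above about \(p_{\mathrm{nor}}\) failing to be central only for more general \(A\).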
If we have an inclusion of \((M_{0}\subset M)\)-normal \(M_{0}\)-C\({}^{*}\)-algebras \(J\subset A\), then we have a canonical (not necessarily unital) embedding of von Neumann algebras \(J^{**}\subset A^{**}\), and, by considering an \(M_{0}\)-quasi-central approximate unit in \(J\), we see that the support projection \(p_{0}\) of \(J^{**}\) in \(A^{**}\) commutes with \(p_{\mathrm{nor}}\in A^{**}\) and the corresponding projection \(p_{0}p_{\mathrm{nor}}\) gives the corresponding normal projection in \(J^{**}\). Thus, we also have a canonical (not necessarily unital) embedding of von Neumann algebras \(J^{\sharp*}\subset A^{\sharp*}\). In particular, if \(J\subset A\) is an ideal, then the support projection \(p\) for \(J^{\sharp*}\) is central and gives rise to the decomposition \[A^{\sharp*}=J^{\sharp*}\oplus p^{\perp}A^{\sharp*}.\] If \(J\) is not an ideal, but rather a hereditary C\({}^{*}\)-subalgebra, then \(p\) is no longer central and so in this case we obtain the matrix decomposition \[A^{\sharp*}=\begin{pmatrix}J^{\sharp*}&pA^{\sharp*}p^{\perp}\\ p^{\perp}A^{\sharp*}p&p^{\perp}A^{\sharp*}p^{\perp}\end{pmatrix}.\] If \(M\) is a von Neumann algebra and \(E\) is an operator \(M\)-system, then the Cauchy-Schwarz inequality easily shows that a positive linear functional \(\varphi\in E^{*}\) is contained in \(E^{M\sharp M}\) if and only if \(\varphi_{|M}\) is normal. This observation can be used to give the following useful lemma, giving a criterion for when a c.p. map is continuous in the \(M\)-topology. **Lemma 3.7**.: _Let \(M\) and \(N\) be von Neumann algebras and let \(E\) and \(F\) be operator \(M\) and \(N\)-systems, respectively. 
If \(\phi:E\to F\) is completely positive such that the restriction of \(\phi\) to \(M\) defines a normal map from \(M\) to \(N\), then \(\phi\) is a continuous map from \(E\) with the weak \(M\)-topology, to \(F\) with the weak \(N\)-topology._ Proof.: To prove that \(\phi\) is continuous from the weak \(M\)-topology to the weak \(N\)-topology, we need to check that if \(\eta\in F^{N\sharp N}\), then \(\eta\circ\phi\in E^{M\sharp M}\). Moreover, by viewing \(F\) as an operator subsystem of \((F^{N\sharp N})^{*}\), we see that every linear functional \(\eta\in F^{N\sharp N}\) is implemented by a vector linear functional in some \(N\)-bimodular c.p. representation of \(F\) where \(N\) is normally represented. Hence, by the polarization identity, we may write \(\eta\) as a span of states in \(F^{N\sharp N}\). Thus, it suffices to check that \(\eta\circ\phi\in E^{M\sharp M}\) whenever \(\eta\in F^{N\sharp N}\) is a state. If \(\eta\in F^{N\sharp N}\) is a state, then \(\eta_{|N}\) is normal and hence \((\eta\circ\phi)_{|M}\) is normal, from which it follows that \(\eta\circ\phi\in E^{M\sharp M}\). **Corollary 3.8**.: _Using the notation above, the map \(\mathrm{Ad}(e_{N}):\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}N)\) is continuous from the weak \(M\)-topology to the weak \(N\)-topology, and also from the weak \(M^{\prime}\)-topology to the weak \(N^{\prime}\)-topology._ ### The small-at-infinity boundary and boundary pieces If \(M\subset\mathbb{B}(\mathcal{H})\) is a von Neumann algebra, then we will have occasion to use not only the \(M\)-topology, but also the \(M^{\prime}\)-topology on \(\mathbb{B}(\mathcal{H})\). 
We therefore introduce the seminorms \(r_{\omega}\) on \(\mathbb{B}(L^{2}M)\) given by \[r_{\omega}(T)=\inf\{(\omega(Ja^{*}aJ)+\omega(b^{*}b))^{1/2}\|Z\|(\omega(Jc^{*} cJ)+\omega(d^{*}d))^{1/2}\},\] where \(\omega\) is a normal state on \(M\), \(J\) is the modular conjugation operator, and the infimum is taken over all decompositions \(T=\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right)^{*}Z\left(\begin{smallmatrix}c\\ d\end{smallmatrix}\right)\) where \(a,c\in M^{\prime}\), \(b,d\in M\) and \(Z\in\mathbb{M}_{2}(\mathbb{B}(L^{2}M))\). These seminorms on bounded sets describe the coarsest locally convex topological vector space topology that contains both the \(M^{\prime}\)-topology and the \(M\)-topology. If \(X\) is a left operator \(M\)-module (i.e., an operator \(M\)-\(\mathbb{C}\)-bimodule), then for a positive linear functional \(\omega\in M^{\sharp}\) we denote by \(s^{\ell}_{\omega}\) the seminorm \(s^{\rho}_{\omega}\) where \(\rho:\mathbb{C}\to\mathbb{C}\) is the identity map. Given a right operator \(M_{0}\)-module, we similarly denote by \(s^{r}_{\omega}\) the seminorm \(s^{\omega}_{\rho}\). We will also use similar notation \(r^{\ell}_{\omega}\) and \(r^{r}_{\omega}\) for the corresponding seminorms on \(\mathbb{B}(L^{2}M)\). 
For easy reference, we summarize these seminorms on an operator \(T\in\mathbb{B}(L^{2}M)\) here: \[s_{\omega}(T)=\inf\{\omega(a^{*}a)^{1/2}\|S\|\omega(b^{*}b)^{1/2}\mid T=a^{*}Sb,a,b\in M,S\in\mathbb{B}(L^{2}M)\};\] \[s^{\ell}_{\omega}(T)=\inf\{\omega(a^{*}a)^{1/2}\|S\|\mid T=a^{*}S,a\in M,S\in\mathbb{B}(L^{2}M)\};\] \[s^{r}_{\omega}(T)=\inf\{\|S\|\omega(b^{*}b)^{1/2}\mid T=Sb,b\in M,S\in\mathbb{B}(L^{2}M)\};\] \[r_{\omega}(T)=\inf\{(\omega(Ja^{*}aJ)+\omega(b^{*}b))^{1/2}\|Z\|(\omega(Jc^{*}cJ)+\omega(d^{*}d))^{1/2}\mid T=\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right)^{*}Z\left(\begin{smallmatrix}c\\ d\end{smallmatrix}\right),a,c\in M^{\prime},b,d\in M,Z\in\mathbb{M}_{2}(\mathbb{B}(L^{2}M))\};\] \[r^{\ell}_{\omega}(T)=\inf\{(\omega(Ja^{*}aJ)+\omega(b^{*}b))^{1/2}\|Z\|\mid T=\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right)^{*}Z,a\in M^{\prime},b\in M,Z\in C_{2}(\mathbb{B}(L^{2}M))\};\] \[r^{r}_{\omega}(T)=\inf\{\|Z\|(\omega(Jc^{*}cJ)+\omega(d^{*}d))^{1/2}\mid T=Z\left(\begin{smallmatrix}c\\ d\end{smallmatrix}\right),c\in M^{\prime},d\in M,Z\in R_{2}(\mathbb{B}(L^{2}M))\}.\] In the case when \(M\) is finite with a normal faithful trace \(\tau\), the norm \(r^{r}_{\tau}(T)\) is equivalent to the norm \(\|T\|_{\infty,2}=\sup_{a\in(M)_{1}}\|T\hat{a}\|\), which is nothing but the operator norm when thinking of \(T\) as an operator from \(M\subset L^{2}(M,\tau)\) with the uniform norm into \(L^{2}(M,\tau)\) [1]. This result was generalized in [1, Proposition 3.1] to show that the norm \(r_{\tau}(T)\) is equivalent to the norm \(\|T\|_{\infty,1}=\sup_{a,b\in(M)_{1}}|\langle T\hat{a},\hat{b}\rangle|\) where we think of \(T\) as an operator from \(M\subset L^{2}(M,\tau)\) to \(L^{1}(M,\tau)\supset L^{2}(M,\tau)\). The relationship between the seminorms is given by the following result. **Proposition 3.9**.: _Let \(M\) be a von Neumann algebra and \(A\) an \(M\)-\(C^{*}\)-algebra. 
Then, for \(\mu\) a normal positive linear functional and \(x\in A\), we have \(s^{\ell}_{\mu}(x^{*})=s^{r}_{\mu}(x)=s_{\mu}(x^{*}x)^{1/2}\). Also, if \(T\in\mathbb{B}(L^{2}M)\), then we have \(r^{\ell}_{\mu}(T^{*})=r^{r}_{\mu}(T)=r_{\mu}(T^{*}T)^{1/2}\)._ Proof.: First note that the equality \(s^{\ell}_{\mu}(x^{*})=s^{r}_{\mu}(x)\) is obvious. Also, if \(x=yb\) with \(y\in A\) and \(b\in M\), then \(x^{*}x=b^{*}y^{*}yb\), and so \(s_{\mu}(x^{*}x)^{1/2}\leq\mu(b^{*}b)^{1/2}\|y^{*}y\|^{1/2}\). Taking the infimum over all such decompositions gives \(s_{\mu}(x^{*}x)^{1/2}\leq s^{r}_{\mu}(x)\). To see the reverse inequality, we suppose \(x^{*}x=a^{*}yb\) for some \(y\in A\) and \(a,b\in M\), and set \(\kappa=\mu(a^{*}a)^{1/2}\|y\|\mu(b^{*}b)^{1/2}\). By rescaling, we will assume that \(\mu(a^{*}a)=\mu(b^{*}b)\). We then have \[x^{*}x=\frac{1}{2}(a^{*}yb+b^{*}y^{*}a)=\frac{1}{2}\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right)^{*}\left(\begin{smallmatrix}0&y\\ y^{*}&0\end{smallmatrix}\right)\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right),\] and note that \(\frac{1}{2}\mu(a^{*}a+b^{*}b)=\mu(a^{*}a)=\mu(a^{*}a)^{1/2}\mu(b^{*}b)^{1/2}\). We consider the polar decomposition \(\left(\begin{smallmatrix}a\\ b\end{smallmatrix}\right)=u(a^{*}a+b^{*}b)^{1/2}\), and we set \(z=u^{*}\left(\begin{smallmatrix}0&y\\ y^{*}&0\end{smallmatrix}\right)u\) and \(a_{0}=\frac{1}{\sqrt{2}}(a^{*}a+b^{*}b)^{1/2}\) so that we have \(x^{*}x=a_{0}za_{0}\) with \(\mu(a_{0}^{2})\|z\|\leq\kappa\). Note that by replacing \(z\) with \(pzp\) where \(p\) is the support of \(a_{0}\in M\) we may assume also that \(z\geq 0\). 
We then have \(|x|=|z^{1/2}a_{0}|\) and so if we consider the polar decompositions \(x=v|x|\) and \(z^{1/2}a_{0}=w|z^{1/2}a_{0}|=w|x|\), then we have \(x=v|x|=vw^{*}z^{1/2}a_{0}\), and hence \[s_{\mu}^{r}(x)\leq\|vw^{*}z^{1/2}\|\mu(a_{0}^{2})^{1/2}\leq\kappa^{1/2}.\] Taking the infimum over all such decompositions then gives \(s_{\mu}^{r}(x)\leq s_{\mu}(x^{*}x)^{1/2}\). The result for the seminorms \(r_{\mu}^{\ell}\), \(r_{\mu}^{r}\), and \(r_{\mu}\) follows similarly. 

**Corollary 3.10**.: _Let \(M\) be a von Neumann algebra, let \(A\) be an \(M\)-\(\mathrm{C}^{*}\)-algebra, and let \(\mu\) be a normal positive linear functional on \(M\). Then for \(x,y,z\in A\) we have_ \[s_{\mu}(x^{*}yz)\leq s_{\mu}(x^{*}x)^{1/2}\|y\|s_{\mu}(z^{*}z)^{1/2}.\] _Also, if \(x,y,z\in\mathbb{B}(L^{2}M)\), then we have_ \[r_{\mu}(x^{*}yz)\leq r_{\mu}(x^{*}x)^{1/2}\|y\|r_{\mu}(z^{*}z)^{1/2}.\] 

Proof.: We clearly have \(s_{\mu}(x^{*}yz)\leq s_{\mu}^{\ell}(x^{*})\|y\|s_{\mu}^{r}(z)\), and so the result follows directly from Proposition 3.9. The case for \(r_{\mu}\) follows similarly. 

Let \(M\) be a von Neumann algebra. Generalizing the group setting from [1], an \(M\)-boundary piece was defined in [10] to be a hereditary \(\mathrm{C}^{*}\)-subalgebra \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) such that \(M\cap M(\mathbb{X})\subset M\) and \(M^{\prime}\cap M(\mathbb{X})\subset M^{\prime}\) are weakly dense. Denote by \[\mathbb{K}_{\mathbb{X}}^{L}(M)=\overline{\mathbb{B}(L^{2}M)\mathbb{X}}^{\mathbb{C}-M^{\prime}},\] which is a left ideal containing \(M\) and \(M^{\prime}\) in its space of right multipliers. 
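As a sketch of the simplest example (standard facts, not taken from the text above), the compact operators always satisfy the two density conditions defining a boundary piece:

```latex
% The compacts X = K(L^2 M) form an M-boundary piece for any von
% Neumann algebra M \subset B(L^2 M): they are a hereditary
% C^*-subalgebra of B(L^2 M) whose multiplier algebra is all of
% B(L^2 M), so the required density conditions hold trivially:
\[
  M \cap M(\mathbb{K}(L^{2}M)) = M \cap \mathbb{B}(L^{2}M) = M,
  \qquad
  M' \cap M(\mathbb{K}(L^{2}M)) = M' \cap \mathbb{B}(L^{2}M) = M'.
\]
```

This is the boundary piece underlying the notation \(\mathbb{K}^{L}(M)\), \(\mathbb{K}(M)\), \(\mathbb{K}^{\infty,1}(M)\), and \(\mathbb{S}(M)\) used for the case \(\mathbb{X}=\mathbb{K}(L^{2}M)\).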
If \(\mathcal{H}\) is a Hilbert space, then we set \[\mathbb{K}_{\mathbb{X}}^{L}(M,\mathcal{H})=\overline{\mathbb{B}(L^{2}M,\mathcal{H})\mathbb{X}}^{\mathbb{C}-M^{\prime}}.\] A consequence of [11, Proposition 2.2] (see [10, Proposition 2.3]) is that an operator \(T\in\mathbb{B}(L^{2}M)\) is contained in \(\mathbb{K}_{\mathbb{X}}^{L}(M)\) if and only if there exist orthogonal families of projections \(\{f_{i}\}_{i\in I},\{\tilde{f}_{j}\}_{j\in J}\subset M\) such that \(Tf_{i}J\tilde{f}_{j}J\in\mathbb{X}\) for each \(i\in I\), \(j\in J\). Let \[\mathbb{K}_{\mathbb{X}}(M)=(\mathbb{K}_{\mathbb{X}}^{L}(M))^{*}\cap\mathbb{K}_{\mathbb{X}}^{L}(M)=(\mathbb{K}_{\mathbb{X}}^{L}(M))^{*}\mathbb{K}_{\mathbb{X}}^{L}(M)\subset\mathbb{B}(L^{2}M)\] be the hereditary \(\mathrm{C}^{*}\)-subalgebra associated to \(\mathbb{K}_{\mathbb{X}}^{L}(M)\), and note that both \(M\) and \(M^{\prime}\) are in its multiplier algebra. We also define \[\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)=\overline{\overline{\mathbb{K}_{\mathbb{X}}(M)}^{M-M}}^{M^{\prime}-M^{\prime}},\] which agrees with the closure in the topology given by the seminorms \(r_{\mu}\) defined above. By Lemma 3.4, if \(T\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)_{+}\), then there exists a net of positive contractions \(z_{i}\in M\) converging ultrastrongly to \(1\) so that in uniform norm we have \(d(z_{i}Tz_{i},\mathbb{X})\to 0\), and hence also \(d((z_{i}Tz_{i})^{1/2},\mathbb{X})\to 0\). By considering the polar decomposition we then have \(d(T^{1/2}z_{i},\mathbb{B}(L^{2}M)\mathbb{X})\to 0\), and hence \(T^{1/2}\in\mathbb{K}_{\mathbb{X}}^{L}(M)\), which shows that \(T\in(\mathbb{K}_{\mathbb{X}}^{L}(M))^{*}\cap\mathbb{K}_{\mathbb{X}}^{L}(M)=\mathbb{K}_{\mathbb{X}}(M)\). 
Thus, we have the equality of positive cones \[\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)_{+}=\mathbb{K}_{\mathbb{X}}(M)_{+}.\] In particular, for a Hilbert space \(\mathcal{H}\), an operator \(T\in\mathbb{B}(L^{2}M,\mathcal{H})\) is contained in \(\mathbb{K}_{\mathbb{X}}^{L}(M,\mathcal{H})\) if and only if \(|T|\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\). The small-at-infinity boundary of \(M\) with respect to the \(M\)-boundary piece \(\mathbb{X}\) is defined as \[\mathbb{S}_{\mathbb{X}}(M)=\{T\in\mathbb{B}(L^{2}M)\mid[T,x]\in\mathbb{K}_{ \mathbb{X}}^{\infty,1}(M),\forall x\in M^{\prime}\}.\] Since the operator space \(\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) is typically not a C\({}^{*}\)-algebra, the space \(\mathbb{S}_{\mathbb{X}}(M)\) is also typically not a C\({}^{*}\)-algebra, though it is a normal operator \(M\)-system. In the case when \(\mathbb{X}=\mathbb{K}(L^{2}M)\), we will denote \(\mathbb{K}_{\mathbb{X}}^{L}(M)\), \(\mathbb{K}_{\mathbb{X}}(M)\), \(\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\), and \(\mathbb{S}_{\mathbb{X}}(M)\) by \(\mathbb{K}^{L}(M)\), \(\mathbb{K}(M)\), \(\mathbb{K}^{\infty,1}(M)\), and \(\mathbb{S}(M)\), respectively. One reason for the utility of this definition for the small-at-infinity boundary is that if \(T\in\mathbb{B}(L^{2}M)\) and we consider the set \[P=\{x\in M^{\prime}\mid[T,x]\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\},\] then this set is always a von Neumann subalgebra of \(M^{\prime}\), and hence to show that \(T\in\mathbb{S}_{\mathbb{X}}(M)\) it suffices to check that \([T,x]\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for \(x\) in a set of generators for \(M^{\prime}\). When \(M\) is a tracial von Neumann algebra this is Lemma 6.1 in [4], but the proof works in general. 
Indeed, it is easy to see that \(P\) is a C\({}^{*}\)-algebra containing the unit, and if \(x\) is in the ultrastrong\({}^{*}\) closure of \(P\) and \(\omega\in M_{*}\) is a positive linear functional, then taking a net \(x_{i}\in P\) so that \(x_{i}\) converges to \(x\) in the ultrastrong\({}^{*}\)-topology, we have \(\omega((x-x_{i})^{*}(x-x_{i})),\omega((x-x_{i})(x-x_{i})^{*})\to 0\) and hence \[d_{r_{\omega}}([T,x],\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))\leq r_{\omega}(T(x-x_{i}))+r_{\omega}((x-x_{i})T)\to 0.\] Just as we could associate to any \(M\)-C\({}^{*}\)-algebra \(A\) a universal von Neumann algebra \(A^{\sharp*}\) that contains \(M\) as a von Neumann subalgebra, we can also associate to \(\mathbb{B}(L^{2}M)\) a von Neumann algebra \(\mathbb{B}(L^{2}M)^{\sharp*}_{J}\) containing both \(M\) and \(JMJ\) as von Neumann subalgebras. Specifically, we let \(\mathbb{B}(L^{2}M)^{\sharp*}_{J}\) be the corner \(p_{\mathrm{nor}}q_{\mathrm{nor}}\mathbb{B}(L^{2}M)^{**}q_{\mathrm{nor}}p_{\mathrm{nor}}\), where \(p_{\mathrm{nor}}\in M^{**}\) denotes the projection corresponding to the support of the identity representation \(M\to M\), and \(q_{\mathrm{nor}}\in(JMJ)^{**}\) denotes the projection corresponding to the support of the identity representation \(JMJ\to JMJ\). If \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is a boundary piece, then we will also let \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J}\) denote \(p_{\mathrm{nor}}q_{\mathrm{nor}}\mathbb{K}_{\mathbb{X}}(M)^{**}q_{\mathrm{nor}}p_{\mathrm{nor}}\) so that we have a natural identification \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J}=q_{\mathbb{X}}\mathbb{B}(L^{2}M)^{\sharp*}_{J}q_{\mathbb{X}}\) where \(q_{\mathbb{X}}\in\mathbb{B}(L^{2}M)^{\sharp*}_{J}\) denotes the support projection of \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J}\). 

## 4. \(M\)-nuclearity 

Let \(M\) be a von Neumann algebra, \(F\) an operator system or C\({}^{*}\)-algebra, and \(E\) an operator \(M\)-system or an \(M\)-C\({}^{*}\)-algebra. We say a c.c.p. 
map \(\phi:F\to E\) is \(M\)-nuclear if there exist nets of c.c.p. maps \(\phi_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) such that \(\psi_{i}\circ\phi_{i}(x)\) converges to \(\phi(x)\) in the \(M\)-topology for any \(x\in F\). By [10, Theorem 3.7] and a standard convexity argument, this is equivalent to the existence of such maps \(\phi_{i}\) and \(\psi_{i}\) so that \(\psi_{i}\circ\phi_{i}(x)\) converges to \(\phi(x)\) in the weak \(M\)-topology for any \(x\in F\). More generally, if \(M_{0}\subset M\) is an ultraweakly dense C\({}^{*}\)-subalgebra and \(E\) is an \(M_{0}\)-system or \(M_{0}\)-C\({}^{*}\)-algebra, we then say a c.c.p. map \(\phi:F\to E\) is \((M_{0}\subset M)\)-nuclear if there exist nets of c.c.p. maps \(\phi_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) such that \(\psi_{i}\circ\phi_{i}(x)\) converges to \(\phi(x)\) in the \((M_{0}\subset M)\)-topology for any \(x\in F\). Equivalently, by Proposition 3.3, there exist nets of c.c.p. maps \(\phi_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) such that \(\psi_{i}\circ\phi_{i}(x)\) converges to \(\phi(x)\) in the weak \((M_{0}\subset M)\)-topology. We remark that if \(F\subset\mathbb{B}(\mathcal{H})\), then in the definition of \((M_{0}\subset M)\)-nuclearity we may always take the map \(\phi_{i}\) to be a compression onto a finite-dimensional subspace \(\mathcal{K}\subset\mathcal{H}\), i.e., we may take \(\phi_{i}\) to be of the form \(F\ni x\mapsto P_{\mathcal{K}}xP_{\mathcal{K}}\in\mathbb{B}(\mathcal{K})\cong\mathbb{M}_{\mathrm{dim}(\mathcal{K})}(\mathbb{C})\). This can be seen easily from the following general lemma. **Lemma 4.1**.: _Let \(F\subset\mathbb{B}(\mathcal{H})\) be a \(\mathrm{C}^{*}\)-algebra or operator system and \(\phi:F\to\mathbb{M}_{k}(\mathbb{C})\) a c.c. (resp. c.p., u.c.p.) map. 
There exists a net of finite-dimensional subspaces \(\mathcal{K}_{i}\subset\mathcal{H}\) and c.c. (resp. c.p., u.c.p.) maps \(\phi_{i}:\mathbb{B}(\mathcal{K}_{i})\to\mathbb{M}_{k}(\mathbb{C})\) so that \((\phi_{i}\circ\mathrm{Ad}(P_{\mathcal{K}_{i}}))_{|F}\) converges pointwise in norm to \(\phi\)._ Proof.: Let \(\tilde{\phi}:\mathbb{B}(\mathcal{H})\to\mathbb{M}_{k}(\mathbb{C})\) be a c.c. extension of \(\phi\), and let \(\phi_{i}:\mathbb{B}(\mathcal{H})\to\mathbb{M}_{k}(\mathbb{C})\) be normal c.c. maps so that \(\phi_{i}\to\tilde{\phi}\) in the point-norm topology. Since each \(\phi_{i}\) is normal, if we consider the family of finite-dimensional subspaces of \(\mathcal{H}\) to be a net ordered by inclusion, then for each \(x\in F\) we have norm convergence \(\phi_{i}(x)=\lim_{\mathcal{K}\to\infty}\phi_{i}\circ\mathrm{Ad}(P_{\mathcal{K}})(x)\), and hence \(\phi(x)=\lim_{i\to\infty}\lim_{\mathcal{K}\to\infty}\phi_{i}\circ\mathrm{Ad}(P_{\mathcal{K}})(x)\). The proof in the c.p. case or the u.c.p. case, when \(F\) is an operator system, is the same. 

Similar to the usual notion of nuclear maps, in the unital case we may consider u.c.p. maps instead of c.c.p. maps. 

**Lemma 4.2**.: _Let \(M\) be a von Neumann algebra with \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Suppose \(F\) is an operator system, \(E\) is an \(M_{0}\)-system, and \(\phi:F\to E\) is a u.c.p. \((M_{0}\subset M)\)-nuclear map. There exists a net of u.c.p. maps \(\phi_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) such that \(\psi_{i}\circ\phi_{i}(x)\) converges to \(\phi(x)\) in the \((M_{0}\subset M)\)-topology for any \(x\in F\)._ 

Proof.: Let \(\tilde{\phi}_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\tilde{\psi}_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) be c.c.p. maps such that \(\tilde{\psi}_{i}\circ\tilde{\phi}_{i}(x)-\phi(x)\to 0\) in the \((M_{0}\subset M)\)-topology for any \(x\in F\). 
By Lemma 2.2.5 in [1], there exist u.c.p. maps \(\phi_{i}:F\to\mathbb{M}_{n(i)}(\mathbb{C})\) so that \(\tilde{\phi}_{i}(x)=\tilde{\phi}_{i}(1)^{1/2}\phi_{i}(x)\tilde{\phi}_{i}(1)^{1/2}\). Note that \(\tilde{\psi}_{i}(\tilde{\phi}_{i}(1))\) is a net of positive contractions that converges to \(1\) in the \((M_{0}\subset M)\)-topology. By Lemma 3.4 we may find a net of positive contractions \(1-z_{i}\in M_{0}\), so that \(\|z_{i}(1-\tilde{\psi}_{i}(\tilde{\phi}_{i}(1)))z_{i}\|\to 0\), and \(z_{i}\to 1\) ultrastrongly in \(M\). It therefore follows that \(1-z_{i}\tilde{\psi}_{i}(\tilde{\phi}_{i}(1))z_{i}\) converges to \(0\) in the \((M_{0}\subset M)\)-topology. Hence, if we fix states \(\eta_{i}\) on \(\mathbb{M}_{n(i)}(\mathbb{C})\) and set \[\psi_{i}(T)=z_{i}\tilde{\psi}_{i}(\tilde{\phi}_{i}(1)^{1/2}T\tilde{\phi}_{i}(1)^{1/2})z_{i}+(1-z_{i}\tilde{\psi}_{i}(\tilde{\phi}_{i}(1))z_{i})\eta_{i}(T),\] then \(\psi_{i}\) are u.c.p. and \(\psi_{i}\circ\phi_{i}\) converges to \(\phi\) in the point-\((M_{0}\subset M)\)-topology. 

We also remark that for a bimodular map between \(M_{0}\)-systems \(E\) and \(F\), the convergence in the definition of \((M_{0}\subset M)\)-nuclearity can be strengthened in the following sense. 

**Lemma 4.3**.: _Let \(M\) be a von Neumann algebra, \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Suppose \(M_{0}\subset E\) and \(M_{0}\subset F\) are two \(M_{0}\)-systems, and \(\phi:E\to F\) is \(M_{0}\)-bimodular and u.c.p. Then \(\phi\) is \((M_{0}\subset M)\)-nuclear if and only if there exist nets of u.c.p. maps \(\phi_{i}:E\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to F\) so that \(\psi_{i}\circ\phi_{i}\) converges in the point-\((M_{0}\subset M)\)-topology to \(\phi\), and such that for each \(x\in M_{0}\) there exist \(T_{i}\in F\) and \(a_{i}\in M_{0}\) so that \(\psi_{i}\circ\phi_{i}(x)-x=T_{i}-a_{i}\) and we have_ 1. \(\|T_{i}\|\to 0\)_._ 2. \(\sup_{i}\|a_{i}\|<\infty\)_._ 3. 
\(a_{i}\to 0\) _ultrastrongly._ 

Proof.: We let \(\phi_{i}^{0}:E\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}^{0}:\mathbb{M}_{n(i)}(\mathbb{C})\to F\) denote u.c.p. maps so that \(\psi_{i}^{0}\circ\phi_{i}^{0}\) converges to the identity in the point-\((M_{0}\subset M)\)-topology. Fix a finite set \(X=\{x_{1},\ldots,x_{m}\}\subset M_{0}\) and a finite set of states \(S=\{\omega_{1},\ldots,\omega_{n}\}\subset M_{*}\). Then, we may apply Lemma 3.4 to \(\{\psi_{i}^{0}\circ\phi_{i}^{0}(x_{j})-x_{j}\}_{i}\) and \(S\), and we obtain positive contractions \(\{1-z_{i}\}_{i}\subset M_{0}\) such that \(\lim_{i}\max_{1\leq k\leq n}\omega_{k}(1-z_{i})=0\) and \(\lim_{i}\max_{1\leq j\leq m}\|z_{i}(\psi_{i}^{0}\circ\phi_{i}^{0}(x_{j})-x_{j})z_{i}\|=0\). We now fix a state \(\eta\in F^{M_{0}\sharp M_{0}}\) and define the u.c.p. map \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to F\) by \(\psi_{i}(T)=z_{i}\psi_{i}^{0}(T)z_{i}+(1-z_{i}^{2})\eta(T)\). We then have \(\psi_{i}\circ\phi_{i}^{0}(x_{j})-x_{j}=z_{i}(\psi_{i}^{0}\circ\phi_{i}^{0}(x_{j})-x_{j})z_{i}+(1-z_{i}^{2})\eta(\phi_{i}^{0}(x_{j}))-(x_{j}-z_{i}x_{j}z_{i})\). Note that \(c_{i,j}:=(x_{j}-z_{i}x_{j}z_{i})-(1-z_{i}^{2})\eta(\phi_{i}^{0}(x_{j}))\in M_{0}\) satisfies \(\|c_{i,j}\|\leq 1+2\|x_{j}\|\), and \(\lim_{i\to\infty}\omega_{k}(c_{i,j}^{*}c_{i,j})=0\) for each \(1\leq j\leq m\) and \(1\leq k\leq n\). Since \(X\) and \(S\) were arbitrary finite sets, the result then follows. 

Nuclearity of a map with respect to the \((M_{0}\subset M)\)-topology can be reformulated as a weak nuclearity property by adapting the argument from [10, Lemma 2.8(i)] and using the appropriate normal bidual. 

**Lemma 4.4**.: _Let \(M_{0}\) be an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra of a von Neumann algebra \(M\), and let \(F\) be a normal \(M_{0}\)-system, or a normal \(M_{0}\)-\(\mathrm{C}^{*}\)-algebra. If \(E\) is an operator system or \(\mathrm{C}^{*}\)-algebra and \(\theta:E\to F\) is a c.c.p. 
map, then the following conditions are equivalent:_ 1. _The c.c.p. map_ \(\theta\) _is_ \((M_{0}\subset M)\)_-nuclear._ 2. _The c.c.p. map_ \(i_{F}\circ\theta:E\to(F^{M_{0}\sharp M_{0}})^{*}\) _is weakly nuclear._ 3. _The c.c.p. map_ \(i_{F}\circ\theta:E\to(F^{M_{0}\sharp M_{0}})^{*}\) _is_ \((M_{0}\subset M)\)_-nuclear._ 

Proof.: Clearly \((1)\implies(3)\implies(2)\), so we need only show \((2)\implies(1)\). Notice that for each c.c.p. map \(\phi:\mathbb{M}_{n}(\mathbb{C})\to F^{\sharp*}\), there exists a net of c.c.p. maps \(\phi_{i}:\mathbb{M}_{n}(\mathbb{C})\to F\) such that \(\phi_{i}\to\phi\) in the point-weak\({}^{*}\) topology. Indeed, since \((\mathbb{M}_{n}(\mathbb{C})\otimes F)^{**}\) is completely isometrically isomorphic to \(\mathbb{M}_{n}(\mathbb{C})\otimes F^{**}\) [1, Proposition B.16], we have a completely isometric isomorphism \((\mathbb{M}_{n}(\mathbb{C})\otimes F)^{\sharp*}\cong\mathbb{M}_{n}(\mathbb{C})\otimes F^{\sharp*}\). Moreover, since c.p. maps from \(\mathbb{M}_{n}(\mathbb{C})\) to \(F\) (resp. \(F^{\sharp*}\)) are in one-to-one correspondence with elements in the positive cone of \(\mathbb{M}_{n}(F)\) (resp. \(\mathbb{M}_{n}(F^{\sharp*})\)), it then follows from the density of \(\mathbb{M}_{n}(F)\subset\mathbb{M}_{n}(F)^{\sharp*}\) that we may approximate \(\phi\) with c.c.p. maps \(\phi_{i}:\mathbb{M}_{n}(\mathbb{C})\to F\) in the point-weak\({}^{*}\) topology. Since the weak\({}^{*}\) topology on \(i_{F}(F)\subset F^{\sharp*}\) agrees with the weak \((M_{0}\subset M)\)-topology on \(F\), the result then follows. 

**Lemma 4.5**.: _Let \(M\) be a von Neumann algebra and \(M_{0}\subset M\) a weakly dense \(\mathrm{C}^{*}\)-subalgebra. Let \(E\) and \(F\) be operator \(M_{0}\)-systems, and suppose \(\theta_{i}:E\to F\) are u.c.p. 
maps such that \(\theta_{i}|_{M_{0}}\) converges to the identity in the point-weak \((M_{0}\subset M)\)-topology; then for each \(a,b\in M_{0}\) and \(x\in E\) we have \(\theta_{i}(axb)-a\theta_{i}(x)b\to 0\) in the weak \((M_{0}\subset M)\)-topology._ 

Proof.: Let \(\theta:E\to(F^{M_{0}\sharp M_{0}})^{*}\) be any point-weak\({}^{*}\)-limit point of \(\{\theta_{i}\}\). Since \(\theta_{i}|_{M_{0}}\) converges to the identity in the point-weak \((M_{0}\subset M)\)-topology we have \(\theta_{|M_{0}}=\mathrm{id}\), and since \(\theta\) is u.c.p., we then have that \(\theta\) is \(M_{0}\)-bimodular. Thus, for all \(x\in E\) and \(a,b\in M_{0}\) we have \(\theta(axb)=a\theta(x)b\). Since \(\theta\) was an arbitrary point-weak\({}^{*}\)-limit point, the result follows. 

The following theorem uses ideas in the same spirit as Section 2 of [10], Theorem 4.7 in [1], and Theorem 1 in [1]. 

**Theorem 4.6**.: _Let \(M\) be a separable von Neumann algebra, \(M_{0}\subset M\) a unital weakly dense \(\mathrm{C}^{*}\)-subalgebra, and \(E\) a normal \(M_{0}\)-system. The following conditions are equivalent:_ 1. _The inclusion_ \(M_{0}\subset E\) _is_ \((M_{0}\subset M)\)_-nuclear._ 2. _There exists an amenable von Neumann algebra_ \(R\)_, and normal u.c.p. maps_ \(\Phi:M\to R\) _and_ \(\Psi:R\to(E^{M_{0}\sharp M_{0}})^{*}\) _so that_ \(\Psi\circ\Phi(x)=i_{E}(x)\) _for all_ \(x\in M_{0}\)_._ 

Proof.: Since injectivity for a von Neumann algebra is equivalent to semi-discreteness, the implication \((2)\implies(1)\) then follows from Lemma 4.4. We now suppose that (1) holds. Since \(M\) is separable, we may choose an increasing sequence of finite-dimensional operator systems \(E_{n}\) inside \(M_{0}\) so that \(M_{00}:=\cup_{n}E_{n}\subset M_{0}\) is ultraweakly dense. By \((M_{0}\subset M)\)-nuclearity of the inclusion map \(j_{M_{0}}:M_{0}\to E\), we have nets of u.c.p. 
maps \(\phi_{i}:M_{0}\to\mathbb{M}_{k(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{k(i)}(\mathbb{C})\to E\) so that \(\psi_{i}\circ\phi_{i}\to j_{M_{0}}\) in the point-\((M_{0}\subset M)\)-topology. By Arveson's Extension Theorem we may assume that each \(\phi_{i}\) is defined on \(E\). Denote by \(\mu\) a normal faithful state on \(M\) and \(\mu_{0}\) the restriction of \(\mu\) to \(M_{0}\). For each \(n\geq 1\), we inductively choose u.c.p. maps \(\phi_{n}:E\to\mathbb{M}_{k(n)}(\mathbb{C})\) and \(\psi_{n}:\mathbb{M}_{k(n)}(\mathbb{C})\to E\) so that setting \(\theta_{n}=\psi_{n}\circ\phi_{n}\), we have \[s_{\mu_{0}}(\theta_{n}(a)-a)<2^{-n+3}\|a\|,\quad a\in E_{n}, \tag{2}\] and \[s_{\mu_{0}}(\theta_{n}\circ\cdots\circ\theta_{n-j}(\theta_{n-j-1}(a)-a))<2^{-n-j}\|a\|,\quad a\in E_{n-j-1},\ 0\leq j\leq n-2. \tag{3}\] Indeed, (2) follows directly from \((M_{0}\subset M)\)-nuclearity. We now verify (3) in the case of \(j=0\); the general case follows similarly. Notice that since \(s_{\mu_{0}}(\theta_{n-1}(a)-a)<2^{-n+2}\|a\|\) for \(a\in E_{n-1}\), we may find \(a_{1},a_{2}\in M_{0}\) and \(b\in E\) such that \(\theta_{n-1}(a)-a=a_{1}^{*}ba_{2}\) and \(\mu(a_{1}^{*}a_{1})^{1/2}\|b\|\mu(a_{2}^{*}a_{2})^{1/2}<2^{-n+1}\|a\|\). Thus by Lemma 4.5 and Proposition 3.3, we may, after passing to convex combinations, choose \(\theta_{n}\) that satisfies \(s_{\mu_{0}}(\theta_{n}(a_{1}^{*}ba_{2})-a_{1}^{*}\theta_{n}(b)a_{2})<2^{-n-1}\|a\|\) for all \(a\in E_{n-1}\), and it follows that \[s_{\mu_{0}}(\theta_{n}(\theta_{n-1}(a)-a)) \leq s_{\mu_{0}}(\theta_{n}(a_{1}^{*}ba_{2})-a_{1}^{*}\theta_{n}(b)a_{2})+s_{\mu_{0}}(a_{1}^{*}\theta_{n}(b)a_{2})\] \[\leq 2^{-n-1}\|a\|+\mu(a_{1}^{*}a_{1})^{1/2}\|b\|\mu(a_{2}^{*}a_{2})^{1/2}\leq 2^{-n}\|a\|.\] Therefore, the u.c.p. maps \(\sigma_{n}:=\phi_{n+1}\circ\psi_{n}:\mathbb{M}_{k(n)}(\mathbb{C})\to\mathbb{M}_{k(n+1)}(\mathbb{C})\) connect the matrix algebras into an inductive system. Let \(S\) be the operator system inductive limit of \((\mathbb{M}_{k(n)}(\mathbb{C}),\sigma_{n})\) in the sense of [10, Section 2].
More precisely, if \(\tilde{S}\subset\prod_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\) is the norm closure of \[\left\{(x_{n})_{n}\in\prod_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\mid\text{ there exists }m\geq 1\text{ s.t. }x_{n+1}=\sigma_{n}(x_{n})\text{ for all }n\geq m\right\},\] then \(S=Q(\tilde{S})\), where \[Q:\prod_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\to\prod_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})/\sum_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\] is the quotient map. It is easy to see that \(\tilde{S}\) is nuclear. As operator spaces, we may identify \(S^{**}=p^{\perp}\tilde{S}^{**}\) where \(p\) is the central projection in the bidual of \(\prod_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\) given by the support of the ideal \(\sum_{n\geq 1}\mathbb{M}_{k(n)}(\mathbb{C})\). We therefore have that \(S^{**}\) is unitally completely order isomorphic to an amenable von Neumann algebra [1, Theorem 4.5], [1, Theorem 3.5]. Define \(\Phi:E\to S^{**}\) to be a point-weak\({}^{*}\) limit point of the sequence \[\phi_{m}:E\ni x\mapsto\phi_{m}(x)\in S\subset S^{**},\] where we view \(\phi_{m}(x)\in S\) via the canonical map \(\mathbb{M}_{k(m)}(\mathbb{C})\to S\). Let \(\tilde{\Psi}:\tilde{S}\to(E^{M_{0}\sharp M_{0}})^{*}\) be a point-weak\({}^{*}\) limit point of \(\tilde{\psi}_{m}:\tilde{S}\ni(x_{n})_{n}\mapsto\psi_{m}(x_{m})\in E\subset(E^{M_{0}\sharp M_{0}})^{*}\). It is not hard to see that \(\tilde{\Psi}=\Psi\circ Q\) for some u.c.p. map \(\Psi:S\to(E^{M_{0}\sharp M_{0}})^{*}\). We still denote by \(\Psi\) the unique weak\({}^{*}\) continuous extension \(\Psi:S^{**}\to(E^{M_{0}\sharp M_{0}})^{*}\).
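The estimate carried out next relies on a telescoping decomposition of \(a-(\theta_{n}\circ\cdots\circ\theta_{m})(a)\); the following identity is a routine verification, recorded here for convenience.

```latex
% Telescoping identity: writing T_0 = \mathrm{id} and
% T_j = \theta_n \circ \cdots \circ \theta_{n-j+1}, one has
% T_j - T_{j+1} = T_j \circ (\mathrm{id} - \theta_{n-j}),
% and summing over 0 <= j <= n-m gives, for m <= n,
\[
a-(\theta_{n}\circ\cdots\circ\theta_{m})(a)
  =\bigl(a-\theta_{n}(a)\bigr)
  +\sum_{j=1}^{n-m}(\theta_{n}\circ\cdots\circ\theta_{n-j+1})
     \bigl(a-\theta_{n-j}(a)\bigr).
\]
```

Applying a functional \(\varphi\) and the triangle inequality to this identity yields exactly the sum estimated in the next paragraph.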
Then for any \(a\in M_{00}\) with norm \(1\) and \(\varphi\in E^{M_{0}\sharp M_{0}}\), we have that \[|\varphi(a-\Psi\circ\Phi(a))| =|\varphi(a-\lim_{m}\lim_{n}\tilde{\psi}_{n}\circ\phi_{m}(a))|\] \[=\lim_{m}\lim_{n}|\varphi(a-(\theta_{n}\circ\cdots\circ\theta_{m})(a))|\] \[\leq\lim_{m}\lim_{n}\sum_{j=1}^{n-m}|\varphi((\theta_{n}\circ\cdots\circ\theta_{n-j+1})(a-\theta_{n-j}(a)))|+|\varphi(a-\theta_{n}(a))|.\] Since \(\varphi\in E^{M_{0}\sharp M_{0}}\), there exists some \(\kappa>0\) such that \(|\varphi(x)|\leq\kappa s_{\mu_{0}}(x)\) for \(x\in(E)_{2}\). It then follows that for \(m\) and \(n\) large enough, we have \[|\varphi((\theta_{n}\circ\cdots\circ\theta_{n-j+1})(a-\theta_{n-j}(a)))|\leq 2^{-n-j}\kappa,\] and hence \[|\varphi(a-\Psi\circ\Phi(a))|\leq\lim_{m}\lim_{n}\sum_{j=0}^{n-m}2^{-n-j}\kappa\leq\lim_{m}2^{-m+1}\kappa=0.\] From the above argument, we obtain a u.c.p. map \(\Phi_{M_{00}}:M_{00}\to S^{**}\) and a weak\({}^{*}\)-continuous u.c.p. map \(\Psi:S^{**}\to(E^{M_{0}\sharp M_{0}})^{*}\) such that \(\Psi\circ\Phi_{|M_{00}}=i_{M_{00}}\), where \(i_{M_{00}}:M_{00}\to E^{\sharp*}\) is the canonical embedding. Denote by \(\bar{\Phi}:(M_{00})^{**}\to S^{**}\) the unique weak\({}^{*}\) continuous extension of \(\Phi_{|M_{00}}\), and realize \(M=p(M_{00})^{**}\), where \(p\in(M_{00})^{**}\) is the support projection of the identity representation from \(M_{00}\) into \(M\). Let \(\{e_{i}\}_{i}\subset M_{00}\) be a net that converges to \(p\) in the weak\({}^{*}\) topology. Then for any linear functional \(\varphi\in E^{M_{0}\sharp M_{0}}\), we have \(\varphi(\Psi\circ\bar{\Phi}(p))=\varphi(\Psi(\lim_{i}\Phi(e_{i})))=\lim_{i}\varphi(e_{i})=\varphi(1)\), and hence \(\Psi\circ\bar{\Phi}(p)=1\). Thus, \(\bar{\Phi}_{|M}:M\to S^{**}\) is a normal u.c.p. map such that \(\Psi\circ\bar{\Phi}(x)=i_{E}(x)\) for all \(x\in M_{0}\). To move beyond the separable case, we will use the following technical version of the previous theorem.
The proof is essentially the same and so we will leave it to the reader. **Theorem 4.7**.: _Let \(M\) be a von Neumann algebra, \(M_{0}\subset M\) an ultraweakly dense C\({}^{*}\)-subalgebra, and \(E\) a normal \(M_{0}\)-system. If the inclusion \(M_{0}\subset E\) is \((M_{0}\subset M)\)-nuclear, then for every countably generated C\({}^{*}\)-subalgebra \(M_{00}\subset M_{0}\), and \(\sigma\)-finite projection \(p\in\mathcal{P}(M)\), there exists an amenable von Neumann algebra \(R\), and normal c.c.p. maps \(\Phi:M\to R\) and \(\Psi:R\to(E^{M_{0}\sharp M_{0}})^{*}\) so that \(\Psi\circ\Phi(p)=p\) and \(p\Psi\circ\Phi(x)p=pi_{E}(x)p\) for all \(x\in M_{0}\)._ **Proposition 4.8**.: _Let \(M\) be a von Neumann algebra, \(M_{0},M_{1}\subset M\) weakly dense C\({}^{*}\)-subalgebras with \(M_{0}\subset M_{1}\), and let \(E\) be a normal \(M_{1}\)-system. Consider the following conditions:_ 1. _The inclusion_ \(M_{0}\subset E\) _is_ \((M_{0}\subset M)\)_-nuclear._ 2. _The inclusion_ \(M_{1}\subset E\) _is_ \((M_{1}\subset M)\)_-nuclear._ 3. _The inclusion_ \(M_{0}\subset E\) _is_ \((M_{1}\subset M)\)_-nuclear._ _Then the implications (1) \(\implies\) (2) \(\implies\) (3) hold._ _Moreover, if \(M_{0}\) is locally reflexive, then conditions (2) and (3) are equivalent._ Proof.: If \(M\) is separable, and if condition (1) holds, then by Theorem 4.6 the canonical map \(i_{0}:M\to(E^{M_{0}\sharp M_{0}})^{*}\) is weakly nuclear. Since \(M_{0}\subset M_{1}\), the identity map \(\mathrm{id}:E\to E\) induces a normal completely positive contraction \(i_{1,0}:(E^{M_{0}\sharp M_{0}})^{*}\to(E^{M_{1}\sharp M_{1}})^{*}\) so that the canonical embedding \(i_{1}:M\to(E^{M_{1}\sharp M_{1}})^{*}\) is given by the composition \(i_{1}=i_{1,0}\circ i_{0}\). Hence, the map \(i_{1}\) is weakly nuclear and it follows from Lemma 4.4 that condition (2) holds.
For the general case, fix \(x_{1},\dots,x_{n}\in M_{1}\), \(\varepsilon>0\), and \(\eta_{1},\dots,\eta_{m}\in E^{M_{1}\sharp M_{1}}\subset E^{M_{0}\sharp M_{0}}\). Since each \(\eta_{i}\) may be implemented by a vector functional in some representation of \(E\) where \(M\) is normally represented, we may use the polarization identity to assume that each \(\eta_{i}\) is a state. Let \(p\) denote the supremum of the supports of \(\eta_{i|M}\), so that \(p\) is \(\sigma\)-finite. Since \(M_{0}\) is ultraweakly dense in \(M\) and since \(p\) is \(\sigma\)-finite, we may choose for each \(1\leq i\leq n\) a sequence \(a_{i,k}\in M_{0}\) so that \(pa_{i,k}p\to px_{i}p\) in the weak operator topology. We let \(M_{00}\) denote the \(\mathrm{C}^{*}\)-algebra generated by \(\{a_{i,k}\}_{1\leq i\leq n,k\geq 1}\), and by Theorem 4.7 there exists an amenable von Neumann algebra \(R\), and normal c.c.p. maps \(\Phi:M\to R\) and \(\Psi:R\to(E^{M_{0}\sharp M_{0}})^{*}\) so that \(\Psi\circ\Phi(p)=p\) and \(p\Psi\circ\Phi(x)p=pi_{E}(x)p\) for all \(x\in M_{0}\). Since \(p\) is in the multiplicative domain of the normal c.c.p. map \(\Psi\circ\Phi\), we then have, after compressing down to \((E^{M_{1}\sharp M_{1}})^{*}\), a weak operator topology limit \[p\Psi\circ\Phi(x_{i})p=p\Psi\circ\Phi(px_{i}p)p=\lim_{k\to\infty}p\Psi\circ\Phi(a_{i,k})p=px_{i}p\in(E^{M_{1}\sharp M_{1}})^{*}.\] In particular, for each \(1\leq i\leq n\) and \(1\leq j\leq m\) we have \(\eta_{j}(\Psi\circ\Phi(x_{i})-x_{i})=\eta_{j}(p(\Psi\circ\Phi(x_{i})-x_{i})p)=0\). Since \(R\) is semi-discrete, there then exist c.c.p. maps \(\phi_{0}:R\to\mathbb{M}_{d}(\mathbb{C})\) and \(\psi_{0}:\mathbb{M}_{d}(\mathbb{C})\to R\) so that for each \(1\leq i\leq n\) and \(1\leq j\leq m\) we have \(|\eta_{j}(\Psi\circ\psi_{0}\circ\phi_{0}\circ\Phi(x_{i})-x_{i})|<\varepsilon\). Hence \(M_{1}\subset E\) is \((M_{1}\subset M)\)-nuclear. The implication (2) \(\implies\) (3) is trivial.
Since weak nuclearity of the inclusion \(i_{1}:M\to(E^{M_{1}\sharp M_{1}})^{*}\) is a local property and since \(i_{1}\) is normal, if \(M_{0}\) is locally reflexive then we see that weak nuclearity of the inclusion \(M_{0}\) into \((E^{M_{1}\sharp M_{1}})^{*}\) implies that \(i_{1}\) is weakly nuclear and hence another application of Lemma 4.4 shows that (2) and (3) are equivalent in this case. The following special case of Proposition 4.8 will be used a number of times in the sequel, and so we explicitly state it as a corollary. **Corollary 4.9**.: _Let \(M\) be a von Neumann algebra with a dense \(\mathrm{C}^{*}\)-algebra \(M_{0}\subset M\). If \(E\) is a normal \(M\)-system and if the inclusion \(M_{0}\subset E\) is \((M_{0}\subset M)\)-nuclear, then the inclusion \(M\subset E\) is \(M\)-nuclear._ **Example 4.10**.: Let \(A\) be a \(\mathrm{C}^{*}\)-algebra, and let \(E\) be an \(A\)-system. Then the weak \((A\subset A^{**})\)-topology on \(E\) is simply the weak topology on \(E\), and hence the inclusion \(A\subset E\) is \((A\subset A^{**})\)-nuclear if and only if the inclusion \(A\subset E\) is nuclear. In particular, if we consider a universal representation \(\pi:A\to\mathbb{B}(\mathcal{H})\), then the inclusion \(\pi:A\to\mathbb{B}(\mathcal{H})\) is \((A\subset A^{**})\)-nuclear if and only if \(A\) is exact. However, we will see from Proposition 5.5 below that \(A^{**}\) is weakly exact if and only if the identity map from \(A^{**}=\pi(A)^{\prime\prime}\) to \(\mathbb{B}(\mathcal{H})\) is \(A^{**}\)-nuclear. Hence, the converse of Corollary 4.9 does not hold in general (see, e.g., Remark 14.1.3 in [1]). While the converse of Corollary 4.9 does not hold in general, we do have the following partial converse, which should be compared with Corollary 3.1.6 in [11]. **Proposition 4.11**.: _Let \(M\) be a separable von Neumann algebra and \(E\) a normal \(M\)-system. 
If \(M\subset E\) is \(M\)-nuclear, then for any separable \(\mathrm{C}^{*}\)-subalgebra \(M_{0}\subset M\) there exists a separable \(\mathrm{C}^{*}\)-subalgebra \(M_{1}\subset M\) containing \(M_{0}\) such that \(M_{1}\subset E\) is \((M_{1}\subset M)\)-nuclear._ Proof.: Choose \(\{a_{n}\}_{n}\subset M_{0}\) a countable norm dense subset. Since \(M\subset E\) is \(M\)-nuclear, there exist sequences of u.c.p. maps \(\phi_{k}:M_{0}\to\mathbb{M}_{n(k)}(\mathbb{C})\) and \(\psi_{k}:\mathbb{M}_{n(k)}(\mathbb{C})\to E\) such that \(\psi_{k}\circ\phi_{k}(a_{n})\to a_{n}\) in the \(M\)-topology for all \(n\in\mathbb{N}\). In particular, for each \(n\) and \(k\) we may write \(\psi_{k}\circ\phi_{k}(a_{n})-a_{n}=x_{n,k}T_{n,k}y_{n,k}\) with \(x_{n,k},y_{n,k}\in M_{+}\) and \(T_{n,k}\in E\), such that \(\lim_{k}s_{\rho}^{\omega}(x_{n,k}T_{n,k}y_{n,k})=0\) for any \(\rho,\omega\in M_{*,+}\). Set \(A_{1}=C^{*}(M_{0},\{x_{n,k},y_{n,k}\mid n\in\mathbb{N},k\in\mathbb{N}\})\) and note that \(M_{0}\subset E\) is \((A_{1}\subset M)\)-nuclear, implemented by \(\phi_{k}\) and \(\psi_{k}\).
Repeating this process inductively, we then have a sequence of separable \(\mathrm{C}^{*}\)-subalgebras \(M_{0}\subset A_{1}\subset A_{2}\subset\cdots\), such that \(A_{n}\subset E\) is \((A_{n+1}\subset M)\)-nuclear. Let \(M_{1}=\overline{\cup_{n}A_{n}}^{\|\cdot\|}\) and one checks that \(M_{1}\subset E\) is \((M_{1}\subset M)\)-nuclear. We end this section with a result showing the relationship between \(M\)-nuclearity and topologically amenable actions. To describe this we recall that if \(E\subset\mathbb{B}(\mathcal{H})\) is an operator space and \(I\) is a set, then \(\mathbb{M}_{I}(E)\subset\mathbb{B}(\mathcal{H}\overline{\otimes}\ell^{2}I)\) is the operator space that we may identify with those \(I\times I\) matrices with entries in \(E\) that correspond to bounded operators. This operator space is independent of the representation \(E\subset\mathbb{B}(\mathcal{H})\). Moreover, if \(\Gamma\!\curvearrowright^{\sigma}\!E\) by completely isometric isomorphisms, then we may choose a covariant representation \(E\subset\mathbb{B}(\mathcal{H})\) so that the equation \(\sigma_{t}(x)=\pi_{t}x\pi_{t}^{*}\) holds for \(x\in E\) and \(t\in\Gamma\) for some unitary representation \(\pi\). Then conjugation by \(\pi_{t}\otimes\rho_{t}\) gives an action of \(\Gamma\) on \(\mathbb{M}_{\Gamma}(E)\), and we let \(\mathbb{M}_{\Gamma}(E)^{\Gamma}\) denote the fixed points under this action. This is then a normal operator \(L\Gamma\)-bimodule, and if \(E\) is an operator system, then \(\mathbb{M}_{\Gamma}(E)^{\Gamma}\) will be a normal operator \(L\Gamma\)-system. Note however that \(\mathbb{M}_{\Gamma}(E)^{\Gamma}\) need not be a \(\mathrm{C}^{*}\)-algebra, even when \(E\) is a \(\mathrm{C}^{*}\)-algebra. **Theorem 4.12**.: _Let \(\Gamma\) be a group and suppose \(\Gamma\!\curvearrowright\!K\) is an action by homeomorphisms on a compact Hausdorff space.
The action \(\Gamma\!\curvearrowright\!K\) is topologically amenable if and only if the inclusion \(L\Gamma\subset\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\) is \(L\Gamma\)-nuclear._ Proof.: Note that by [1, Section 4.1] we have an operator space embedding \(C(K)\rtimes_{r}\Gamma\subset\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\) that takes \(C_{\lambda}^{*}\Gamma\) canonically onto \(C_{\lambda}^{*}\Gamma\subset L\Gamma\) and takes \(C(K)\) to diagonal matrices \(C(K)\ni f\mapsto\oplus_{t\in\Gamma}\sigma_{t^{-1}}(f)\in\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\), where \(\sigma:\Gamma\to\mathrm{Aut}(C(K))\) is the corresponding action \(\sigma_{s}(f)=f\circ s^{-1}\). If the action \(\Gamma\!\curvearrowright\!K\) is amenable, then the inclusion \(C_{\lambda}^{*}\Gamma\subset C(K)\rtimes_{r}\Gamma\) is nuclear [1, Section 4.3], and hence by Corollary 4.9 the inclusion \(L\Gamma\subset\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\) is \(L\Gamma\)-nuclear. For the converse, note that we have a \(\Gamma\)-equivariant conditional expectation onto the diagonal matrices \(E_{\ell^{\infty}\Gamma}:\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\to\ell^{\infty}(\Gamma;C(K))^{\Gamma}\), where we may identify the latter space, in a \(\Gamma\)-equivariant way, with \(C(K)\) by evaluating at the identity. Also note that \(E_{\ell^{\infty}\Gamma}\) restricts to the trace on \(L\Gamma\), and so if \(a,b\in L\Gamma\), \(T\in\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\), and \(\varphi\in C(K)^{*}\) is a state, then we have \(|\varphi\circ E_{\ell^{\infty}\Gamma}(a^{*}Tb)|\leq\|T\|(\varphi\circ E_{\ell^{\infty}\Gamma}(a^{*}a))^{1/2}(\varphi\circ E_{\ell^{\infty}\Gamma}(b^{*}b))^{1/2}=\|T\|\tau(a^{*}a)^{1/2}\tau(b^{*}b)^{1/2}\). It therefore follows that \(E_{\ell^{\infty}\Gamma}:\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\to C(K)\) is continuous from the \(L\Gamma\)-topology into the norm topology.
If \(L\Gamma\subset\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\) is \(L\Gamma\)-nuclear, \(F\subset\Gamma\) is finite, and \(\varepsilon>0\), then using Lemma 4.1 and the observations above there exists a finite subset \(E\subset\Gamma\) and a u.c.p. map \(\psi:\mathbb{M}_{E}(\mathbb{C})\to\mathbb{M}_{\Gamma}(C(K))^{\Gamma}\) so that setting \(h(t)=E_{\ell^{\infty}\Gamma}(\psi\circ\mathrm{Ad}(P_{\ell^{2}E})(\lambda_{t})\lambda_{t}^{*})\in C(K)\) we have \(\|1-h(t)\|_{\infty}<\varepsilon\) for each \(t\in F\). The function \(\Gamma\ni t\mapsto h(t)\in C(K)\) is easily seen to be of positive type (see the implication (4) \(\implies\) (2) in [1, Theorem 4.4.3]), and hence it follows that the action \(\Gamma\curvearrowright\!K\) is amenable. ## 5. Weakly exact von Neumann algebras Recall from [13, Definition 3.1.1] that for a von Neumann algebra \(M\) and an ultraweakly dense \(\mathrm{C}^{*}\)-algebra \(M_{0}\subset M\), the \(\mathrm{C}^{*}\)-algebra \(M_{0}\) is said to be weakly exact in \(M\) if for any unital \(\mathrm{C}^{*}\)-algebra \(B\) with a closed two-sided ideal \(J\) and any representation \(\pi:M_{0}\otimes B\to\mathbb{B}(\mathcal{K})\) with \(M_{0}\otimes J\subset\ker\pi\) that is ultraweakly continuous on \(M_{0}\otimes\mathbb{C}\), the induced representation \(\tilde{\pi}:M_{0}\odot B/J\to\mathbb{B}(\mathcal{K})\) is min-continuous. This notion generalizes weak exactness for von Neumann algebras and exactness for \(\mathrm{C}^{*}\)-algebras simultaneously: if \(M_{0}=M\) is a von Neumann algebra, then \(M\) is weakly exact in \(M\) if and only if \(M\) is weakly exact in the sense of Kirchberg ([10], see also [1, Chapter 14]); at the other extreme, if \(M=(M_{0})^{**}\), then \(M_{0}\) is weakly exact in \((M_{0})^{**}\) if and only if \(M_{0}\) is exact. In this section, we characterize weak exactness via \((M_{0}\subset M)\)-nuclearity, analogous to the case of exactness for \(\mathrm{C}^{*}\)-algebras.
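Schematically, weak exactness of \(M_{0}\) in \(M\) asks that every representation \(\pi\) as in the definition just recalled factor continuously through the minimal tensor product with the quotient \(B/J\); this is a sketch of the factorization, with \(q\) denoting the quotient map.

```latex
% Weak exactness of M_0 in M: every such pi (vanishing on M_0 \otimes J,
% ultraweakly continuous on M_0 \otimes \mathbb{C}) factors as pi = \tilde\pi \circ q:
\[
\begin{array}{ccc}
M_{0}\otimes B & \xrightarrow{\ \pi\ } & \mathbb{B}(\mathcal{K})\\
\big\downarrow{\scriptstyle q} & \nearrow{\scriptstyle\ \tilde{\pi}} & \\
M_{0}\otimes(B/J) & &
\end{array}
\]
```

Min-continuity of \(\tilde{\pi}\) is precisely the existence of the diagonal map on the full minimal tensor product, and Lemma 5.2 below rephrases this as the kernel inclusion \(\ker q\subset\ker\pi\).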
**Theorem 5.1**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. The following conditions are equivalent._ 1. _The_ \(\mathrm{C}^{*}\)_-algebra_ \(M_{0}\) _is weakly exact in_ \(M\)_._ 2. _There exist nets of c.c. maps_ \(\phi_{i}:M_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) _and_ \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) _such that_ \(\psi_{i}\circ\phi_{i}(a)\to a\) _in the_ \((M_{0}\subset M)\)_-topology for any_ \(a\in M_{0}\)_._ 3. _The inclusion_ \(M_{0}\subset\mathbb{B}(\mathcal{H})\) _is_ \((M_{0}\subset M)\)_-nuclear, i.e., there exist nets of c.c.p. maps_ \(\phi_{i}:M_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) _and_ \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) _such that_ \(\psi_{i}\circ\phi_{i}(a)\to a\) _in the_ \((M_{0}\subset M)\)_-topology for any_ \(a\in M_{0}\)_._ Before proceeding to the proof, we collect a few lemmas. **Lemma 5.2**.: _Let \(M\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Then \(M_{0}\) is weakly exact in \(M\) if and only if for any closed two-sided ideal \(J\vartriangleleft B\) and any \(\pi:M_{0}\otimes B\to\mathbb{B}(\mathcal{K})\) such that \(\pi_{|M_{0}\otimes\mathbb{C}}\) is ultraweakly continuous and \(M_{0}\otimes J\subset\ker\pi\), we have \(\ker q\subset\ker\pi\), where \(q\) is the quotient map \(q:M_{0}\otimes B\to M_{0}\otimes B/J\)._ Proof.: It is clear that \(\ker q\subset\ker\pi\) implies that \(\tilde{\pi}:M_{0}\odot B/J\to\mathbb{B}(\mathcal{K})\) is min-continuous. Conversely, suppose there is some \(x\in\ker q\setminus\ker\pi\). Then, we may find a sequence \(x_{n}\in M_{0}\odot B\) such that \(x_{n}\to x\) in norm. Note that \(\|q(x_{n})\|\to 0\) while \(\liminf_{n}\|\pi(x_{n})\|\geq\delta\) for some \(\delta>0\).
However, since \(x_{n}\in M_{0}\odot B\) and \(M_{0}\otimes J\subset\ker\pi\), we have \(\tilde{\pi}(q(x_{n}))=\pi(x_{n})\), which is a contradiction. **Lemma 5.3**.: _Let \(M\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra with \(1\not\in M_{0}\). Let \(\tilde{M}_{0}=M_{0}+\mathbb{C}1\subset M\). If \(M_{0}\) is weakly exact in \(M\), then \(\tilde{M}_{0}\) is also weakly exact in \(M\)._ Proof.: Let \(\pi:\tilde{M}_{0}\otimes B\to\mathbb{B}(\mathcal{K})\) and \(J\vartriangleleft B\) be given as in the definition of weak exactness. Denote by \(q:\tilde{M}_{0}\otimes B\to\tilde{M}_{0}\otimes B/J\) the quotient map. By Lemma 5.2, it suffices to show that \(\ker q\subset\ker\pi\). For \(x\in\ker q\), the \(3\times 3\) lemma shows that we may find \(y\in\tilde{M}_{0}\otimes J\) such that \(x-y\in\ker q_{|M_{0}\otimes B}\). Since \(M_{0}\) is weakly exact in \(M\), we have \(\ker q_{|M_{0}\otimes B}\subset\ker\pi_{|M_{0}\otimes B}\), and it follows that \(x\in\ker\pi\). **Proposition 5.4**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Then, \(M_{0}\) is weakly exact in \(M\) if and only if there exist nets of c.c. maps \(\phi_{i}:M_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) such that \(\psi_{i}\circ\phi_{i}(a)\to a\) in the \((M_{0}\subset M)\)-topology for any \(a\in M_{0}\)._ Proof.: The backward direction is a direct consequence of [13, Theorem 3.1.3, (1), (v)], Wittstock's extension theorem, and the fact that the restriction of the weak \((M_{0}\subset M)\)-topology to \(M_{0}\) coincides with the ultraweak topology on \(M_{0}\) inherited from \(M\). By a standard convexity argument we obtain that the convergence is in the \((M_{0}\subset M)\)-topology instead of the weak \((M_{0}\subset M)\)-topology.
For the forward direction, we follow the idea of [1, Proposition 3.7.8]. Let \(\pi\) and \(J\lhd B\) be given as in the definition. Denote by \(q:M_{0}\otimes B\to M_{0}\otimes B/J\) the quotient map, and it suffices to show \(\ker q\subset\ker\pi\). For any contraction \(x\in\ker q\), write \(D_{i}=\mathbb{M}_{n(i)}(\mathbb{C})\). Since \(D_{i}\) is exact, we have \(x_{i}:=\big{(}(\psi_{i}\circ\phi_{i})\otimes\operatorname{id}\big{)}(x)\in\mathbb{B}(\mathcal{H})\otimes J\). Let \(\{e_{n}\}\subset J\) be an approximate unit. Fix a unit vector \(\xi\in\mathcal{H}\) and \(\varepsilon>0\). By Lemma 3.4, we may find contractions \(z_{i}\in(M_{0})_{+}\) such that \(\|(z_{i}\otimes 1)(x_{i}-x)(z_{i}\otimes 1)\|\to 0\) and \(\langle\pi((1-z_{i})\otimes 1)\xi,\xi\rangle\to 0\). We may then choose \(e_{n}\) and \(z_{j}\) such that \[\|(z_{j}\otimes 1)(x_{j}-x)(z_{j}\otimes 1)\|<\varepsilon/8,\qquad\langle\pi((1-z_{j})\otimes 1)\xi,\xi\rangle<\varepsilon/8,\] and \[\|(1\otimes 1-1\otimes e_{n})((z_{j}\otimes 1)x_{j}(z_{j}\otimes 1))\|<\varepsilon/8.\] Note that \((1\otimes e_{n})(z_{j}\otimes 1)x(z_{j}\otimes 1)\in M_{0}\otimes J\) and \[\|(1\otimes e_{n})(z_{j}\otimes 1)x(z_{j}\otimes 1)-(z_{j}\otimes 1)x(z_{j}\otimes 1)\|<\varepsilon/2.\] It follows that \[|\langle\pi(x)\xi,\xi\rangle|=|\langle\pi\big{(}((1-z_{j})\otimes 1)x+(z_{j}\otimes 1)x((1-z_{j})\otimes 1)+(z_{j}\otimes 1)x(z_{j}\otimes 1)\big{)}\xi,\xi\rangle|\leq\varepsilon.\] Since \(\xi\) and \(\varepsilon\) are both arbitrary, by polarization we conclude \(x\in\ker\pi\). We next argue that in the unital case, one may take \(\phi_{i}\) and \(\psi_{i}\) in the above proposition to be u.c.p. maps. **Proposition 5.5**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra with \(1\in M_{0}\).
Then, \(M_{0}\) is weakly exact in \(M\) if and only if the inclusion \(M_{0}\subset\mathbb{B}(\mathcal{H})\) is \((M_{0}\subset M)\)-nuclear._ Proof.: That \((M_{0}\subset M)\)-nuclearity of the inclusion implies weak exactness follows from Proposition 5.4. To see the converse, take c.c. maps \(\psi_{i}\) and \(\phi_{i}\) as in Proposition 5.4. First notice that by Lemma 4.1 we may assume \(\phi_{i}=\operatorname{Ad}(P_{\mathcal{H}_{i}})\), where \(P_{\mathcal{H}_{i}}\) is the orthogonal projection onto a finite-dimensional subspace \(\mathcal{H}_{i}\subset\mathcal{H}\), by replacing \(\{\psi_{i}\}\) with another net of complete contractions. One then checks that \(\psi_{i}\circ\phi_{i}\to\operatorname{id}_{M_{0}}\) in the point-\((M_{0}\subset M)\)-topology after the replacement. For any finite set of states \(F\subset M_{0}^{\sharp}\), since \(\psi_{i}\circ\phi_{i}(1)=\psi_{i}(1_{\mathbb{M}_{n(i)}(\mathbb{C})})\to 1\) in the \((M_{0}\subset M)\)-topology, by Lemma 3.6 we obtain a net of u.c.p. maps \(\psi_{i,F}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) such that for any \(a\in M_{0}\) and \(\omega,\rho\in F\), \(\lim_{i}s_{\omega}^{\rho}(\psi_{i,F}\circ\phi_{i}(a)-a)\leq\lim_{i}s_{\omega}^{\rho}\big{(}(\psi_{i}-\psi_{i,F})(\phi_{i}(a))\big{)}+\lim_{i}s_{\omega}^{\rho}(\psi_{i}\circ\phi_{i}(a)-a)=0\). By ordering the collection of finite subsets of states in \(M_{0}^{\sharp}\) by inclusion, we may then produce nets of u.c.p. maps that show the inclusion \(M_{0}\subset\mathbb{B}(\mathcal{H})\) is \((M_{0}\subset M)\)-nuclear. Proof of Theorem 5.1.: Clearly \((3)\Longrightarrow(2)\), and \((2)\Longrightarrow(1)\) by Proposition 5.4. To see \((1)\Longrightarrow(3)\), note that by Lemma 5.3 we have \(\tilde{M}_{0}=M_{0}+\mathbb{C}1\) is weakly exact in \(M\) and hence Proposition 5.5 shows the inclusion \(\tilde{M}_{0}\subset\mathbb{B}(\mathcal{H})\) is \((\tilde{M}_{0}\subset M)\)-nuclear. By Lemma 4.3 there then exist nets of u.c.p.
maps \(\phi_{i}:\tilde{M}_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) so that for each \(x\in M_{0}\) there exist \(T_{i}\in\mathbb{B}(\mathcal{H})\) and \(a_{i}\in\tilde{M}_{0}\) with \(\psi_{i}\circ\phi_{i}(x)-x=T_{i}-a_{i}\) where \(\|T_{i}\|\to 0\), and \(\{a_{i}\}_{i}\) is uniformly bounded with \(a_{i}\to 0\) ultrastrongly. We fix an approximate unit \(\{e_{n}\}_{n}\subset M_{0}\) and notice that for fixed \(i\) and \(x\), \(\operatorname{Ad}(e_{n})\circ\psi_{i}\circ\phi_{i}(x)-x\) is asymptotically close in the uniform norm to \(e_{n}T_{i}e_{n}-e_{n}a_{i}e_{n}\). Since we have an ultrastrong limit \(\lim_{n\to\infty}e_{n}a_{i}e_{n}=a_{i}\) and since \(\operatorname{Ad}(e_{n})\circ\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{B}(\mathcal{H})\) are complete contractions, this then shows that \(M_{0}\subset\mathbb{B}(\mathcal{H})\) is \((M_{0}\subset M)\)-nuclear. In particular, we obtain the following characterization of weakly exact von Neumann algebras. **Corollary 5.6**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra. Then \(M\) is weakly exact if and only if the inclusion \(M\subset\mathbb{B}(\mathcal{H})\) is \(M\)-nuclear._ **Corollary 5.7**.: _Let \(M\) be a von Neumann algebra and \(M_{0}\subset M\) a weakly dense \(\mathrm{C}^{*}\)-subalgebra such that \(M_{0}\) is weakly exact in \(M\). Suppose \(E\) is a dual \(M_{0}\)-system. The inclusion \(M_{0}\subset E\) is \((M_{0}\subset M)\)-nuclear if and only if the inclusion \(M_{0}\subset E\) is weakly nuclear._ Proof.: Suppose \(M_{0}\subset E\) is weakly nuclear, and let \(\phi_{i}:M_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\) denote the c.c.p. maps realizing weak nuclearity for the inclusion. We extend \(\phi_{i}\) to c.c.p.
maps \(\tilde{\phi}_{i}:\mathbb{B}(L^{2}M)\to\mathbb{M}_{n(i)}(\mathbb{C})\) and let \(\Phi:\mathbb{B}(L^{2}M)\to E\) denote a point-ultraweak limit of the maps \(\psi_{i}\circ\tilde{\phi}_{i}\). Then \(\Phi\) is \(M_{0}\)-bimodular, and hence is continuous with respect to the \((M_{0}\subset M)\)-topologies. Since \(M_{0}\) is weakly exact in \(M\), we have that the inclusion \(M_{0}\subset\mathbb{B}(L^{2}M)\) is \((M_{0}\subset M)\)-nuclear, and composing the corresponding maps with \(\Phi\) then shows that the inclusion \(M_{0}\subset E\) is \((M_{0}\subset M)\)-nuclear. The following result answers Problem 10.4.3 from [1]. **Theorem 5.8**.: _Let \(M\) be a von Neumann algebra. Then \(M\) is weakly exact if and only if for every normal faithful representation \(M\subset\mathbb{B}(\mathcal{H})\) and every intermediate von Neumann algebra \(M\subset N\subset\mathbb{B}(\mathcal{H})\) for which there exists an \(M\)-bimodular u.c.p. map \(\phi:\mathbb{B}(\mathcal{H})\to N\), the inclusion \(M\subset N\) is weakly nuclear._ Proof.: Suppose \(M\) is faithfully and normally represented on some Hilbert space \(\mathcal{K}\). We represent \((\mathbb{B}(\mathcal{K}))^{\sharp*}\) faithfully and normally on a Hilbert space \(\mathcal{H}\), and notice that since \(\mathbb{B}(\mathcal{K})\) is injective (as an operator system) and we have an \(M\)-system embedding of \(\mathbb{B}(\mathcal{K})\) into \(\mathbb{B}(\mathcal{K})^{\sharp*}\), there then exists an \(M\)-bimodular u.c.p. map from \(\mathbb{B}(\mathcal{H})\) into \(\mathbb{B}(\mathcal{K})\subset\mathbb{B}(\mathcal{K})^{\sharp*}\). From Corollary 5.6 and Lemma 4.4 we see that if \(M\) is not weakly exact, then the inclusion of von Neumann algebras \(M\subset\mathbb{B}(\mathcal{K})^{\sharp*}\) is not weakly nuclear. The converse is Exercise 14.1.4 in [1].
Alternatively, when \(M\) is weakly exact we have that \(M\subset\mathbb{B}(\mathcal{H})\) is \(M\)-nuclear by Corollary 5.6, and since \(M\)-bimodular u.c.p. maps are continuous in the \(M\)-topology, if we are given an \(M\)-bimodular u.c.p. map \(\phi:\mathbb{B}(\mathcal{H})\to N\), it follows that the inclusion \(M\subset N\) is \(M\)-nuclear and hence is also weakly nuclear. ### Weak exactness and free products **Lemma 5.9**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra and let \(M_{0}\subset M\) be an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra. Suppose \(E\subset\mathbb{B}(\mathcal{H})\) is a normal \(M_{0}\)-system such that \(\mathbb{K}(\mathcal{H})\subset E\). If \(M_{0}\subset E\) is \((M_{0}\subset M)\)-nuclear, then for each vector state \(\varphi(\cdot)=\langle\cdot\xi,\xi\rangle\) there exist nets of u.c.p. maps \(\phi_{i}:M_{0}\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to E\), and pure states \(\mu_{i}\) on \(\mathbb{M}_{n(i)}(\mathbb{C})\) such that \(\mu_{i}\circ\phi_{i}=\varphi\) and \(\varphi\circ\psi_{i}=\mu_{i}\), and for each \(x\in M_{0}\) we have \(\psi_{i}\circ\phi_{i}(x)-x=T_{i}+a_{i}\), where \(\|T_{i}\|\to 0\), and \(a_{i}\in M_{0}\) is uniformly bounded with \(\varphi(a_{i}^{*}a_{i})\to 0\)._ Proof.: Let \(\mathcal{H}_{1}=\mathcal{H}\,\overline{\otimes}\,\ell^{2}\mathbb{N}\) and let \(E_{1}=E\otimes\mathbb{C}+\mathbb{K}(\mathcal{H}_{1})\subset\mathbb{B}(\mathcal{H}_{1})\). Then, the inclusion map \(M_{0}=M_{0}\otimes\mathbb{C}\subset E_{1}\) is also \((M_{0}\subset M)\)-nuclear and we also have \(M_{0}\cap\mathbb{K}(\mathcal{H}_{1})=\{0\}\). 
Note that \(E_{1}\) is a \(\mathbb{K}(\mathcal{H}_{1})\)-system, and we have \((E_{1}^{M_{0}\sharp M_{0}})^{*}=(\mathbb{K}(\mathcal{H}_{1})^{M_{0}\sharp M_ {0}})^{*}\oplus q_{\mathbb{K}(\mathcal{H}_{1})}^{\perp}(E_{1}^{M_{0}\sharp M_ {0}})^{*}\) where \(q_{\mathbb{K}(\mathcal{H}_{1})}\in E_{1}^{**}\) is the support projection for \(\mathbb{K}(\mathcal{H}_{1})\). Since \(M_{0}\subset E_{1}\) is \((M_{0}\subset M)\)-nuclear, we have that the inclusion map from \(M_{0}\) into \((E_{1}^{M_{0}\sharp M_{0}})^{*}\) is weakly nuclear. Since \((\mathbb{K}(\mathcal{H}_{1})^{M_{0}\sharp M_{0}})^{*}=\mathbb{K}(\mathcal{H}_ {1})^{**}\cong\mathbb{B}(\mathcal{H}_{1})\) and since \(M_{0}\cap\mathbb{K}(\mathcal{H}_{1})=\{0\}\), it then follows that the inclusion map from \(M_{0}+\mathbb{K}(\mathcal{H}_{1})\) into \((E_{1}^{M_{0}\sharp M_{0}})^{*}\) is also weakly nuclear, and hence the inclusion \(M_{0}+\mathbb{K}(\mathcal{H}_{1})\subset E_{1}\) is \((M_{0}\subset M)\)-nuclear. By Lemmas 4.1 and 4.3, there exist finite-dimensional subspaces \(\mathcal{K}_{i}\subset\mathcal{H}_{1}\), and u.c.p. maps \(\psi_{i}:\mathbb{B}(\mathcal{K}_{i})\to E_{1}\) so that setting \(\phi_{i}(x)=P_{\mathcal{K}_{i}}xP_{\mathcal{K}_{i}}\) we have that \(\psi_{i}\circ\phi_{i}\) converges to the identity in the point-\((M_{0}\subset M)\)-topology on \(M_{0}+\mathbb{K}(\mathcal{H}_{1})\), and such that for each \(x\in M_{0}\) we have \(\psi_{i}\circ\phi_{i}(x)-x=T_{i}+a_{i}\), where \(\|T_{i}\|\to 0\), and \(a_{i}\in M_{0}\) is uniformly bounded with \(\varphi(a_{i}^{*}a_{i})\to 0\). If we fix \(\xi\in\mathcal{H}\), then we may assume that \(\xi_{1}:=\xi\otimes\delta_{1}\in\mathcal{K}_{i}\) and we define a pure state \(\mu_{i}\) on \(\mathbb{B}(\mathcal{K}_{i})\) to be the vector state associated to \(\xi_{1}\). Note that we have \(\mu_{i}\circ\phi_{i}=\varphi\). 
We let \(P_{\xi_{1}}\) denote the rank-one projection onto \(\mathbb{C}\xi_{1}\), and note that we have \(\psi_{i}(P_{\xi_{1}})-P_{\xi_{1}}\to 0\) in the \((M_{0}\subset M)\)-topology, and hence as in the proof of Lemma 3.4 there exists a net of positive contractions \(z_{i}\in M_{0}\) so that \(\|z_{i}(\psi_{i}(P_{\xi_{1}})-P_{\xi_{1}})z_{i}\|\to 0\) and \(z_{i}\to 1\) ultrastrongly. Since \(z_{i}\to 1\) ultrastrongly, we have that \(\|z_{i}P_{\xi_{1}}z_{i}-P_{\xi_{1}}\|\to 0\), and hence if we fix a state \(\eta\in(E_{1})^{\sharp}\) such that \(\eta\) vanishes on \(\mathbb{K}(\mathcal{H}_{1})\) and define \(\psi_{i}^{\prime}(T)=z_{i}\psi_{i}(T)z_{i}+(1-z_{i}^{2})\eta(T)\), then we have \(\|\psi_{i}^{\prime}(P_{\xi_{1}})-P_{\xi_{1}}\|\to 0\), and \(\psi_{i}^{\prime}\circ\phi_{i}\) still satisfies the pointwise convergence described above. Since \(P_{\xi_{1}}\) is a rank-one projection, a standard additional perturbation in norm gives u.c.p. maps \(\psi_{i}^{\prime\prime}:\mathbb{M}_{n(i)}(\mathbb{C})\to E_{1}\) so that \(\psi_{i}^{\prime\prime}(P_{\xi_{1}})=P_{\xi_{1}}\). (Note that \(E_{1}\) is an operator \(\mathbb{K}(\mathcal{H}_{1})\)-system, so such a perturbation still maps into \(E_{1}\)). Since \(P_{\xi_{1}}\) is then in the multiplicative domain of \(\psi_{i}^{\prime\prime}\), we have that \(\varphi\circ\psi_{i}^{\prime\prime}(T)P_{\xi_{1}}=P_{\xi_{1}}\psi_{i}^{\prime\prime}(T)P_{\xi_{1}}=\psi_{i}^{\prime\prime}(P_{\xi_{1}}TP_{\xi_{1}})=\mu_{i}(T)P_{\xi_{1}}\) for all \(T\in\mathbb{M}_{n(i)}(\mathbb{C})\), i.e., \(\varphi\circ\psi_{i}^{\prime\prime}=\mu_{i}\). We may then compose \(\psi_{i}^{\prime\prime}\) with the compression map given by \(\mathrm{id}\otimes\mathrm{Ad}(P_{\delta_{1}}):E_{1}\to E\) (which is \(M\)-bimodular) to finish the proof. In order to set notation, we briefly recall the free product construction. We refer the reader to [20] for a more detailed discussion. 
If \((\mathcal{H}_{i},\xi_{i})\) are Hilbert spaces with distinguished unit vectors, for \(i=1,2\), then we set \(\mathcal{H}_{i}^{o}=\mathcal{H}_{i}\ominus\mathbb{C}\xi_{i}\). The Hilbert space free product is \((\mathcal{H},\xi)\) given by \[\mathcal{H}=\mathbb{C}\xi\oplus\bigoplus_{n\geq 1}\left(\bigoplus_{\iota_{1}\neq\iota_{2}\neq\cdots\neq\iota_{n}}\mathcal{H}_{\iota_{1}}^{o}\overline{\otimes}\,\cdots\overline{\otimes}\,\mathcal{H}_{\iota_{n}}^{o}\right).\] We also consider \[\mathcal{H}(i)=\mathbb{C}\xi\oplus\bigoplus_{n\geq 1}\left(\bigoplus_{\iota_{1}\neq\iota_{2}\neq\cdots\neq\iota_{n}\atop\iota_{1}\neq i}\mathcal{H}_{\iota_{1}}^{o}\overline{\otimes}\,\cdots\overline{\otimes}\,\mathcal{H}_{\iota_{n}}^{o}\right),\] and note that we have a canonical isomorphism \(\mathcal{H}\cong\mathcal{H}_{i}\,\overline{\otimes}\,\mathcal{H}(i)\). Via this isomorphism we obtain a representation \(\lambda_{i}:\mathbb{B}(\mathcal{H}_{i})\to\mathbb{B}(\mathcal{H})\). We may similarly consider the Hilbert space \[\mathcal{H}(r,i)=\mathbb{C}\xi\oplus\bigoplus_{n\geq 1}\left(\bigoplus_{\iota_{1}\neq\iota_{2}\neq\cdots\neq\iota_{n}\atop\iota_{n}\neq i}\mathcal{H}_{\iota_{1}}^{o}\overline{\otimes}\,\cdots\overline{\otimes}\,\mathcal{H}_{\iota_{n}}^{o}\right),\] and note that we again have a canonical isomorphism \(\mathcal{H}\cong\mathcal{H}(r,i)\,\overline{\otimes}\,\mathcal{H}_{i}\), so that we obtain another representation \(\rho_{i}:\mathbb{B}(\mathcal{H}_{i})\to\mathbb{B}(\mathcal{H})\). We let \(\omega_{i}\) denote the vector state on \(\mathbb{B}(\mathcal{H}_{i})\) given by \(\xi_{i}\). If \(A_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) are C\({}^{*}\)-algebras, then the reduced free product C\({}^{*}\)-algebra \((A_{1},\omega_{1})\ast_{r}(A_{2},\omega_{2})\) is the C\({}^{*}\)-subalgebra of \(\mathbb{B}(\mathcal{H})\) generated by \(\lambda_{1}(A_{1})\) and \(\lambda_{2}(A_{2})\). 
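To make the canonical isomorphism \(\mathcal{H}\cong\mathcal{H}_{i}\,\overline{\otimes}\,\mathcal{H}(i)\) concrete, one can write out the first few summands in the case \(i=1\) (this is only an unpacking of the definitions above, not additional structure): \[\mathcal{H}=\mathbb{C}\xi\oplus\mathcal{H}_{1}^{o}\oplus\mathcal{H}_{2}^{o}\oplus(\mathcal{H}_{1}^{o}\,\overline{\otimes}\,\mathcal{H}_{2}^{o})\oplus(\mathcal{H}_{2}^{o}\,\overline{\otimes}\,\mathcal{H}_{1}^{o})\oplus\cdots,\qquad\mathcal{H}(1)=\mathbb{C}\xi\oplus\mathcal{H}_{2}^{o}\oplus(\mathcal{H}_{2}^{o}\,\overline{\otimes}\,\mathcal{H}_{1}^{o})\oplus\cdots.\] Since \(\mathcal{H}_{1}=\mathbb{C}\xi_{1}\oplus\mathcal{H}_{1}^{o}\), the summand \(\mathbb{C}\xi_{1}\,\overline{\otimes}\,\mathcal{H}(1)\) is identified with the words in \(\mathcal{H}\) that do not begin with a letter from \(\mathcal{H}_{1}^{o}\), while \(\mathcal{H}_{1}^{o}\,\overline{\otimes}\,\mathcal{H}(1)\) is identified with those that do; together these exhaust \(\mathcal{H}\).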
If \(M_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) are von Neumann algebras, then the free product von Neumann algebra \((M_{1},\omega_{1})\ast(M_{2},\omega_{2})\) is the von Neumann algebra generated by \(\lambda_{1}(M_{1})\) and \(\lambda_{2}(M_{2})\). The representations \(\lambda_{i}\) give rise to a representation of the unital algebraic free product \(\mathbb{B}(\mathcal{H}_{1})\ast_{\mathbb{C}}\mathbb{B}(\mathcal{H}_{2})\), and if \(E_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) are operator systems, then we denote by \((E_{1},\omega_{1})\ast_{r}(E_{2},\omega_{2})\) the operator subsystem of \((\mathbb{B}(\mathcal{H}_{1}),\omega_{1})\ast_{r}(\mathbb{B}(\mathcal{H}_{2}),\omega_{2})\) generated as the closed span of applying the representations \(\lambda_{i}\) to all reduced words with letters in \(E_{1}\) and \(E_{2}\). If \(A_{i},B_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) are C\({}^{*}\)-subalgebras, and we have u.c.p. maps \(\phi_{i}:A_{i}\to B_{i}\) that preserve the state \(\omega_{i}\), then there is a unique u.c.p. map \((\phi_{1}\ast\phi_{2}):A_{1}\ast_{r}A_{2}\to B_{1}\ast_{r}B_{2}\) such that \((\phi_{1}\ast\phi_{2})(a_{1}b_{1}a_{2}\cdots a_{n}b_{n})=\phi_{1}(a_{1})\phi_{2}(b_{1})\cdots\phi_{1}(a_{n})\phi_{2}(b_{n})\), whenever \(a_{i}\in A_{1}\) with \(\omega_{1}(a_{i})=0\) for \(i>1\), and \(b_{i}\in A_{2}\) with \(\omega_{2}(b_{i})=0\) for \(i<n\). The following is adapted from [1, Lemma 2.4]. **Theorem 5.10**.: _For each \(i=1,2\), let \(M_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) be a von Neumann algebra with cyclic vector \(\xi_{i}\in\mathcal{H}_{i}\), and suppose \(A_{i}\subset M_{i}\) is an ultraweakly dense C\({}^{*}\)-algebra. Suppose \(E_{i}\subset\mathbb{B}(\mathcal{H}_{i})\) is an operator \(A_{i}\)-system such that \(\mathbb{K}(\mathcal{H}_{i})\subset E_{i}\). 
If for each \(i=1,2\), the inclusion \(A_{i}\subset E_{i}\) is \((A_{i}\subset M_{i})\)-nuclear, then the inclusion \((A_{1},\omega_{1})\ast_{r}(A_{2},\omega_{2})\subset(E_{1},\omega_{1})\ast_{r}(E_{2},\omega_{2})\) is \(((A_{1},\omega_{1})\ast_{r}(A_{2},\omega_{2})\subset(M_{1},\omega_{1})\ast(M_{2},\omega_{2}))\)-nuclear._ Proof.: By Lemma 5.9, for each \(i=1,2\) there exist two nets of state-preserving u.c.p. maps \(\phi_{k}^{i}:(M_{i},\omega_{i})\to(\mathbb{M}_{n(k,i)}(\mathbb{C}),\mu_{k}^{i})\) and \(\psi_{k}^{i}:(\mathbb{M}_{n(k,i)}(\mathbb{C}),\mu_{k}^{i})\to E_{i}\), where \(\mu_{k}^{i}\) is a pure state on \(\mathbb{M}_{n(k,i)}(\mathbb{C})\), such that if we set \(\theta_{k}^{i}=\psi_{k}^{i}\circ\phi_{k}^{i}\), then for any \(a\in A_{i}\), \(\theta_{k}^{i}(a)-a\) is of the form \[\theta_{k}^{i}(a)-a=T_{k}^{i}+a_{k} \tag{4}\] with \(a_{k}\in A_{i}\) satisfying \(\omega_{i}(a_{k}^{*}a_{k})\to 0\), and with \(T_{k}^{i}\in E_{i}\) converging to \(0\) in norm. We consider \[\phi_{(j,k)}=\phi_{j}^{1}\ast\phi_{k}^{2}:(A_{1},\omega_{1})\ast_{r}(A_{2},\omega_{2})\to(\mathbb{M}_{n(j,1)}(\mathbb{C}),\mu_{j}^{1})\ast_{r}(\mathbb{M}_{n(k,2)}(\mathbb{C}),\mu_{k}^{2})\] and \[\psi_{(j,k)}=\psi_{j}^{1}*\psi_{k}^{2}:(\mathbb{M}_{n(j,1)}(\mathbb{C}),\mu_{j}^{1})*_{r}(\mathbb{M}_{n(k,2)}(\mathbb{C}),\mu_{k}^{2})\to(E_{1},\omega_{1})*_{r}(E_{2},\omega_{2}).\] Notice that for \(x=a_{1}b_{1}a_{2}b_{2}\cdots a_{n}b_{n}\in A_{1}*_{r}A_{2}\), with \(a_{i}\in A_{1}\) satisfying \(\omega_{1}(a_{i})=0\) for \(i>1\), and \(b_{i}\in A_{2}\) satisfying \(\omega_{2}(b_{i})=0\) for \(i<n\), we have \(\psi_{(j,k)}\circ\phi_{(j,k)}(x)\to x\) in the point-weak-\((M_{0}\subset M)\)-topology, where \(M_{0}=A_{1}*_{r}A_{2}\) and \(M=M_{1}*M_{2}\). 
Indeed, for any \(\varphi\in(E_{1}*_{r}E_{2})^{M_{0}\sharp M_{0}}\), it follows from (4) that for each \(1\leq m\leq n\) we have \[|\varphi(\theta_{j}^{1}(a_{1})\theta_{k}^{2}(b_{1})\cdots\theta_{j}^{1}(a_{m})(\theta_{k}^{2}(b_{m})-b_{m})a_{m+1}\cdots a_{n}b_{n})|\to 0,\] and similarly we also have \[|\varphi(\theta_{j}^{1}(a_{1})\theta_{k}^{2}(b_{1})\cdots\theta_{k}^{2}(b_{m-1})(\theta_{j}^{1}(a_{m})-a_{m})b_{m}\cdots a_{n}b_{n})|\to 0.\] Taking closed spans then shows that \(\theta_{(j,k)}:=\psi_{(j,k)}\circ\phi_{(j,k)}\) converges pointwise in the weak-\((M_{0}\subset M)\)-topology. Since \((\mathbb{M}_{n(j,1)}(\mathbb{C}),\mu_{j}^{1})*_{r}(\mathbb{M}_{n(k,2)}(\mathbb{C}),\mu_{k}^{2})\) is nuclear [1], the result then follows. We note that the assumption that each \(E_{i}\) contains the compact operators is necessary even in the case when each \(M_{i}\) is finite-dimensional, since otherwise by taking \(E_{i}=M_{i}\) the previous result would falsely claim that reduced free products of finite-dimensional C\({}^{*}\)-algebras are nuclear. We recall that a state on a C\({}^{*}\)-algebra is said to be nondegenerate if the corresponding GNS-representation is faithful. Some special cases of the following corollary were previously obtained by Isono [14]. **Corollary 5.11**.: _For \(i=1,2\), let \(M_{i}\) be von Neumann algebras with nondegenerate normal states \(\omega_{i}\). If each \(M_{i}\) is weakly exact, then \(M=(M_{1},\omega_{1})*(M_{2},\omega_{2})\) is weakly exact._ Proof.: Using Corollary 5.6, we may set \(E_{i}=\mathbb{B}(L^{2}(M_{i},\omega_{i}))\) and then apply Theorem 5.10 and Corollary 4.9. Note that an increasing union of weakly exact von Neumann algebras is again weakly exact. When the union is separable, this is [14, Proposition 4.1.2]. The general case also follows from [14, Proposition 4.1.2], but one needs to then apply Corollary 4.9 and use Corollary 5.6. 
It therefore follows that the free product of an arbitrary family of weakly exact von Neumann algebras with nondegenerate states is again weakly exact. ## 6. Biexact von Neumann algebras We recall from Section 3.2 that if \(M\) is a von Neumann algebra and \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is a boundary piece, then the small-at-infinity boundary relative to \(\mathbb{X}\) is the normal operator \(M\)-system \[\mathbb{S}_{\mathbb{X}}(M)=\{T\in\mathbb{B}(L^{2}M)\mid[T,x]\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M),\forall x\in M^{\prime}\}.\] **Definition 6.1**.: Let \(M\) be a von Neumann algebra with an \(M\)-boundary piece \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\). We say \(M\) is _biexact relative to \(\mathbb{X}\)_ if the inclusion \(M\subset\mathbb{S}_{\mathbb{X}}(M)\) is \(M\)-nuclear. Given a discrete group \(\Gamma\), a boundary piece \(I\) is a \(\Gamma\times\Gamma\)-invariant closed ideal such that \(c_{0}\Gamma\subset I\subset\ell^{\infty}\Gamma\) [10]. The small-at-infinity compactification of \(\Gamma\) relative to \(I\) is the spectrum of the C\({}^{*}\)-algebra \(\mathbb{S}_{I}(\Gamma)=\{f\in\ell^{\infty}\Gamma\mid f-R_{t}f\in I,\text{ for any }t\in\Gamma\}\). Recall that \(\Gamma\) is biexact relative to \(I\) if \(\Gamma\!\curvearrowright\!\mathbb{S}_{I}(\Gamma)/I\) is topologically amenable [1]. We remark that this is equivalent to amenability of \(\Gamma\!\curvearrowright\!\mathbb{S}_{I}(\Gamma)\). Indeed, we may embed \(\ell^{\infty}\Gamma\hookrightarrow I^{**}\) in a \(\Gamma\)-equivariant way by taking \(\{e_{i}\}_{i}\) a \(\Gamma\)-asymptotically invariant approximate identity for \(I\) and then letting \(\phi:\ell^{\infty}\Gamma\to I^{**}\) be a point-weak\({}^{*}\) cluster point of the u.c.p. maps \(\phi_{i}:\ell^{\infty}\Gamma\to I\subset I^{**}\) given by \(\phi_{i}(f)=e_{i}f\). 
Since \(\Gamma\) is exact, we then have that \(\Gamma\curvearrowright I^{**}\oplus(\mathbb{S}_{I}(\Gamma)/I)^{**}=\mathbb{S}_{I}(\Gamma)^{**}\) is amenable, and it follows that \(\Gamma\curvearrowright\mathbb{S}_{I}(\Gamma)\) is an amenable action [1, Proposition 2.7]. **Theorem 6.2**.: _Let \(\Gamma\) be a discrete group with a boundary piece \(I\), and \(\mathbb{X}\) an \(L\Gamma\)-boundary piece. Let \(I_{\mathbb{X}}\subset\ell^{\infty}\Gamma\) denote the closed ideal generated by \(E(\mathbb{X})\), where \(E:\mathbb{B}(\ell^{2}\Gamma)\to\ell^{\infty}\Gamma\) is the canonical conditional expectation, and let \(\mathbb{X}_{I}\) denote the \(L\Gamma\)-boundary piece \(\overline{I\mathbb{B}(\ell^{2}\Gamma)I}^{\|\cdot\|}\) generated by \(I\). The following statements are true:_ 1. _If_ \(L\Gamma\) _is biexact relative to_ \(\mathbb{X}\)_, then_ \(\Gamma\) _is biexact relative to_ \(I_{\mathbb{X}}\)_._ 2. \(\Gamma\) _is biexact relative to_ \(I\) _if and only if_ \(L\Gamma\) _is biexact relative to_ \(\mathbb{X}_{I}\)_._ Proof.: (1) Suppose \(L\Gamma\) is biexact relative to \(\mathbb{X}\). By the argument in [1, Theorem 6.4], we have \(E:\mathbb{S}_{\mathbb{X}}(L\Gamma)\to\mathbb{S}_{I_{\mathbb{X}}}(\Gamma)\) where \(E\) is the canonical conditional expectation from \(\mathbb{B}(\ell^{2}\Gamma)\) to the diagonal subalgebra \(\ell^{\infty}\Gamma\). Now for each \(n\in\mathbb{N}\), consider \(h_{n}:\Gamma\to\mathbb{S}_{I_{\mathbb{X}}}(\Gamma)\) given by \(h_{n}(\gamma)=E(\theta_{n}(\lambda_{\gamma})\lambda_{\gamma}^{*})\), where \(\theta_{n}=\psi_{n}\circ\phi_{n}:L\Gamma\to\mathbb{S}_{\mathbb{X}}(L\Gamma)\) are u.c.p. maps that factor through matrix algebras coming from the \(M\)-nuclear embedding. Note that we may assume \(\theta_{n}(\lambda_{s})=0\) for all but finitely many \(s\in\Gamma\), since by Lemma 4.1 we may take the first u.c.p. 
map \(\phi_{n}\) from \(M\)-nuclearity to be a compression down to \(\mathbb{B}(\ell^{2}E_{n})\) for some finite subsets \(E_{n}\subset\Gamma\). Viewing \(h_{n}\) as an element in \(C_{c}(\Gamma,\mathbb{S}_{I_{\mathbb{X}}}(\Gamma))\subset\mathbb{S}_{I_{\mathbb{X}}}(\Gamma)\rtimes_{r}\Gamma\), we now check that \(h_{n}\) is positive. For a finite subset \(\{s_{i}\}_{i=1}^{m}\subset\Gamma\), observe that \[[\lambda_{s_{i}}^{*}h_{n}(s_{i}s_{j}^{-1})\lambda_{s_{i}}]_{i,j} =[E(\lambda_{s_{i}}^{*}\theta_{n}(\lambda_{s_{i}s_{j}^{-1}})\lambda_{s_{j}})]_{i,j}\] \[=E(\operatorname{diag}(\lambda_{s_{1}},\ldots,\lambda_{s_{m}})^{*}\theta_{n}^{(m)}([\lambda_{s_{i}}\lambda_{s_{j}}^{*}]_{i,j})\operatorname{diag}(\lambda_{s_{1}},\ldots,\lambda_{s_{m}}))\geq 0.\] The same argument as in the proof of Theorem 4.12 shows that \(E\) is continuous from the \(L\Gamma\)-topology into the norm topology, and so for any \(s\in\Gamma\), we have \(\|h_{n}(s)-1\|_{\mathbb{S}_{I_{\mathbb{X}}}(\Gamma)}=\|E((\lambda_{s}-\theta_{n}(\lambda_{s}))\lambda_{s}^{*})\|\to 0\). We therefore conclude that \(\Gamma\curvearrowright\mathbb{S}_{I_{\mathbb{X}}}(\Gamma)\) is amenable [1, Theorem 4.4.3]. (2) As \(\mathbb{S}_{I}(\Gamma)\subset\mathbb{S}_{\mathbb{X}_{I}}(L\Gamma)\), we have an embedding \(\iota:\mathbb{S}_{I}(\Gamma)\rtimes_{r}\Gamma\to\mathbb{S}_{\mathbb{X}_{I}}(L\Gamma)\) given by \(\iota(f)=M_{f}\) and \(\iota(u_{\gamma})=\lambda_{\gamma}\) for any \(f\in\mathbb{S}_{I}(\Gamma)\) and \(\gamma\in\Gamma\) by [1, Proposition 5.1.3]. Since \(\mathbb{S}_{I}(\Gamma)\rtimes_{r}\Gamma\) is nuclear, the inclusion \[C_{\lambda}^{*}\Gamma\subset\mathbb{S}_{I}(\Gamma)\rtimes_{r}\Gamma\subset\mathbb{S}_{\mathbb{X}_{I}}(L\Gamma)\] is, in particular, \((C_{\lambda}^{*}\Gamma\subset L\Gamma)\)-nuclear. Corollary 4.9 then gives that \(L\Gamma\subset\mathbb{S}_{\mathbb{X}_{I}}(L\Gamma)\) is \(L\Gamma\)-nuclear. The "only if" part follows from the first statement upon noticing that \(E(\mathbb{X}_{I})=I\). 
**Corollary 6.3**.: _Let \(\Gamma\) and \(\Lambda\) be two discrete groups. If \(\Gamma\) is biexact and \(L\Gamma\cong L\Lambda\), then \(\Lambda\) is also biexact._ ### Properties of biexact von Neumann algebras Next we collect some basic properties of biexact von Neumann algebras, generalizing some results from the group von Neumann algebra setting. Given \(M\) a von Neumann algebra with a normal faithful representation \(\pi:M\to\mathbb{B}(\mathcal{H})\), we may consider the following normal \(M\)-system, which is a natural variant of \(\mathbb{S}(M)\), \[\mathbb{S}(M;\mathcal{H})=\{T\in\mathbb{B}(\mathcal{H})\mid[T,x]\in\mathbb{K}(M ;\mathcal{H}),\forall x\in M^{\prime}\},\] where \(\mathbb{K}(M;\mathcal{H})\) is the \(M\)-\(M\) and \(M^{\prime}\)-\(M^{\prime}\) closure of \(\mathbb{K}(\mathcal{H})\). We say \(M\) is biexact with respect to the normal faithful representation \(\pi:M\to\mathbb{B}(\mathcal{H})\) if the inclusion \(M\subset\mathbb{S}(M;\mathcal{H})\) is \(M\)-nuclear. Since every normal faithful representation of \(M\) is a reduction of an amplification of the standard form of \(M\), the next lemma shows that biexactness is independent of representations of \(M\). **Lemma 6.4**.: _Let \(M\subset\mathbb{B}(\mathcal{H})\) be a von Neumann algebra._ 1. _If_ \(M\subset\mathbb{S}(M;\mathcal{H})\) _is_ \(M\)_-nuclear, then_ \(M\subset\mathbb{S}(M;\mathcal{H}\otimes\ell^{2}S)\) _is_ \(M\)_-nuclear for any set_ \(S\)_._ 2. _If_ \(M\subset\mathbb{S}(M;\mathcal{H})\) _is_ \(M\)_-nuclear,_ \(p\in\mathcal{P}(M)\)_, and_ \(q\in\mathcal{P}(M^{\prime})\) _with_ \(pq\neq 0\)_, then_ \(pMpq\subset\mathbb{S}(pMpq;pq\mathcal{H})\) _is_ \(pMpq\)_-nuclear._ Proof.: (1) It suffices to show \(\mathbb{S}(M;\mathcal{H})\otimes\mathrm{id}_{\ell^{2}S}\subset\mathbb{S}(M; \mathcal{H}\otimes\ell^{2}S)\). 
For any \(T\in\mathbb{S}(M;\mathcal{H})\), \(x\in M^{\prime}\subset\mathbb{B}(\mathcal{H})\) and \(i,j\in S\), we set \(e_{i,j}=\langle\cdot,\delta_{i}\rangle\delta_{j}\) and compute \([T\otimes\mathrm{id}_{\ell^{2}S},x\otimes e_{i,j}]=[T,x]\otimes e_{i,j}\in\mathbb{K}(M;\mathcal{H}\otimes\ell^{2}S)\). Since the span of \(\{x\otimes e_{i,j}\mid x\in M^{\prime},i,j\in S\}\subset M^{\prime}\cap\mathbb{B}(\mathcal{H}\otimes\ell^{2}S)\) is ultraweakly dense, we conclude that \(\mathbb{S}(M;\mathcal{H})\otimes\mathrm{id}_{\ell^{2}S}\subset\mathbb{S}(M;\mathcal{H}\otimes\ell^{2}S)\). (2) If \(S\in\mathbb{B}(\mathcal{H})\) and \(a,b\in M\), then for a non-zero projection \(p\in\mathcal{P}(M)\) we have \(pa^{*}Sbp=(pa^{*}ap)^{1/2}v^{*}Sw(pb^{*}bp)^{1/2}\), where \(ap=v|ap|\) and \(bp=w|bp|\) are the polar decompositions. In particular, if \(\varphi\) is a normal state on \(pMp\), and we denote by \(\tilde{\varphi}\) the extension to \(M\) given by \(\tilde{\varphi}(x)=\varphi(pxp)\), then we have \(s_{\varphi}(pa^{*}Sbp)\leq\varphi(pa^{*}ap)^{1/2}\|S\|\varphi(pb^{*}bp)^{1/2}=\tilde{\varphi}(a^{*}a)^{1/2}\|S\|\tilde{\varphi}(b^{*}b)^{1/2}\). Taking infimums shows that \(s_{\varphi}(pTp)\leq s_{\tilde{\varphi}}(T)\) for each \(T\in\mathbb{B}(\mathcal{H})\), and so conjugation by \(p\) is continuous from the \(M\)-topology on \(\mathbb{B}(\mathcal{H})\) to the \(pMp\)-topology on \(\mathbb{B}(p\mathcal{H})\). Note that conjugation by \(p\) is also continuous from the \(M^{\prime}\)-topology on \(\mathbb{B}(\mathcal{H})\) to the \(pM^{\prime}p\)-topology on \(\mathbb{B}(p\mathcal{H})\), and we have similar continuity properties for conjugation by \(q\in\mathcal{P}(M^{\prime})\). Hence, we have that \(pq\mathbb{K}(M;\mathcal{H})pq\subset\mathbb{K}(pMpq;pq\mathcal{H})\), and so for \(T\in\mathbb{S}(M;\mathcal{H})\) and \(x\in M^{\prime}\subset\mathbb{B}(\mathcal{H})\) we have \([pqTpq,qxqp]=pq[T,qxq]pq\in\mathbb{K}(pMpq;pq\mathcal{H})\). 
Therefore if we consider the maps associated to the \(M\)-nuclear embedding \(M\subset\mathbb{S}(M;\mathcal{H})\), then conjugation by \(pq\) gives rise to a \(pMpq\)-nuclear embedding of \(pMpq\) into \(\mathbb{S}(pMpq;pq\mathcal{H})\). **Proposition 6.5** (cf. [17, Proposition 2.10]).: _Let \(M\) be a biexact von Neumann algebra. Then \(M\mathbin{\overline{\otimes}}\mathbb{B}(\mathcal{K})\) and \(pMp\) are also biexact, for any Hilbert space \(\mathcal{K}\) and nonzero projection \(p\in\mathcal{P}(M)\)._ Proof.: Suppose \(M\) is represented on \(\mathcal{H}\). To see that \(N:=M\mathbin{\overline{\otimes}}\mathbb{B}(\mathcal{K})\) is biexact, first we note that \(\mathbb{K}(M;\mathcal{H})\otimes\mathbb{B}(\mathcal{K})\subset\mathbb{K}(N;\mathcal{H}\otimes\mathcal{K})\). Indeed, it is easy to see that we have \(\mathbb{K}(M;\mathcal{H})\otimes\mathbb{K}(\mathcal{K})\subset\mathbb{K}(N;\mathcal{H}\otimes\mathcal{K})\), and \(\mathbb{K}(\mathcal{K})\) is dense in \(\mathbb{B}(\mathcal{K})\) in the ultrastrong topology, and hence also in the \(\mathbb{B}(\mathcal{K})\)-topology. It then follows that \(\mathbb{S}(M;\mathcal{H})\otimes\mathbb{B}(\mathcal{K})\subset\mathbb{S}(N;\mathcal{H}\otimes\mathcal{K})\), since \([T\otimes S,a\otimes\mathrm{id}_{\mathcal{K}}]=[T,a]\otimes S\in\mathbb{K}(N;\mathcal{H}\otimes\mathcal{K})\) for any \(a\in M^{\prime}\cap\mathbb{B}(\mathcal{H})\), \(T\in\mathbb{S}(M;\mathcal{H})\), and \(S\in\mathbb{B}(\mathcal{K})\). Since the identity map on \(\mathbb{B}(\mathcal{K})\) is weakly nuclear, by tensoring with the maps coming from the \(M\)-nuclearity of the embedding \(M\subset\mathbb{S}(M;\mathcal{H})\) we may obtain u.c.p. 
maps \(\phi_{i}:M\otimes\mathbb{B}(\mathcal{K})\to\mathbb{M}_{n(i)}(\mathbb{C})\) and \(\psi_{i}:\mathbb{M}_{n(i)}(\mathbb{C})\to\mathbb{S}(M;\mathcal{H})\otimes\mathbb{B}(\mathcal{K})\) such that \(\psi_{i}\circ\phi_{i}\to\mathrm{id}_{M\otimes\mathbb{B}(\mathcal{K})}\) in the \((M\otimes\mathbb{B}(\mathcal{K})\subset M\mathbin{\overline{\otimes}}\mathbb{B}(\mathcal{K}))\)-topology. Corollary 4.9 then gives that the embedding \(N\subset\mathbb{S}(N;\mathcal{H}\mathbin{\overline{\otimes}}\mathcal{K})\) is \(N\)-nuclear. That \(pMp\) is biexact when \(M\) is biexact already follows from Lemma 6.4. Let \(M\) be a von Neumann algebra, and \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) an \(M\)-boundary piece. A net of unitaries \(\{u_{i}\}\subset\mathcal{U}(M)\) is said to converge to \(0\) over \(\mathbb{X}\) if we have \(\mathrm{Ad}(u_{i})(T)\to 0\) ultraweakly for each \(T\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\). If \(N\subset M\) is a von Neumann subalgebra, then we say that \(N\) is weak mixing over \(\mathbb{X}\) if there exists a net \(\{u_{i}\}\subset\mathcal{U}(N)\) such that \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{X}\). If \(M\) is finite, then it is not difficult to see that \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{X}\) if and only if \(\mathrm{Ad}(u_{i})(T)\to 0\) ultraweakly for each \(T\in\mathbb{X}\) (see, e.g., [1, Lemma 4.2]). For instance, when \(M\) is finite, a net \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{K}(L^{2}M)\) if and only if the net \(\{u_{i}\}\) converges to \(0\) ultraweakly. This does not hold for general von Neumann algebras, e.g., if \(M=\mathbb{B}(\mathcal{H})\), then \(\mathbb{K}_{\mathbb{K}(\mathcal{H})}(M)=\mathbb{B}(\mathcal{H})\), and so there is no net \(\{u_{i}\}\) that converges to \(0\) over \(\mathbb{K}(\mathcal{H})\) in this case. The following lemma gives a general condition for concluding that unitaries converge to \(0\) over \(\mathbb{X}\). 
**Lemma 6.6**.: _Let \(M\) be a \(\sigma\)-finite von Neumann algebra with a normal faithful state \(\omega\), and let \(\mathbb{X}\subset\mathbb{B}(L^{2}(M,\omega))\) be an \(M\)-boundary piece. Suppose \(\{u_{i}\}\subset\mathcal{U}(M)\) is such that the weak\({}^{*}\)-limit \(\lim_{i\to\infty}\omega\circ\operatorname{Ad}(u_{i})\) exists and is in \(M_{*}\subset M^{*}\). If \(u_{i}Tu_{i}^{*}\to 0\) ultraweakly for each \(T\in\mathbb{X}\), then \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{X}\)._ Proof.: If \(a,b\in M\), \(S\in\mathbb{B}(L^{2}(M,\omega))\), and \(x,y\in M^{\prime}\), then by Cauchy-Schwarz we have \[|\langle u_{i}^{*}a^{*}Sbu_{i}x\omega^{1/2},y\omega^{1/2}\rangle|\leq\omega(u_{i}^{*}a^{*}au_{i})^{1/2}\|y\|\|S\|\|x\|\omega(u_{i}^{*}b^{*}bu_{i})^{1/2}.\] Taking infimums over all decompositions of the type \(T=a^{*}Sb\), we then have \[\limsup_{i\to\infty}|\langle u_{i}^{*}Tu_{i}x\omega^{1/2},y\omega^{1/2}\rangle|\leq\|x\|\|y\|s_{\tilde{\omega}}(T),\] where \(\tilde{\omega}\) is the weak\({}^{*}\)-limit of \(\omega\circ\operatorname{Ad}(u_{i})\). Thus, if \(u_{i}^{*}Tu_{i}\to 0\) ultraweakly for each \(T\in\mathbb{X}\), then we also have \(u_{i}^{*}Tu_{i}\to 0\) ultraweakly for each \(T\in\overline{\mathbb{X}}^{M-M}\). Since conjugation by \(u_{i}\) is \(M^{\prime}\)-bimodular, we then also have that \(u_{i}^{*}Tu_{i}\to 0\) ultraweakly for each \(T\in\overline{\overline{\mathbb{X}}^{M-M}}^{M^{\prime}-M^{\prime}}=\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\). 
If \(M\) is a \(\sigma\)-finite von Neumann algebra and \(\mathcal{U}\in\beta\mathbb{N}\setminus\mathbb{N}\) is a nonprincipal ultrafilter, then we define \[\mathcal{I}_{\mathcal{U}}=\{(x_{n})_{n}\in\ell^{\infty}(M)\mid x_{n}\to 0\;*\text{-strongly as }n\to\mathcal{U}\};\] \[\mathcal{M}^{\mathcal{U}}(M)=\{(x_{n})_{n}\in\ell^{\infty}(M)\mid(x_{n})_{n}\mathcal{I}_{\mathcal{U}}\subset\mathcal{I}_{\mathcal{U}}\text{ and }\mathcal{I}_{\mathcal{U}}(x_{n})_{n}\subset\mathcal{I}_{\mathcal{U}}\}.\] The multiplier algebra \(\mathcal{M}^{\mathcal{U}}(M)\) is a C\({}^{*}\)-algebra containing \(\mathcal{I}_{\mathcal{U}}\) as a norm closed two-sided ideal. The Ocneanu ultraproduct \(M^{\mathcal{U}}\) is the quotient \(\mathcal{M}^{\mathcal{U}}/\mathcal{I}_{\mathcal{U}}\), which is a von Neumann algebra [3]. If \((x_{n})_{n}\in\mathcal{M}^{\mathcal{U}}\), then we denote by \((x_{n})^{\mathcal{U}}\) the image of \((x_{n})_{n}\) in \(M^{\mathcal{U}}\). We may view \(M\) as a von Neumann subalgebra of \(M^{\mathcal{U}}\) by considering the constant sequences. We have a canonical normal conditional expectation \(E_{M}:M^{\mathcal{U}}\to M\) given by the weak\({}^{*}\)-limit \(E_{M}((x_{n})^{\mathcal{U}})=\lim_{n\to\mathcal{U}}x_{n}\). If \(Q\subset M\) is a von Neumann subalgebra with expectation, then given \((x_{n})_{n}\in\ell^{\infty}(Q)\) we have that \((x_{n})_{n}\in\mathcal{M}^{\mathcal{U}}(Q)\) if and only if \((x_{n})_{n}\in\mathcal{M}^{\mathcal{U}}(M)\), and this gives rise to a canonical embedding \(Q^{\mathcal{U}}\subset M^{\mathcal{U}}\). 
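For orientation (a standard special case, recorded here only as an illustration): when \(M\) is finite with a faithful normal trace \(\tau\), every bounded sequence multiplies \(\mathcal{I}_{\mathcal{U}}\) into itself, since \(\|xy_{n}\|_{2,\tau},\|y_{n}x\|_{2,\tau}\leq\|x\|\|y_{n}\|_{2,\tau}\), and \(*\)-strong convergence to \(0\) of bounded sequences coincides with \(\|\cdot\|_{2,\tau}\)-convergence. The definitions above then reduce to the familiar tracial ultrapower: \[\mathcal{I}_{\mathcal{U}}=\{(x_{n})_{n}\in\ell^{\infty}(M)\mid\lim_{n\to\mathcal{U}}\|x_{n}\|_{2,\tau}=0\},\qquad\mathcal{M}^{\mathcal{U}}(M)=\ell^{\infty}(M),\qquad M^{\mathcal{U}}=\ell^{\infty}(M)/\mathcal{I}_{\mathcal{U}},\] with \(E_{M}((x_{n})^{\mathcal{U}})\) the weak\({}^{*}\)-limit \(\lim_{n\to\mathcal{U}}x_{n}\) as before.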
Moreover, the map \(M^{\mathcal{U}}\ni(x_{n})^{\mathcal{U}}\mapsto(E(x_{n}))^{\mathcal{U}}\in Q^{\mathcal{U}}\) gives a normal faithful conditional expectation onto \(Q^{\mathcal{U}}\) that satisfies \[E_{M}^{M^{\mathcal{U}}}E_{Q^{\mathcal{U}}}^{M^{\mathcal{U}}}=E_{Q^{\mathcal{U}}}^{M^{\mathcal{U}}}E_{M}^{M^{\mathcal{U}}}=E_{Q}^{M}E_{M}^{M^{\mathcal{U}}}=E_{Q}^{Q^{\mathcal{U}}}E_{Q^{\mathcal{U}}}^{M^{\mathcal{U}}}.\] We refer to [1] for more information on the Ocneanu ultraproduct. **Theorem 6.7**.: _Let \(M\) be a \(\sigma\)-finite biexact von Neumann algebra, and let \(\mathcal{U}\in\beta\mathbb{N}\setminus\mathbb{N}\) be a nonprincipal ultrafilter. If \(A\subset M\) is a von Neumann subalgebra such that there exists \(u\in\mathcal{U}(A^{\prime}\cap M^{\mathcal{U}})\) with \(E_{M}(u)=0\), then the inclusion \(A\subset M\) is weakly nuclear._ Proof.: Fix a normal faithful state \(\varphi\) on \(M\). Suppose \(A\subset M\) is a von Neumann subalgebra and \(u\in\mathcal{U}(A^{\prime}\cap M^{\mathcal{U}})\) with \(E_{M}(u)=0\). We may then choose a sequence \(\{u_{n}\}\subset\mathcal{U}(M)\) such that \(u=(u_{n})^{\mathcal{U}}\)[16, Lemma 2.1], i.e., we have \(\lim_{n\to\mathcal{U}}u_{n}=0\) ultraweakly, \(\lim_{n\to\mathcal{U}}[u_{n},a]=0\)\(*\)-strongly for any \(a\in A\), and \(\hat{\varphi}:=\lim_{n\to\mathcal{U}}\varphi\circ\operatorname{Ad}(u_{n})=\varphi\circ E_{M}\circ\operatorname{Ad}(u)_{|M}\) is contained in \(M_{*}\). We let \(\theta:\mathbb{B}(L^{2}(M,\varphi))\to\mathbb{B}(L^{2}(M,\varphi))\) be the point-ultraweak limit \(\theta=\lim_{n\to\mathcal{U}}\operatorname{Ad}(u_{n})\). By Lemma 6.6 we have \(\theta_{|\mathbb{K}^{\infty,1}(M)}=0\). From the definition of \(\mathbb{S}(M)\), we therefore have \(\theta(\mathbb{S}(M))\subset M\). Note that since \([u_{n},a]\to 0\)\(*\)-strongly as \(n\to\mathcal{U}\) for any \(a\in A\), we also have \(\theta_{|A}=\operatorname{id}\) and \(\theta_{|M^{\prime}}=\operatorname{id}\). 
Also note that if \(x\in M\) and \(a\in M^{\prime}\), then since \(\theta(x)\in M\) commutes with \(a\), Kadison's inequality \(\theta(x)^{*}\theta(x)\leq\theta(x^{*}x)\) gives \(\|\theta(x)a\varphi^{1/2}\|^{2}=\|a\theta(x)\varphi^{1/2}\|^{2}\leq\|a\|^{2}\varphi\circ\theta(x^{*}x)=\|a\|^{2}\hat{\varphi}(x^{*}x)\), and from this it follows that \(\theta_{|M}\) defines a normal map from \(M\) to \(M\). Now consider \[A\subset M\subset\mathbb{S}(M)\xrightarrow{\theta}M.\] Since \(M\subset\mathbb{S}(M)\) is \(M\)-nuclear, and since, by Lemma 3.7, \(\theta\) is continuous from the weak-\(M\)-topology to the ultraweak topology on \(M\), it follows that the inclusion \(A\subset M\) is weakly nuclear. We remark that it may be the case that a von Neumann subalgebra \(A\subset M\) of a biexact von Neumann algebra has diffuse relative commutant even though \(A\) is nonamenable. For example, we may consider \(A=L^{\infty}(X,\mu)\overline{\otimes}\,L\mathbb{F}_{2}\subset\mathbb{B}(L^{2}(X,\mu))\overline{\otimes}\,L\mathbb{F}_{2}\) where \((X,\mu)\) is some diffuse probability space. This type of situation cannot occur, though, if the subalgebra is with expectation, i.e., if there exists a normal faithful conditional expectation \(E:M\to A\). If \(M\) is \(\sigma\)-finite and we have a nonprincipal ultrafilter \(\mathcal{U}\in\beta\mathbb{N}\setminus\mathbb{N}\), then \(M\) is \(\mathcal{U}\)-solid if \(A\) is amenable for any von Neumann subalgebra \(A\subset M\) with expectation such that \(A^{\prime}\cap M^{\mathcal{U}}\) is diffuse. **Lemma 6.8**.: _Let \(M\) be a diffuse \(\sigma\)-finite von Neumann algebra. Then there exists a unitary \(u\in\mathcal{U}(M^{\mathcal{U}})\) such that \(E_{M}(u)=0\)._ Proof.: By [14, Lemma 2.1] there exists a normal faithful state \(\psi\) on \(M\) such that \(M^{\psi}\) is diffuse. If we let \(A\subset M^{\psi}\) denote a masa, then as \(M^{\psi}\) is finite there exists a normal \(\psi\)-preserving conditional expectation from \(M\) to \(A\), and hence \(A^{\mathcal{U}}\subset M^{\mathcal{U}}\). 
Since \(A\) is abelian, it is easy to find \(u\in\mathcal{U}(A^{\mathcal{U}})\) such that \(E_{A}(u)=0\), and we then have \(E_{M}(u)=E_{A}(u)=0\). **Theorem 6.9**.: _Let \(M\) be a \(\sigma\)-finite biexact von Neumann algebra, then \(M\) is \(\mathcal{U}\)-solid._ Proof.: Suppose \(A\subset M\) is with expectation such that \(A^{\prime}\cap M^{\mathcal{U}}\) is diffuse. We let \(p\in\mathcal{Z}(A^{\prime}\cap M)\) denote the central projection such that \((A^{\prime}\cap M)p\) is diffuse and \(p^{\perp}=\sum_{i\in I}p_{i}\) with \(N_{i}=(A^{\prime}\cap M)p_{i}\) a type I factor. Since \(N_{i}\) is a type I factor, we have \(p_{i}(A^{\prime}\cap M^{\mathcal{U}})p_{i}\cong P_{i}\,\overline{\otimes}\,N_ {i}\) where \(P_{i}\) is a diffuse von Neumann algebra and then the restriction of \(E_{M}\) to \(P_{i}\) maps into \(\mathcal{Z}(N_{i})=\mathbb{C}\), i.e., \(E_{M}\) defines a normal state on \(P_{i}\). Since \(P_{i}\) is diffuse, we may choose some \(u_{i}\in\mathcal{U}(P_{i})\subset\mathcal{U}(p_{i}(A^{\prime}\cap M^{ \mathcal{U}})p_{i})\) so that \(E_{M}(u_{i})=0\). Since \(A\subset M\) is with expectation, we also have that \(p(A^{\prime}\cap M)p\) is with expectation in \(pMp\), and hence we have a canonical embedding \(p(A^{\prime}\cap M)^{\mathcal{U}}p\subset pM^{\mathcal{U}}p\). By Lemma 6.8 we may then choose \(u_{0}\in p(A^{\prime}\cap M)^{\mathcal{U}}p\) so that \(E_{pMp}(u_{0})=E_{p(A^{\prime}\cap M)p}(u_{0})=0\). Setting \(u=u_{0}+\sum_{i\in I}u_{i}\) then gives a unitary in \(A^{\prime}\cap M^{\mathcal{U}}\) so that \(E_{M}(u)=0\). By Theorem 6.7 we then have that \(A\subset M\) is weakly nuclear, and since \(A\) is with expectation it follows that \(A\) is amenable. Suppose \(M\) is a von Neumann algebra and \(N\subset M\) is a von Neumann subalgebra with a normal faithful conditional expectation \(E:M\to N\). Denote by \(e_{N}:L^{2}M\to L^{2}N\) the corresponding projection between the standard forms. 
If \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is an \(M\)-boundary piece, then we have that the norm closure of \(e_{N}\mathbb{K}_{\mathbb{X}}(M)e_{N}\) is an \(N\)-boundary piece, which we denote by \(\mathbb{X}^{N}\). **Proposition 6.10**.: _Let \(M\) be a von Neumann algebra and \(N\subset M\) a von Neumann subalgebra with a normal faithful conditional expectation \(E:M\to N\). Suppose \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is an \(M\)-boundary piece and \(M\) is biexact relative to \(\mathbb{X}\). Then \(N\) is biexact relative to \(\mathbb{X}^{N}\), where \(\mathbb{X}^{N}=\overline{e_{N}\mathbb{K}_{\mathbb{X}}(M)e_{N}}\) is the \(N\)-boundary piece associated with \(\mathbb{X}\). In particular, \(N\) is biexact if \(M\) is biexact._ Proof.: By Corollary 3.8 we have \(\operatorname{Ad}(e_{N})(\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))\subset\mathbb{K}_{e _{N}\mathbb{K}_{\mathbb{X}}(M)e_{N}}^{\infty,1}(N)\), and since \([JNJ,e_{N}]=0\) it then follows that \(\operatorname{Ad}(e_{N})\) maps \(\mathbb{S}_{\mathbb{X}}(M)\) to \(\mathbb{S}_{\mathbb{X}^{N}}(N)\). Since \(M\subset\mathbb{S}_{\mathbb{X}}(M)\) is \(M\)-nuclear, by composing the corresponding maps with \(\operatorname{Ad}(e_{N})\) and using again Corollary 3.8, we see that \(N\subset\mathbb{S}_{\mathbb{X}^{N}}(N)\) is \(N\)-nuclear. We recall from [10] that a von Neumann algebra \(M\) is properly proximal if for every non-zero central projection \(p\in\mathcal{Z}(M)\) there does not exist a conditional expectation \(E:\mathbb{S}(pM)\to pM\). Note that if \(M\) is biexact, then by composing the maps giving \(M\)-nuclearity with such an expectation we see that \(pM\) is amenable. Hence, there then exists a unique projection \(p\in\mathcal{Z}(M)\) so that \(pM\) is amenable and \(p^{\perp}M\) is properly proximal. The previous proposition, therefore, gives a generalization of Theorem 1.1 from [10]. 
**Corollary 6.11**.: _Let \(M\) be a biexact von Neumann algebra, then every von Neumann subalgebra is either properly proximal, or else has a non-zero amenable summand._ Suppose \(\mathcal{F}=\{(B_{j},E_{j})\}_{j\in J}\) is a family of von Neumann subalgebras \(B_{j}\subset M\), together with normal faithful conditional expectations. For each \(j\in J\), we let \(e_{B_{j}}:L^{2}M\to L^{2}B_{j}\) denote the orthogonal projection corresponding to \(E_{j}\). We let \(\mathbb{X}_{\mathcal{F}}\) denote the hereditary \(\operatorname{C}^{*}\)-subalgebra of \(\mathbb{B}(L^{2}M)\) generated by \(\{xJyJe_{B_{j}}\mid x,y\in M,j\in J\}\), and we say that \(M\) is biexact relative to \(\mathcal{F}\) if it is biexact relative to the boundary piece \(\mathbb{X}_{\mathcal{F}}\). If the conditional expectations are understood from the context (e.g., if \(M\) is a II\({}_{1}\) factor), then we may write simply \(\mathbb{X}_{\{B_{j}\}_{j\in J}}\) instead. For a single von Neumann subalgebra \(B\subset M\) with normal faithful conditional expectation \(E:M\to B\), we will write even more simply \(\mathbb{X}_{(B,E)}\) or \(\mathbb{X}_{B}\) for this boundary piece, and we will say that \(M\) is biexact relative to \(B\) if it is biexact relative to \(\mathbb{X}_{B}\). **Lemma 6.12**.: _Using the above notations, assume \(M\) is \(\sigma\)-finite. For a finite von Neumann subalgebra \(N\subset M\) with expectation, either there exists a net \(\{u_{i}\}_{i}\subset\mathcal{U}(N)\) that converges to \(0\) over \(\mathbb{X}_{\mathcal{F}}\), or else we have \(N\preceq_{M}B_{j}\) for some \(j\in J\)._ Proof.: Let \(\varphi\) denote a normal faithful state on \(M\) such that \(\varphi_{|N}\) is a trace. Suppose \(N\not\preceq_{M}B_{j}\) for any \(j\in J\), then we may choose a net \(\{u_{i}\}\subset\mathcal{U}(N)\) such that \(E_{j}(au_{i}b)\to 0\) strongly for any \(a,b\in M\) and \(j\in J\) by [14, Definition 4.1] and [15, Remark 1.2]. 
Notice that for any \(T\in\mathbb{B}(L^{2}M)\), \(x_{1},x_{2},a_{1},a_{2}\in M\) and \(j_{1},j_{2}\in J\) \[|\langle u_{i}^{*}a_{1}e_{B_{j_{1}}}Te_{B_{j_{2}}}a_{2}u_{i}x_{1}\varphi^{1/2 },x_{2}\varphi^{1/2}\rangle|\leq\|T\|\|E_{j_{1}}(a_{2}u_{i}x_{1})\|_{\varphi} \|E_{j_{2}}(a_{1}^{*}u_{i}x_{2})\|_{\varphi}\to 0,\] and hence \(\operatorname{Ad}(u_{i})(a_{1}Jb_{1}Je_{B_{j}}Te_{B_{j}}Jb_{2}Ja_{2})\to 0\) ultraweakly for any \(a_{1},a_{2},b_{1},b_{2}\in M\). It then follows that \(\operatorname{Ad}(u_{i})(S)\to 0\) ultraweakly for any \(S\in\mathbb{X}_{\mathcal{F}}\), and since \(u_{i}\in N\), which is in the centralizer of \(\varphi\), it follows from Lemma 6.6 that \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{X}_{\mathcal{F}}\). Continuing in the above setting, notice that the proof of Theorem 6.7 shows that if there exists a net \(\{u_{i}\}\subset\mathcal{U}(N)\) such that \(\{u_{i}\}\) converges to \(0\) over \(\mathbb{X}\), then there exists a u.c.p. map \(\theta:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) such that \(\theta\) vanishes on \(\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) and \(\theta_{|N^{\prime}\cap M}\) is the identity. Therefore, the same proof as in Theorem 6.7 yields the following. **Proposition 6.13**.: _Let \(M\) be a \(\sigma\)-finite von Neumann algebra, and \(\mathcal{F}=\{(B_{j},E_{j})\}_{j\in J}\) a family of von Neumann subalgebras of \(M\) with expectation. Suppose \(M\) is biexact relative to \(\mathbb{X}_{\mathcal{F}}\) and \(N\subset M\) is a finite von Neumann subalgebra with expectation. Then either \(N^{\prime}\cap M\) is amenable, or else \(N\preceq_{M}B_{j}\) for some \(j\in J\)._

### Permanence properties

**Proposition 6.14** (cf. [1, Lemma 15.3.3]).: _For \(i=1,2\), let \(M_{i}\) be a von Neumann algebra and \(\mathbb{X}_{i}\subset\mathbb{B}(L^{2}M_{i})\) be an \(M_{i}\)-boundary piece.
If \(M_{i}\) is biexact relative to \(\mathbb{X}_{i}\), then \(M=M_{1}\overline{\otimes}M_{2}\) is biexact relative to the hereditary \(\mathrm{C}^{*}\)-algebra generated by \(\mathbb{X}_{1}\otimes\mathbb{B}(L^{2}M_{2})\) and \(\mathbb{B}(L^{2}M_{1})\otimes\mathbb{X}_{2}\). In particular, if each \(M_{i}\) is biexact, then \(M\) is biexact relative to \(\mathbb{X}_{\{M_{1},M_{2}\}}\)._ Proof.: Let \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) denote the hereditary \(\mathrm{C}^{*}\)-algebra generated by \(\mathbb{X}_{1}\otimes\mathbb{B}(L^{2}M_{2})\) and \(\mathbb{B}(L^{2}M_{1})\otimes\mathbb{X}_{2}\). First we show that \(\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\otimes\mathbb{S}_{\mathbb{X}_{2}}(M_{2}) \subset\mathbb{S}_{\mathbb{X}}(M)\). Indeed, note that for any \(T\in\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\), \(S\in\mathbb{S}_{\mathbb{X}_{2}}(M_{2})\), \(a\in M_{1}\), \(b\in M_{2}\) we have \[[T\otimes S,J(a\otimes 1)J]=[T,JaJ]\otimes S\in\mathbb{K}_{\mathbb{X}_{1}}^{ \infty,1}(M_{1})\otimes\mathbb{B}(L^{2}M_{2})\subset\mathbb{K}_{\mathbb{X}}^{ \infty,1}(M)\] and \[[T\otimes S,J(1\otimes b)J]=T\otimes[S,JbJ]\in\mathbb{B}(L^{2}M_{1})\otimes \mathbb{K}_{\mathbb{X}_{2}}^{\infty,1}(M_{2})\subset\mathbb{K}_{\mathbb{X}}^{ \infty,1}(M).\] It then follows from [5, Lemma 6.1] that \(T\otimes S\in\mathbb{S}_{\mathbb{X}}(M)\). For each \(i=1,2\), denote by \(\phi_{j}^{i}:M_{i}\to\mathbb{M}_{n(i,j)}(\mathbb{C})\) and \(\psi_{j}^{i}:\mathbb{M}_{n(i,j)}(\mathbb{C})\to\mathbb{S}_{\mathbb{X}_{i}}(M_ {i})\) those u.c.p. maps given by relative biexactness of \(M_{i}\). Then, by considering the u.c.p.
maps \(\phi_{j}^{1}\otimes\phi_{j}^{2}:M_{1}\otimes M_{2}\to\mathbb{M}_{n(1,j)}( \mathbb{C})\otimes\mathbb{M}_{n(2,j)}(\mathbb{C})\) and \(\psi_{j}^{1}\otimes\psi_{j}^{2}:\mathbb{M}_{n(1,j)}(\mathbb{C})\otimes\mathbb{ M}_{n(2,j)}(\mathbb{C})\to\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\otimes\mathbb{S}_{ \mathbb{X}_{2}}(M_{2})\subset\mathbb{S}_{\mathbb{X}}(M)\), we see that \(M_{1}\otimes M_{2}\subset\mathbb{S}_{\mathbb{X}}(M)\) is \((M_{1}\otimes M_{2}\subset M_{1}\overline{\otimes}M_{2})\)-nuclear. It follows from Corollary 4.9 that \(M\subset\mathbb{S}_{\mathbb{X}}(M)\) is \(M\)-nuclear. In [10], Houdayer and Isono generalize Ozawa and Popa's unique prime decomposition results from [11] to the type III setting. The techniques developed in [10] also apply to general biexact factors. **Corollary 6.15**.: _Let \(m,n\geq 1\) be integers. For each \(1\leq i\leq m\), let \(M_{i}\) be a nonamenable biexact factor. For each \(1\leq j\leq n\), let \(N_{j}\) be any non-type \(I\) factor and suppose \(M=\overline{\otimes}_{i=1}^{\ m}M_{i}=\overline{\otimes}_{j=1}^{\ n}N_{j}\). There then exists a surjection \(\sigma:\{1,\dots,m\}\to\{1,\dots,n\}\), type \(I\) factors \(F_{1},\dots,F_{n}\), and a unitary \(u\in(\overline{\otimes}_{j=1}^{\ n}F_{j})\overline{\otimes}M\) such that \(u(F_{j}\overline{\otimes}N_{j})u^{*}=(\overline{\otimes}_{i\in\sigma^{-1}(j) }M_{i})\overline{\otimes}F_{j}\)._ Proof.: We simply replace the class \(\mathcal{C}_{(AO)}\) from [10] by the class of biexact von Neumann algebras. The class of biexact von Neumann algebras is stable under amplification and tensoring by type I factors by Proposition 6.5, biexact von Neumann algebras are \(\mathcal{U}\)-solid by Theorem 6.9, and by Propositions 6.14 and 6.13 we have that biexact von Neumann algebras satisfy the conclusion of [10, Theorem 5.1]. The rest of the argument from [10] may then be applied while using [1, Application 4].
We now consider biexactness for free products, and we will continue to use the notation for free products as in Section 5.1 above. If \((M_{1},\omega_{1})\) and \((M_{2},\omega_{2})\) are von Neumann algebras with normal faithful states, then we let \(e_{i}\in\mathbb{B}(L^{2}((M_{1},\omega_{1})*(M_{2},\omega_{2})))\) denote the orthogonal projection onto \(L^{2}(M_{i},\omega_{i})\). We also let \(p_{i}\in\mathbb{B}(L^{2}(M_{i},\omega_{i}))\) denote the rank-one projection onto \(\mathbb{C}\omega_{i}^{1/2}\). If \(\mathbb{X}_{i}\subset\mathbb{B}(L^{2}(M_{i},\omega_{i}))\) is an \(M_{i}\)-boundary piece for each \(i=1,2\), then we denote by \(\mathbb{X}_{1}\vee\mathbb{X}_{2}\subset\mathbb{B}(L^{2}((M_{1},\omega_{1})*(M_ {2},\omega_{2})))\) the boundary piece generated by \(e_{1}\mathbb{X}_{1}e_{1}\) and \(e_{2}\mathbb{X}_{2}e_{2}\). Note that when \(\mathbb{X}_{i}=\mathbb{B}(L^{2}(M_{i},\omega_{i}))\) for \(i=1,2\), then \(\mathbb{X}_{1}\vee\mathbb{X}_{2}\) is the boundary piece \(\mathbb{X}_{\{M_{1},M_{2}\}}\). Also, if \(\mathbb{X}_{i}=\mathbb{K}(L^{2}(M_{i},\omega_{i}))\) for \(i=1,2\), then \(\mathbb{X}_{1}\vee\mathbb{X}_{2}=\mathbb{K}(L^{2}((M_{1},\omega_{1})*(M_{2}, \omega_{2})))\). The following proposition generalizes Proposition 15.3.13 from [10] in the case of group von Neumann algebras. **Proposition 6.16**.: _Let \((M_{1},\omega_{1})\) and \((M_{2},\omega_{2})\) be von Neumann algebras with normal faithful states. Suppose for \(i=1,2\), \(\mathbb{X}_{i}\subset\mathbb{B}(L^{2}(M_{i},\omega_{i}))\) is an \(M_{i}\)-boundary piece such that each \(M_{i}\) is biexact relative to \(\mathbb{X}_{i}\). Then, \(M=M_{1}*M_{2}\) is biexact relative to \(\mathbb{X}_{1}\vee\mathbb{X}_{2}\).
In particular, if each \(M_{i}\) is weakly exact, then \((M_{1},\omega_{1})*(M_{2},\omega_{2})\) is biexact relative to \(\mathbb{X}_{\{M_{1},M_{2}\}}\), and if each \(M_{i}\) is biexact, then \((M_{1},\omega_{1})*(M_{2},\omega_{2})\) is also biexact._ Proof.: We set \(M_{0}=(M_{1},\omega_{1})*_{r}(M_{2},\omega_{2})\) and \(M=(M_{1},\omega_{1})*(M_{2},\omega_{2})\). By Theorem 5.10, we have that the inclusion \(M_{0}\subset(\mathbb{S}_{\mathbb{X}_{1}}(M_{1}),\omega_{1})*_{r}(\mathbb{S}_ {\mathbb{X}_{2}}(M_{2}),\omega_{2})\) is \((M_{0}\subset M)\)-nuclear, and hence by Corollary 4.9 it suffices to check that we have an inclusion \[(\mathbb{S}_{\mathbb{X}_{1}}(M_{1}),\omega_{1})*_{r}(\mathbb{S}_{\mathbb{X}_{ 2}}(M_{2}),\omega_{2})\subset\mathbb{S}_{\mathbb{X}_{1}\vee\mathbb{X}_{2}}((M _{1},\omega_{1})*(M_{2},\omega_{2})).\] The same computation as in [11, Proposition 3.2] shows that for \(T\in\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\), \(a\in M_{1}^{\prime}\subset\mathbb{B}(L^{2}(M_{1},\omega_{1}))\) and \(b\in M_{2}^{\prime}\subset\mathbb{B}(L^{2}(M_{2},\omega_{2}))\), we have \([\lambda(T),\rho(b)]=0\) and \[[\lambda(T),\rho(a)]=e_{1}\lambda([T,a])e_{1}\in e_{1}\mathbb{K}_{\mathbb{X}_ {1}}^{\infty,1}(M_{1})e_{1}\subset\mathbb{K}_{\mathbb{X}_{1}\vee\mathbb{X}_{2} }^{\infty,1}(M). \tag{5}\] Similarly, if \(S\in\mathbb{S}_{\mathbb{X}_{2}}(M_{2})\), we have \[[\lambda(S),\rho(b)]=e_{2}\lambda([S,b])e_{2}\in e_{2}\mathbb{K}_{\mathbb{X}_ {2}}^{\infty,1}(M_{2})e_{2}\subset\mathbb{K}_{\mathbb{X}_{1}\vee\mathbb{X}_{2} }^{\infty,1}(M) \tag{6}\] and \([\lambda(S),\rho(a)]=0\). Note that for each \(T\in\mathbb{B}(L^{2}(M_{1},\omega_{1}))\) with \(\omega_{1}(T)=0\), we have \(\lambda(T)e_{2}=\lambda(Tp_{1})e_{2}\), and since \(\omega_{1}^{1/2}\) is cyclic for \(M_{1}\) we may find a sequence \(x_{n}\in M_{1}\) with \(\omega_{1}(x_{n})=0\) so that \(\|\lambda(Tp_{1})-\lambda(x_{n}p_{1})\|\to 0\). 
More generally, we see that, for any operator of the form \(\lambda(T_{1})\lambda(S_{1})\cdots\lambda(S_{n-1})\lambda(T_{n})e_{2}\) with \(T_{i}\in\mathbb{B}(L^{2}(M_{1},\omega_{1}))\) satisfying \(\omega_{1}(T_{i})=0\), and \(S_{i}\in\mathbb{B}(L^{2}(M_{2},\omega_{2}))\) satisfying \(\omega_{2}(S_{i})=0\), we have \[\lambda(T_{1})\lambda(S_{1})\cdots\lambda(S_{n-1})\lambda(T_{n})e_{2}=\lambda (T_{1}p_{1})\lambda(S_{1}p_{2})\cdots\lambda(S_{n-1}p_{2})\lambda(T_{n}p_{1}) e_{2},\] and hence may be approximated in uniform norm by operators of the form \(\lambda(x_{1})\lambda(y_{1})\cdots\lambda(y_{n-1})\lambda(x_{n})e_{2}\) where \(x_{i}\in M_{1}\) with \(\omega_{1}(x_{i})=0\), and \(y_{i}\in M_{2}\) with \(\omega_{2}(y_{i})=0\). This similarly follows for operators of any of the following forms \[\lambda(S_{1})\cdots\lambda(S_{n-1})\lambda(T_{n})e_{2},\] \[\lambda(S_{1})\lambda(T_{1})\cdots\lambda(T_{n-1})\lambda(S_{n})e_ {1},\] \[\lambda(T_{1})\cdots\lambda(T_{n-1})\lambda(S_{n})e_{1}.\] As a consequence, if \(T_{1},\ldots,T_{n}\in\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\) with \(\omega_{1}(T_{i})=0\), and if \(S_{1},\ldots,S_{n}\in\mathbb{S}_{\mathbb{X}_{2}}(M_{2})\) with \(\omega_{2}(S_{i})=0\), then from (5) and (6) it follows that for \(a\in M_{1}^{\prime}\subset\mathbb{B}(L^{2}(M_{1},\omega_{1}))\) or \(a\in M_{2}^{\prime}\subset\mathbb{B}(L^{2}(M_{2},\omega_{2}))\) we have \[\lambda(T_{1}S_{1}\cdots S_{k-1})[\lambda(T_{k}),\rho(a)]\lambda(S_{k}\cdots T _{n}S_{n})\in\mathbb{K}_{\mathbb{X}_{1}\vee\mathbb{X}_{2}}^{\infty,1}(M)\] and \[\lambda(T_{1}S_{1}\cdots S_{k-1}T_{k})[\lambda(S_{k}),\rho(a)]\lambda(T_{k+1} \cdots T_{n}S_{n})\in\mathbb{K}_{\mathbb{X}_{1}\vee\mathbb{X}_{2}}^{\infty,1}(M).\] By summing these terms we then have \[[\lambda(T_{1}S_{1}\cdots T_{n}S_{n}),\rho(a)]\in\mathbb{K}_{\mathbb{X}_{1} \vee\mathbb{X}_{2}}^{\infty,1}(M).\] This similarly holds for words starting with operators in \(\mathbb{S}_{\mathbb{X}_{2}}(M_{2})\) or ending with operators in
\(\mathbb{S}_{\mathbb{X}_{1}}(M_{1})\). Since \(M^{\prime}\) is generated by \(\rho(M_{1}^{\prime})\cup\rho(M_{2}^{\prime})\)[12, Theorem 1.6.5], it then follows from [1, Lemma 6.1] that \[(\mathbb{S}_{\mathbb{X}_{1}}(M_{1}),\omega_{1})*_{r}(\mathbb{S}_{\mathbb{X}_{2}}(M _{2}),\omega_{2})\subset\mathbb{S}_{\mathbb{X}_{1}\vee\mathbb{X}_{2}}((M_{1}, \omega_{1})*(M_{2},\omega_{2})).\] The previous proposition can be used to derive Kurosh type theorems for general free products of weakly exact von Neumann algebras similar to the results in [10]. However, more general Kurosh type theorems already exist in [11], and so we will not pursue this direction further.

### Biexactness relative to the amenable boundary piece

In [13, Section 6.1], distinguished canonical amenable boundary pieces \(I_{\mathrm{amen}}\subset\ell^{\infty}\Gamma\) and \(\mathbb{X}_{\mathrm{amen}}\subset\mathbb{B}(L^{2}M)\) were associated to groups and von Neumann algebras, respectively, which we will briefly describe. Given a discrete group \(\Gamma\), \(I_{\mathrm{amen}}\subset\ell^{\infty}\Gamma\) consists of all functions \(f\) that satisfy \(\lim_{i\to\infty}f(t_{i})=0\) whenever \(\{t_{i}\}\subset\Gamma\) is a net such that \(\lambda_{t_{i}}\to 0\) in the weak topology in \(C_{\lambda}^{*}\Gamma\). If \((M,\tau)\) is a tracial von Neumann algebra and \(\mathcal{K}\) is a Hilbert \(M\)-bimodule, we denote by \(\mathcal{K}^{0}\subset\mathcal{K}\) the subspace of left and right bounded vectors, i.e., those vectors \(\xi\in\mathcal{K}\) such that the operators \(L_{\xi},R_{\xi}:M\to\mathcal{K}\) given by \(L_{\xi}x=\xi x\) and \(R_{\xi}x=x\xi\) extend to bounded operators from \(L^{2}M\) to \(\mathcal{K}\).
We let \[B_{0}=\mathrm{sp}\{T_{\xi}\mid\xi\in\mathcal{H}^{0}\text{ for some Hilbert bimodule }\mathcal{H}\prec L^{2}M\,\overline{\otimes}\,L^{2}M\}.\] Note that \(B_{0}\) forms a \(*\)-subalgebra of \(\mathbb{B}(L^{2}M)\), since for \(\xi\in\mathcal{H}^{0}\) and \(\eta\in\mathcal{K}^{0}\) we have \(T_{\xi}^{*}T_{\eta}=T_{\bar{\xi}\otimes\eta}\) where \(\bar{\xi}\otimes\eta\in\bar{\mathcal{H}}\otimes_{M}\mathcal{K}\), and we have \(\bar{\mathcal{H}}\otimes_{M}\mathcal{K}\prec L^{2}M\otimes L^{2}M\) if \(\mathcal{H}\prec L^{2}M\otimes L^{2}M\), or if \(\mathcal{K}\prec L^{2}M\otimes L^{2}M\). The amenable \(M\)-boundary piece \(\mathbb{X}_{\mathrm{amen}}\) is defined to be \(B\mathbb{B}(L^{2}M)B\), where \(B\subset\mathbb{B}(L^{2}M)\) is the C\({}^{*}\)-algebra generated by \(B_{0}\), i.e., \(\mathbb{X}_{\mathrm{amen}}\) is the hereditary C\({}^{*}\)-subalgebra of \(\mathbb{B}(L^{2}M)\) generated by \(B_{0}\). It is clear that \(\mathbb{X}_{\mathrm{amen}}\) is an \(M\)-boundary piece, as \(M\) and \(JMJ\) are contained in the multiplier algebra of \(B\). It was shown in [13, Theorem 6.14] that if \(E:\mathbb{B}(\ell^{2}\Gamma)\to\ell^{\infty}\Gamma\) is the canonical conditional expectation, then \(E(\mathbb{X}_{\mathrm{amen}})\subset\ell^{\infty}\Gamma\) generates \(I_{\mathrm{amen}}\), and \(I_{\mathrm{amen}}\subset\mathbb{X}_{\mathrm{amen}}\subset\mathbb{B}(L^{2}(L \Gamma))\). As a consequence, we also obtain the following corollary from Theorem 6.2. **Corollary 6.17**.: _Let \(\Gamma\) be a discrete group, then \(\Gamma\) is biexact relative to its amenable boundary piece if and only if \(L\Gamma\) is biexact relative to its amenable boundary piece._ If \(M\) is a tracial von Neumann algebra and \(N\subset M\) is a von Neumann subalgebra, then by [13, Lemma 6.13] we have \(e_{N}\in\mathbb{X}_{\mathrm{amen}}\) if and only if \(N\) is amenable.
In particular, it follows that if \(M\) is biexact relative to \(N\) and \(N\) is amenable, then \(M\) is biexact relative to its amenable boundary piece. Note that if \(\Gamma\) is a countable group with a normal amenable subgroup \(\Sigma\lhd\Gamma\) such that \(\Gamma/\Sigma\) is biexact, then \(\Gamma\) is biexact relative to its amenable boundary piece. In fact, it suffices to construct a left \(\Gamma\)-equivariant embedding from \(\mathbb{S}(\Gamma/\Sigma)\) to \(\mathbb{S}_{I_{\mathrm{amen}}}(\Gamma)\). Consider the embedding \(\iota:\ell^{\infty}(\Gamma/\Sigma)\to\ell^{\infty}\Gamma\) given by \(\iota(f)(t)=f(t\Sigma)\). Note that \(\iota(c_{0}(\Gamma/\Sigma))\subset I_{\mathrm{amen}}\) since \(1_{\Sigma}\in I_{\mathrm{amen}}\) by [13, Lemma 6.12] and \(R_{t}\iota(f)(s)=\iota(R_{t\Sigma}f)(s)\) for \(t\in\Gamma\). It then follows that \(\iota(\mathbb{S}(\Gamma/\Sigma))\subset\mathbb{S}_{I_{\mathrm{amen}}}(\Gamma)\). Similarly, if \(M\) is biexact and \(R\) is amenable, then \(M\,\overline{\otimes}\,R\) is biexact relative to its amenable boundary piece. This follows easily from the proof of Proposition 6.14 and the above lemma. We also note that, in Proposition 8.3 below, we show that if \(R\) is amenable, \(\Gamma\) is biexact, and \(\Gamma\curvearrowright R\) is a trace-preserving action, then \(R\rtimes\Gamma\) is biexact relative to \(R\), and hence is biexact relative to its amenable boundary piece. We remark that the amenable boundary piece is still small enough to obtain structural indecomposability results. The following is a direct consequence of [13, Lemma 6.13] and the proof of Proposition 6.13. **Proposition 6.18** (cf. Remark 4.8 in [10]).: _Let \(M\) be a tracial von Neumann algebra that is biexact relative to its amenable boundary piece. If a von Neumann subalgebra \(N\subset M\) has no amenable direct summand, then \(N^{\prime}\cap M\) is amenable. 
In particular, any subfactor \(N\subset M\) is either McDuff or prime._ **Lemma 6.19**.: _Let \(M\) be a tracial von Neumann algebra and \(N\subset M\) a von Neumann subalgebra. Denote by \(\mathbb{X}_{\rm amen}\) the amenable \(M\)-boundary piece and \(\mathbb{X}_{\rm amen}^{N}\) the amenable \(N\)-boundary piece. Then \(e_{N}\mathbb{X}_{\rm amen}e_{N}\subset\mathbb{X}_{\rm amen}^{N}\) and \(\overline{e_{N}\mathbb{K}_{\mathbb{X}_{\rm amen}}(M)e_{N}}\subset\mathbb{K}_{ \mathbb{X}_{\rm amen}^{N}}(N)\)._ Proof.: Let \(\mathcal{H}\) be a universal Hilbert \(M\)-bimodule that is weakly contained in the coarse bimodule and denote by \(\mathcal{H}^{0}\) the set of left and right bounded vectors in \(\mathcal{H}\). For \(\xi\in\mathcal{H}^{0}\), note that \(e_{N}T_{\xi}^{*}T_{\xi}e_{N}=T_{\bar{\xi}\otimes\xi}\), where \(\bar{\xi}\otimes\xi\) is viewed in \({}_{N}\bar{\mathcal{H}}\otimes_{M}\mathcal{H}_{N}\prec L^{2}N\otimes L^{2}N\). It follows that \(e_{N}(\mathbb{X}_{\rm amen})_{+}e_{N}\subset\mathbb{X}_{\rm amen}^{N}\). By Corollary 3.8, \(\operatorname{Ad}(e_{N}):\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}N)\) is continuous from the weak \(M\)-topology (resp. weak \(M^{\prime}\)-topology) to the weak \(N\)-topology (resp. weak \(N^{\prime}\)-topology), and hence we then have \(\overline{e_{N}\mathbb{K}_{\mathbb{X}_{\rm amen}}(M)e_{N}}\subset\mathbb{K}_{ \mathbb{X}_{\rm amen}^{N}}(N)\). As a consequence of Lemma 6.19, Proposition 6.18 and Proposition 6.10, we obtain the following proposition. **Proposition 6.20**.: _Let \(M\) be a tracial von Neumann algebra that is biexact relative to its amenable boundary piece. Then, any von Neumann subalgebra \(N\subset M\) is also biexact relative to its amenable \(N\)-boundary piece._

## 7. Additional formulations and examples of biexact von Neumann algebras

We now give some equivalent characterizations of biexactness for von Neumann algebras that will be useful for providing additional examples.
**Lemma 7.1**.: _Let \(M\) be a von Neumann algebra and \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) be an \(M\)-boundary piece. There then exists a u.c.p. \(M\)-bimodular map \(\phi:\mathbb{B}(L^{2}M)\to\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\)._ Proof.: Let \(\{e_{i}\}_{i}\subset\mathbb{K}_{\mathbb{X}}(M)\) be an approximate unit that is quasi-central with respect to \(M\), and let \(\phi_{i}:\mathbb{B}(L^{2}M)\to\mathbb{K}_{\mathbb{X}}(M)\subset\mathbb{K}_{ \mathbb{X}}(M)^{**}\) be the c.c.p. map given by \(\phi_{i}(T)=e_{i}^{1/2}Te_{i}^{1/2}\). Then, any point-ultraweak cluster point of \(\{\phi_{i}\}_{i}\) gives a u.c.p. \(M\)-bimodular map \(\phi:\mathbb{B}(L^{2}M)\to\mathbb{K}_{\mathbb{X}}(M)^{**}\), and we may then take the compression to \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\). **Lemma 7.2**.: _Let \(M\) be a von Neumann algebra with an \(M\)-boundary piece \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\). Suppose \(A\subset\mathbb{B}(L^{2}M)\) is a \(\mathrm{C}^{*}\)-subalgebra containing the identity operator, and \(\phi:A\to\mathbb{B}(L^{2}M)\) is a u.c.p. map such that \(\phi(a)-a\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for each \(a\in A\), then \(\phi(a)-a\in\mathbb{K}_{\mathbb{X}}(M)\) for each \(a\in A\)._ Proof.: Take the Stinespring representation \(\phi(a)=V^{*}\pi(a)V\) where \(\mathcal{H}\) is a Hilbert space, \(V:L^{2}M\to\mathcal{H}\) is an isometry, and \(\pi:A\to\mathbb{B}(\mathcal{H})\) is a \(*\)-representation. If \(a\in A\), then we have \(|\pi(a)V-Va|^{2}=\phi(a^{*}a)-\phi(a^{*})a-a^{*}\phi(a)+a^{*}a\in\mathbb{K}_{ \mathbb{X}}^{\infty,1}(M)\), and hence \(\pi(a)V-Va\in\mathbb{K}_{\mathbb{X}}^{L}(M,\mathcal{H})\). Thus, \(\phi(a)-a=V^{*}(\pi(a)V-Va)\in\mathbb{K}_{\mathbb{X}}^{L}(M)\). Replacing \(a\) by \(a^{*}\) shows that we also have \(\phi(a^{*})-a^{*}\in\mathbb{K}_{\mathbb{X}}^{L}(M)\), and hence \(\phi(a)-a\in\mathbb{K}_{\mathbb{X}}^{L}(M)\cap\mathbb{K}_{\mathbb{X}}^{L}(M)^ {*}=\mathbb{K}_{\mathbb{X}}(M)\). 
If \(M\) is a von Neumann algebra with a normal faithful state \(\mu\), then we continue to denote by \(\mu\) the vector state on \(\mathbb{B}(L^{2}(M,\mu))\) given by \(\mathbb{B}(L^{2}(M,\mu))\ni T\mapsto\langle T\mu^{1/2},\mu^{1/2}\rangle\), where \(\mu^{1/2}\) is the cyclic vector given by the GNS-construction of \(M\) with respect to \(\mu\). We recall that \(r_{\mu}\) denotes the seminorm on \(\mathbb{B}(L^{2}(M,\mu))\) given by \[r_{\mu}(T)=\inf\{(\mu(a^{*}a)+\mu(b^{*}b))^{1/2}\|Z\|(\mu(c^{*}c)+\mu(d^{*}d))^{ 1/2}\}\] where the infimum is taken over all decompositions \(T=(\begin{smallmatrix}a\\ b\end{smallmatrix})^{*}Z\,(\begin{smallmatrix}c\\ d\end{smallmatrix})\) where \(a,c\in M^{\prime}\), \(b,d\in M\) and \(Z\in\mathbb{M}_{2}(\mathbb{B}(L^{2}(M,\mu)))\). **Theorem 7.3**.: _Let \(M\) be a separable von Neumann algebra with a normal faithful state \(\mu\). Let \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) be an \(M\)-boundary piece. The following conditions are equivalent:_ 1. \(M\) _is biexact relative to_ \(\mathbb{X}\)_._ 2. \(M\) _is weakly exact, and for every_ \(\varepsilon>0\) _and finite-dimensional operator systems_ \(E\subset F\subset\mathbb{B}(L^{2}M)\) _with_ \(E\subset M\) _there exists a u.c.p. map_ \(\phi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) _such that_ \(d_{r_{\mu}}(x-\phi(x),\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))<\varepsilon\)_, and_ \(d_{r_{\mu}}([JxJ,\phi(T)],\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))<\varepsilon\) _for all_ \(x\in E\) _and_ \(T\in F\)_._ 3.
_There exists a separable ultraweakly dense_ \(\mathrm{C}^{*}\)_-subalgebra_ \(M_{0}\subset M\) _and a separable_ \(\mathrm{C}^{*}\)_-subalgebra_ \(\mathbb{B}\subset\mathbb{B}(L^{2}M)\) _containing_ \(M_{0}\) _and the identity operator such that the inclusion_ \(M_{0}\subset\mathbb{B}\) _is_ \((M_{0}\subset M)\)_-nuclear and such that for every_ \(\varepsilon>0\) _and finite-dimensional operator systems_ \(E\subset F\subset\mathbb{B}\) _with_ \(E\subset M_{0}\) _there exists a u.c.p. map_ \(\phi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) _such that_ \(d_{r_{\mu}}(x-\phi(x),\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))<\varepsilon\)_, and_ \(d_{r_{\mu}}([JxJ,\phi(T)],\mathbb{K}_{\mathbb{X}}^{\infty,1}(M))<\varepsilon\) _for all_ \(x\in E\) _and_ \(T\in F\)_._ 4. \(M\) _is weakly exact, and for every separable_ \(\mathrm{C}^{*}\)_-subalgebra_ \(M_{0}\subset M\) _and separable_ \(\mathrm{C}^{*}\)_-subalgebra_ \(\mathbb{B}\subset\mathbb{B}(L^{2}M)\) _containing_ \(M_{0}\) _and the identity operator, there exists a u.c.p. map_ \(\phi:\mathbb{B}\to\mathbb{S}_{\mathbb{X}}(M)\) _satisfying_ \(\phi(x)-x\in\mathbb{K}_{\mathbb{X}}(M)\) _for all_ \(x\in M_{0}\)_._ 5. _There exists a separable ultraweakly dense_ \(\mathrm{C}^{*}\)_-subalgebra_ \(M_{0}\subset M\) _and a separable_ \(\mathrm{C}^{*}\)_-subalgebra_ \(\mathbb{B}\subset\mathbb{B}(L^{2}M)\) _containing_ \(M_{0}\) _and the identity operator such that the inclusion_ \(M_{0}\subset\mathbb{B}\) _is_ \((M_{0}\subset M)\)_-nuclear, and such that there exists a normal u.c.p. map_ \(\phi:\mathbb{B}\to\mathbb{S}_{\mathbb{X}}(M)\) _satisfying_ \(\phi(x)-x\in\mathbb{K}_{\mathbb{X}}(M)\) _for all_ \(x\in M_{0}\)_._ Proof.: First note that (1) \(\implies\) (2) is trivial. We now show that (2) \(\implies\) (4) and (3) \(\implies\) (5) using an idea inspired by Exercise 15.1.1 in [1]. 
Let \(M_{0}\subset M\) be a separable \(\mathrm{C}^{*}\)-subalgebra, and let \(\mathbb{B}\subset\mathbb{B}(L^{2}M)\) be a separable \(\mathrm{C}^{*}\)-algebra containing \(M_{0}\) and the identity operator. Enumerate dense subsets \(\{x_{k}\}_{k}\subset M_{0}\) and \(\{T_{k}\}_{k}\subset\mathbb{B}\). By hypothesis, for each \(n\geq 1\) there exists a u.c.p. map \(\phi_{n}:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) so that the following conditions hold: 1. For each \(i\leq n\), there exists \(A_{i,0,n}\in\mathbb{K}_{\mathbb{X}}(M)\) such that \(r_{\mu}(x_{i}-\phi_{n}(x_{i})-A_{i,0,n})<2^{-n}\). 2. For each \(1\leq i,j\leq n\), there exists \(A_{i,j,n}\in\mathbb{K}_{\mathbb{X}}(M)\) such that \(r_{\mu}([Jx_{i}J,T_{j}]-A_{i,j,n})<2^{-n}\). Using the definition of the norm \(r_{\mu}\) there then exist, for each \(n\geq 1\), \(1\leq i\leq n\), and \(0\leq j\leq n\), elements \(a_{i,j,n},c_{i,j,n}\in M^{\prime}\), \(b_{i,j,n},d_{i,j,n}\in M\) and \(Z_{i,j,n}\in\mathbb{B}(L^{2}M)\) with \(\|Z_{i,j,n}\|=1\) and \[(\mu(a_{i,j,n}^{*}a_{i,j,n})+\mu(b_{i,j,n}^{*}b_{i,j,n}))^{1/2}=(\mu(c_{i,j,n} ^{*}c_{i,j,n})+\mu(d_{i,j,n}^{*}d_{i,j,n}))^{1/2}<2^{-n}\] so that \[x_{i}-\phi_{n}(x_{i})-A_{i,0,n}=\left(\begin{smallmatrix}a_{i,0,n}\\ b_{i,0,n}\end{smallmatrix}\right)^{*}Z_{i,0,n}\left(\begin{smallmatrix}c_{i,0,n} \\ d_{i,0,n}\end{smallmatrix}\right),\ \ \ \ [Jx_{i}J,T_{j}]-A_{i,j,n}=\left(\begin{smallmatrix}a_{i,j,n}\\ b_{i,j,n}\end{smallmatrix}\right)^{*}Z_{i,j,n}\left(\begin{smallmatrix}c_{i,j,n} \\ d_{i,j,n}\end{smallmatrix}\right).\] We let \(\tilde{\mathbb{B}}\subset\mathbb{B}(L^{2}M)\) denote the separable \(\mathrm{C}^{*}\)-algebra generated by \(\mathbb{B}\), together with the elements \(A_{i,j,n},Z_{i,j,n},a_{i,j,n},c_{i,j,n}\) for all \(n\geq 1\), \(1\leq i\leq n\), and \(0\leq j\leq n\). We let \(I=\tilde{\mathbb{B}}\cap\mathbb{K}_{\mathbb{X}}(M)\) and note that \(I\) is a separable ideal in \(\tilde{\mathbb{B}}\). 
We may, therefore, choose an approximate unit \(\{e_{n}\}_{n}\subset I_{+}\), so that whenever \(n\geq 1\), \(1\leq i\leq n\), and \(0\leq j\leq n\) we have \[\|(e_{n+1}-e_{n})^{1/2}A_{i,j,n}(e_{n+1}-e_{n})^{1/2}\|<2^{-n},\ \ \ \ \ \|[e_{n},x_{i}]\|<2^{-n},\] \[\|[(e_{n+1}-e_{n})^{1/2},a_{i,j,n}]\|<2^{-n}\delta_{i,j,n},\ \ \ \ \|[(e_{n+1}-e_{n})^{1/2},b_{i,j,n}]\|<2^{-n} \delta_{i,j,n},\] \[\|[(e_{n+1}-e_{n})^{1/2},c_{i,j,n}]\|<2^{-n}\delta_{i,j,n},\ \ \ \ \text{and}\ \ \ \|[(e_{n+1}-e_{n})^{1/2},d_{i,j,n}]\|<2^{-n} \delta_{i,j,n},\] where \(\delta_{i,j,n}=(1+\|a_{i,j,n}\|)^{-1}(1+\|b_{i,j,n}\|)^{-1}(1+\|c_{i,j,n}\|)^{-1}( 1+\|d_{i,j,n}\|)^{-1}\). We define the u.c.p. map \(\phi:\mathbb{B}\to\mathbb{B}(L^{2}M)\) by \[\phi(T)=e_{1}Te_{1}+\sum_{n=1}^{\infty}(e_{n+1}-e_{n})^{1/2}\phi_{n}(T)(e_{n+1 }-e_{n})^{1/2}.\] If we denote by \(\equiv\) equality modulo \(\mathbb{K}_{\mathbb{X}}(M)\), it is then easy to see that for each \(i\geq 1\) and \(k\geq 1\) we have \[\phi(x_{i})-x_{i} \equiv\sum_{n=1}^{\infty}(e_{n+1}-e_{n})^{1/2}(\phi_{n}(x_{i})-x_{ i})(e_{n+1}-e_{n})^{1/2}\] \[\equiv\sum_{n=k}^{\infty}(e_{n+1}-e_{n})^{1/2}(\phi_{n}(x_{i})-x_{ i})(e_{n+1}-e_{n})^{1/2}\] \[\equiv\sum_{n=k}^{\infty}(e_{n+1}-e_{n})^{1/2}(\phi_{n}(x_{i})-x_{ i}-A_{i,0,n})(e_{n+1}-e_{n})^{1/2}\] \[\equiv\sum_{n=k}^{\infty}\left(\begin{smallmatrix}a_{i,0,n}\\ b_{i,0,n}\end{smallmatrix}\right)^{*}\left(\begin{smallmatrix}(e_{n+1}-e_{n}) ^{1/2}&0\\ 0&(e_{n+1}-e_{n})^{1/2}\end{smallmatrix}\right)Z_{i,0,n}\left(\begin{smallmatrix} (e_{n+1}-e_{n})^{1/2}&0\\ 0&(e_{n+1}-e_{n})^{1/2}\end{smallmatrix}\right)\left(\begin{smallmatrix}c_{i,0,n}\\ d_{i,0,n}\end{smallmatrix}\right)\] Since \(\left\|\left(\begin{smallmatrix}(e_{n+1}-e_{n})^{1/2}&0\\ 0&(e_{n+1}-e_{n})^{1/2}\end{smallmatrix}\right)Z_{i,0,n}\left(\begin{smallmatrix} (e_{n+1}-e_{n})^{1/2}&0\\ 0&(e_{n+1}-e_{n})^{1/2}\end{smallmatrix}\right)\right\|\leq 1\), it then follows that 
\[d_{r_{\mu}}\left(\phi(x_{i})-x_{i},\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\right)\] \[\leq\limsup_{k\to\infty}\left(\sum_{n=k}^{\infty}\mu(a_{i,0,n}^{*}a_{i,0,n})+\mu(b_{i,0,n}^{*}b_{i,0,n})\right)^{1/2}\left(\sum_{n=k}^{\infty}\mu(c_{i,0,n}^{*}c_{i,0,n})+\mu(d_{i,0,n}^{*}d_{i,0,n})\right)^{1/2}=0.\] Since \(k\geq 1\) was arbitrary, we then have \(\phi(x_{i})-x_{i}\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for each \(i\geq 1\). Since \(\{x_{i}\}_{i\geq 1}\) is dense in \(M_{0}\), it follows that \(\phi(x)-x\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for each \(x\in M_{0}\), and hence \(\phi(x)-x\in\mathbb{K}_{\mathbb{X}}(M)\) for each \(x\in M_{0}\) by Lemma 7.2. A computation similar to the one above shows that \([JxJ,\phi(T)]\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for each \(T\in\mathbb{B}\) and \(x\in M_{0}\), and hence \(\phi(\mathbb{B})\subset\mathbb{S}_{\mathbb{X}}(M)\). To see the implications \((2)\implies(3)\) and \((4)\implies(5)\), just note that, from Proposition 4.11 and Corollary 3.1.6 in [13], there exists a separable ultraweakly dense C\({}^{*}\)-subalgebra \(M_{0}\subset M\) so that the inclusion \(M_{0}\subset\mathbb{B}(L^{2}M)\) is \((M_{0}\subset M)\)-exact. If we take sequences of u.c.p. maps \(\phi_{n}:M_{0}\to\mathbb{M}_{k(n)}(\mathbb{C})\) and \(\psi_{n}:\mathbb{M}_{k(n)}(\mathbb{C})\to\mathbb{B}(L^{2}M)\) that realize \((M_{0}\subset M)\)-exactness, then letting \(\mathbb{B}\) be the (separable) C\({}^{*}\)-algebra generated by \(M_{0}\) and \(\cup_{n\geq 1}\psi_{n}(\mathbb{M}_{k(n)}(\mathbb{C}))\) we have that the inclusion \(M_{0}\subset\mathbb{B}\) is \((M_{0}\subset M)\)-nuclear. Suppose now that (5) holds and let \(M_{0}\subset M\), \(\mathbb{B}\subset\mathbb{B}(L^{2}M)\), and \(\phi:\mathbb{B}\to\mathbb{S}_{\mathbb{X}}(M)\) satisfy the condition in (5). 
We let \(p\in\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\subset\mathbb{B}(L^{2}M)^{\sharp*}\) denote the support projection of \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\) so that, as an operator subsystem of the von Neumann algebra \(\mathbb{B}(L^{2}M)^{\sharp*}=\begin{pmatrix}\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}&p\mathbb{B}(L^{2}M)^{\sharp*}p^{\perp}\\ p^{\perp}\mathbb{B}(L^{2}M)^{\sharp*}p&p^{\perp}\mathbb{B}(L^{2}M)^{\sharp*}p^{\perp}\end{pmatrix}\), we have \(\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}=\begin{pmatrix}\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}&p\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}p^{\perp}\\ p^{\perp}\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}p&p^{\perp}\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}p^{\perp}\end{pmatrix}.\) We let \(\phi_{0}:\mathbb{B}(L^{2}M)\to\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\) be a u.c.p. \(M\)-bimodular map coming from Lemma 7.1. Since \(\phi(x)-x\in\mathbb{K}_{\mathbb{X}}(M)\) for all \(x\in M_{0}\), it follows that \(\tilde{\phi}:\mathbb{B}\to\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\) defined by \(\tilde{\phi}(T)=\phi_{0}(T)+p^{\perp}\phi(T)p^{\perp}\) is a u.c.p. \(M_{0}\)-bimodular map. Since the inclusion \(M_{0}\subset\mathbb{B}\) is \((M_{0}\subset M)\)-nuclear, it then follows that the inclusion \(M_{0}\subset\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\) is also \((M_{0}\subset M)\)-nuclear, and hence the inclusion \(M\subset\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\) is \(M\)-nuclear by Corollary 4.9. Lemma 4.4 then shows that \(M\) is biexact, thereby proving the implication \((5)\implies(1)\). **Lemma 7.4**.: _Let \(M\) be a separable von Neumann algebra and let \(\mathbb{X}_{1},\mathbb{X}_{2},\mathbb{Y}\subset\mathbb{B}(L^{2}M)\) be boundary pieces such that each \(\mathbb{X}_{k}\) is generated as a hereditary C\({}^{*}\)-subalgebra of \(\mathbb{B}(L^{2}M)\) by a C\({}^{*}\)-subalgebra \(A_{k}\subset\mathbb{X}_{k}\). Suppose also that \(M,M^{\prime}\subset M(A_{1})\cap M(A_{2})\) and \(A_{1}A_{2}A_{1}\subset\mathbb{Y}\). 
If \(M\) is biexact relative to both \(\mathbb{X}_{1}\) and \(\mathbb{X}_{2}\), then \(M\) is biexact relative to \(\mathbb{Y}\)._ Proof.: We fix a normal faithful state \(\mu\) on \(M\). First note that we may take quasi-central approximate units \(\{e_{i}^{2}\}_{i}\) for \(A_{1}\) and \(\{f_{j}^{2}\}_{j}\) for \(A_{2}\) that are also approximate units for \(\mathbb{X}_{1}\) and \(\mathbb{X}_{2}\), respectively. For notational simplicity we will write \(e_{i}^{\perp}=(1-e_{i}^{2})^{1/2}\) and \(f_{j}^{\perp}=(1-f_{j}^{2})^{1/2}\). Since \(M,M^{\prime}\subset M(A_{1})\cap M(A_{2})\) we have \[\limsup_{i\to\infty}r_{\mu}(e_{i}Se_{i})\leq r_{\mu}(S),\qquad\limsup_{i\to\infty}r_{\mu}(e_{i}^{\perp}Se_{i}^{\perp})\leq r_{\mu}(S),\] for \(S\in\mathbb{B}(L^{2}M)\), and similarly we have \(\limsup_{j\to\infty}r_{\mu}(f_{j}Tf_{j})\leq r_{\mu}(T)\) and \(\limsup_{j\to\infty}r_{\mu}(f_{j}^{\perp}Tf_{j}^{\perp})\leq r_{\mu}(T)\) for \(T\in\mathbb{B}(L^{2}M)\). Also, note that if \(T\in\mathbb{K}_{\mathbb{X}_{1}}^{\infty,1}(M)\), then for every \(\varepsilon>0\) there exists \(S\in\mathbb{X}_{1}\) so that \(r_{\mu}(T-S)<\varepsilon\). Since \(\{e_{i}^{2}\}_{i}\) is an approximate unit for \(\mathbb{X}_{1}\) we then have \(\|e_{i}^{\perp}Se_{i}^{\perp}\|\to 0\), and hence \(\limsup_{i\to\infty}r_{\mu}(e_{i}^{\perp}Te_{i}^{\perp})\leq\varepsilon\). Since \(\varepsilon>0\) was arbitrary, we have \(\lim_{i\to\infty}r_{\mu}(e_{i}^{\perp}Te_{i}^{\perp})=0\), and we similarly have \(\lim_{j\to\infty}r_{\mu}(f_{j}^{\perp}Tf_{j}^{\perp})=0\) for all \(T\in\mathbb{K}_{\mathbb{X}_{2}}^{\infty,1}(M)\). We fix \(\varepsilon>0\) and \(E\subset F\subset\mathbb{B}(L^{2}(M,\mu))\) finite subsets such that \(E\subset M\). Since \(M\) is biexact relative to both \(\mathbb{X}_{1}\) and \(\mathbb{X}_{2}\), there exist u.c.p. maps \(\phi_{k}:\mathbb{B}(L^{2}(M,\mu))\to\mathbb{S}_{\mathbb{X}_{k}}(M)\) such that \(r_{\mu}(\phi_{k}(x)-x)<\varepsilon\) for each \(x\in E\) and \(k\in\{1,2\}\). 
From the discussion above, there then exists some \(j\) so that we have \[r_{\mu}(f_{j}^{\perp}(x-\phi_{2}(x))f_{j}^{\perp})<\varepsilon,\qquad r_{\mu}(f_{j}^{\perp}[\phi_{2}(T),JxJ]f_{j}^{\perp})<\varepsilon,\] \[\|[f_{j},x]\|,\|[f_{j}^{\perp},x]\|<\varepsilon,\] for all \(x\in E\) and \(T\in F\), and we may then choose some \(i\) so that \[r_{\mu}(e_{i}^{\perp}(x-\phi_{1}(x))e_{i}^{\perp})<\varepsilon,\qquad r_{\mu}(e_{i}^{\perp}[\phi_{1}(T),JxJ]e_{i}^{\perp})<\varepsilon,\] \[r_{\mu}(e_{i}f_{j}^{\perp}[\phi_{2}(T),JxJ]f_{j}^{\perp}e_{i})<\varepsilon,\qquad\|[e_{i},x]\|,\|[e_{i}^{\perp},x]\|<\varepsilon,\] for all \(x\in E\) and \(T\in F\). We define \(\psi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) by \[\psi(T)=e_{i}^{\perp}\phi_{1}(T)e_{i}^{\perp}+e_{i}f_{j}^{\perp}\phi_{2}(T)f_{j}^{\perp}e_{i}+e_{i}f_{j}Tf_{j}e_{i}.\] For \(x\in E\) we have \[r_{\mu}(x-\psi(x)) \leq r_{\mu}(e_{i}^{\perp}\phi_{1}(x)e_{i}^{\perp}-(e_{i}^{\perp})^{2}x)+r_{\mu}(e_{i}f_{j}^{\perp}\phi_{2}(x)f_{j}^{\perp}e_{i}-e_{i}(f_{j}^{\perp})^{2}e_{i}x)+r_{\mu}(e_{i}f_{j}xf_{j}e_{i}-e_{i}f_{j}^{2}e_{i}x)\] \[<r_{\mu}(e_{i}^{\perp}(\phi_{1}(x)-x)e_{i}^{\perp})+r_{\mu}(e_{i}f_{j}^{\perp}(\phi_{2}(x)-x)f_{j}^{\perp}e_{i})+5\varepsilon\|x\|<2\varepsilon+5\varepsilon\|x\|.\] Since \(e_{i}f_{j}e_{i}\in A_{1}A_{2}A_{1}\subset\mathbb{Y}\), we have \(f_{j}e_{i}\in\mathbb{B}(L^{2}M)\mathbb{Y}\subset\mathbb{K}_{\mathbb{Y}}^{L}(M)\) and so \(e_{i}f_{j}\mathbb{B}(L^{2}M)f_{j}e_{i}\subset\mathbb{K}_{\mathbb{Y}}(M)\). 
Hence, for \(T\in F\) and \(x\in E\), we have \[d_{r_{\mu}}([\psi(T),JxJ],\mathbb{K}_{\mathbb{Y}}^{\infty,1}(M)) <r_{\mu}([e_{i}^{\perp}\phi_{1}(T)e_{i}^{\perp},JxJ])+r_{\mu}([e_{i}f_{j}^{\perp}\phi_{2}(T)f_{j}^{\perp}e_{i},JxJ])\] \[\leq r_{\mu}(e_{i}^{\perp}[\phi_{1}(T),JxJ]e_{i}^{\perp})+r_{\mu}(e_{i}f_{j}^{\perp}[\phi_{2}(T),JxJ]f_{j}^{\perp}e_{i})+6\|T\|\varepsilon\] \[\leq 2\varepsilon+6\|T\|\varepsilon.\] Part (2) of Theorem 7.3 then gives that \(M\) is biexact relative to \(\mathbb{Y}\). **Lemma 7.5**.: _Let \(M\) be a von Neumann algebra and, for each \(1\leq i\leq n\), let \(B_{i}\subset M\) be a von Neumann subalgebra with a normal conditional expectation \(E_{B_{i}}:M\to B_{i}\). Let \(e_{B_{i}}\) denote the corresponding Jones projection. Consider_ \[x_{1}Jy_{1}Je_{B_{i_{1}}}x_{2}Jy_{2}Je_{B_{i_{2}}}\cdots x_{n}Jy_{n}Je_{B_{i_{n}}} \in\mathbb{B}(L^{2}M)\] _where \(x_{1},\dots,x_{n},y_{1},\dots,y_{n}\in M\). Then for each \(1\leq k\leq n\), this is continuous as a function of \(x_{k}\) or of \(y_{k}\) from the unit ball of \(M\) with the ultrastrong-\({}^{*}\) topology to \(\mathbb{B}(L^{2}M)\) with either of the locally convex topologies generated by \(\{r^{\ell}_{\omega}\}_{\omega\in(M_{*})_{+}}\) or \(\{r^{r}_{\omega}\}_{\omega\in(M_{*})_{+}}\), where \(r^{\ell}_{\omega}\) and \(r^{r}_{\omega}\) are defined as in Section 3.2._ Proof.: We will show continuity in the variable \(x_{k}\) with respect to the topology generated by \(\{r^{\ell}_{\omega}\}_{\omega\in(M_{*})_{+}}\), as the case of the variable \(y_{k}\) and that of the other topology follow similarly. We will prove the result by induction on \(k\) (with \(n\geq k\) arbitrary). First note that if \(k=1\), then the result is obvious. 
In general, if we set \(x=x_{1}Jy_{1}Je_{B_{i_{1}}}x_{2}Jy_{2}Je_{B_{i_{2}}}\cdots e_{B_{i_{k-1}}}x_{k}\) and \(y=Jy_{k}Je_{B_{i_{k}}}\cdots x_{n}Jy_{n}Je_{B_{i_{n}}}\), then by Proposition 3.9 and Corollary 3.10 for any positive normal linear functional \(\mu\) on \(M\) we have \[r^{\ell}_{\mu}(x_{1}Jy_{1}Je_{B_{i_{1}}}x_{2}Jy_{2}Je_{B_{i_{2}}}\cdots x_{n}Jy_{n}Je_{B_{i_{n}}})\leq r_{\mu}(xx^{*})^{1/2}\|y\|\leq r^{\ell}_{\mu}(xx^{*})^{1/2}\|y\|.\] But \[xx^{*}=x_{1}Jy_{1}Je_{B_{i_{1}}}\cdots e_{B_{i_{k-2}}}x_{k-1}E_{B_{i_{k-1}}}(x_{k}x_{k}^{*})Jy_{k-1}Je_{B_{i_{k-1}}}Jy_{k-1}^{*}Jx_{k-1}^{*}e_{B_{i_{k-2}}}\cdots e_{B_{i_{1}}}Jy_{1}^{*}Jx_{1}^{*},\] and so, by the induction hypothesis, if \(x_{k}\) is in the unit ball, then as \(x_{k}\) approaches \(0\) in the ultrastrong-\({}^{*}\) operator topology \(r^{\ell}_{\mu}(xx^{*})\) also approaches zero, proving the induction step. The following theorem gives an analog of [1, Proposition 15.2.7] in the setting of von Neumann algebras. **Theorem 7.6**.: _Let \(M\) be a separable von Neumann algebra, let \(\mathbb{X},\mathbb{Y}\subset\mathbb{B}(L^{2}M)\) be boundary pieces and let \(\mathcal{F}=\{(B_{i},E_{i})\}_{i\in I}\) be a family of von Neumann subalgebras \(B_{i}\subset M\), with normal faithful conditional expectations \(E_{i}:M\to B_{i}\). Suppose that there is an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra \(M_{0}\subset M\) such that \(\mathbb{B}(L^{2}M)\mathbb{X}A\subset\mathbb{K}^{L}_{\mathbb{Y}}(M)\), where \(A\subset\mathbb{B}(L^{2}M)\) is the \(\mathrm{C}^{*}\)-algebra generated by \(\{xJyJe_{B_{i}}\mid x,y\in M_{0},i\in I\}\). 
If \(M\) is biexact relative to both \(\mathbb{X}\) and \(\mathbb{X}_{\mathcal{F}}\), then \(M\) is biexact relative to \(\mathbb{Y}\)._ Proof.: If \(S\in\mathbb{B}(L^{2}M)\), \(T\in\mathbb{X}\), \(x_{1},x_{2},\dots,x_{n},y_{1},\dots,y_{n}\in M_{0}\), and \(B_{i_{1}},\dots,B_{i_{n}}\in\mathcal{F}\), then by hypothesis we have \[STx_{1}Jy_{1}Je_{B_{i_{1}}}x_{2}Jy_{2}Je_{B_{i_{2}}}\cdots x_{n}Jy_{n}Je_{B_{i_{n}}}\in\mathbb{K}^{L}_{\mathbb{Y}}(M).\] By Lemma 7.5, it then follows that if \(A_{1}\) denotes the \(\mathrm{C}^{*}\)-algebra generated by \(MJMJ\mathbb{X}\), and \(A_{2}\) denotes the \(\mathrm{C}^{*}\)-algebra generated by \(\{MJMJe_{B_{i}}\mid i\in I\}\), then we still have \(\mathbb{B}(L^{2}M)A_{1}A_{2}\subset\mathbb{K}^{L}_{\mathbb{Y}}(M)\), and hence \(A_{2}A_{1}A_{2}\subset\mathbb{K}_{\mathbb{Y}}(M)\) so that the result then follows from Lemma 7.4. **Corollary 7.7**.: _Suppose \(M\) is a separable von Neumann algebra and \(P,Q\subset M\) are von Neumann subalgebras with expectation such that for an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra \(M_{0}\subset M\) we have \(e_{P}xJyJe_{Q}\in\mathbb{K}(L^{2}M)\) for all \(x,y\in M_{0}\). If \(M\) is biexact relative to both \(P\) and \(Q\), then \(M\) is biexact._ ### Biexactness and malleable deformations We now refine some of the techniques introduced in Section 9 of [1] to show how biexactness can be deduced from the existence of Popa's malleable deformations. **Theorem 7.8**.: _Let \(M\) be a separable weakly exact von Neumann algebra with a normal faithful state \(\varphi\), and let \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) be an \(M\)-boundary piece. 
Suppose \(M\subset\tilde{M}\) is an extension with a normal faithful conditional expectation \(E_{M}:\tilde{M}\to M\), and such that, as a Hilbert \(M\)-bimodule, \(L^{2}(\tilde{M},\varphi)\ominus L^{2}(M,\varphi)\) is weakly contained in the coarse bimodule, where by abuse of notation we also let \(\varphi\) denote the state \(\varphi\circ E_{M}\) on \(\tilde{M}\). Suppose that we have a sequence of state-preserving automorphisms \(\alpha_{n}\in\operatorname{Aut}(\tilde{M},\varphi)\) so that \(\alpha_{n}(x)\to x\) ultraweakly for each \(x\in M\). Let \(e_{M}\) denote the orthogonal projection onto \(L^{2}(M,\varphi)\), and let \(V_{n}:L^{2}(\tilde{M},\varphi)\to L^{2}(\tilde{M},\varphi)\) denote the unitary given by \(V_{n}(x\varphi^{1/2})=\alpha_{n}^{-1}(x)\varphi^{1/2}\). Suppose also that, for each \(n\geq 1\), we have \(e_{M}V_{n}e_{M}\in\mathbb{K}_{\mathbb{X}}^{L}(M,L^{2}(\tilde{M},\varphi))\). Then \(M\) is biexact relative to \(\mathbb{X}\). Proof.: We define the u.c.p. map \(\phi_{n}:\mathbb{B}(L^{2}\tilde{M})\to\mathbb{B}(L^{2}M)\) by \(\phi_{n}(T)=e_{M}V_{n}^{*}TV_{n}e_{M}\). Since \(\alpha_{n|M}\) converges to the identity pointwise in the ultraweak topology, it follows that for \(x\in M\) we have that \[|[V_{n},x]e_{M}|^{2}=E_{M}(x^{*}x-\alpha_{n}(x^{*})x-x^{*}\alpha_{n}(x)+\alpha_{n}(x^{*}x))\] converges to \(0\) in the ultraweak topology, so that \(|[V_{n},x]e_{M}|\) converges to \(0\) in the strong operator topology in \(M\). Considering the polar decomposition \([V_{n},x]e_{M}=W_{n}|[V_{n},x]e_{M}|\), we then see that \([V_{n},x]e_{M}\) converges to \(0\) in the \(\mathbb{C}\)-\(M\)-topology. Hence, for any \(T\in\mathbb{B}(L^{2}\tilde{M})\), we have \(\phi_{n}(T)x-\phi_{n}(Tx)=e_{M}V_{n}^{*}T[V_{n},x]e_{M}\) converges to \(0\) in the \(\mathbb{C}\)-\(M\)-topology, and, in particular, we have \(r_{\varphi}(\phi_{n}(T)x-\phi_{n}(Tx))\to 0\). 
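As a check on the displayed formula for \(|[V_{n},x]e_{M}|^{2}\) (this verification is ours, not from the source), one can use the relations \(V_{n}^{*}zV_{n}=\alpha_{n}(z)\) and \(e_{M}ze_{M}=E_{M}(z)e_{M}\) for \(z\in\tilde{M}\), writing \(\varphi\) for the state \(\varphi\circ E_{M}\):

```latex
% Evaluating on vectors y\varphi^{1/2}, y \in M, gives
\[
[V_{n},x]e_{M}=(\alpha_{n}^{-1}(x)-x)V_{n}e_{M},
\]
% and hence, since V_n^* z V_n = \alpha_n(z) and e_M z e_M = E_M(z) e_M:
\begin{align*}
|[V_{n},x]e_{M}|^{2}
&=e_{M}V_{n}^{*}\,|\alpha_{n}^{-1}(x)-x|^{2}\,V_{n}e_{M}
 =e_{M}\,|x-\alpha_{n}(x)|^{2}\,e_{M}\\
&=E_{M}\big(x^{*}x-\alpha_{n}(x^{*})x-x^{*}\alpha_{n}(x)+\alpha_{n}(x^{*}x)\big)e_{M},
\end{align*}
```

which, after identifying \(E_{M}(\,\cdot\,)e_{M}\) with the corresponding element of \(M\), is the formula above.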
By similar arguments we then conclude that \(r_{\varphi}(x\phi_{n}(T)y-\phi_{n}(xTy))\to 0\) for any \(T\in\mathbb{B}(L^{2}\tilde{M})\) and \(x,y\in M\cup JMJ\). Since \(L^{2}(\tilde{M},\varphi)\ominus L^{2}(M,\varphi)\) is weakly contained in the coarse \(M\)-bimodule, there exists an \(M\)-bimodular u.c.p. map \(\psi:\mathbb{B}(L^{2}M)\to M^{\operatorname{op}\prime}\cap\mathbb{B}(L^{2}( \tilde{M},\varphi)\ominus L^{2}(M,\varphi))\). We may then define a u.c.p. map \(\tilde{\psi}:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}\tilde{M})\) by \(\tilde{\psi}(T)=e_{M}Te_{M}+\psi(T)\), where we identify \(\mathbb{B}(L^{2}(\tilde{M},\varphi)\ominus L^{2}(M,\varphi))\) with \(e_{M}^{\perp}\mathbb{B}(L^{2}(\tilde{M},\varphi))e_{M}^{\perp}\). Since \(e_{M}V_{n}e_{M}\in\mathbb{K}_{\mathbb{X}}^{L}(M,L^{2}(\tilde{M},\varphi))\), it then follows that for \(x\in M\) we have \[d_{r_{\varphi}}(\phi_{n}\circ\tilde{\psi}(x)-x,\mathbb{K}_{\mathbb{X}}^{ \infty,1}(M))\leq r_{\varphi}(\phi_{n}(x)-x)\to 0.\] Similarly, if \(T\in\mathbb{B}(L^{2}M)\) and \(x\in JMJ\), we have \[d_{r_{\varphi}}([\phi_{n}\circ\tilde{\psi}(T),x],\mathbb{K}_{\mathbb{X}}^{ \infty,1}(M))\leq r_{\varphi}([\phi_{n}\circ\tilde{\psi}(T),x]-\phi_{n}([x, \tilde{\psi}(T)]))\to 0.\] By Theorem 7.3, we then have that \(M\) is biexact relative to \(\mathbb{X}\). **Remark 7.9**.: The assumptions that \(L^{2}(\tilde{M},\varphi)\ominus L^{2}(M,\varphi)\) be weakly contained in coarse bimodule and that \(e_{M}V_{n}e_{M}\in\mathbb{K}_{\mathbb{X}}^{L}(M,L^{2}(\tilde{M},\varphi))\) are only used to produce the existence of a c.c.p. map \(\psi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}(\tilde{M},\varphi))\cap JMJ^{\prime}\) such that \((1-\psi(x))^{1/2}V_{n}e_{M}\in\mathbb{K}_{\mathbb{X}}^{L}(M,L^{2}(\tilde{M}, \varphi))\) for each \(n\). ### Biexactness and closable derivations In Section 9.2 of [4], a connection was established between proper proximality and the existence of certain closable real derivations into Hilbert bimodules. 
In this section, we refine the techniques from [4] to show that under natural conditions on the derivation and bimodule one can deduce biexactness. This, then, formally shows how under the assumption of weak exactness one can obtain the results from [10] via the techniques from [11]. This should also be compared with Theorem 5.13 in [12] where property (AO)\({}^{+}\) (and consequently biexactness) is obtained under additional assumptions on the growth of the eigenvalues of the associated Laplacian \(\Delta=\delta^{*}\overline{\delta}\). Let \(M\) be a tracial von Neumann algebra and \(\mathcal{H}\) an \(M\)-\(M\) correspondence that has a real structure, i.e., there exists an antilinear isometric involution \(\mathcal{J}:\mathcal{H}\to\mathcal{H}\) such that \(\mathcal{J}(x\xi y)=y^{*}\mathcal{J}(\xi)x^{*}\) for all \(x,y\in M\), \(\xi\in\mathcal{H}\). A closable real derivation is an unbounded closable linear map \(\delta:L^{2}M\to\mathcal{H}\), such that the domain \(D(\delta)\) is an ultraweakly dense unital \(*\)-subalgebra of \(M\subset L^{2}M\), and such that \(\delta\) preserves the real structure (\(\delta(x^{*})=\mathcal{J}(\delta(x))\) for \(x\in D(\delta)\)) and satisfies Leibniz's formula \[\delta(xy)=x\delta(y)+\delta(x)y\ \ \ \ \ x,y\in D(\delta).\] A result of Davies and Lindsay in [10] shows that \(D(\overline{\delta})\cap M\) is then again a \(*\)-subalgebra and \(\overline{\delta}_{|D(\overline{\delta})\cap M}\) again gives a closable real derivation. We recycle the following notation from [12, 13] \[\Delta=\delta^{*}\overline{\delta},\ \ \ \ \ \rho_{\alpha}=\frac{\alpha}{\alpha+ \Delta},\ \ \ \ \ \zeta_{\alpha}=\rho_{\alpha}^{1/2},\ \ \ \ \ \tilde{\delta}_{\alpha}=\frac{1}{\sqrt{\alpha}}\overline{\delta}\circ\zeta_{ \alpha},\ \ \ \ \tilde{\Delta}_{\alpha}=\frac{1}{\sqrt{\alpha}}\Delta^{1/2}\circ\zeta_{ \alpha}=(1-\rho_{\alpha})^{1/2}.\] We note that from [13, 14] we have that \(\zeta_{\alpha}\) is a \(\tau\)-symmetric u.c.p. map for \(\alpha>0\). 
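For instance, the last identity in the display, \(\tilde{\Delta}_{\alpha}=(1-\rho_{\alpha})^{1/2}\), is a quick functional calculus exercise, spelled out here for convenience (this aside is ours; all operators below are commuting functions of \(\Delta\)):

```latex
\[
\frac{1}{\alpha}\,\Delta\,\rho_{\alpha}
=\frac{\Delta}{\alpha+\Delta}
=1-\frac{\alpha}{\alpha+\Delta}
=1-\rho_{\alpha},
\qquad\text{so}\qquad
\tilde{\Delta}_{\alpha}
=\frac{1}{\sqrt{\alpha}}\,\Delta^{1/2}\zeta_{\alpha}
=\Big(\tfrac{1}{\alpha}\,\Delta\,\rho_{\alpha}\Big)^{1/2}
=(1-\rho_{\alpha})^{1/2}.
\]
```

In particular, \(0\leq\tilde{\Delta}_{\alpha}\leq 1\) and \(\tilde{\delta}_{\alpha}=V\tilde{\Delta}_{\alpha}\) is a contraction.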
Building on ideas from [12] and [13], the following approximate bimodularity was established in Lemma 9.3 of [1] for \(x\in M\cap D(\overline{\delta})\), \(a\in M\), and \(\alpha>1\). \[\|x\tilde{\delta}_{\alpha}(a)-\tilde{\delta}_{\alpha}(xa)\| \leq\alpha^{-1/4}(2\|\delta(x)\|^{1/2}+6\|x\|^{1/2})\|\delta(x)\|^{1/2}\|a\|. \tag{7}\] \[\|\tilde{\delta}_{\alpha}(a)x-\tilde{\delta}_{\alpha}(ax)\| \leq\alpha^{-1/4}(2\|\delta(x)\|^{1/2}+6\|x\|^{1/2})\|\delta(x)\|^{1/2}\|a\|. \tag{8}\] The following lemma is evident from [13]. **Lemma 7.10**.: _Let \(M\) be a von Neumann algebra and \(M_{0}\subset M\) an ultraweakly dense \(\mathrm{C}^{*}\)-subalgebra containing the unit of \(M\). Suppose that, for each finite-dimensional operator system \(F\subset M\), there exists a net of u.c.p. maps \(\theta_{i}:F\to M_{0}\subset M\) converging pointwise ultraweakly to the identity operator. If \(E\) is a dual normal \(M\)-system, then there exists an \(M_{0}\)-bimodular u.c.p. map \(\Psi:\mathbb{B}(L^{2}M)\to E\) if and only if there exists an \(M\)-bimodular u.c.p. map \(\Phi:\mathbb{B}(L^{2}M)\to E\)._ **Theorem 7.11**.: _Let \(M\) be a weakly exact tracial von Neumann algebra and \(\mathcal{H}\) an \(M\)-\(M\) correspondence with a real structure so that \(\mathcal{H}\) is weakly contained in the coarse correspondence \(L^{2}M\,\overline{\otimes}\,L^{2}M\), and suppose \(\delta:L^{2}M\to\mathcal{H}\) is a closable real derivation. If \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is a boundary piece such that \(\rho_{\alpha}\in\mathbb{K}_{\mathbb{X}}(M)\) for each \(\alpha>0\), then \(M\) is biexact relative to \(\mathbb{X}\)._ Proof.: We consider the polar decomposition \(\overline{\delta}=V\Delta^{1/2}\), and note that \(V\tilde{\Delta}_{\alpha}=\tilde{\delta}_{\alpha}\) for \(\alpha>0\). 
From (7) and (8) we see that if \(x\in D(\overline{\delta})\cap M\), then we have \[\lim_{\alpha\to\infty}\|xV\tilde{\Delta}_{\alpha}-V\tilde{\Delta}_{\alpha}x\|_{\mathbb{B}(M,\mathcal{H})}=0,\ \ \ \ \ \lim_{\alpha\to\infty}\|\mathcal{J}x\mathcal{J}V\tilde{\Delta}_{\alpha}-V\tilde{\Delta}_{\alpha}JxJ\|_{\mathbb{B}(M,\mathcal{H})}=0.\] Since \(\rho_{\alpha}\in\mathbb{K}_{\mathbb{X}}(M)\), we then have \(1-\tilde{\Delta}_{\alpha}=1-(1-\rho_{\alpha})^{1/2}\in\mathbb{K}_{\mathbb{X}}(M)\). Hence, \(V(1-\tilde{\Delta}_{\alpha})\in\mathbb{K}_{\mathbb{X}}(M,\mathcal{H})\), and it follows that \(xV-Vx,\mathcal{J}x\mathcal{J}V-VJxJ\in\mathbb{K}_{\mathbb{X}}(M,\mathcal{H})\) since this space is closed in \(\mathbb{B}(M,\mathcal{H})\). The u.c.p. map \(\Psi:(\mathcal{J}M\mathcal{J})^{\prime}\cap\mathbb{B}(\mathcal{H})\to\mathbb{B}(L^{2}M)\) given by \(\Psi(T)=V^{*}TV\) then satisfies \([\Psi(T),JxJ]\in\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\) for \(x\in D(\overline{\delta})\cap M\), and hence \(\Psi:(\mathcal{J}M\mathcal{J})^{\prime}\cap\mathbb{B}(\mathcal{H})\to\mathbb{S}_{\mathbb{X}}(M)\). We also have \(\Psi(x)-x\in\mathbb{K}_{\mathbb{X}}(M)\) for \(x\in D(\overline{\delta})\cap M\). Hence, just as in the proof of Theorem 7.17, we consider the map \(\Psi^{\prime}:(\mathcal{J}M\mathcal{J})^{\prime}\cap\mathbb{B}(\mathcal{H})\to\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\) given by \(\Psi^{\prime}(T)=p^{\perp}\Psi(T)+pT\) where \(p\) is the support projection of \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}\subset\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\), and note that this map is bimodular with respect to the \(\mathrm{C}^{*}\)-algebra \(M_{0}\subset M\) generated by \(D(\overline{\delta})\cap M\). Since \(\mathcal{H}\) is weakly contained in the coarse correspondence, there exists an \(M\)-bimodular u.c.p. map \(\theta:\mathbb{B}(L^{2}M)\to(\mathcal{J}M\mathcal{J})^{\prime}\cap\mathbb{B}(\mathcal{H})\) so that \(\Psi^{\prime}\circ\theta\) gives an \(M_{0}\)-bimodular u.c.p. 
map from \(\mathbb{B}(L^{2}M)\) into \(\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\). Since for each \(\alpha>0\) we have \(\zeta_{\alpha}:M\to M_{0}\) and \(\zeta_{\alpha}(x)\to x\) ultraweakly for each \(x\in M\), it then follows from Lemma 7.10 that there exists an \(M\)-bimodular u.c.p. map from \(\mathbb{B}(L^{2}M)\) to \(\mathbb{S}_{\mathbb{X}}(M)^{\sharp*}\). Since \(M\) is weakly exact, Corollary 5.6 and Lemma 4.4 give that \(M\) is biexact relative to \(\mathbb{X}\). **Remark 7.12**.: Even without the weak exactness assumption, the previous theorem actually gives the following weak (AO) type property: There exist ultraweakly dense \(\mathrm{C}^{*}\)-subalgebras \(B\subset M\) and \(C\subset M^{\prime}\) such that for each finite-dimensional operator system \(E\subset M\) there exists a net of u.c.p. maps \(\theta_{i}:E\to B\subset M\) converging pointwise ultraweakly to the identity operator, and such that there exists a u.c.p. map \(\mu:B\otimes C\to\mathbb{B}(L^{2}M)\) satisfying \(\mu(b\otimes c)-bc\in\mathbb{K}_{\mathbb{X}}(M)\) for each \(b\in B\) and \(c\in C\). Using Lemma 7.10 as in [10], this can then give the solidity results from [12]. The proof of the previous theorem gives a more direct and more general result than in Proposition 9.4 from [11]. This also gives the following analog in the von Neumann algebra setting of Example 4.8 from [1]. We leave the details of the proof to the reader. 
**Proposition 7.13**.: _If \(M\) is a tracial von Neumann algebra, \(\mathcal{H}\) is an \(M\)-\(M\) correspondence with a real structure so that \((M^{\mathrm{op}})^{\prime}\cap\mathbb{B}(\mathcal{H})\) does not have an \(M\)-central state \(\psi\) such that \(\psi_{|M}\) is normal, and \(\delta:L^{2}M\to\mathcal{H}\) is a closable real derivation with \(\rho_{\alpha}\) contained in the boundary piece \(\mathbb{X}\) for each \(\alpha>0\), then \(M\) is properly proximal relative to \(\mathbb{X}\)._ ### Biexactness and Akemann-Ostrand type properties Recall from [10] that a von Neumann algebra \(M\subset\mathbb{B}(\mathcal{H})\) has property (AO) if there exist ultraweakly dense unital C\({}^{*}\)-subalgebras \(B\subset M\) and \(C\subset M^{\prime}\) such that 1. \(B\) is locally reflexive; 2. the multiplication map \(\nu:B\odot C\ni b\otimes c\mapsto bc+\mathbb{K}(\mathcal{H})\in\mathbb{B}(\mathcal{H})/\mathbb{K}(\mathcal{H})\) extends continuously to \(B\otimes C\). The principal example of a property (AO) von Neumann algebra is the group von Neumann algebra \(L\Gamma\) associated to a biexact group \(\Gamma\), where one may take \(B=C_{\lambda}^{*}\Gamma\) and \(C=C_{\rho}^{*}\Gamma\). In fact, biexact groups can be characterized as exact groups such that the multiplication map \(\nu:C_{\lambda}^{*}\Gamma\odot C_{\rho}^{*}\Gamma\to\mathbb{B}(\ell^{2}\Gamma)/\mathbb{K}(\ell^{2}\Gamma)\) extends continuously to \(C_{\lambda}^{*}\Gamma\otimes C_{\rho}^{*}\Gamma\) and has a u.c.p. lift into \(\mathbb{B}(\ell^{2}\Gamma)\) [1, Lemma 15.1.4]. Several variations on this definition have since appeared in the literature in connection with (strong) solidity type properties [13, 14, 15, 16]. In this section, we will investigate the connection between these properties and biexactness. 
As there is extensive literature devoted to von Neumann algebras with (AO)-type properties [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], this will provide us with a rich source of examples of biexact von Neumann algebras. We begin by recalling some of these definitions. **Definition 7.14** (Definition 3.1.1 in [13]).: A von Neumann algebra \(M\) has property (AO)\({}^{+}\) if there exist unital weak\({}^{*}\) dense C\({}^{*}\)-algebras \(B\subset M\), \(C\subset JMJ\) such that 1. \(B\) is locally reflexive. 2. The multiplication map \(\nu:B\odot C\ni\sum_{i=1}^{n}b_{i}\otimes c_{i}\mapsto\sum_{i=1}^{n}b_{i}c_{i}+\mathbb{K}(L^{2}M)\in\mathbb{B}(L^{2}M)/\mathbb{K}(L^{2}M)\) extends continuously to \(B\otimes C\) and has a u.c.p. lifting. Clearly, property (AO)\({}^{+}\) implies property (AO), but property (AO)\({}^{+}\) has better stability properties, and in certain situations the u.c.p. lifting can be exploited [10]. Von Neumann algebras with property (AO)\({}^{+}\) include all von Neumann algebras associated to discrete quantum groups that are biexact in the sense of [13, Definition 3.1]. **Definition 7.15** (Definition 2.6 in [14]).: A von Neumann algebra \(M\) satisfies strong condition (AO) if there exist a unital weak\({}^{*}\) dense C\({}^{*}\)-algebra \(A\subset M\) and a nuclear C\({}^{*}\)-algebra \(\mathcal{C}\subset\mathbb{B}(L^{2}M)\) containing \(A\) such that the commutators \(\{[c,JaJ]\mid a\in A,c\in\mathcal{C}\}\) are contained in \(\mathbb{K}(L^{2}M)\). The strong (AO) property implies property (AO) in general, and if the nuclear C\({}^{*}\)-algebra \(\mathcal{C}\) above can be taken to be separable and satisfying \([\mathcal{C},J\mathcal{C}J]\subset\mathbb{K}(L^{2}M)\), then this implies (AO)\({}^{+}\) [17, Remarks 2.7]. 
Many von Neumann algebras are known to satisfy the strong (AO) property, including separable amenable von Neumann algebras, any group von Neumann algebra \(L\Gamma\) associated to a biexact group, the von Neumann algebra associated to any discrete quantum group in class \(\mathcal{C}\) from [18], and any free Araki-Woods factor [17, Theorem C.2]. Note that the C\({}^{*}\)-algebra \(A\) in the definition of the strong (AO) condition is exact, being a C\({}^{*}\)-subalgebra of a nuclear C\({}^{*}\)-algebra, and consequently the von Neumann algebra \(M\) is weakly exact since it contains an ultraweakly dense exact C\({}^{*}\)-subalgebra. **Definition 7.16** (Definition 2.1 in [11]).: A von Neumann algebra \(M\) has (W\({}^{*}\)AO) if the map \[M\odot JMJ\ni\sum_{i=1}^{n}a_{i}\otimes x_{i}\mapsto\sum_{i=1}^{n}a_{i}x_{i}+\mathbb{K}(M)\in(C^{*}(M,JMJ)+\mathbb{K}(M))/\mathbb{K}(M)\] is continuous with respect to the minimal tensor norm. Ozawa showed in [14] that if \(\Gamma\) is a biexact group, then \(L\Gamma\) has (W\({}^{*}\)AO). In contrast, Caspers showed in [11] that the \(q\)-Gaussian von Neumann algebra \(M_{q}(\mathcal{H}_{\mathbb{R}})\) associated to an infinite-dimensional real Hilbert space \(\mathcal{H}_{\mathbb{R}}\) does not have (W\({}^{*}\)AO) when \(-1<q<1\) with \(q\neq 0\). **Theorem 7.17**.: _Let \(M\) be a weakly exact von Neumann algebra; then both conditions (AO)\({}^{+}\) and strong (AO) imply that \(M\) is biexact._ Proof.: We first suppose that \(M\) is a weakly exact von Neumann algebra satisfying condition (AO)\({}^{+}\). Using the notation from Definition 7.14, we denote by \(\theta:B\otimes C\to\mathbb{B}(L^{2}M)\) the u.c.p. lifting of the multiplication map \(\nu\) and extend \(\theta\) to a u.c.p. map \(\bar{\theta}:\mathbb{B}(L^{2}M)\otimes C\to\mathbb{B}(L^{2}M)\). Set \(\bar{\nu}:=\pi\circ\bar{\theta}\), where \(\pi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)/\mathbb{K}(L^{2}M)\) is the canonical quotient map. 
Define \(\Psi:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)\) by \(\Psi(T)=\bar{\theta}(T\otimes 1)\) and note that for any \(c\in C\subset JMJ\), \[\pi([\bar{\theta}(T\otimes 1),c])=[\bar{\nu}(T\otimes 1),\bar{\nu}(1\otimes c)]=0,\] as \(C\) is in the multiplicative domain of \(\bar{\nu}\). Thus, the range of \(\Psi\) is contained in \(\mathbb{S}(M)\). Consider \(\Psi^{\prime}:\mathbb{B}(L^{2}M)\to\mathbb{S}(M)^{\sharp*}\) given by \(\Psi^{\prime}(T)=p^{\perp}\Psi(T)+pT\), where \(p\) is the support projection of \(\mathbb{B}(L^{2}M)\cong\mathbb{K}(L^{2}M)^{\sharp*}\subset\mathbb{S}(M)^{\sharp*}\), and it is easy to check that \(\Psi^{\prime}\mid_{B}=\mathrm{id}_{B}\). We may then apply Lemma 7.10 to obtain an \(M\)-bimodular u.c.p. map from \(\mathbb{B}(L^{2}M)\) into \(\mathbb{S}(M)^{\sharp*}\), and, since \(M\) is weakly exact, it then follows from Corollary 5.6 and Lemma 4.4 that \(M\) is biexact. We now suppose that \(M\) satisfies strong condition (AO). Using the notation from Definition 7.15, we note that as \([c,JaJ]\in\mathbb{K}(L^{2}M)\) for each \(a\in A\) and \(c\in\mathcal{C}\), it follows that \(\mathcal{C}\subset\mathbb{S}(M)\). Since \(\mathcal{C}\) is nuclear, we then have that the inclusion \(A\subset\mathbb{S}(M)\) is nuclear, and so \(M\) is biexact by Corollary 4.9. **Corollary 7.18**.: _For an exact group \(\Gamma\), the notions of (AO)\({}^{+}\), strong (AO), and biexactness for \(L\Gamma\) are equivalent, and coincide with biexactness for \(\Gamma\)._ Concerning the connection between biexactness and (W\({}^{*}\)AO), we have the following. **Theorem 7.19**.: _Let \(M\) be a von Neumann algebra and \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) an \(M\)-boundary piece. 
If \(M\) is biexact relative to \(\mathbb{X}\), then the multiplication map_ \[v:M\odot M^{\mathrm{op}}\ni\sum_{i=1}^{n}a_{i}\otimes x_{i}\mapsto\sum_{i=1}^{n}a_{i}x_{i}+\mathbb{K}_{\mathbb{X}}(M)\in\mathrm{C}^{*}(M,M^{\mathrm{op}},\mathbb{K}_{\mathbb{X}}(M))/\mathbb{K}_{\mathbb{X}}(M)\] _is min-continuous. In particular, biexact von Neumann algebras satisfy condition (W\({}^{*}\)AO)._ Proof.: Denote by \(\iota:\mathbb{B}(L^{2}M)\to\mathbb{B}(L^{2}M)^{\sharp*}_{J}\) the canonical embedding. Let \(q_{\mathbb{X}}\in\mathbb{B}(L^{2}M)^{\sharp*}_{J}\) denote the support projection of \(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J}\) and set \[\pi:=\operatorname{Ad}(q_{\mathbb{X}}^{\perp}):\mathbb{B}(L^{2}M)^{\sharp*}_{J}\to q_{\mathbb{X}}^{\perp}\mathbb{B}(L^{2}M)^{\sharp*}_{J}q_{\mathbb{X}}^{\perp}.\] Notice that, for any \(T\in\mathbb{S}_{\mathbb{X}}(M)\), \(\pi\circ\iota(T)\) commutes with \(\pi\circ\iota(JMJ)\). Let \(\phi_{n}\), \(\psi_{n}\) be u.c.p. maps coming from the \(M\)-nuclear inclusion \(M\subset\mathbb{S}_{\mathbb{X}}(M)\). We may then consider the u.c.p. maps \[(\pi\circ\iota\circ\psi_{n})\times(\pi\circ\iota):\mathbb{M}_{k(n)}(\mathbb{C})\otimes JMJ\to q_{\mathbb{X}}^{\perp}\mathbb{B}(L^{2}M)^{\sharp*}_{J}q_{\mathbb{X}}^{\perp},\] which are well defined since \([(\pi\circ\iota\circ\psi_{n})(\mathbb{M}_{k(n)}(\mathbb{C})),\pi\circ\iota(JMJ)]=0\) and \(\mathbb{M}_{k(n)}(\mathbb{C})\) is nuclear. Denote by \(\tilde{v}\) a u.c.p. map that is a point-weak\({}^{*}\) limit point of the maps \(((\pi\circ\iota\circ\psi_{n})\times(\pi\circ\iota))\circ(\phi_{n}\otimes\mathrm{id}_{JMJ})\); it is clear that \(\tilde{v}(a\otimes x)=q_{\mathbb{X}}^{\perp}\iota(a)\iota(x)\) for any \(a\in M\) and \(x\in JMJ\). Moreover, notice that \(\tilde{v}(M\otimes JMJ)\subset q_{\mathbb{X}}^{\perp}\mathrm{C}^{*}(\iota(M),\iota(JMJ))\) and we claim that \[q_{\mathbb{X}}^{\perp}\mathrm{C}^{*}(\iota(M),\iota(JMJ))\cong\mathrm{C}^{*}(M,JMJ)/I\] as \(\mathrm{C}^{*}\)-algebras, where \(I=\mathbb{K}_{\mathbb{X}}(M)\cap\mathrm{C}^{*}(M,JMJ)\). 
To see this, first observe that \(\iota|_{\mathrm{C}^{*}(M,JMJ)}\) implements a \(*\)-isomorphism between \[\mathbb{B}(L^{2}M)\supset\mathrm{C}^{*}(M,JMJ)\cong\mathrm{C}^{*}(\iota(M), \iota(JMJ))\subset\mathbb{B}(L^{2}M)^{\sharp*}_{J}.\] Indeed, this follows from the fact that \(\iota\) is a complete isometry and \(\iota|_{\operatorname{alg}(M,JMJ)}\) is a \(*\)-homomorphism. Next, note that \(\theta:=\pi\circ\iota:\mathrm{C}^{*}(M,JMJ)\to q_{\mathbb{X}}^{\perp}\mathrm{C }^{*}(\iota(M),\iota(JMJ))\) is a \(*\)-homomorphism with \[\ker\theta=\iota^{-1}(\ker(\pi))\cap\mathrm{C}^{*}(M,JMJ).\] Since \(\ker(\pi)=\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J}\) and \(\iota^{-1}(\mathbb{K}_{\mathbb{X}}(M)^{\sharp*}_{J})=\mathbb{K}_{\mathbb{X}} ^{\infty,1}(M)\), we have \(\ker\theta=\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)\cap\mathrm{C}^{*}(M,JMJ)\). Finally, if \(x\in\ker\theta\), then \(x^{*}x\in\ker\theta\subset\mathbb{K}_{\mathbb{X}}^{\infty,1}(M)_{+}=\mathbb{K }_{\mathbb{X}}(M)_{+}\), and hence \(x\in\mathbb{K}_{\mathbb{X}}^{L}(M)\). We similarly have \(x^{*}\in\mathbb{K}_{\mathbb{X}}^{L}(M)\), and hence \(x\in\mathbb{K}_{\mathbb{X}}(M)\), i.e., \(\ker\theta=\mathrm{C}^{*}(M,JMJ)\cap\mathbb{K}_{\mathbb{X}}(M)=I\). If we denote by \[\theta^{\prime}:q_{\mathbb{X}}^{\perp}\mathrm{C}^{*}(\iota(M),\iota(JMJ))\to\mathrm{C}^ {*}(M,JMJ)/I\] the \(*\)-isomorphism induced by \(\theta\), then the u.c.p. map \[v:=\theta^{\prime}\circ\tilde{v}:M\otimes JMJ\to\mathrm{C}^{*}(M,JMJ)/I\] satisfies \(v(a\otimes x)=ax+I\) for \(a\in M\) and \(x\in JMJ\). The result then follows, since the inclusion map of \(\mathrm{C}^{*}(M,JMJ)\) into \(\mathrm{C}^{*}(M,M^{\mathrm{op}},\mathbb{K}_{\mathbb{X}}(M))\) induces a \(\mathrm{C}^{*}\)-isomorphism from \(\mathrm{C}^{*}(M,JMJ)/I\) onto its image in \(\mathrm{C}^{*}(M,M^{\mathrm{op}},\mathbb{K}_{\mathbb{X}}(M))/\mathbb{K}_{ \mathbb{X}}(M)\). By the previous theorem, biexact von Neumann algebras satisfy the following consequences of condition (W\({}^{*}\)AO). 
**Theorem 7.20**.: _Let \(M\) be a \(\sigma\)-finite von Neumann algebra and consider the following conditions:_ 1. \(M\) _satisfies condition (W_\({}^{*}\)AO_)._ 2. \(M\) _does not have property_ \((\Gamma)\)_, and if_ \(\mathcal{H}\) _is any normal Hilbert_ \(M\)_-bimodule such that_ \(\mathcal{H}\prec L^{2}M\)_, and such that_ \(\mathcal{H}\) _is disjoint from_ \(L^{2}M\)_, then we have_ \(\mathcal{H}\prec L^{2}M\,\overline{\otimes}\,L^{2}M\)_._ 3. _If_ \(\mathcal{U}\) _is a nonprincipal ultrafilter on_ \(\mathbb{N}\)_, then we have the following weak containment of Hilbert_ \(M\)_-bimodules:_ \[L^{2}(M^{\mathcal{U}})\ominus L^{2}M\prec L^{2}M\,\overline{\otimes}\,L^{2}M.\] _Then, we have the implications (1) \(\implies\) (2) \(\implies\) (3). Moreover, if \(M\) is finite, then all three conditions are equivalent._ Proof.: To see (1) \(\implies\) (2), note that since \(\mathcal{H}\prec L^{2}M\), we have a \(*\)-homomorphism \(\pi:\mathrm{C}^{*}(M,JMJ)\to\mathbb{B}(\mathcal{H})\) satisfying \(\pi(ab)=ab\) for \(a\in M\) and \(b\in JMJ\). As \(\mathcal{H}\) is disjoint from \(L^{2}M\), we have \(\mathbb{K}(L^{2}M)\cap\mathrm{C}^{*}(M,M^{\mathrm{op}})\subset\ker\pi\). Note that for any \(\xi\in\mathcal{H}\), \(\langle\pi(\cdot)\xi,\xi\rangle\) is a state that is normal when restricted to \(M\) and \(M^{\mathrm{op}}\), and therefore we further have \(\mathbb{K}(M)\cap\mathrm{C}^{*}(M,M^{\mathrm{op}})\subset\ker\pi\). The condition (W\({}^{*}\)AO) then implies \(\mathcal{H}\prec L^{2}M\,\overline{\otimes}\,L^{2}M\). Essentially the same argument is used to prove the implication (1) \(\implies\) (3), and we note that (3) easily implies that \(M\) does not have property (\(\Gamma\)). The implication (2) \(\implies\) (3) is trivial. If \(M\) is finite, then it is shown in [10] that \(T\in\mathbb{K}^{L}(M)\) if and only if \(\|T\widehat{x_{n}}\|_{2}\to 0\) for any uniformly bounded sequence \(\{x_{n}\}_{n}\subset M\) with \(\widehat{x_{n}}\) converging weakly to \(0\). 
It then follows that \(\ker\pi=C^{*}(M,JMJ)\cap\mathbb{K}^{L}(M)=C^{*}(M,JMJ)\cap\mathbb{K}(M)\). If \(\pi\) factors through the minimal tensor product, we then see that \(M\) satisfies condition (W\({}^{*}\)AO). **Remark 7.21**.: In an initial version of this article, we presented only the implication (1) \(\implies\) (3) in the previous theorem. We would like to thank Amine Marrakchi for explaining to us [11, Corollary 3.5] where the implication (1) \(\implies\) (2) is shown for finite von Neumann algebras, and for suggesting to us that the implication (1) \(\implies\) (2) might hold for general von Neumann algebras. We also remark that when \(M\) has separable predual the requirement in condition (2) that \(M\) does not have property (\(\Gamma\)) is superfluous (see Remark 3.3 in [1]). Note that Theorems 7.19 and 7.20 give a generalization of Theorem 6.7. Indeed, if \(M\) is biexact, \(A\subset M\) is a von Neumann subalgebra and \(u\in A^{\prime}\cap M^{\mathcal{U}}\) is a unitary with \(E_{M}(u)=0\), then fixing a normal faithful state \(\varphi_{M}\) on \(M\) and setting \(\varphi=\varphi_{M}\circ E_{M}\) on \(M^{\mathcal{U}}\), it then follows from Theorems 7.19 and 7.20 that for \(a_{1},\dots,a_{m}\in A\) and \(b_{1},\dots,b_{m}\in M\) we have \[\|\sum_{i=1}^{m}a_{i}Jb_{i}J\varphi_{M}^{1/2}\|=\|u\sum_{i=1}^{m}a_{i}Jb_{i}J \varphi^{1/2}\|=\|\sum_{i=1}^{m}a_{i}Jb_{i}^{*}Ju\varphi^{1/2}\|\leq\|\sum_{i =1}^{m}a_{i}\otimes b_{i}^{\mathrm{op}}\|_{A\otimes M^{\mathrm{op}}}.\] The inclusion \(A\subset M\) is then weakly nuclear by [1, Theorem 3.8.5]. ## 8. Solid factors that are not biexact In this section we study biexactness for crossed-products of von Neumann algebras, and we give examples of solid factors that are not biexact. Suppose \(\Gamma\) is a discrete group and we have an action \(\Gamma\,{\curvearrowright}\,{}^{\sigma}M\) on a von Neumann algebra. 
We let \(\sigma^{0}:\Gamma\to\mathcal{U}(L^{2}M)\) denote the Koopman representation, and note that conjugation by the Koopman representation gives us an extension of the action to an action on the standard representation \(\Gamma\,{\curvearrowright}\,{}^{\mathrm{Ad}(\sigma^{0})}\mathbb{B}(L^{2}M)\). If \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is an \(M\)-boundary piece, then we say that \(\mathbb{X}\) is \(\Gamma\)-invariant if it is invariant under this conjugation action. In this case the conjugation action also preserves the small-at-infinity boundary \(\mathbb{S}_{\mathbb{X}}(M)\). By a normal operator \((\Gamma\,{\curvearrowright}\,{}^{\sigma}M)\)-system, we mean an operator system \(E\subset\mathbb{B}(\mathcal{H})\), together with a normal faithful representation of \(M\) in \(\mathbb{B}(\mathcal{H})\), and a unitary representation \(\sigma^{0}:\Gamma\to\mathcal{U}(\mathcal{H})\) so that \(E\) gives a normal operator \(M\)-system that is also invariant under conjugation by \(\sigma^{0}(\Gamma)\), and such that the representations of \(\Gamma\) and \(M\) are covariant in the sense that \(\sigma_{t}(x)=\sigma_{t}^{0}x(\sigma_{t}^{0})^{*}\) for each \(x\in M\) and \(t\in\Gamma\). So, for example, \(\mathbb{S}_{\mathbb{X}}(M)\subset\mathbb{B}(L^{2}M)\) gives a normal \((\Gamma\mathop{\curvearrowright}\nolimits^{\sigma}M)\)-system whenever \(\mathbb{X}\) is \(\Gamma\)-invariant. If \(E\subset\mathbb{B}(\mathcal{H})\) is ultraweakly closed, then we say that this is a dual normal operator \((\Gamma\mathop{\curvearrowright}\nolimits^{\sigma}M)\)-system. 
If \(E\) is a normal operator \((\Gamma\curvearrowright^{\sigma}M)\)-system and \(\Gamma\curvearrowright A\) is an action on a unital \(\mathrm{C}^{*}\)-algebra, then, by considering a faithful unital covariant representation of \(A\) into \(\mathbb{B}(\mathcal{K})\), we see that the diagonal action on \(E\otimes A\) turns \(E\otimes A\) into a normal operator \((\Gamma\curvearrowright^{\sigma}M)\)-system. Also, if \(E\subset\mathbb{B}(\mathcal{H})\) is a normal operator \((\Gamma\curvearrowright^{\sigma}M)\)-system, we then let \(E\rtimes_{r}\Gamma\) denote the closed subspace of \(\mathbb{B}(\mathcal{H})\rtimes_{r}\Gamma\subset\mathbb{B}(\mathcal{H}\,\overline{\otimes}\,\ell^{2}\Gamma)\) spanned by elements of the form \((a\otimes 1)(\sigma_{t}^{0}\otimes\lambda_{t})\) for \(a\in E\), \(t\in\Gamma\). Note that \(E\rtimes_{r}\Gamma\) is, then, an operator \(M\rtimes_{r}\Gamma\)-system that is \((M\rtimes_{r}\Gamma\subset M\rtimes\Gamma)\)-normal. **Lemma 8.1**.: _Let \(E\) denote a normal operator \((\Gamma\curvearrowright^{\sigma}M)\)-system. If \(\Gamma\curvearrowright^{\alpha}K\) is an amenable action on a compact Hausdorff space \(K\), and if the inclusion \(M\subset E\) is \(M\)-nuclear, then the inclusion \(M\rtimes_{r}\Gamma\subset(E\otimes C(K))\rtimes_{r}\Gamma\) is \((M\rtimes_{r}\Gamma\subset M\rtimes\Gamma)\)-nuclear._ Proof.: The proof of Lemma 4.3.3 in [1] shows that there exists a net of u.c.p. maps \(\phi_{i}:(E\otimes C(K))\rtimes_{r}\Gamma\to\mathbb{M}_{F_{i}}(E\otimes C(K))\) and \(\psi_{i}:\mathbb{M}_{F_{i}}(E\otimes C(K))\to(E\otimes C(K))\rtimes_{r}\Gamma\), where \(F_{i}\subset\Gamma\) are finite, such that \(\psi_{i}\circ\phi_{i}\) converges to the identity in the point-norm topology. 
Moreover, if we consider \(\mathbb{M}_{F_{i}}(E\otimes C(K))\) to be an \(M\)-system via the twisted diagonal embedding \(M\ni a\mapsto\oplus_{t\in F_{i}}(\sigma_{t^{-1}}\otimes\mathrm{id})(a)\in \mathbb{M}_{F_{i}}(E\otimes C(K))\), then the maps \(\phi_{i}\) and \(\psi_{i}\) above are \(M\)-bimodular, and hence continuous in the \(M\)-topology. Thus, the lemma follows from the fact that, for each finite set \(F\subset\Gamma\), the inclusion \(M\subset\mathbb{M}_{F}(E\otimes C(K))\) is \(M\)-nuclear. The following gives an equivariant version of Proposition 6.14. **Theorem 8.2**.: _Suppose \(\Gamma\curvearrowright M\) and \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is a \(\Gamma\)-invariant \(M\)-boundary piece. Suppose also that \(I\subset\ell^{\infty}\Gamma\) is a boundary piece, that \(\Gamma\) is biexact relative to \(I\), and that \(M\) is biexact relative to \(\mathbb{X}\). Then, \(M\rtimes\Gamma\) is biexact relative to the hereditary \(\mathrm{C}^{*}\)-algebra generated by \(\ell^{\infty}(\Gamma;\mathbb{X})\) and \(\mathbb{B}(L^{2}M)\otimes I\)._ Proof.: We let \(\mathbb{Y}\subset\mathbb{B}(L^{2}M\,\overline{\otimes}\,\ell^{2}\Gamma)\) denote the hereditary \(\mathrm{C}^{*}\)-algebra generated by \(\ell^{\infty}(\Gamma;\mathbb{X})\) and \(\mathbb{B}(L^{2}M)\otimes I\). By Lemma 8.1, we have that the inclusion \(M\rtimes_{r}\Gamma\subset(\mathbb{S}_{\mathbb{X}}(M)\otimes\mathbb{S}_{I}( \Gamma))\rtimes_{r}\Gamma\) is \((M\rtimes_{r}\Gamma\subset M\rtimes\Gamma)\)-nuclear, and so by Corollary 4.9 it suffices to show that we have an inclusion \[(\mathbb{S}_{\mathbb{X}}(M)\otimes\mathbb{S}_{I}(\Gamma))\rtimes_{r}\Gamma \subset\mathbb{S}_{\mathbb{Y}}(M\rtimes\Gamma). \tag{9}\] Since the latter space is an \(L\Gamma\)-bimodule, it then suffices to show that, for all \(T\in\mathbb{S}_{\mathbb{X}}(M)\) and \(f\in\mathbb{S}_{I}(\Gamma)\), we have \(T\otimes f\in\mathbb{S}_{\mathbb{Y}}(M\rtimes\Gamma)\). 
Letting \(J\) denote the modular conjugation operator for \(M\rtimes\Gamma\), and \(J_{M}\) the modular conjugation operator for \(M\), for \(x\in M\) and \(t\in\Gamma\) we may use (1) to compute \[Ju_{t}J(T\otimes f)Ju_{t}^{*}J-(T\otimes f)=T\otimes(\rho_{t}f\rho_{t}^{*}-f) \in\mathbb{B}(L^{2}M)\otimes I,\] and \[[T\otimes f,JxJ]=\oplus_{t\in\Gamma}f(t)[T,J_{M}\sigma_{t}(x)J_{M}]\in\ell^{ \infty}(\Gamma,\mathbb{X}).\] By [1, Lemma 6.1], we then see that the inclusion (9) holds. We note that when \(M\) or \(\Gamma\) is amenable, in the proof above we can replace \(\mathbb{S}_{\mathbb{X}}(M)\) with \(M\), or \(\mathbb{S}_{I}(\Gamma)\) with \(\mathbb{C}\), respectively. In the former case, we then need only show the inclusion above for \(T\otimes f\) with \(T\in M\), while in the latter case we may assume \(f\in\mathbb{C}\). This then gives the following two results. **Proposition 8.3** (Cf. Lemma 15.3.5 in [1]).: _Suppose \(\Gamma\curvearrowright M\) with \(M\) amenable, and that \(I\subset\ell^{\infty}\Gamma\) is a boundary piece such that \(\Gamma\) is biexact relative to \(I\). Then \(M\rtimes\Gamma\) is biexact relative to the boundary piece generated by \(\mathbb{B}(L^{2}M)\otimes I\). In particular, if \(M\) is amenable and \(\Gamma\) is biexact, then \(M\rtimes\Gamma\) is biexact relative to \(M\)._ **Proposition 8.4**.: _Suppose \(\Gamma\curvearrowright M\) with \(\Gamma\) amenable, and that \(\mathbb{X}\subset\mathbb{B}(L^{2}M)\) is a \(\Gamma\)-invariant boundary piece such that \(M\) is biexact relative to \(\mathbb{X}\). Then \(M\rtimes\Gamma\) is biexact relative to the boundary piece generated by \(\ell^{\infty}(\Gamma,\mathbb{X})\)._ We now briefly recall the \(q\)-Gaussian construction. Let \(\mathcal{H}\) be a real Hilbert space, which we will always assume has dimension greater than \(1\). 
Let \(\mathcal{H}_{\mathbb{C}}=\mathcal{H}\otimes_{\mathbb{R}}\mathbb{C}\) be its complexification and let \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})=\mathbb{C}\Omega\oplus\bigoplus_{n\geq 1 }\mathcal{H}_{\mathbb{C}}^{\otimes n}\) be the algebraic Fock space over \(\mathcal{H}_{\mathbb{C}}\). Fix \(-1\leq q\leq 1\). Following [1], we consider the sesquilinear form on \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})\) satisfying \[\langle\xi_{1}\otimes\cdots\otimes\xi_{n},\eta_{1}\otimes\cdots\otimes\eta_{m}\rangle _{q}=\delta_{n,m}\sum_{\sigma\in S_{n}}q^{\iota(\sigma)}\prod_{j}\langle\xi_{j}, \eta_{\sigma(j)}\rangle\] where \(S_{n}\) is the symmetric group on \(n\) letters, and \(\iota(\sigma)\) denotes the number of inversions of \(\sigma\in S_{n}\). This form is nonnegative definite, and we let \(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\) denote the Hilbert space obtained after separation and completion. We will abuse notation, and for \(\xi\in\mathcal{F}(\mathcal{H}_{\mathbb{C}})\) we will continue to write \(\xi\) for its image in \(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\). For \(\xi\in\mathcal{H}\), we let \(l_{q}(\xi)\) denote the left creation operator on \(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\), satisfying \[l_{q}(\xi)\,\xi_{1}\otimes\cdots\otimes\xi_{n}=\xi\otimes\xi_{1}\otimes\cdots \otimes\xi_{n}.\] This operator is bounded if \(-1\leq q<1\), and is a closed densely defined operator if \(q=1\). We let \(s_{q}(\xi)=l_{q}(\xi)+l_{q}(\xi)^{*}\). If \(-1\leq q<1\), the \(q\)-Gaussian C\({}^{*}\)-algebra is defined as the unital C\({}^{*}\)-subalgebra \(A_{q}(\mathcal{H})\) of \(\mathbb{B}(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}}))\) generated by \(s_{q}(\xi)\) for \(\xi\in\mathcal{H}\). The \(q\)-Gaussian von Neumann algebra \(M_{q}(\mathcal{H})\) is the von Neumann algebra generated by \(A_{q}(\mathcal{H})\) in \(\mathbb{B}(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}}))\). 
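Since the form above is given by a finite combinatorial sum, it can be evaluated directly on small simple tensors. The following sketch (plain Python; `inversions` and `q_inner` are ad hoc helper names, not notation from the text) checks, e.g., that \(\langle\xi\otimes\xi,\xi\otimes\xi\rangle_{q}=1+q\) for a unit vector \(\xi\), where the identity permutation contributes \(1\) and the transposition, with one inversion, contributes \(q\).

```python
from itertools import permutations

def inversions(sigma):
    # number of pairs i < j with sigma[i] > sigma[j]
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j])

def q_inner(xis, etas, q):
    """<xi_1 (x) ... (x) xi_n, eta_1 (x) ... (x) eta_m>_q for simple tensors over R^d."""
    if len(xis) != len(etas):          # the delta_{n,m} factor
        return 0.0
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    total = 0.0
    for sigma in permutations(range(len(xis))):
        prod = 1.0
        for j in range(len(xis)):
            prod *= dot(xis[j], etas[sigma[j]])
        total += q ** inversions(sigma) * prod
    return total

e1, e2 = (1.0, 0.0), (0.0, 1.0)
print(q_inner([e1, e1], [e1, e1], 0.5))  # 1.5 = 1 + q: identity and the transposition
print(q_inner([e1, e2], [e1, e2], 0.5))  # 1.0: only the identity permutation contributes
print(q_inner([e1, e2], [e2, e1], 0.5))  # 0.5 = q: only the transposition contributes
```

The first value also illustrates nonnegative definiteness degenerating at \(q=-1\): for \(q=-1\) the vector \(\xi\otimes\xi\) has length zero and is killed in the separation step.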
When \(q=1\), the \(q\)-Gaussian von Neumann algebra \(M_{1}(\mathcal{H})\) is generated by the spectral projections of the self-adjoint unbounded operators \(s_{1}(\xi)\). The vacuum vector \(\Omega\) is cyclic and separating for \(M_{q}(\mathcal{H})\), so that \(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\) may be identified with \(L^{2}(M_{q}(\mathcal{H}))\). Note that for \(\xi\in\mathcal{H}\), we have \(s(\xi)\Omega=\xi\), and if \(q\neq 1\), it is then easy to see by induction on the maximal length of simple tensors that for a general vector \(\xi\in\mathcal{F}(\mathcal{H})\subset\mathcal{F}(\mathcal{H}_{\mathbb{C}})\) there still exists a bounded operator \(s(\xi)\in M_{q}(\mathcal{H})\) so that \(s(\xi)\Omega=\xi\). The operator \(s(\xi)\) is called the Wick operator associated to \(\xi\). In the case when \(q=1\), the Wick operator \(s(\xi)\) is a densely defined closed operator whose domain contains the image of \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})\) in \(\mathcal{F}_{1}(\mathcal{H}_{\mathbb{C}})\). We define a linear involution \(\xi\mapsto\xi^{*}\) on \(\mathcal{F}(\mathcal{H})\) by letting \((\xi_{1}\otimes\cdots\otimes\xi_{n})^{*}=\xi_{n}\otimes\cdots\otimes\xi_{1}\). Note that the Wick operators then satisfy \(s(\xi)^{*}=s(\xi^{*})\) for \(\xi\in\mathcal{F}(\mathcal{H})\). We also have \[s(s(\xi_{1})\xi_{2})\Omega=s(\xi_{1})s(\xi_{2})\Omega\qquad s(Js(\eta_{2})J\eta_{1})\Omega=Js(\eta_{2})Js(\eta_{1})\Omega \tag{10}\] for \(\xi_{1},\xi_{2},\eta_{1},\eta_{2}\in\mathcal{F}(\mathcal{H})\). The modular conjugation operator \(J\) for \(M_{q}(\mathcal{H})\) satisfies \(J(\xi)=\xi^{*}\) for all \(\xi\in\mathcal{F}(\mathcal{H})\), or rather for \(\xi\) in the image of \(\mathcal{F}(\mathcal{H})\) in \(\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\). The \(q\)-Gaussian von Neumann algebra is abelian when \(q=1\), amenable when \(q=-1\), a free group factor when \(q=0\), and a nonamenable \(\mathrm{II}_{1}\) factor when \(-1<q<1\). 
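As a quick illustration of (10) in the lowest nontrivial degree (a sketch, for \(q\neq 1\) and \(\xi,\eta\in\mathcal{H}\)): since \(l_{q}(\xi)^{*}\eta=\langle\eta,\xi\rangle\Omega\), we compute \[s(\xi)s(\eta)\Omega=s(\xi)\eta=l_{q}(\xi)\eta+l_{q}(\xi)^{*}\eta=\xi\otimes\eta+\langle\eta,\xi\rangle\Omega,\] and, as \(\Omega\) is separating and Wick operators depend linearly on their defining vectors, the first identity in (10) yields the degree-two Wick formula \[s(\xi\otimes\eta)=s(\xi)s(\eta)-\langle\eta,\xi\rangle 1.\]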
Moreover, when \(\dim(\mathcal{H})<\infty\), it is known that \(M_{q}(\mathcal{H})\cong M_{0}(\mathcal{H})\) for \(|q|\) small enough (depending on \(\dim(\mathcal{H})\)) [10]. We note that when \(\dim(\mathcal{H})<\infty\), then \(M_{q}(\mathcal{H})\) is biexact. Indeed, Shlyakhtenko noted in [12, Section 4] that for \(\dim(\mathcal{H})<\infty\), if the C\({}^{*}\)-algebra generated by \(l(\xi)\) for \(\xi\in\mathcal{H}\) is nuclear, then \(M_{q}(\mathcal{H})\) has strong property (AO) (see also [11]). Generalizing a result of Dykema and Nica [10], Kuzmin has recently shown in [14] that this C\({}^{*}\)-algebra is always nuclear for \(-1\leq q<1\), and hence it then follows from Shlyakhtenko's result and Theorem 7.17 that \(M_{q}(\mathcal{H})\) is biexact. If \(\dim(\mathcal{H})=\infty\) and \(q\not\in\{-1,0,1\}\), then from Theorem 7.19 and Caspers' result [11] we see that \(M_{q}(\mathcal{H})\) is not biexact in this case. Indeed, using [13, Theorem 2], it is shown in [1, Theorem 3.3] that if \(d\geq 1\) is such that \(d>q^{2}d>1\), and \(\mathcal{H}_{0}\subset\mathcal{H}\) with \(d=\dim(\mathcal{H}_{0})<\dim(\mathcal{H})\), then the Hilbert \(M_{q}(\mathcal{H}_{0})\)-bimodule \(L^{2}(M_{q}(\mathcal{H}))\ominus L^{2}(M_{q}(\mathcal{H}_{0}))\) is not weakly contained in the coarse bimodule. If \(\mathcal{U}\) is a nonprincipal ultrafilter on \(\mathbb{N}\), then by considering the natural embedding \[L^{2}(M_{q}(\mathcal{H}^{\mathcal{U}}))\ominus L^{2}(M_{q}(\mathcal{H})) \subset L^{2}((M_{q}(\mathcal{H}))^{\mathcal{U}})\ominus L^{2}(M_{q}( \mathcal{H}))\] we then see that \(M_{q}(\mathcal{H})\) fails to satisfy the conclusion of Theorem 7.20 whenever \(\dim(\mathcal{H})=\infty\) and \(q\not\in\{-1,0,1\}\). 
If we have an isometry \(V\in\mathbb{B}(\mathcal{H},\mathcal{K})\), then we obtain an isometry \(V^{\mathcal{F}}\in\mathbb{B}(\mathcal{F}(\mathcal{H}_{\mathbb{C}}),\mathcal{F }(\mathcal{K}_{\mathbb{C}}))\), and conjugation by this isometry then gives us a normal embedding \(M_{q}(\mathcal{H})\hookrightarrow M_{q}(\mathcal{K})\). The trace-preserving conditional expectation is then given by conjugation by the coisometry \((V^{*})^{\mathcal{F}}\). In particular, if we have an orthogonal transformation \(V\in\mathcal{O}(\mathcal{H})\), then conjugation by \(V^{\mathcal{F}}\) gives rise to a trace-preserving automorphism \(\sigma_{V}\in\operatorname{Aut}(M_{q}(\mathcal{H}))\) that satisfies \(\sigma_{V}(s(\xi))=s(V\xi)\) for \(\xi\in\mathcal{H}\). If we have an orthogonal representation \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\), then the resulting action \(\Gamma\curvearrowright^{\sigma_{\pi}}M_{q}(\mathcal{H})\) is called the \(q\)-Gaussian action associated to \(\pi\), or simply the Gaussian action when \(q=1\). If \(\theta\in(0,\pi/2)\), then we let \(V_{\theta}\in\mathcal{O}(\mathcal{H}\oplus\mathcal{H})\) denote the orthogonal transformation \(V_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\), and we let \(\alpha_{\theta}\in\operatorname{Aut}(M_{q}(\mathcal{H}\oplus\mathcal{H}))\) denote the corresponding automorphism. We will identify \(M_{q}(\mathcal{H})\) with \(M_{q}(\mathcal{H}\oplus 0)\subset M_{q}(\mathcal{H}\oplus\mathcal{H})\). 
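The effect of compressing \(V_{\theta}^{\mathcal{F}}\) back to \(L^{2}(M_{q}(\mathcal{H}))\) can also be checked numerically in low degree: since \(V_{\theta}(\xi\oplus 0)=\cos\theta\,\xi\oplus\sin\theta\,\xi\) and the \(0\oplus\mathcal{H}\)-components are orthogonal to \(\mathcal{H}\oplus 0\) in every term of the \(q\)-inner product, a degree-\(n\) simple tensor pairing is scaled by \(\cos^{n}\theta\). The following sketch (plain Python; `embed`, `rotate`, and `q_inner` are ad hoc helper names) verifies this for \(n=2\).

```python
import math
from itertools import permutations

def inversions(sigma):
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j])

def q_inner(xis, etas, q):
    # q-deformed inner product on simple tensors over R^d
    if len(xis) != len(etas):
        return 0.0
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return sum(q ** inversions(s) *
               math.prod(dot(xis[j], etas[s[j]]) for j in range(len(xis)))
               for s in permutations(range(len(xis))))

def embed(xi):
    # H ∋ xi ↦ xi ⊕ 0 ∈ H ⊕ H
    return tuple(xi) + (0.0,) * len(xi)

def rotate(xi, theta):
    # V_theta(xi ⊕ 0) = (cos θ) xi ⊕ (sin θ) xi
    c, s = math.cos(theta), math.sin(theta)
    return tuple(c * a for a in xi) + tuple(s * a for a in xi)

q, theta = 0.3, 0.7
e1, e2 = (1.0, 0.0), (0.0, 1.0)
# Degree-2 check: <V_theta^F (e1 ⊗ e2), (e1 ⊕ 0) ⊗ (e2 ⊕ 0)>_q
#               = cos^2(theta) <e1 ⊗ e2, e1 ⊗ e2>_q
lhs = q_inner([rotate(e1, theta), rotate(e2, theta)],
              [embed(e1), embed(e2)], q)
rhs = math.cos(theta) ** 2 * q_inner([e1, e2], [e1, e2], q)
print(abs(lhs - rhs) < 1e-12)  # True
```

This is exactly the pattern behind the positivity of the compression \(e_{M_{q}(\mathcal{H})}V_{\theta}^{\mathcal{F}}e_{M_{q}(\mathcal{H})}\) computed in (11) below.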
Note that since \(V_{\theta}^{\mathcal{F}}\) preserves each direct summand in the decomposition \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})=\mathbb{C}\Omega\oplus\bigoplus_{n\geq 1 }\mathcal{H}_{\mathbb{C}}^{\otimes n}\), we may then explicitly compute \[e_{M_{q}(\mathcal{H})}V_{\theta}^{\mathcal{F}}e_{M_{q}(\mathcal{H})}=\sum_{n \geq 0}\cos^{n}\theta P_{n} \tag{11}\] where \(P_{n}\) denotes the orthogonal projection onto \(\mathcal{H}_{\mathbb{C}}^{\otimes n}\subset\mathcal{F}(\mathcal{H}_{\mathbb{ C}})\). In particular, \(e_{M_{q}(\mathcal{H})}V_{\theta}^{\mathcal{F}}e_{M_{q}(\mathcal{H})}\) is a positive operator on \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})\). The equivalence between (1) and (2) in the following lemma is claimed in [1]. It is easy to check in the case when \(\dim(\mathcal{H})<\infty\), which is the main case of interest in [1]. Our proof for the general case is adapted from [10, Lemma 2.4]. **Lemma 8.5**.: _Let \(-1\leq q\leq 1\), let \(p\in\mathcal{P}(M_{q}(\mathcal{H}))\) be a nonzero projection, and suppose \(B\subset pM_{q}(\mathcal{H})p\) is a von Neumann subalgebra. The following conditions are equivalent:_ 1. \(B\) _is completely atomic._ 2. _We have uniform convergence_ \(\alpha_{\theta}\to\operatorname{id}\) _in_ \(\|\cdot\|_{2}\) _on_ \(\mathcal{U}(B)\) _as_ \(\theta\to 0\)_._ 3. _For each nonzero projection_ \(r\in\mathcal{P}(B^{\prime}\cap pM_{q}(\mathcal{H})p)\) _and_ \(\theta\in(0,\pi/2)\) _we have_ \[\inf_{u\in\mathcal{U}(Br)}\|E_{M_{q}(\mathcal{H})}\circ\alpha_{\theta}(u)\|_{2 }>0.\] 4. 
_For each nonzero projection_ \(r\in\mathcal{P}(B^{\prime}\cap pM_{q}(\mathcal{H})p)\) _there exists_ \(\theta\in(0,\pi/2)\)_, an orthogonal transformation_ \(\gamma\in\mathcal{O}(\mathcal{H})\)_, and a nonzero partial isometry_ \(v\in\alpha_{\theta}(r)M_{q}(\mathcal{H}\oplus\mathcal{H})\sigma_{\gamma}(r)\) _such that_ \(\alpha_{\theta}(b)v=v\sigma_{\gamma}(b)\) _for all_ \(b\in B\) Proof.: The implications \((1)\implies(2)\) and \((2)\implies(3)\) are trivial. If \((3)\) holds, and if \(r\in\mathcal{P}(B^{\prime}\cap pM_{q}(\mathcal{H})p)\) and \(\theta\in(0,\pi/2)\) are such that \[\inf_{u\in\mathcal{U}(Br)}\|E_{M_{q}(\mathcal{H})}\circ\alpha_{\theta}(u)\|_{2} >0,\] then by \((11)\) we have \[\inf_{u\in\mathcal{U}(Br)}\tau(\alpha_{\theta}(u)u^{*})>0.\] We may then apply a ubiquitous convexity argument to get a nonzero partial isometry \(v\in\alpha_{\theta}(r)M_{q}(\mathcal{H}\oplus\mathcal{H})r\) so that \(\alpha_{\theta}(b)v=vb\) for all \(b\in B\). Indeed, if \(d>0\) is such that \(\tau(\alpha_{\theta}(u)u^{*})\geq d\) for all \(u\in\mathcal{U}(Br)\), then the unique element \(x\) of minimal \(\|\cdot\|_{2}\) in the strongly closed convex hull of \(\{\alpha_{\theta}(u)u^{*}\mid u\in\mathcal{U}(Br)\}\subset\alpha_{\theta}(r)M _{q}(\mathcal{H}\oplus\mathcal{H})r\) is nonzero (since \(\tau(x)\geq d\)) and satisfies \(\alpha_{\theta}(u)xu^{*}=x\) for \(u\in\mathcal{U}(B)\). If \(x\) has polar decomposition \(x=v|x|\), then we have \(v\in\alpha_{\theta}(r)M_{q}(\mathcal{H}\oplus\mathcal{H})r\) is a nonzero partial isometry that satisfies \(\alpha_{\theta}(b)v=vb\) for \(b\in B\). This then shows that \((3)\implies(4)\), with \(\gamma\) being the identity automorphism. To show \((4)\implies(1)\), we argue by way of contradiction and assume that \((4)\) holds and that \(B\) is not completely atomic. We may then choose a projection \(z\in\mathcal{Z}(B)\) so that \(zB\) is diffuse, and replacing \(B\) with \(zB\) we may assume that \(z=p\). 
We let \(\theta\in(0,\pi/2)\), \(\gamma\in\mathcal{O}(\mathcal{H})\), and \(v\in\alpha_{\theta}(p)M_{q}(\mathcal{H})\sigma_{\gamma}(p)\) be given as in \((4)\). We now take \(\varepsilon>0\) to be chosen later and let \(\mathcal{H}_{0}\subset\mathcal{H}\) be a finite-dimensional subspace so that \(v^{\prime}=E_{M_{q}(\mathcal{H}_{0}\oplus\mathcal{H}_{0})}(v)\) satisfies \(\|v^{\prime}-v\|_{2}<\varepsilon\). Set \(r=\sigma_{\gamma^{-1}}(v^{*})\in B^{\prime}\cap pM_{q}(\mathcal{H})p\). Then, for \(b\in(rB)_{1}\) we have \[\|\alpha_{\theta}(b)v^{\prime}-v^{\prime}\sigma_{\gamma}(b)\|_{2}\leq 2\varepsilon.\] Note that \(\alpha_{\theta}(b)v^{\prime}\in M_{q}(\mathcal{K})=Q\), where \(\mathcal{K}\subset\mathcal{H}\oplus\mathcal{H}\) is the closed subspace spanned by \(V_{\theta}((\mathcal{H}\ominus\mathcal{H}_{0})\oplus 0)\) and \(\mathcal{H}_{0}\oplus\mathcal{H}_{0}\). Since \(v^{\prime}\in Q\), we have \(E_{Q}(v^{\prime}\sigma_{\gamma}(b))=v^{\prime}E_{Q}(\sigma_{\gamma}(b))\) for \(b\in rB\), and hence for all \(b\in(rB)_{1}\) we have \[\|v^{\prime}E_{Q}(\sigma_{\gamma}(b))-v^{\prime}\sigma_{\gamma}(b)\|_{2}\leq 4\varepsilon.\] Since \(v^{\prime}\sigma_{\gamma}(b)\in M_{q}(\mathcal{H}\oplus\mathcal{H}_{0})\), by a simple computation we also have \[\|v^{\prime}E_{Q}(\sigma_{\gamma}(b))\|_{2}^{2} \leq\cos\theta\|v^{\prime}\sigma_{\gamma}(b)-E_{M_{q}(\mathcal{H} _{0}\oplus\mathcal{H}_{0})}(v^{\prime}\sigma_{\gamma}(b))\|_{2}^{2}+\|E_{M_{q} (\mathcal{H}_{0}\oplus\mathcal{H}_{0})}(v^{\prime}\sigma_{\gamma}(b))\|_{2}^{2}\] \[=\cos\theta\|v^{\prime}\sigma_{\gamma}(b)\|_{2}^{2}+(1-\cos\theta) \|E_{M_{q}(\mathcal{H}_{0}\oplus\mathcal{H}_{0})}(v^{\prime}\sigma_{\gamma}(b ))\|_{2}^{2}.\] Hence \[(1-\cos\theta)\|v^{\prime}E_{M_{q}(\mathcal{H}_{0}\oplus\mathcal{H }_{0})}(\sigma_{\gamma}(b))\|_{2}^{2} \geq\|v^{\prime}E_{Q}(\sigma_{\gamma}(b))\|_{2}^{2}-\cos\theta\|v^{ \prime}\sigma_{\gamma}(b)\|_{2}^{2}\] \[\geq(1-\cos\theta)\|v^{\prime}\sigma_{\gamma}(b)\|_{2}^{2}-(4 
\varepsilon)^{2}.\] Choosing \(\varepsilon>0\) sufficiently small, we may then find \(c>0\) and a finite-dimensional subspace \(\mathcal{H}_{0}\subset\mathcal{H}\) so that \[\|E_{M_{q}(\mathcal{H}_{0})}(\sigma_{\gamma}(u))\|_{2}=\|E_{M_{q}(\mathcal{H}_{ 0}\oplus\mathcal{H}_{0})}(\sigma_{\gamma}(u))\|_{2}\geq\|vE_{M_{q}(\mathcal{H }_{0}\oplus\mathcal{H}_{0})}(\sigma_{\gamma}(u))\|_{2}\geq c\] for all \(u\in\mathcal{U}(rB)\). By Popa's Intertwining Theorem [Pop06, Theorem 2.1], we then have \(\sigma_{\gamma}(rB)\preceq_{M_{q}(\mathcal{H})}M_{q}(\mathcal{H}_{0})\), i.e., there exist projections \(e\in rB\), \(f\in M_{q}(\mathcal{H}_{0})\), a nonzero partial isometry \(w\in eM_{q}(\mathcal{H})f\) and a unital normal \(*\)-homomorphism \(\phi:\sigma_{\gamma}(eBe)\to fM_{q}(\mathcal{H}_{0})f\) such that \(bw=w\phi(b)\) for all \(b\in\sigma_{\gamma}(eBe)\). We then have that \(u=\alpha_{\theta}(\sigma_{\gamma^{-1}}(w^{*}))vw\) is a nonzero partial isometry with \(u^{*}u\leq f\) and satisfying \(\alpha_{\theta}(x)u=u\sigma_{\gamma}(x)\) for \(x\in\tilde{B}:=\sigma_{\gamma^{-1}}(\phi(\sigma_{\gamma}(eBe)))\subset M_{q}( \gamma^{-1}\mathcal{H}_{0})\). If \(\delta>0\), then we may take \(\mathcal{H}_{1}\subset\mathcal{H}\) finite-dimensional with \(\gamma^{-1}\mathcal{H}_{0}\subset\mathcal{H}_{1}\) so that \(\|u-E_{M_{q}(\mathcal{H}_{1})}(u)\|_{2}<\delta\), and it then follows as above that \[\|E_{M_{q}(\mathcal{H}_{1})}\circ\alpha_{\theta}(x)u-u\sigma_{\gamma}(x)\|_{2}= \|E_{M_{q}(\mathcal{H}_{1})}\circ\alpha_{\theta}(x)u-uE_{M_{q}(\mathcal{H}_{1} )}(\sigma_{\gamma}(x))\|_{2}<4\delta\] for all \(x\in\tilde{B}\). Since \(\mathcal{H}_{1}\) is finite-dimensional we have that \(E_{M_{q}(\mathcal{H}_{1})}\circ\alpha_{\theta}\) is compact as an operator on \(\mathcal{F}(\mathcal{H}_{\mathbb{C}})\), and hence, if \(\{x_{k}\}_{k}\in\tilde{B}\) is any uniformly bounded sequence that converges weakly to \(0\), we have \(\limsup_{k\to\infty}\|ux_{k}\|_{2}<4\delta\). 
Since \(\delta>0\) was arbitrary, we then have that \(\tilde{B}\) is not diffuse, and hence neither is \(B\), since it has a corner that is isomorphic to \(\tilde{B}\). We continue to let \(V_{\theta}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\), and let \(\alpha_{\theta}\in\operatorname{Aut}(M_{q}(\mathcal{H}\oplus\mathcal{H}))\) be the associated automorphism as defined above. Note that if we have an orthogonal representation \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\), then \(\alpha_{\theta}\) is \(\Gamma\)-equivariant with respect to the \(q\)-Gaussian action associated to \(\pi\oplus\pi\), and so we may extend it to an automorphism (again denoted by \(\alpha_{\theta}\)) of \(\tilde{M}=M_{q}(\mathcal{H}\oplus\mathcal{H})\rtimes^{\sigma_{\pi\oplus\pi}}\Gamma\) so that \(\alpha_{\theta}\) is the identity map on \(L\Gamma\subset\tilde{M}\). We may bootstrap the previous lemma to obtain the following version adapted to the setting of crossed products. **Lemma 8.6**.: _Let \(-1\leq q\leq 1\), and let \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\) be an orthogonal representation. Set \(M=M_{q}(\mathcal{H})\rtimes^{\sigma_{\pi}}\Gamma\), and let \(p\in\mathcal{P}(M)\) be a nonzero projection. Suppose \(B\subset pMp\) is a von Neumann subalgebra such that \(Br\preceq_{M}M_{q}(\mathcal{H})\) for each nonzero projection \(r\in B^{\prime}\cap pMp\). The following conditions are equivalent:_ 1. \(B\) _is completely atomic._ 2. _We have uniform convergence_ \(\alpha_{\theta}\to\operatorname{id}\) _in_ \(\|\cdot\|_{2}\) _on_ \(\mathcal{U}(B)\) _as_ \(\theta\to 0\)_._ 3. _For each nonzero projection_ \(r\in\mathcal{P}(B^{\prime}\cap pMp)\) _and_ \(\theta\in(0,\pi/2)\)_, we have_ \[\inf_{u\in\mathcal{U}(Br)}\|E_{M}\circ\alpha_{\theta}(u)\|_{2}>0.\] 4. 
_For each nonzero projection_ \(r\in\mathcal{P}(B^{\prime}\cap pMp)\)_, there exist_ \(\theta\in(0,\pi/2)\) _and a nonzero partial isometry_ \(v\in\alpha_{\theta}(r)\tilde{M}r\) _such that_ \(\alpha_{\theta}(b)v=vb\) _for all_ \(b\in B\)_._ Proof.: The proofs of the implications (1) \(\implies\) (2) and (2) \(\implies\) (3) are trivial. Moreover, the same proof as in Lemma 8.5 shows (3) \(\implies\) (4). We now suppose that (4) holds, and, by way of contradiction, we may restrict to the diffuse corner of \(B\) and assume that \(B\) itself is diffuse. Let \(r\in\mathcal{P}(B^{\prime}\cap pMp)\), \(\theta\in(0,\pi/2)\), and \(v\in\alpha_{\theta}(r)\tilde{M}r\) be as above. Since \(Bv^{*}v\preceq_{M}M_{q}(\mathcal{H})\), we may find projections \(e\in Bv^{*}v\), \(f\in M_{q}(\mathcal{H})\), a nonzero partial isometry \(w\in eMf\), and a unital normal \(*\)-homomorphism \(\phi:eBe\to fM_{q}(\mathcal{H})f\) so that \(bw=w\phi(b)\) for all \(b\in eBe\). Then \(u=\alpha_{\theta}(w^{*})vw\) is a nonzero partial isometry with \(u^{*}u\leq f\) so that \(\alpha_{\theta}(x)u=ux\) for all \(x\in\phi(eBe)\subset M_{q}(\mathcal{H})\). If we take the Fourier representation \(u=\sum_{t\in\Gamma}a_{t}u_{t}\) with \(a_{t}\in M_{q}(\mathcal{H}\oplus\mathcal{H})\), then by uniqueness of the Fourier representation we have \(\alpha_{\theta}(x)a_{t}=a_{t}\sigma_{\pi(t)}(x)\) for all \(x\in\phi(eBe)\) and \(t\in\Gamma\). It then follows from Lemma 8.5 that \(\phi(eBe)\) is not diffuse, and hence we conclude that \(B\) is also not diffuse. The previous lemmas are of interest even in the case of the classical Gaussian actions when \(q=1\), where they can be used to give the following relatively simple proof of Boutonnet's result [13] showing solid ergodicity for Gaussian actions associated to representations \(\pi\) having the property that \(\pi^{\otimes k}\prec\lambda\) for some \(k\geq 1\). 
Moreover, as in [10, Section 9], this approach avoids any hypothesis of mixingness for the representation. **Theorem 8.7**.: _Let \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\) be an orthogonal representation such that \(\pi^{\otimes k}\prec\lambda\) for some \(k\geq 1\). Set \(A=M_{1}(\mathcal{H})\) and \(M=A\rtimes^{\sigma_{\pi}}\Gamma\). If \(p\in\mathcal{P}(M)\) is a nonzero projection and \(B\subset pMp\) is a diffuse von Neumann subalgebra such that \(Br\preceq_{M}A\) for any nonzero projection \(r\in B^{\prime}\cap pMp\), then \(B^{\prime}\cap pMp\) is amenable._ Proof.: Since \(B\) is diffuse and \(Br\preceq_{M}A\) for any nonzero projection \(r\in B^{\prime}\cap pMp\), it follows from Lemma 8.6 that there exist sequences \(\{\theta_{n}\}_{n}\subset(0,\pi/2)\) with \(\theta_{n}\to 0\) and \(\{u_{n}\}_{n}\subset\mathcal{U}(B)\) such that \(\|E_{M}\circ\alpha_{\theta_{n}}(u_{n})\|_{2}\to 0\). Setting \(\xi_{n}=\alpha_{\theta_{n}}(u_{n})-E_{M}\circ\alpha_{\theta_{n}}(u_{n})\), we then have that \(\{\xi_{n}\}_{n}\subset L^{2}\tilde{M}\ominus L^{2}M\) defines a sequence of asymptotically left and right tracial vectors that are also asymptotically \(B^{\prime}\cap pMp\)-central. Taking the \(k\)-fold tensor product then gives a sequence \(\{\xi_{n}\otimes_{M}\cdots\otimes_{M}\xi_{n}\}_{n}\subset(L^{2}\tilde{M} \ominus L^{2}M)^{\otimes_{M}k}\), which is also asymptotically left and right tracial and asymptotically \(B^{\prime}\cap pMp\)-central. Since \(\pi^{\otimes k}\prec\lambda\), the Hilbert \(M\)-bimodule \((L^{2}\tilde{M}\ominus L^{2}M)^{\otimes_{M}k}\) is weakly contained in the coarse bimodule [1, Lemma 3.3], and it therefore follows that \(B^{\prime}\cap pMp\) is amenable. 
**Corollary 8.8**.: _Let \(\Gamma\) be biexact and let \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\) be an orthogonal representation such that \(\pi^{\otimes k}\prec\lambda\) for some \(k\geq 1\), then \(M_{1}(\mathcal{H})\rtimes^{\sigma_{\pi}}\Gamma\) is solid._

Proof.: Set \(M=M_{1}(\mathcal{H})\rtimes^{\sigma_{\pi}}\Gamma\) and \(A=M_{1}(\mathcal{H})\), and let \(B\subset M\) be a diffuse von Neumann subalgebra. We let \(p\in B^{\prime}\cap M\) denote the maximal projection so that \(Bpr\preceq_{M}A\) for all nonzero projections \(r\in(Bp)^{\prime}\cap pMp\) [1, Proposition 1.1]. By Theorem 8.7, we have that \((Bp)^{\prime}\cap M\) is amenable. By Proposition 8.3, \(M\) is biexact relative to \(A\), and so it follows from Proposition 6.13 that \((Bp^{\perp})^{\prime}\cap M\) is also amenable. Hence, \(B^{\prime}\cap M\) is amenable.

The previous corollary should be contrasted with the following result; together they show that there exist solid von Neumann algebras that are not biexact.

**Theorem 8.9**.: _Let \(\pi:\Gamma\to\mathcal{O}(\mathcal{H})\) be an orthogonal representation such that \(\pi\not\prec\lambda\), then \(M=M_{1}(\mathcal{H}\overline{\otimes}\ell^{2}\mathbb{N})\rtimes^{\sigma_{\pi\otimes 1}}\Gamma\) is not biexact relative to \(L\Gamma\)._

Proof.: The proof is similar to the proof of Theorem 7.20.
Note that, for \(t\in\Gamma\), we have \(u_{t}Ju_{t}J(\xi\otimes\delta_{n}\otimes\delta_{e})=\pi(t)\xi\otimes\delta_{n}\otimes\delta_{e}\), and since \(\pi\not\prec\lambda\) there then exists a unit vector \(\eta\in\mathcal{H}\) and a finitely supported function \(a=\sum_{t\in\Gamma}\alpha_{t}t\in\mathbb{C}\Gamma\) so that

\[\|(\sum_{t\in\Gamma}\alpha_{t}u_{t}Ju_{t}J)(\eta\otimes\delta_{n}\otimes\delta_{e})\|>\|\sum_{t\in\Gamma}\alpha_{t}\lambda_{t}\|=\|\sum_{t\in\Gamma}\alpha_{t}u_{t}\otimes Ju_{t}J\|.\]

We fix a nonprincipal ultrafilter \(\mathcal{U}\) on \(\mathbb{N}\) and define a state \(\varphi\) on \(\mathbb{B}(L^{2}M)\) by

\[\varphi(T)=\lim_{n\to\mathcal{U}}\langle T(\eta\otimes\delta_{n}\otimes\delta_{e}),(\eta\otimes\delta_{n}\otimes\delta_{e})\rangle.\]

It is then easy to see that \(\varphi_{|M}\) and \(\varphi_{|JMJ}\) are both normal, and \(\varphi(T)=0\) for any \(T\in\mathbb{K}(\mathcal{F}(\mathcal{H}\overline{\otimes}\ell^{2}\mathbb{N}))\otimes\mathbb{B}(\ell^{2}\Gamma)\). Hence, we have \(\varphi_{|\mathbb{K}_{\mathbb{X}_{L\Gamma}}^{\infty,1}(M)}=0\), and so \(\varphi\) defines a state on \(C^{*}(M,JMJ,\mathbb{K}_{\mathbb{X}_{L\Gamma}}(M))/\mathbb{K}_{\mathbb{X}_{L\Gamma}}(M)\). If \(x=\sum_{t\in\Gamma}\alpha_{t}u_{t}Ju_{t}J\), then from above we have

\[\|x\|^{2}\geq\varphi(x^{*}x)>\|\sum_{t\in\Gamma}\alpha_{t}u_{t}\otimes Ju_{t}J\|^{2},\]

so that the map

\[C^{*}_{\lambda}\Gamma\odot JC^{*}_{\lambda}\Gamma J\ni\sum_{i=1}^{m}b_{i}\otimes c_{i}\mapsto\sum_{i=1}^{m}b_{i}c_{i}+\mathbb{K}_{\mathbb{X}_{L\Gamma}}(M)\in C^{*}(M,JMJ,\mathbb{K}_{\mathbb{X}_{L\Gamma}}(M))/\mathbb{K}_{\mathbb{X}_{L\Gamma}}(M)\]

is not min-continuous, and hence \(M\) is not biexact relative to \(L\Gamma\) by Theorem 7.19.
### Strong Solidity of \(q\)-Gaussian von Neumann algebras for infinite variables

If \(M\) and \(N\) are tracial von Neumann algebras and \(\mathcal{H}\) is a normal Hilbert \(M\)-\(N\) bimodule, then a vector \(\xi\in\mathcal{H}\) is left-bounded if the map \(L_{\xi}:N\to\mathcal{H}\) defined by \(L_{\xi}(x)=\xi x\) is bounded when \(N\) is endowed with the norm \(\|\cdot\|_{2}\). We may then view \(L_{\xi}\) as an operator in \(\mathbb{B}(L^{2}N,\mathcal{H})\). We let \(\mathcal{H}^{0}\) denote the space of left-bounded vectors. Given two left-bounded vectors \(\xi,\eta\in\mathcal{H}\), we may check that \(L_{\eta}^{*}L_{\xi}\in(JNJ)^{\prime}\cap\mathbb{B}(L^{2}N)=N\). If \(Q\) is another tracial von Neumann algebra and \(\mathcal{K}\) is a normal Hilbert \(N\)-\(Q\) bimodule, then we have a non-negative definite sesquilinear form on \(\mathcal{H}^{0}\otimes_{\mathrm{alg}}\mathcal{K}\), satisfying

\[\langle\xi\otimes\eta,\xi^{\prime}\otimes\eta^{\prime}\rangle=\langle(L_{\xi^{\prime}}^{*}L_{\xi})\eta,\eta^{\prime}\rangle\]

for \(\xi,\xi^{\prime}\in\mathcal{H}^{0}\) and \(\eta,\eta^{\prime}\in\mathcal{K}\). The Connes fusion of \(\mathcal{H}\) and \(\mathcal{K}\) over \(N\) is the separation and completion of \(\mathcal{H}^{0}\otimes_{\mathrm{alg}}\mathcal{K}\) with respect to this sesquilinear form, and is denoted by \(\mathcal{H}\,\overline{\otimes}_{N}\,\mathcal{K}\) (see [11]). We denote by \(\xi\otimes_{N}\eta\) the image of \(\xi\otimes\eta\) in \(\mathcal{H}\,\overline{\otimes}_{N}\,\mathcal{K}\). This is, then, a normal Hilbert \(M\)-\(Q\) bimodule satisfying

\[a(\xi\otimes_{N}\eta)b=(a\xi)\otimes_{N}(\eta b)\]

for \(a\in M\), \(b\in Q\), \(\xi\in\mathcal{H}^{0}\) and \(\eta\in\mathcal{K}\).
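That this form is indeed non-negative definite follows from positivity of an operator matrix over \(N\); a sketch of the verification (the row operator \(V\) below is introduced only for this computation):

```latex
\Big\langle \sum_{i}\xi_{i}\otimes\eta_{i},\ \sum_{j}\xi_{j}\otimes\eta_{j}\Big\rangle
  = \sum_{i,j}\big\langle (L_{\xi_{j}}^{*}L_{\xi_{i}})\,\eta_{i},\ \eta_{j}\big\rangle
  = \big\langle (V^{*}V)\,\eta,\ \eta\big\rangle \;\geq\; 0,
\qquad
V=\begin{pmatrix} L_{\xi_{1}} & \cdots & L_{\xi_{n}} \end{pmatrix},\quad
\eta=(\eta_{1},\ldots,\eta_{n}),
```

since \(V^{*}V=[L_{\xi_{j}}^{*}L_{\xi_{i}}]_{j,i}\) is a positive matrix over \(N\), acting on \(\mathcal{K}^{\oplus n}\) through the left \(N\)-action.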
Note that if \(\mathcal{H}\) is a real Hilbert space and \(-1\leq q<1\), then each vector \(\xi\in\mathcal{F}(\mathcal{H})\) defines a left-bounded vector in the trivial \(M_{q}(\mathcal{H})\) bimodule \(L^{2}(M_{q}(\mathcal{H}))\cong\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\), and in this case we have \(L_{\xi}\hat{x}=Jx^{*}J\xi=s(\xi)\hat{x}\) so that \(L_{\xi}=s(\xi)\). If \(B\subset M_{q}(\mathcal{H})\) is a von Neumann subalgebra and we view \(\mathcal{F}(\mathcal{H})\) as a left Hilbert \(B\) module, then we have \(L_{\eta}^{*}L_{\xi}=E_{B}(s(\eta)^{*}s(\xi))\) for all \(\xi,\eta\in\mathcal{F}(\mathcal{H})\). **Proposition 8.10**.: _Let \(-1\leq q<1\) and let \(\mathcal{H}_{1},\mathcal{H}_{2}\), and \(\mathcal{K}\) be real Hilbert spaces, then we have an \(M_{q}(\mathcal{H}_{1}\oplus\mathcal{K})\)-\(M_{q}(\mathcal{K}\oplus\mathcal{H}_{2})\) bimodular isometry_ \[\Xi:L^{2}(M_{q}(\mathcal{H}_{1}\oplus\mathcal{K}))\,\overline{\otimes}_{M_{q} (\mathcal{K})}\,L^{2}(M_{q}(\mathcal{K}\oplus\mathcal{H}_{2}))\to L^{2}(M_{q} (\mathcal{H}_{1}\oplus\mathcal{K}\oplus\mathcal{H}_{2}))\] _satisfying_ \[\Xi(\xi\otimes_{M_{q}(\mathcal{K})}\eta)=s(\xi\oplus 0)s(0\oplus\eta)\Omega\] _for all \(\xi\in\mathcal{F}(\mathcal{H}_{1}\oplus\mathcal{K})\) and \(\eta\in\mathcal{F}(\mathcal{K}\oplus\mathcal{H}_{2})\)._ Proof.: We define the real-linear map \[\Xi_{0}:\mathcal{F}(\mathcal{H}_{1}\oplus\mathcal{K})\otimes_{\mathrm{alg}} \mathcal{F}(\mathcal{K}\oplus\mathcal{H}_{2})\to L^{2}(M_{q}(\mathcal{H}_{1} \oplus\mathcal{K}\oplus\mathcal{H}_{2}))\] by setting \(\Xi_{0}(\xi\otimes\eta)=s(\xi\oplus 0)s(0\oplus\eta)\Omega\) for \(\xi\in\mathcal{F}(\mathcal{H}_{1}\oplus\mathcal{K})\) and \(\eta\in\mathcal{F}(\mathcal{K}\oplus\mathcal{H}_{2})\). 
Using that \(s(\eta)\Omega=Js(\eta^{*})J\Omega\), we then check that

\[\begin{aligned}
\langle\Xi_{0}(\xi_{1}\otimes\eta_{1}),\Xi_{0}(\xi_{2}\otimes\eta_{2})\rangle &=\langle s(\xi_{2}^{*}\oplus 0)s(\xi_{1}\oplus 0)s(0\oplus\eta_{1})\Omega,s(0\oplus\eta_{2})\Omega\rangle\\
&=\langle E_{M_{q}(0\oplus\mathcal{K}\oplus 0)}(s(\xi_{2}^{*}\oplus 0)s(\xi_{1}\oplus 0))s(0\oplus\eta_{1})\Omega,s(0\oplus\eta_{2})\Omega\rangle\\
&=\langle\xi_{1}\otimes\eta_{1},\xi_{2}\otimes\eta_{2}\rangle,
\end{aligned}\]

where the last inner-product is taken in \(L^{2}(M_{q}(\mathcal{H}_{1}\oplus\mathcal{K}))\,\overline{\otimes}_{M_{q}(\mathcal{K})}\,L^{2}(M_{q}(\mathcal{K}\oplus\mathcal{H}_{2}))\). It then follows that \(\Xi\) is a well-defined isometry, and it is easy to see that \(\Xi\) is \(M_{q}(\mathcal{H}_{1}\oplus\mathcal{K})\)-\(M_{q}(\mathcal{K}\oplus\mathcal{H}_{2})\) bimodular from (10).

We remark that, when \(q=1\), the above proposition still holds and is easy to check, although in this case \(\xi\in\mathcal{F}(\mathcal{H}_{1}\oplus\mathcal{K})\) does not define a left-bounded vector, and so one has to properly interpret the vector \(\xi\otimes_{M_{1}(\mathcal{K})}\eta\).

**Theorem 8.11**.: _Let \(\mathcal{H}\) be a real Hilbert space and \(-1\leq q\leq 1\). If \(p\in\mathcal{P}(M_{q}(\mathcal{H}))\) is a nonzero projection and \(P\subset pM_{q}(\mathcal{H})p\) is a von Neumann subalgebra with no amenable direct summand, then \(P\) is properly proximal._

Proof.: We use the same strategy as in [10, Propositions 9.1 and 9.2]. We set \(M=M_{q}(\mathcal{H})\) and \(\tilde{M}=M_{q}(\mathcal{H}\oplus\mathcal{H})\).
First, note that we have a grading of Hilbert \(M\)-bimodules \(L^{2}(\tilde{M})=\oplus_{k=0}^{\infty}\mathcal{L}_{k}\), where \(\mathcal{L}_{k}\) denotes the span of all simple tensors of the form \(\xi_{1}\otimes\cdots\otimes\xi_{n}\in\mathcal{F}(\mathcal{H}_{\mathbb{C}}\oplus\mathcal{H}_{\mathbb{C}})\) such that exactly \(k\) of the terms \(\xi_{i}\) are contained in \(0\oplus\mathcal{H}_{\mathbb{C}}\) and all other terms are contained in \(\mathcal{H}_{\mathbb{C}}\oplus 0\). Note also that \(\mathcal{L}_{k}\cong\overline{\mathcal{L}_{k}}\), and by Proposition 8.10 it follows that for \(j,k\geq 0\) we have an embedding of Hilbert \(M\)-bimodules \(\mathcal{L}_{j}\,\overline{\otimes}_{M}\,\mathcal{L}_{k}\hookrightarrow\mathcal{L}_{j+k}\).

Suppose \(p\in\mathcal{P}(M)\) and \(P\subset pMp\) is a von Neumann subalgebra. We may take \(r\leq p\) to be the maximal subprojection in \(\mathcal{Z}(P)\) such that \(rP\) is properly proximal. Our goal is, then, to show that \((p-r)P\) is amenable. By replacing \(P\) with \((p-r)P+(p-r)^{\perp}\mathbb{C}\), we may assume that \(1\in P\). Let \(A_{n}\in\mathbb{K}(\mathcal{H})\) be an approximate unit, and take \(\theta_{n}\in(0,\pi/2)\) with \(\theta_{n}\to 0\). We let \(V_{n}\in\mathcal{O}(\mathcal{H}\oplus\mathcal{H})\) be given by the matrix

\[V_{n}=\begin{pmatrix}\cos\theta_{n}&-\sin\theta_{n}\\ \sin\theta_{n}&\cos\theta_{n}\end{pmatrix}\begin{pmatrix}A_{n}&-\sqrt{1-A_{n}^{2}}\\ \sqrt{1-A_{n}^{2}}&A_{n}\end{pmatrix}\]

and we then have that \(e_{M}V_{n}^{\mathcal{F}}e_{M}\) defines a compact operator on \(L^{2}(M)\cong\mathcal{F}_{q}(\mathcal{H}_{\mathbb{C}})\), and the corresponding automorphisms \(\alpha_{n}\in\operatorname{Aut}(\tilde{M})\) satisfy \(\alpha_{n}\to\operatorname{id}\) in the point-ultraweak topology as \(n\to\infty\).
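As a sanity check on the shape of \(V_{n}\): replacing the operator \(A_{n}\) by a scalar contraction \(a\in[0,1]\), both factors are \(2\times 2\) rotation-type matrices, so their product is orthogonal. A small numerical sketch (the particular values are arbitrary, and the scalar stand-in is of course only illustrative of the operator case):

```python
import numpy as np

def V(theta, a):
    """Product of a rotation by theta with the 'rotation' built from a
    scalar contraction a (a stand-in for the compact operator A_n)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.array([[a, -np.sqrt(1 - a ** 2)],
                  [np.sqrt(1 - a ** 2), a]])
    return R @ S

M = V(0.3, 0.8)
assert np.allclose(M.T @ M, np.eye(2))   # the product is orthogonal
```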
Since \(P\) has no properly proximal direct summand, and since \(e_{P}V_{n}^{\mathcal{F}}e_{P}\) is compact, it then follows from the argument in [10, Proposition 9.1] that there exists a \(P\)-central state \(\varphi\) on \((M^{\operatorname{op}})^{\prime}\cap\mathbb{B}(\oplus_{m\geq 1}\mathcal{L}_{m})\) such that \(\varphi_{|M}=\tau\). By Connes's versions of Day's and Namioka's tricks (see, e.g., Section 10.3 in [1]), there then exists a net of unit vectors \(\xi_{i}\in(\oplus_{m\geq 1}\mathcal{L}_{m})\,\overline{\otimes}_{M}\,(\overline{\oplus_{m\geq 1}\mathcal{L}_{m}})\cong L^{2}((M^{\operatorname{op}})^{\prime}\cap\mathbb{B}(\oplus_{m\geq 1}\mathcal{L}_{m}))\) such that \(\langle x\xi_{i},\xi_{i}\rangle=\langle\xi_{i}x,\xi_{i}\rangle=\tau(x)\) for all \(x\in M\) and \(\|a\xi_{i}-\xi_{i}a\|\to 0\) for all \(a\in P\). We may then take tensor powers \(\zeta_{i,k}=\xi_{i}^{\otimes_{M}k}\) and view \(\zeta_{i,k}\) as vectors in \(\oplus_{m\geq 2k}\mathcal{L}_{m}\) that are left and right \(M\)-tracial, and are asymptotically \(P\)-central.

We now show that the existence of the vectors \(\zeta_{i,k}\) implies that \(P\) is amenable. We fix \(u_{1},\ldots,u_{n}\in\mathcal{U}(P)\) and \(\varepsilon>0\). There then exists a finite-dimensional subspace \(\mathcal{H}_{0}\subset\mathcal{H}\) so that \(\|E_{M_{q}(\mathcal{H}_{0})}(u_{j})-u_{j}\|_{2}<\varepsilon/2n\) for all \(1\leq j\leq n\). Since \(\mathcal{H}_{0}\) is finite-dimensional, Proposition 4.1 in [11] shows that for some \(k\geq 1\) we have that \(\oplus_{m\geq 2k}\mathcal{L}_{m}\) is weakly contained in the coarse correspondence as a Hilbert \(M_{q}(\mathcal{H}_{0})\)-bimodule.
Thus, we have \[\|\sum_{j=1}^{n}u_{j}\otimes u_{j}^{\operatorname{op}}\|_{M\otimes M ^{\operatorname{op}}} \geq\|\sum_{j=1}^{n}E_{M_{q}(\mathcal{H}_{0})}(u_{j})\otimes E_{M_{q}( \mathcal{H}_{0})}(u_{j})^{\operatorname{op}}\|\] \[\geq\lim_{i\to\infty}\|\sum_{j=1}^{n}E_{M_{q}(\mathcal{H}_{0})}(u_ {j})\zeta_{i,k}E_{M_{q}(\mathcal{H}_{0})}(u_{j})^{*}\|\] \[=\lim_{i\to\infty}\|\sum_{j=1}^{n}E_{M_{q}(\mathcal{H}_{0})}(u_{j} )E_{M_{q}(\mathcal{H}_{0})}(u_{j})^{*}\zeta_{i,k}\|\] \[=\|\sum_{j=1}^{n}E_{M_{q}(\mathcal{H}_{0})}(u_{j})E_{M_{q}( \mathcal{H}_{0})}(u_{j})^{*}\|_{2}\] \[\geq n-\varepsilon.\] Since \(\varepsilon>0\) was arbitrary, it then follows from [10] that \(P\) is amenable. **Corollary 8.12**.: _Let \(\mathcal{H}\) be a real Hilbert space and \(-1\leq q\leq 1\), then \(M_{q}(\mathcal{H})\) is strongly solid._ Proof.: By [1, Theorem A] the von Neumann algebra \(M=M_{q}(\mathcal{H})\) has the complete metric approximation property (see [11] for a simple proof), and hence, by [1, Theorem 3.5] for any embedding of a diffuse amenable von Neumann algebra \(P\subset M_{q}(\mathcal{H})\), we have that \(\mathcal{N}_{M}(P)\!\curvearrowright\!P\) is weakly compact. Theorem 8.11, together with [1, Theorem 6.11] then shows that \(\mathcal{N}_{M}(P)^{\prime\prime}\) is amenable.
2309.09424
Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning
Quantum machine learning (QML) is emerging as an application of quantum computing with the potential to deliver quantum advantage, but its realisation for practical applications remains impeded by challenges. Amongst those, a key barrier is the computationally expensive task of encoding classical data into a quantum state, which could erase any prospective speed-ups over classical algorithms. In this work, we implement methods for the efficient preparation of quantum states representing encoded image data using variational, genetic and matrix product state based algorithms. Our results show that these methods can approximately prepare states to a level suitable for QML using circuits two orders of magnitude shallower than a standard state preparation implementation, obtaining drastic savings in circuit depth and gate count without unduly sacrificing classification accuracy. Additionally, the QML models trained and evaluated on approximately encoded data display an increased robustness to adversarially generated input data perturbations. This partial alleviation of adversarial vulnerability, possible due to the "drowning out" of adversarial perturbations while retaining the meaningful large-scale features of the data, constitutes a considerable benefit for approximate state preparation in addition to lessening the requirements of the quantum hardware. Our results, based on simulations and experiments on IBM quantum devices, highlight a promising pathway for the future implementation of accurate and robust QML models on complex datasets relevant for practical applications, bringing the possibility of NISQ-era QML advantage closer to reality.
Maxwell T. West, Azar C. Nakhl, Jamie Heredge, Floyd M. Creevey, Lloyd C. L. Hollenberg, Martin Sevior, Muhammad Usman
2023-09-18T01:49:36Z
http://arxiv.org/abs/2309.09424v1
# Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning

###### Abstract

Quantum machine learning (QML) is emerging as an application of quantum computing with the potential to deliver quantum advantage, but its realisation for practical applications remains impeded by challenges. Amongst those, a key barrier is the computationally expensive task of encoding classical data into a quantum state, which could erase any prospective speed-ups over classical algorithms. In this work, we implement methods for the efficient preparation of quantum states representing encoded image data using variational, genetic and matrix product state based algorithms. Our results show that these methods can approximately prepare states to a level suitable for QML using circuits two orders of magnitude shallower than a standard state preparation implementation, obtaining drastic savings in circuit depth and gate count without unduly sacrificing classification accuracy. Additionally, the QML models trained and evaluated on approximately encoded data display an increased robustness to adversarially generated input data perturbations. This partial alleviation of adversarial vulnerability, possible due to the "drowning out" of adversarial perturbations while retaining the meaningful large-scale features of the data, constitutes a considerable benefit for approximate state preparation in addition to lessening the requirements of the quantum hardware. Our results, based on simulations and experiments on IBM quantum devices, highlight a promising pathway for the future implementation of accurate and robust QML models on complex datasets relevant for practical applications, bringing the possibility of NISQ-era QML advantage closer to reality.
## Introduction

The incredible capabilities of Transformer-based models [1, 2, 3, 4, 5] have provoked society-wide interest in artificial intelligence (AI) and machine learning (ML), which is increasingly moving beyond academic and scientific applications and into business, industrial and military use cases. Concurrently, the emergence of programmable quantum computers has led to intense interest in the prospect of quantum machine learning (QML) [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] - the study of ML algorithms which exploit the capabilities of quantum computers. Given the rapid proliferation of ML technology, any speedups, enhancements to robustness or other advantages which can be afforded by quantum computing have the potential to be highly impactful. Indeed, QML models have been shown, in principle, to possess the ability to make use of classically intractable features of data to outperform conventional classical methods through exponential speed-ups [15] and enhanced resilience to adversarial attacks [18]. However, it is unclear whether such features will generally prove useful for generic classification tasks, particularly for classical data which has no inherently quantum mechanical source or structure. Before it can be processed by a quantum computer, such data must first be encoded into a quantum state, a generically \(\mathcal{O}(2^{n_{\text{qubit}}})\) process [19] which has the potential to dominate the runtime of a QML algorithm and negate any potential quantum advantage (see Figure 1(a)). Quantum state preparation [20, 21, 22, 19] is therefore often the first and most computationally expensive subroutine of a QML model, but remains a comparatively understudied component of the QML pipeline.
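To make the \(\mathcal{O}(2^{n_{\text{qubit}}})\) cost concrete: amplitude encoding stores a classical vector of \(2^{n}\) values in the amplitudes of an \(n\)-qubit state, so an exact encoder must in general fix exponentially many amplitudes. A minimal numpy sketch of the classical side of this encoding (a toy 4x4 "image"; no particular quantum SDK is assumed):

```python
import numpy as np

def amplitude_encode(image):
    """Flatten and L2-normalise pixel data into a 2^n-dimensional
    state vector; the qubit count is log2 of the pixel count."""
    x = np.asarray(image, dtype=float).ravel()
    state = x / np.linalg.norm(x)
    n_qubits = int(round(np.log2(state.size)))
    assert 2 ** n_qubits == state.size, "pixel count must be a power of two"
    return state, n_qubits

image = np.arange(1, 17).reshape(4, 4)   # toy 4x4 "image"
state, n = amplitude_encode(image)
assert n == 4                            # 16 pixels -> 4 qubits
assert np.isclose(np.linalg.norm(state), 1.0)
```

An exact circuit realising such a state generally needs gate counts that track the \(2^{n}\) amplitudes, which is what the variational, genetic and MPS-based methods discussed here aim to avoid by preparing the state only approximately with far shallower circuits.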
With the quantum devices of the current generation offering only limited capabilities in terms of both number of qubits and gate fidelities, the implementation and benchmarking of efficient quantum state preparation techniques (reducing circuit depths) such as performed here is an important step towards improving the prospects of QML models for datasets of practical interest. This work demonstrates that one can preserve the accuracy and increase the adversarial robustness of QML models classifying image data by moving to approximate data encoding schemes, while simultaneously reducing the encoding circuit complexities by orders of magnitude, providing a crucial advantage in experimentally implementing QML on noisy quantum hardware platforms. The ability to do this stems from the generic resilience of machine learning models to random perturbations of their inputs [29]. For example, Figure 1(b) shows that our QML models are capable of maintaining their accuracy on noisy image data (with fidelities as low as 60%) encoded exactly into quantum states. This indicates that an approximation to the input quantum state compromising fidelity but resulting in an easier circuit preparation with reduced depths (speeding up the state preparation) may have only a minor impact on the QML accuracy. Motivated by this remarkable property, we consider three independent approximate state preparation methods (see Figure 1(c)), based on matrix product states (MPS) [30, 31, 32], genetic algorithms (specifically, the GASP method of Ref. [23]) and variational circuits. Our results show that all three methods are capable of preparing states representing images from the standard MNIST [33] and
2309.06717
Bias Amplification Enhances Minority Group Performance
Neural networks produced by standard training are known to suffer from poor accuracy on rare subgroups despite achieving high accuracy on average, due to the correlations between certain spurious features and labels. Previous approaches based on worst-group loss minimization (e.g. Group-DRO) are effective in improving worse-group accuracy but require expensive group annotations for all the training samples. In this paper, we focus on the more challenging and realistic setting where group annotations are only available on a small validation set or are not available at all. We propose BAM, a novel two-stage training algorithm: in the first stage, the model is trained using a bias amplification scheme via introducing a learnable auxiliary variable for each training sample; in the second stage, we upweight the samples that the bias-amplified model misclassifies, and then continue training the same model on the reweighted dataset. Empirically, BAM achieves competitive performance compared with existing methods evaluated on spurious correlation benchmarks in computer vision and natural language processing. Moreover, we find a simple stopping criterion based on minimum class accuracy difference that can remove the need for group annotations, with little or no loss in worst-group accuracy. We perform extensive analyses and ablations to verify the effectiveness and robustness of our algorithm in varying class and group imbalance ratios.
Gaotang Li, Jiarui Liu, Wei Hu
2023-09-13T04:40:08Z
http://arxiv.org/abs/2309.06717v2
# Bias Amplification Enhances Minority Group Performance

###### Abstract

Neural networks produced by standard training are known to suffer from poor accuracy on rare subgroups despite achieving high accuracy on average, due to the correlations between certain spurious features and labels. Previous approaches based on worst-group loss minimization (_e.g._ Group-DRO) are effective in improving worst-group accuracy but require expensive group annotations for all the training samples. In this paper, we focus on the more challenging and realistic setting where group annotations are only available on a small validation set or are not available at all. We propose Bam, a novel two-stage training algorithm: in the first stage, the model is trained using a _bias amplification_ scheme via introducing a learnable _auxiliary variable_ for each training sample; in the second stage, we upweight the samples that the bias-amplified model misclassifies, and then continue training the same model on the reweighted dataset. Empirically, Bam achieves competitive performance compared with existing methods evaluated on spurious correlation benchmarks in computer vision and natural language processing. Moreover, we find a simple stopping criterion based on _minimum class accuracy difference_ that can remove the need for group annotations, with little or no loss in worst-group accuracy. We perform extensive analyses and ablations to verify the effectiveness and robustness of our algorithm in varying class and group imbalance ratios.

## 1 Introduction

The presence of spurious correlations in the data distribution, also referred to as "shortcuts" (Geirhos et al., 2020), is known to cause machine learning models to generate unintended decision rules that rely on spurious features. For example, image classifiers can largely use background instead of the intended combination of object features to make predictions (Beery et al., 2018).
A similar phenomenon is also prevalent in natural language processing (Gururangan et al., 2018) and reinforcement learning (Lehman et al., 2020). In this paper, we focus on the _group robustness_ formulation of such problems (Sagawa et al., 2019), where we assume the existence of _spurious attributes_ in the training data and define _groups_ to be the combination of class labels and spurious attributes. The objective is to achieve high _worst-group accuracy_ on test data, which would indicate that the model is not exploiting the spurious attributes. Under this setup, one type of method uses a distributionally robust optimization framework to directly minimize the worst-group training loss (Sagawa et al., 2019). While these methods are effective in improving worst-group accuracy, they require knowing the group annotations for all training examples, which is expensive and oftentimes unrealistic. In order to resolve this issue, a line of recent work focused on designing methods that do not require group annotations for the training data, but need them for a small set of validation data (Liu et al., 2021; Nam et al., 2020; 2022; Zhang et al., 2022). A common feature shared by these methods is that they all consist of training two models: the first model is trained using plain empirical risk minimization (ERM) and is intended to be "biased" toward certain groups; then, certain results from the first model are utilized to train a debiased second model to achieve better worst-group performance. For instance, a representative method is Jtt (Liu et al., 2021), which, after training the first model using ERM for a few epochs, trains the second model while upweighting the training examples incorrectly classified by the first model.
The core question that motivates this paper is: _Since the first model is intended to be biased, can we amplify its bias in order to improve the final group robustness?_ Intuitively, a bias-amplified first model can provide better information to guide the second model to be debiased, which can potentially lead to improved group robustness. To this end, we propose a novel two-stage algorithm, Bam (Bias AMplification), for improving worst-group accuracy without any group annotations for training data:

* _Stage 1: Bias amplification._ We train a bias-amplified model by introducing a trainable auxiliary variable for each training example.
* _Stage 2: Rebalanced training._ We upweight the training examples that are misclassified in Stage 1, and continue training the same model instead of retraining a new model.

Footnote 1: In Figure 1, we use Grad-CAM visualization to illustrate that our bias-amplified model from Stage 1 focuses more on the image background, while the final model after Stage 2 focuses on the object target.

Evaluated on various standard benchmark datasets for spurious correlations, including Waterbirds (Wah et al., 2011; Sagawa et al., 2019), CelebA (Liu et al., 2015; Sagawa et al., 2019), MultiNLI (Williams et al., 2018; Sagawa et al., 2019), and CivilComments-WILDS (Borkan et al., 2019; Koh et al., 2021), we find that Bam achieves competitive worst-group accuracy compared to existing methods in the setting where group annotations are only available on a validation set. Digging deeper into the mechanism of Bam, we observe that the auxiliary variables learned in Stage 1 exhibit clearly different magnitudes between majority- and minority-group examples, thus confirming their effectiveness in bias amplification. We also find that Bam achieves robust performance across hyperparameter choices.
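The Stage-1 objective itself is not reproduced in this excerpt; the sketch below illustrates one plausible reading of the per-example auxiliary-variable mechanism (following Hu et al., 2020), in which each example's logits are shifted by its own trainable vector \(b_i\) scaled by a coefficient \(\lambda\) before the loss is taken. The function names and the exact parameterization are assumptions, not the paper's definitions:

```python
import numpy as np

def softmax_ce(logits, y):
    """Cross-entropy of a single example with integer label y."""
    z = logits - logits.max()            # stabilised softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[y])

def stage1_loss(logits, y, b, lam):
    """Hypothetical per-example Stage-1 term: the auxiliary vector b
    (trained jointly with the network) shifts the logits by lam * b."""
    return softmax_ce(logits + lam * b, y)

logits = np.array([2.0, 0.5, 0.1])       # toy 3-class output, true class 0
b = np.zeros(3); b[0] = 1.0              # auxiliary variable aligned with the label
# An auxiliary variable aligned with the label lowers the loss on its own, so
# examples it can absorb exert less pressure on the network weights -- our
# reading of the intended bias-amplification effect, not a quoted result.
assert stage1_loss(logits, 0, b, 2.0) < stage1_loss(logits, 0, np.zeros(3), 2.0)
```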
In addition, our ablation studies demonstrate the clear advantage of continued training in Stage 2 over training a separate model from scratch, and the effectiveness of each of our proposed components. Furthermore, we explore the possibility of _completely_ removing the need for group annotations. We find that low class accuracy difference (which does not require any group annotations to evaluate) is strongly correlated with high worst-group accuracy. Using minimum class accuracy difference as the stopping criterion, Bam outperforms the previous state-of-the-art annotation-free method, GEORGE (Sohoni et al., 2020), by a considerable margin, and closes the performance gap between GEORGE and fully-supervised Group-DRO by an average of 88% on the image classification datasets. We also perform extensive controlled experiments and find that this approach is robust across different datasets, varying dataset sizes, and varying imbalance ratios.

Figure 1: Using Grad-CAM (Selvaraju et al., 2017) to visualize the effect of the bias amplification and rebalanced training stages: the classifier heavily relies on the background information to make predictions after bias amplification, but focuses on the useful feature (the bird) itself after the rebalanced training stage.

## 2 Related Works

A variety of recent work discussed different realms of robustness, for instance, class imbalance (He and Garcia, 2009; Huang et al., 2016; Khan et al., 2017; Johnson and Khoshgoftaar, 2019; Thabtah et al., 2020), and robustness under distribution shift, where the target data distribution is different from the source data distribution (Clark et al., 2019; Zhang et al., 2020; Marklund et al., 2020; Lee et al., 2022; Yao et al., 2022). In this paper, we mainly focus on improving group robustness. Categorized by the amount of information available for training and validation, we discuss three directions below.
**Improving Group Robustness with Training Group Annotations.** Multiple works used training group annotations to improve worst-group accuracy (Byrd and Lipton, 2019; Khani et al., 2019; Goel et al., 2020; Cao et al., 2020; Sagawa et al., 2020). Other works include minimizing the worst-group training loss using distributionally robust optimization (Group-DRO) (Sagawa et al., 2019), simple training data balancing (SUBG) (Idrissi et al., 2022), and retraining the last layer of the model on the group-balanced dataset (DFR) (Kirichenko et al., 2022). These methods achieve state-of-the-art performance on all benchmark datasets. However, the acquisition of spurious attributes of the entire training set is expensive and often unrealistic. **Improving Group Robustness with Validation Group Annotations Only.** Acknowledging the cost of obtaining group annotations, many recent works focus on the setting where training group annotations are not available (Duchi and Namkoong, 2019; Oren et al., 2019; Levy et al., 2020; Pezeshki et al., 2021). Taghanaki et al. (2021) proposed a transformation network to remove the spurious correlated features from image datasets and then choose classifier architectures according to the downstream task. Shu et al. (2019) utilized a small set of unbiased meta-data to reweight data samples. CVaR DRO (Duchi et al., 2019) introduced a variant of distributionally robust optimization that dynamically reweights data samples that have the highest losses. Jeon et al. (2022) proposed a method that utilizes the multi-variant and unbiased characteristics of CNNs through hierarchical feature extraction and orthogonal regularization. In particular, the most popular recent methods of achieving high worst-group accuracy involve training two models. CNC (Zhang et al., 2022) first trains an ERM model to help infer pseudo-group labels by clustering output features and then adopts a standard contrastive learning approach to improve robustness. 
SSA (Nam et al., 2022) uses a subset of the validation samples with group annotations for training to obtain pseudo spurious attributes, and then trains a robust model by minimizing worst-group loss, the same as Group-DRO. Similarly, \(\text{DFR}^{\text{Val}}_{\text{Tr}}\)(Kirichenko et al., 2022) uses validation data with group annotations for training and tuning hyperparameters, though it just requires retraining the last layer of the model. Kim et al. (2022) introduces a multi-model approach that identifies hard-to-learn samples and obtains their weights based on the consensus of member classifiers of a committee, and simultaneously trains a main classifier through knowledge distillation. Most related to this paper are the approaches that use one model to identify minority samples and then train a second model based on the results predicted by the first model (Yaghoobzadeh et al., 2019; Utama et al., 2020). LfF (Nam et al., 2020) trains two models concurrently, where one model is intentionally biased, and the other one is debiased by reweighting the gradient of the loss according to a relative difficulty score. Jtt(Liu et al., 2021) first trains an ERM model to identify minority groups in the training set (similar to EIIL (Creager et al., 2021)), and then trains a second ERM model with these selected samples to be upweighted. Hwang et al. (2022) trains an auxiliary model with generalized supervised contrastive loss to learn biased features, and then implicitly upweights of minority groups based on the mixup method (Zhang et al., 2017). Our algorithm improves over these methods by introducing trainable per-example auxiliary variables for bias amplification in the first model, as well as using continued training instead of training a separate second model. **Improving Group Robustness without any Group Annotations.** Relatively less work has been done under the condition that no group information is provided for either training or validation. Idrissi et al. 
(2022); Liu et al. (2021) observed a significant drop (10% - 25%) in worst-group test accuracy if using the highest _average_ validation accuracy as the stopping criterion without any group annotations. GEORGE (Sohoni et al., 2020) generates pseudo group annotations by performing clustering in the neural network's feature space, and uses the pseudo annotations to train a robust model via distributionally robust optimization. Seo et al. (2022) clustered the pseudo-attributes based on the embedding feature of a naively trained model, and then defined a trainable factor to reweight different clusters based on their sizes and target losses. However, there is a considerable performance gap between annotation-free methods and those that use group annotations. The method of per-example auxiliary variables was first introduced in Hu et al. (2020) for a different motivation (learning with noisy labels). Hu et al. (2020) proved that it recovers \(\ell_{2}\) regularization when the neural network is in the Neural Tangent Kernel (Jacot et al., 2018) regime, and provided a theoretical guarantee for the noisy label learning problem. Our result shows that this method is also applicable to improving worst-group accuracy through its bias amplification effect.

## 3 Preliminaries

We adopt the group robustness formulation for spurious correlation (Sagawa et al., 2019). Consider a classification problem where each sample consists of an input \(x\in\mathcal{X}\), a label \(y\in\mathcal{Y}\), and a spurious attribute \(a\in\mathcal{A}\). For example, in CelebA, \(\mathcal{X}\) contains images of human faces and we want to classify hair color into the labels \(\mathcal{Y}=\{\text{blonde},\text{not blonde}\}\). Hair color may be highly correlated with gender \(\mathcal{A}=\{\text{male},\text{female}\}\), which is a spurious feature that can also predict the label. We say that each example belongs to a group \(g=(y,a)\in\mathcal{G}=\mathcal{Y}\times\mathcal{A}\).
Let \(f:\mathcal{X}\rightarrow\mathcal{Y}\) be a classifier learned from a training dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{n}\). We hope that \(f\) does not overly rely on the spurious feature \(a\) to predict the label. To this end, we evaluate the model through its _worst-group error_: \[\text{Err}_{\text{wg}}(f):=\max_{g\in\mathcal{G}}\mathbb{E}_{x,y|g}[1[f(x)\neq y]].\] We focus on the setting where no group annotations are available in the training dataset. We consider two cases under this setting: (1) group annotations are available in a validation set solely for the purpose of hyperparameter tuning, and (2) no group annotations are available at all. We will distinguish between these cases when comparing them with existing methods.

## 4 Our Approach: Bam

```
Input: Training dataset \(D\), number of epochs \(T\) in Stage 1, auxiliary coefficient \(\lambda\), and upweight factor \(\mu\).
Stage 1: Bias Amplification
  1. Optimize \(R_{1}(\theta,B)\) (Equation 1) for \(T\) epochs and save the model parameters \(\hat{\theta}_{\text{bias}}\).
  2. Construct the error set \(E\) (Equation 2) of examples misclassified by \(\hat{f}_{\text{bias}}(\cdot)=f_{\hat{\theta}_{\text{bias}}}(\cdot)\).
Stage 2: Rebalanced Training
  3. Continue training the model starting from \(\hat{\theta}_{\text{bias}}\) to optimize \(R_{2}(\theta)\) (Equation 3).
  4. Apply a stopping criterion:
     * If group annotations are available for validation, stop at the highest worst-group validation accuracy;
     * If no group annotations are available, stop at the lowest validation class difference (Equation 4).
```
**Algorithm 1** Bam

We now present Bam, a two-stage approach to improving worst-group accuracy without any group annotations at training time. In Stage 1, we train a _bias-amplified model_ and select examples that this model makes mistakes on. Then, in Stage 2, we continue to train the same model while upweighting the samples selected from Stage 1.
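To make the evaluation metric concrete, the worst-group error defined in Section 3 can be computed directly from per-example predictions, labels, and group identifiers. The following is a minimal NumPy sketch; the function name and array-based interface are our own illustration, not the paper's code:

```python
import numpy as np

def worst_group_error(preds, labels, groups):
    """Worst-group error Err_wg(f): the largest per-group
    misclassification rate over all groups g = (y, a)."""
    preds, labels, groups = map(np.asarray, (preds, labels, groups))
    per_group_err = [
        np.mean(preds[groups == g] != labels[groups == g])
        for g in np.unique(groups)
    ]
    return max(per_group_err)

# Example: three groups; group 1 has one of its two examples misclassified,
# so the worst-group error is 0.5 even though average error is only 1/6.
preds  = [0, 1, 1, 0, 1, 1]
labels = [0, 1, 0, 0, 1, 1]
groups = [0, 0, 1, 1, 2, 2]
```

The example illustrates why this metric is stricter than average accuracy: a model can be nearly perfect on average while failing badly on one small group.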
### Stage 1: Bias Amplification

The key intuition behind previous two-stage approaches (_e.g._ Jtt) is that standard training via ERM tends to first fit easy-to-learn groups with spurious correlations, but not the other hard-to-learn groups where spurious correlations are not present. Therefore, the samples that the model misclassified in the first stage can be treated as a proxy for hard-to-learn groups and used to guide the second stage. We design a bias-amplifying scheme in Stage 1 with the aim of identifying a higher-quality error set to guide training in Stage 2. In particular, we introduce a trainable auxiliary variable for each example and add it to the output of the network. Let \(f_{\theta}:\mathcal{X}\rightarrow\mathbb{R}^{C}\) be the neural network with parameters \(\theta\), where \(C=|\mathcal{Y}|\) is the total number of classes. We use the following objective function in Stage 1: \[R_{1}(\theta,B)=\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\theta}(x_{i})+\lambda b_{i}, y_{i}). \tag{1}\] Here, \(b_{i}\in\mathbb{R}^{C}\) is the auxiliary variable for the \(i\)-th example in the training set, and the collection of auxiliary variables \(B=(b_{1},\ldots,b_{n})\) is learnable and is learned together with the network parameters \(\theta\) via gradient-based optimization (\(B\) is initialized to be all 0). \(\ell\) is the loss function. \(\lambda\) is a hyperparameter that controls the strength of the auxiliary variables: if \(\lambda=0\), this reduces to standard ERM and auxiliary variables are not used; the larger \(\lambda\) is, the more we are relying on auxiliary variables to reduce the loss. The introduction of auxiliary variables makes it more difficult for the network \(f_{\theta}\) to learn, because the auxiliary variables can do the job of fitting the labels. We expect this effect to be more pronounced for hard-to-learn examples.
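As a concrete illustration of Equation 1, the sketch below implements the Stage 1 objective for a toy linear softmax classifier in NumPy, taking joint gradient steps on both the model weights (standing in for \(\theta\)) and the per-example auxiliary variables \(B\). The linear model, function names, and learning rate are our own illustrative assumptions; the paper applies this objective to deep networks:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def r1_loss(W, B, X, y, lam):
    """Stage 1 objective R1(theta, B): mean cross-entropy of the
    shifted logits f_theta(x_i) + lam * b_i against y_i."""
    p = softmax(X @ W + lam * B)
    n = X.shape[0]
    return -np.mean(np.log(p[np.arange(n), y] + 1e-12))

def stage1_step(W, B, X, y, lam, lr=0.1):
    """One gradient step on R1, updating BOTH the weights W and the
    auxiliary variables B (B receives gradient scaled by lam, since
    the logits depend on lam * b_i)."""
    n = X.shape[0]
    p = softmax(X @ W + lam * B)
    p[np.arange(n), y] -= 1.0      # d(mean CE)/d(logits), times n
    g = p / n
    W = W - lr * (X.T @ g)         # update model parameters
    B = B - lr * (lam * g)         # update per-example auxiliary variables
    return W, B
```

With \(\lambda>0\) the auxiliary variables absorb part of the fit, which is the bias-amplification mechanism described above: examples the model itself struggles with lean more heavily on their \(b_i\).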
For example, if in normal ERM training it takes a long time for the network \(f_{\theta}\) to fit a hard-to-learn example \((x_{i},y_{i})\), then after introducing the auxiliary variable \(b_{i}\), it will be much easier to use \(b_{i}\) to fit the label \(y_{i}\), thus making the loss \(\ell(f_{\theta}(x_{i})+\lambda b_{i},y_{i})\) drop relatively faster. This will prohibit the network \(f_{\theta}\) itself from learning this example. This effect will be smaller for easy-to-learn examples, since the network itself can still quickly fit the labels without much reliance on the auxiliary variables. Therefore, **adding auxiliary variables amplifies the bias toward easy-to-learn examples**, making hard examples even harder to learn. We note that the trivial solution where the model achieves zero loss without actually learning will not occur with a proper choice of \(\lambda\). At the end of Stage 1, we evaluate the obtained model \(\hat{f}_{\text{bias}}(\cdot)=f_{\hat{\theta}_{\text{bias}}}(\cdot)\) on the training set and identify an _error set_ (note that the auxiliary variables are now removed): \[E=\{(x_{i},y_{i})\colon\hat{f}_{\text{bias}}(x_{i})\neq y_{i}\}. \tag{2}\]

### Stage 2: Rebalanced Training

In Stage 2, we continue training the model starting from the parameters \(\hat{\theta}_{\text{bias}}\) from Stage 1, using a rebalanced loss that upweights the examples in the error set \(E\): \[R_{2}(\theta)=\mu\sum_{(x,y)\in E}\ell(f_{\theta}(x),y)+\sum_{(x,y)\in D\setminus E}\ell(f_{\theta}(x),y), \tag{3}\] where \(\mu\) is a hyperparameter (the upweight factor). For a fair comparison, our implementation of upweighting follows that of Jtt (Liu et al., 2021), i.e., we construct an upsampled dataset containing examples in the error set \(\mu\) times and all other examples once. We note that more complicated approaches have been proposed for Stage 2, _e.g._, Zhang et al.
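The upsampling implementation of Equation 3 described above (error-set examples repeated \(\mu\) times, all others once) can be sketched as an index construction; the function name and interface are our own illustration:

```python
def build_upsampled_indices(preds, labels, mu):
    """Construct indices for the Stage 2 upsampled dataset: examples
    misclassified by the Stage 1 biased model (the error set E)
    appear mu times; all other examples appear once."""
    indices = []
    for i, (p, y) in enumerate(zip(preds, labels)):
        indices.extend([i] * (mu if p != y else 1))
    return indices
```

Training on this index multiset with the plain average loss is equivalent, up to an overall normalization constant, to weighting the error-set terms by \(\mu\) in Equation 3.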
(2022), but we stick with the simple rebalanced training method in order to focus on the bias amplification effect in Stage 1.

### Stopping Criterion without Any Group Annotations - Class Difference

When group annotations are available in a validation set, we can simply use the _worst-group validation accuracy_ as a stopping criterion and for hyperparameter tuning, similar to prior approaches (Nam et al., 2020; Liu et al., 2021; Creager et al., 2021). When no group annotations are available, a naive approach is to use the average validation accuracy as a proxy, but this results in poor worst-group accuracy (Liu et al., 2021; Idrissi et al., 2022). For the case when no group annotations are available, we identify a simple heuristic, _minimum class difference_, which we find to be highly effective and to result in little or no loss in worst-group accuracy. For a classification problem with \(C\) classes, we calculate the average of pairwise validation accuracy differences between classes as \[\text{ClassDiff}=\frac{1}{\binom{C}{2}}\sum_{1\leq i<j\leq C}|\text{Acc(class }i)-\text{Acc(class }j)|. \tag{4}\] ClassDiff can be calculated on a validation set without any group annotations. The main intuition for ClassDiff is that we expect the worst-group accuracy to be (near) highest when all group accuracies have a relatively small gap between each other. Below we present a simple claim showing that ClassDiff being small is a necessary condition for having a small group accuracy difference.

**Claim 4.1** (proved in Appendix C). _If the accuracies of any two groups differ by at most \(\epsilon\), then \(\text{ClassDiff}\leq\epsilon\)._

In all the datasets (where \(C=2\) or \(3\)) we experiment with, as well as for varying dataset size, class imbalance ratio, and group imbalance ratio, we observe that ClassDiff inversely correlates with worst-group accuracy (see Section 6.3) as long as ClassDiff does not fluctuate around the same value.
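Equation 4 is straightforward to compute from per-class validation accuracies; a minimal sketch (function name ours):

```python
from itertools import combinations

def class_diff(per_class_acc):
    """ClassDiff (Equation 4): the average absolute pairwise difference
    of per-class validation accuracies, computable without any group
    annotations. per_class_acc is a sequence of C accuracies."""
    pairs = list(combinations(per_class_acc, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)
```

This is consistent with Claim 4.1: each class accuracy is a mixture (weighted average) of the accuracies of that class's groups, so if all group accuracies lie within \(\epsilon\) of each other, every pairwise class difference, and hence ClassDiff, is at most \(\epsilon\).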
Our results suggest that ClassDiff is a useful stopping criterion when no group annotations are available. Our algorithm is summarized in Algorithm 1. It has three hyperparameters: \(T\) (number of epochs in Stage 1), \(\lambda\) (auxiliary variable coefficient), and \(\mu\) (upweight factor). We provide full training details in Appendix B.

## 5 Experiments

In this section, we demonstrate the effectiveness of Bam in improving worst-group accuracy compared to prior methods that are trained without spurious attributes on standard benchmark datasets.

### Setup

We conduct our experiments on four popular benchmark datasets containing spurious correlations. Two of them are image datasets: Waterbirds and CelebA, and the other two are NLP datasets: MultiNLI and CivilComments-WILDS. The full dataset details are in Appendix A. Bam is trained in the absence of training group annotations throughout all experiments. We obtain the main results of Bam via tuning with and without group annotations on the validation set, following Algorithm 1. For a fair comparison, we adopt the general settings from previous methods (Jtt) and stay consistent with other approaches without extensive hyperparameter tuning (batch size, learning rate, regularization strength in Stage 2, etc.). We use pretrained ResNet-50 (He et al., 2016) for image datasets, and pretrained BERT (Devlin et al., 2019) for NLP datasets. More details can be found in Appendix B.

### Results

Tables 1 and 2 report the average and worst-group test accuracies of Bam and compare it against standard ERM and recently proposed methods under different conditions, including SUBG (Idrissi et al., 2022), JTT (Liu et al., 2021), SSA (Nam et al., 2022), CNC (Zhang et al., 2022), GEORGE (Sohoni et al., 2020), and Group-DRO (Sagawa et al., 2019). We tune Bam and Jtt according to the highest worst-group validation accuracy (not annotation-free) and minimum class difference (annotation-free).
First, compared with other methods that use group annotations only for hyperparameter tuning, Bam consistently achieves higher worst-group accuracies across all datasets, with the exception of the CelebA dataset on which CNC achieves better performance. We note that CNC primarily focuses on improving Stage 2 with a more complicated contrastive learning method, while Bam uses the simple upweighting method. It is possible that the combination of CNC and Bam could lead to better results. Nevertheless, the result of Bam is promising on all other datasets, even comparable to the weakly-supervised method SSA and the fully-supervised method Group-DRO on Waterbirds and CivilComments-WILDS. Second, if the validation group annotations are not available at all, Bam achieves the best performance on all four benchmark datasets among all the baseline methods. Notably, Bam recovers a significant portion of the gap in worst-group accuracy between GEORGE (previous state-of-the-art that requires no group annotations) and Group-DRO/SSA (previous state-of-the-art requiring supervision), by an average of 88% on the image classification datasets. Bam's improved performance in worst-group accuracy comes at the expense of a moderate drop in average accuracy. The tradeoff between average accuracy and worst-group accuracy is consistent with the observation made by Liu et al. (2021); Sagawa et al. (2019). We note that our implementation of Jtt follows directly from its published code, and we obtain a much higher performance on the CivilComments-WILDS dataset than originally reported by Liu et al. (2021).

## 6 Analyses and Ablations

In this section, we perform detailed analyses and ablations to better understand various components of Bam, including auxiliary variables (Section 6.2), ClassDiff (Section 6.3), sensitivity of hyperparameters \(\lambda\) and \(T\) (Section 6.4), and the advantage of continued training in Stage 2 (Section 6.5).
### Additional Controlled Experiments Setup

Despite the popularity of the benchmark datasets, they have significantly different major-versus-minor class/group ratios and uncommon learning patterns (Idrissi et al., 2022). For a more rigorous analysis of our proposed auxiliary variables and ClassDiff, we introduce three additional controlled datasets where we adjust different class/group imbalance ratios: Colored-MNIST, Controlled-Waterbirds, and CIFAR10-MNIST. See Appendix A for details.

### Analysis of Auxiliary Variables

We verify the intuition behind the auxiliary variables by visualizing their learned values and comparing between majority and minority examples. Recall that we expect minority examples to rely more on auxiliary variables (see Section 4.1).

#### 6.2.1 Visualization of Auxiliary Variables

In Figure 2, we visualize the distribution of the learned auxiliary variables on each group of the original Waterbirds dataset at \(T=20\). We have two observations. First, auxiliary variables for examples in majority and minority groups are clearly distinguishable. Second, auxiliary variables for majority group examples are in general closer to the origin, while those in minority groups tend to have larger (positive) logit values on the ground truth class they actually belong to and smaller (negative) logit values on the class they do not belong to. The visualization shows that the auxiliary variables help with fitting hard-to-learn samples, which corroborates the intuition described in Section 4.1. We observe the same trend for any \(\lambda\in\{0.5,5,20,50,70,100\}\) as well as any Stage 1 epoch \(T\in\{20,50,100\}\). This supports our intuition that "adding auxiliary variables amplifies the bias toward easy-to-learn examples" in Section 4.1.

Figure 2: Distributions of the auxiliary variable w.r.t. the waterbird class (left) and landbird class (right) on the training set at \(T=20\).
We use two distinct colors to illustrate the distributions of two groups in each class. The coordinates of data sample \(i\) relative to the origin show the bias learned by the auxiliary variable.

#### 6.2.2 Visualization as Training Progresses

To further explore whether the auxiliary variables keep such trends in different scenarios, we take a deeper look into the training progression of the auxiliary variables in our controlled experiments, where we set a variety of different imbalance sizes/ratios. In Figures 3 and 4, scatter plots show how the values of auxiliary variables change as Stage 1 proceeds. Carefully controlling all other parameters and hyperparameters, we observe clear patterns on Controlled-Waterbirds and Colored-MNIST: First, as \(T\) grows, the logits become larger and increasingly influence the model predictions. Second, minority and majority groups are more separated from each other as training progresses, resulting in the bias amplification effect. Appendix D illustrates the values of \(b\) at more intermediate epochs \(T\) and clearly shows the trends.

### Results of ClassDiff

We perform extensive experiments on the four benchmark datasets, as well as on our controlled datasets by altering class and group imbalance ratios. We find that ClassDiff works well in every single experiment, which shows the effectiveness and robustness of ClassDiff across different settings.

**Real-World Benchmark Datasets.** Figure 5 plots the trend of the absolute class difference and the worst-group accuracy on the validation set in Stage 2 for Waterbirds, CelebA, MultiNLI, and CivilComments. Clearly, there is a strong inverse relationship between the absolute class difference and worst-group accuracy in the validation set on all four datasets, which justifies the use of class difference as a stopping criterion.
**Controlled Datasets.** On our controlled datasets (Colored-MNIST, CIFAR10-MNIST, and Controlled-Waterbirds), we vary the dataset sizes, class size ratios, and group size ratios. In Appendix E, we show the performance of ClassDiff in several randomly selected settings (we explain our procedure for selecting settings, which does not involve cherry-picking). In _every single setting_ we have tested, we observe a similar negative correlation as in Figure 5. These findings suggest that ClassDiff could be generally effective when no group annotations are available, as it is robust across different datasets, varying dataset sizes, and imbalance characteristics.

### Sensitivity of Hyperparameters

We find that Bam is robust to the choice of \(\lambda\): Table 3 below presents the best worst-group accuracies for a wide range of \(\lambda\) on Waterbirds, and Bam clearly demonstrates resilience against the choice of \(\lambda\). Furthermore, using a larger \(\lambda\) can eliminate the need to carefully tune the epoch number \(T\) in Stage 1. This resolves a major drawback of previous methods such as Jtt, whose performance is sensitive to the choice of \(T\). Table 4 shows that even when trained until full convergence in Stage 1, an appropriate choice of \(\lambda\) can still guarantee competitive performance.

### Ablation Studies on _One-M_ vs. _Two-M_

We conduct ablation studies on the Waterbirds dataset to test the effectiveness of our proposed method of continued training in Stage 2. For consistent terminology, we define the approach that loads the model from Stage 1 and continues training the same model in Stage 2 as _One-M_, and the approach that trains a separate ERM model in Stage 2 as _Two-M_. For a fair comparison, we employ the same hyperparameter settings throughout this section. We use the same stopping criterion here (highest worst-group validation accuracy).
We tune over \(T\in\{10,15,20,25,30,40,50,60,80,100,120\}\) and \(\mu\in\{50,100,140\}\) for each setting, and fix \(\lambda=50\). Each experiment is replicated three times with different seeds to counteract randomness, and the results are averaged. Figure 6 compares the performance between _One-M_ and _Two-M_ over a wide range of Stage 1 epochs \(T\). Notably, _One-M_ and _Two-M_ share the same error sets for each \(T\). The results suggest that _One-M_ Bam outperforms _Two-M_ Bam at every single Stage 1 epoch \(T\), further verifying our intuition that the biased knowledge accumulated in Stage 1 helps improve the final robustness. Additionally, as explained in Section 6.4 and also shown here in Figure 6, Bam demonstrates stable performance regardless of the choice of \(T\). In stark contrast, Jtt's performance degrades with increasing \(T\), as documented in Figure 3 of Liu et al. (2021).

## 7 Conclusion

In this work, we introduced Bam, a method that can effectively improve the worst-group accuracy across different NLP and CV benchmarks. Further experiments suggest the effectiveness of the bias amplification scheme with the auxiliary variables and of ClassDiff, through comprehensive analysis under various experimental settings. Future work can further leverage the idea of Bam and apply the auxiliary variables to other applications, such as continual learning and curriculum learning. Conducting a theoretical analysis of the bias amplification scheme is also of great interest.